Sample records for driven technical metrics

  1. Metric-driven harm: an exploration of unintended consequences of performance measurement.

    PubMed

    Rambur, Betty; Vallett, Carol; Cohen, Judith A; Tarule, Jill Mattuck

    2013-11-01

    Performance measurement is an increasingly common element of the US health care system. Although performance metrics typically serve as proxies for high-quality outcomes, there has been little systematic investigation of their potential negative unintended consequences, including metric-driven harm. This case study details an incident of post-surgical metric-driven harm and offers Smith's 1995 work and a patient-centered, context-sensitive metric model for potential adoption by nurse researchers and clinicians. Implications for further research are discussed.

  2. Application of Bounded Linear Stability Analysis Method for Metrics-Driven Adaptive Control

    NASA Technical Reports Server (NTRS)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje

    2009-01-01

    This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method to metrics-driven adaptive control. The BLSA method is used to analyze the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. Through the BLSA method, the adaptive gain is adjusted during adaptation to meet certain phase-margin requirements. The analysis of metrics-driven adaptive control is evaluated for a second-order system representing pitch attitude control of a generic transport aircraft. The analysis shows that the system with the metrics-conforming variable adaptive gain becomes more robust to unmodeled dynamics or time delay. The effect of the BLSA analysis time window is also evaluated against the stability margin criteria.
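The gain back-off idea described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's algorithm: the loop transfer function L(s) = k / (s(s + 1)) and the multiplicative back-off factor are invented stand-ins for the actual aircraft dynamics and BLSA machinery.

```python
import math

def phase_margin(gain):
    """Phase margin (degrees) of a toy loop L(s) = gain / (s (s + 1)).

    |L(jw)| = 1 gives w^4 + w^2 - gain^2 = 0; the phase of L(jw) is
    -90 - atan(w) degrees, so PM = 180 + phase = 90 - atan(w).
    """
    g2 = gain * gain
    w2 = (-1 + math.sqrt(1 + 4 * g2)) / 2   # positive root of w^4 + w^2 - g^2 = 0
    w = math.sqrt(w2)
    return 90.0 - math.degrees(math.atan(w))

def adjust_gain(gain, pm_required=45.0, step=0.9, floor=1e-3):
    """Back off the adaptive gain until the phase-margin requirement is met."""
    while phase_margin(gain) < pm_required and gain > floor:
        gain *= step
    return gain
```

For this toy loop, a high initial gain (large bandwidth, small margin) is reduced until the 45-degree requirement holds.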

  3. Adjustment of Adaptive Gain with Bounded Linear Stability Analysis to Improve Time-Delay Margin for Metrics-Driven Adaptive Control

    NASA Technical Reports Server (NTRS)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas

    2009-01-01

    This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method to metrics-driven adaptive control. The BLSA method is used to analyze the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. Through the BLSA method, the adaptive gain is adjusted during adaptation to meet certain phase-margin requirements. The analysis of metrics-driven adaptive control is evaluated for a linear damaged twin-engine generic transport aircraft model. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics or time delay.

  4. Outcome-Driven Service Provider Performance under Conditions of Complexity and Uncertainty, Defense Acquisition in Transition, Volume 2, 13-14 May 2009.

    DTIC Science & Technology

    2009-04-22

    …bandwidth and response times. Forrester Research uses the analogy of a consumer using an automated teller machine to explain how technical SLAs should…be crafted. "It's not enough that you put your card and Personal Identification Number (PIN) [in the machine] and request to withdraw cash…" Internal Rate of Return (IRR), Net Present Value (NPV), other relevant metrics, payback period, cost/benefit ratio, and cost, economic, and/or financial analysis.
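The financial metrics named in this snippet (NPV, payback period) are standard and easy to compute. A minimal sketch; the cash flows below are hypothetical, not taken from the report:

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at end of year t (t = 0 is today)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows):
    """Years until cumulative (undiscounted) cash flow first turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None   # never pays back within the horizon

# Hypothetical project: a year-0 outlay followed by two years of savings
project = [-100.0, 60.0, 60.0]
print(npv(0.10, project))          # ~4.13 at a 10% discount rate
print(payback_period(project))     # 2 (years)
```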

  5. Metric Education; A Position Paper for Vocational, Technical and Adult Education.

    ERIC Educational Resources Information Center

    Cooper, Gloria S.; And Others

    Part of an Office of Education three-year project on metric education, the position paper is intended to alert and prepare teachers, curriculum developers, and administrators in vocational, technical, and adult education to the change over to the metric system. The five chapters cover issues in metric education, what the metric system is all…

  6. Metric Supplement to Technical Drawing.

    ERIC Educational Resources Information Center

    Henschel, Mark

    This manual is intended for use in training persons whose vocations involve technical drawing to use the metric system of measurement. It could be used in a short course designed for that purpose or for individual study. The manual begins with a brief discussion of the rationale for conversion to the metric system. It then provides a…

  7. Metric Conversion in the Construction Industries--Technical Issues and Status.

    ERIC Educational Resources Information Center

    Milton, Hans J.; Berry, Sandra A.

    This Special Publication was prepared at the request of the Metric Symposium Planning Committee of the National Institute of Building Sciences (NIBS). It is intended to provide information on technical issues and status of metric conversion in the United States construction industries. It was made available to attendees at the NIBS Symposium on…

  8. Measure for Measure: A Guide to Metrication for Workshop Crafts and Technical Studies.

    ERIC Educational Resources Information Center

    Schools Council, London (England).

    This booklet is designed to help teachers of the industrial arts in Great Britain during the changeover to metric units which is due to be substantially completed during the period 1970-1975. General suggestions are given for adapting equipment in metalwork and engineering and woodwork and technical drawing by adding some metric equipment…

  9. Flight Validation of a Metrics-Driven L1 Adaptive Control

    NASA Technical Reports Server (NTRS)

    Dobrokhodov, Vladimir; Kitsios, Ioannis; Kaminer, Isaac; Jones, Kevin D.; Xargay, Enric; Hovakimyan, Naira; Cao, Chengyu; Lizarraga, Mariano I.; Gregory, Irene M.

    2008-01-01

    The paper addresses the initial steps involved in the development and flight implementation of a new metrics-driven L1 adaptive flight control system. The work concentrates on (i) definition of appropriate control-driven metrics that account for control surface failures; (ii) tailoring the recently developed L1 adaptive controller to the design of adaptive flight control systems that explicitly address these metrics in the presence of control surface failures and dynamic changes under adverse flight conditions; (iii) development of a flight control system for implementation of the resulting algorithms onboard a small UAV; and (iv) conducting a comprehensive flight test program that demonstrates the performance of the developed adaptive control algorithms in the presence of failures. As an initial milestone, the paper concentrates on the adaptive flight system setup and initial efforts addressing the ability of a commercial off-the-shelf autopilot, with and without adaptive augmentation, to recover from control surface failures.

  10. A defect-driven diagnostic method for machine tool spindles

    PubMed Central

    Vogl, Gregory W.; Donmez, M. Alkan

    2016-01-01

    Simple vibration-based metrics are, in many cases, insufficient to diagnose machine tool spindle condition. These metrics couple defect-based motion with spindle dynamics; diagnostics should be defect-driven. A new method and spindle condition estimation device (SCED) were developed to acquire data and to separate system dynamics from defect geometry. Based on this method, a spindle condition metric relying only on defect geometry is proposed. Application of the SCED on various milling and turning spindles shows that the new approach is robust for diagnosing the machine tool spindle condition. PMID:28065985

  11. The SI Metric System and Practical Applications.

    ERIC Educational Resources Information Center

    Carney, Richard W.

    Intended for use in the technical program of a technical institute or community college, this student manual is designed to provide background in the metric system contributing to employability. Nine units are presented with objectives stated for each unit followed by questions or exercises. (Printed answers are supplied when necessary.) Unit 1…

  12. Energy retrofit of an office building by substitution of the generation system: performance evaluation via dynamic simulation versus current technical standards

    NASA Astrophysics Data System (ADS)

    Testi, D.; Schito, E.; Menchetti, E.; Grassi, W.

    2014-11-01

    Buildings constructed in Italy before 1945 (about 30% of the total built stock) feature low energy efficiency. Retrofit actions in this field can lead to valuable energy and economic savings. In this work, we ran a dynamic simulation of a historical building of the University of Pisa during the heating season. We first evaluated the energy requirements of the building and the performance of the existing natural gas boiler, validated against past natural gas billings. We then verified the energy savings obtainable by replacing the boiler with an air-to-water electrically driven modulating heat pump, simulated through a cycle-based model, and evaluated the main economic metrics. The cycle-based model of the heat pump, validated with manufacturers' data available only at specified temperature and load conditions, can provide more accurate results than the simplified models adopted by current technical standards, thus increasing the effectiveness of energy audits.

  13. Gaining Control and Predictability of Software-Intensive Systems Development and Sustainment

    DTIC Science & Technology

    2015-02-04

    …implementation of the baselines, audits, and technical reviews within an overarching systems engineering process (SEP; Defense Acquisition University…warfighters' needs. This management and metrics effort supplements and supports the system's technical development through the baselines, audits and…other areas that could be researched and added into the nine-tier model. Areas including software metrics, quality assurance, software-oriented…

  14. Assessing technical performance in differential gene expression experiments with external spike-in RNA control ratio mixtures.

    PubMed

    Munro, Sarah A; Lund, Steven P; Pine, P Scott; Binder, Hans; Clevert, Djork-Arné; Conesa, Ana; Dopazo, Joaquin; Fasold, Mario; Hochreiter, Sepp; Hong, Huixiao; Jafari, Nadereh; Kreil, David P; Łabaj, Paweł P; Li, Sheng; Liao, Yang; Lin, Simon M; Meehan, Joseph; Mason, Christopher E; Santoyo-Lopez, Javier; Setterquist, Robert A; Shi, Leming; Shi, Wei; Smyth, Gordon K; Stralis-Pavese, Nancy; Su, Zhenqiang; Tong, Weida; Wang, Charles; Wang, Jian; Xu, Joshua; Ye, Zhan; Yang, Yong; Yu, Ying; Salit, Marc

    2014-09-25

    There is a critical need for standard approaches to assess, report and compare the technical performance of genome-scale differential gene expression experiments. Here we assess technical performance with a proposed standard 'dashboard' of metrics derived from analysis of external spike-in RNA control ratio mixtures. These control ratio mixtures with defined abundance ratios enable assessment of diagnostic performance of differentially expressed transcript lists, limit of detection of ratio (LODR) estimates and expression ratio variability and measurement bias. The performance metrics suite is applicable to analysis of a typical experiment, and here we also apply these metrics to evaluate technical performance among laboratories. An interlaboratory study using identical samples shared among 12 laboratories with three different measurement processes demonstrates generally consistent diagnostic power across 11 laboratories. Ratio measurement variability and bias are also comparable among laboratories for the same measurement process. We observe different biases for measurement processes using different mRNA-enrichment protocols.

  15. A case study of lean drug discovery: from project driven research to innovation studios and process factories.

    PubMed

    Ullman, Fredrik; Boutellier, Roman

    2008-06-01

    At the operational level, the number of investigational new drugs or candidates for development per dollar spent in research, and the number of patents per year are highly integrated measures of productivity and, thus, difficult to influence at the individual or lab level. Hence, different metrics are needed to assess and thereby improve productivity in research at the individual and group level. This review centers on a case study, including over 70 interviews, in a research department of a global pharmaceutical company as well as over 40 interviews in contract research organizations (CROs) and 5 in small biotechnology firms. For each lab, its value adding process was plotted according to lean six sigma methods and appropriate metrics were defined. We suggest a strong focus on short feedback loops in research as an indicator for efficiency. Our results reveal two categories of activities: creativity-driven ones and process-driven ones, both discussed with respect to the methodology used. The fundamental differences in nature of these activities require different sets of metrics to assess them. On the basis of these metrics, different organizational forms can be derived to achieve a lean research structure: innovation studios and process factories, respectively.

  16. Ergodicity in natural earthquake fault networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiampo, K. F.; Rundle, J. B.; Holliday, J.

    2007-06-15

    Numerical simulations have shown that certain driven nonlinear systems can be characterized by mean-field statistical properties often associated with ergodic dynamics [C. D. Ferguson, W. Klein, and J. B. Rundle, Phys. Rev. E 60, 1359 (1999); D. Egolf, Science 287, 101 (2000)]. These driven mean-field threshold systems feature long-range interactions and can be treated as equilibriumlike systems with statistically stationary dynamics over long time intervals. Recently the equilibrium property of ergodicity was identified in an earthquake fault system, a natural driven threshold system, by means of the Thirumalai-Mountain (TM) fluctuation metric developed in the study of diffusive systems [K. F. Tiampo, J. B. Rundle, W. Klein, J. S. Sa Martins, and C. D. Ferguson, Phys. Rev. Lett. 91, 238501 (2003)]. We analyze the seismicity of three naturally occurring earthquake fault networks from a variety of tectonic settings in an attempt to investigate the range of applicability of effective ergodicity, using the TM metric and other related statistics. Results suggest that, once variations in the catalog data resulting from technical and network issues are accounted for, all of these natural earthquake systems display stationary periods of metastable equilibrium and effective ergodicity that are disrupted by large events. We conclude that a constant rate of events is an important prerequisite for these periods of punctuated ergodicity and that, while the level of temporal variability in the spatial statistics is the controlling factor in the ergodic behavior of seismic networks, no single statistic is sufficient to ensure quantification of ergodicity. Ergodicity in this application not only requires that the system be stationary for these networks at the applicable spatial and temporal scales, but also implies that they are in a state of metastable equilibrium, one in which the ensemble averages can be substituted for temporal averages in studying their spatiotemporal evolution.
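The Thirumalai-Mountain fluctuation metric used in this record has a compact form: Omega(t) is the variance, across sites, of each site's running time average, and effective ergodicity corresponds to Omega decaying like 1/t. A minimal sketch on synthetic data; the i.i.d. Gaussian "sites" are an assumption for illustration, not seismicity data:

```python
import numpy as np

def tm_metric(x):
    """Thirumalai-Mountain fluctuation metric Omega(t) for an (N sites x T steps) array.

    eps_i(t) is the running time average of site i up to step t; Omega(t) is the
    variance of eps_i(t) across sites. Effective ergodicity <=> Omega(t) ~ 1/t.
    """
    t = np.arange(1, x.shape[1] + 1)
    running_mean = np.cumsum(x, axis=1) / t   # eps_i(t), one row per site
    return np.var(running_mean, axis=0)       # Omega(t)

rng = np.random.default_rng(0)
omega = tm_metric(rng.normal(size=(50, 2000)))   # statistically identical sites: ergodic
```

For this ergodic toy system, Omega(t) shrinks toward zero roughly as 1/t; a nonstationary system (e.g., sites with drifting means) would instead plateau.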

  17. Valuation Diagramming and Accounting of Transactive Energy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makhmalbaf, Atefe; Hammerstrom, Donald J.; Huang, Qiuhua

    Transactive energy (TE) systems support both economic and technical objectives of a power system including efficiency and reliability. TE systems utilize value-driven mechanisms to coordinate and balance responsive supply and demand in the power system. Economic performance of TE systems cannot be assessed without estimating their value. Estimating the potential value of transactive energy systems requires a systematic valuation methodology that can capture value exchanges among different stakeholders (i.e., actors) and ultimately estimate impact of one TE design and compare it against another one. Such a methodology can help decision makers choose the alternative that results in preferred outcomes. This paper presents a valuation methodology developed to assess value of TE systems. A TE use-case example is discussed, and metrics identified in the valuation process are quantified using a TE simulation program.

  18. Topographic metric predictions of soil organic carbon in Iowa fields

    USDA-ARS?s Scientific Manuscript database

    Topography is one of the key factors affecting soil organic carbon (SOC) redistribution (erosion or deposition) because it influences the gravity-driven movement of soil by water flow and tillage operations. In this study, we examined impacts of sixteen topographic metrics derived from Light Detecti...

  19. Develop metrics of tire debris on Texas highways : technical report.

    DOT National Transportation Integrated Search

    2017-05-01

    This research effort estimated the amount, characteristics, costs, and safety implications of tire debris on Texas highways. The metrics developed by this research are based on several sources of data, including a statewide survey of debris removal p...

  20. The use of player physical and technical skill match activity profiles to predict position in the Australian Football League draft.

    PubMed

    Woods, Carl T; Veale, James P; Collier, Neil; Robertson, Sam

    2017-02-01

    This study investigated the extent to which position in the Australian Football League (AFL) national draft is associated with individual game performance metrics. Physical/technical skill performance metrics were collated from all participants in the 2014 national under 18 (U18) championships (18 games) drafted into the AFL (n = 65; 17.8 ± 0.5 y); 232 observations. Players were subdivided by draft position (ranked 1-65) and then by draft round (1-4). Here, earlier draft selection (i.e., closer to 1) reflects a more desirable player. Microtechnology and a commercial provider facilitated the quantification of individual game performance metrics (n = 16). Linear mixed models were fitted to the data, modelling the extent to which draft position was associated with these metrics. Draft position in the first/second round was negatively associated with "contested possessions" and "contested marks", respectively. Physical performance metrics were positively associated with draft position in these rounds. Correlations weakened for the third/fourth rounds. Contested possessions/marks were associated with an earlier draft selection, while physical performance metrics were associated with a later draft selection. Recruiters change the type of U18 player they draft as the selection pool reduces; juniors with contested skills appear to be prioritised.
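The sign of the association reported above can be illustrated with a toy stand-in: the paper fits linear mixed models over repeated player observations, but the direction of the effect can be sketched with ordinary least squares on invented data. All numbers below are hypothetical, not AFL data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 65                                        # one row per drafted player
contested_possessions = rng.normal(10, 3, n)
# Hypothetical relationship: more contested possessions -> earlier (lower) draft pick
draft_position = 33 - 2.0 * (contested_possessions - 10) + rng.normal(0, 5, n)

# Ordinary least squares as a simplified stand-in for the paper's linear mixed model
X = np.column_stack([np.ones(n), contested_possessions])
beta, *_ = np.linalg.lstsq(X, draft_position, rcond=None)
slope = beta[1]   # negative slope: the metric is associated with earlier selection
```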

  1. More than Just Test Scores: Leading for Improvement with an Alternative Community-Driven Accountability Metric

    ERIC Educational Resources Information Center

    Spain, Angeline; McMahon, Kelly

    2016-01-01

    In this case, Sharon Rowley, a veteran principal, volunteers to participate in a new community-driven accountability initiative and encounters dilemmas about what it means to be a "data-driven" instructional leader. This case provides an opportunity for aspiring school leaders to explore and apply data-use theory to the work of leading…

  2. Systemic delay propagation in the US airport network

    PubMed Central

    Fleurquin, Pablo; Ramasco, José J.; Eguiluz, Victor M.

    2013-01-01

    Technologically driven transport systems are characterized by a networked structure connecting operation centers and by a dynamics ruled by pre-established schedules. Schedules impose serious constraints on the timing of the operations, condition the allocation of resources and define a baseline to assess system performance. Here we study the performance of an air transportation system in terms of delays. Technical, operational or meteorological issues affecting some flights give rise to primary delays. When operations continue, such delays can propagate, magnify and eventually involve a significant part of the network. We define metrics able to quantify the level of network congestion and introduce a model that reproduces the delay propagation patterns observed in the U.S. performance data. Our results indicate that there is a non-negligible risk of systemic instability even under normal operating conditions. We also identify passenger and crew connectivity as the most relevant internal factor contributing to delay spreading. PMID:23362459
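The core mechanism modelled above (primary delays magnified along an aircraft's chain of legs once turnaround buffers are exhausted) can be sketched in a few lines. The leg delays and slack value below are invented for illustration, not U.S. performance data:

```python
def propagate_delays(legs, turnaround_slack):
    """Propagate delay along one aircraft's chain of flight legs.

    legs: primary delays (minutes) incurred independently on each leg.
    turnaround_slack: buffer (minutes) that absorbs inbound delay between legs.
    Returns the realized departure delay of each leg.
    """
    realized = []
    inbound = 0.0
    for primary in legs:
        dep_delay = max(0.0, inbound - turnaround_slack) + primary
        realized.append(dep_delay)
        inbound = dep_delay
    return realized

# A 30-minute primary delay on the first leg, 10 minutes of slack per turnaround:
print(propagate_delays([30.0, 0.0, 0.0], turnaround_slack=10.0))   # [30.0, 20.0, 10.0]
```

With enough slack the delay is absorbed at the first turnaround; with little slack it cascades down the rotation, which is the network-level instability the authors quantify.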

  3. Proficiency performance benchmarks for removal of simulated brain tumors using a virtual reality simulator NeuroTouch.

    PubMed

    AlZhrani, Gmaan; Alotaibi, Fahad; Azarnoush, Hamed; Winkler-Schwartz, Alexander; Sabbagh, Abdulrahman; Bajunaid, Khalid; Lajoie, Susanne P; Del Maestro, Rolando F

    2015-01-01

    Assessment of neurosurgical technical skills involved in the resection of cerebral tumors in operative environments is complex. Educators emphasize the need to develop and use objective and meaningful assessment tools that are reliable and valid for assessing trainees' progress in acquiring surgical skills. The purpose of this study was to develop proficiency performance benchmarks for a newly proposed set of objective measures (metrics) of neurosurgical technical skills performance during simulated brain tumor resection using a new virtual reality simulator (NeuroTouch). Each participant performed the resection of 18 simulated brain tumors of different complexity using the NeuroTouch platform. Surgical performance was computed using Tier 1 and Tier 2 metrics derived from NeuroTouch simulator data consisting of (1) safety metrics, including (a) volume of surrounding simulated normal brain tissue removed, (b) sum of forces utilized, and (c) maximum force applied during tumor resection; (2) quality of operation metric, which involved the percentage of tumor removed; and (3) efficiency metrics, including (a) instrument total tip path lengths and (b) frequency of pedal activation. All studies were conducted in the Neurosurgical Simulation Research Centre, Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada. A total of 33 participants were recruited, including 17 experts (board-certified neurosurgeons) and 16 novices (7 senior and 9 junior neurosurgery residents). The results demonstrated that "expert" neurosurgeons resected less surrounding simulated normal brain tissue and less tumor tissue than residents. These data are consistent with the concept that "experts" focused more on safety of the surgical procedure compared with novices. By analyzing experts' neurosurgical technical skills performance on these different metrics, we were able to establish benchmarks for goal proficiency performance training of neurosurgery residents. 
    This study furthers our understanding of expert neurosurgical performance during the resection of simulated virtual reality tumors and provides neurosurgical trainees with predefined proficiency performance benchmarks designed to maximize the learning of specific surgical technical skills.

  4. 16 CFR 1209.2 - Definitions and measurements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... with the technical requirements of this standard, the figures are given in the metric system of measurement. The inch-pound system approximations of these figures are provided in parentheses for convenience... numerical quantities are given without tolerances in both the metric and inch-pound system of measurements...

  5. 16 CFR 1209.2 - Definitions and measurements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... with the technical requirements of this standard, the figures are given in the metric system of measurement. The inch-pound system approximations of these figures are provided in parentheses for convenience... numerical quantities are given without tolerances in both the metric and inch-pound system of measurements...

  6. 16 CFR § 1209.2 - Definitions and measurements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... with the technical requirements of this standard, the figures are given in the metric system of measurement. The inch-pound system approximations of these figures are provided in parentheses for convenience... numerical quantities are given without tolerances in both the metric and inch-pound system of measurements...

  7. New Books for Industrial Educators

    ERIC Educational Resources Information Center

    School Shop, 1975

    1975-01-01

    The most recent book releases in the field of industrial-technical education are listed alphabetically under: automotive/power mechanics; building trades; drafting; electricity/electronics; graphic arts, industrial arts, vocational, technical and career education; industrial mathematics; machine shop/metalworking; metrics; radio/television;…

  8. Development and Implementation of a Design Metric for Systems Containing Long-Term Fluid Loops

    NASA Technical Reports Server (NTRS)

    Steele, John W.

    2016-01-01

    John Steele, a chemist and technical fellow at United Technologies Corporation, provided a water quality module to assist engineers and scientists with a metric tool for evaluating risks associated with the design of space systems with fluid loops. This design metric is a methodical, quantitative, lessons-learned-based means of evaluating the robustness of a long-term fluid loop system design. The tool was developed by engineers from a cross-section of disciplines with decades of experience in problem resolution.

  9. Ozone (O3) Standards - Other Technical Documents from the Review Completed in 2015

    EPA Pesticide Factsheets

    These memoranda were each sent in to the Ozone NAAQS Review Docket, EPA-HQ-OAR-2008-0699, after the proposed rule was published. They present technical data on the methods, monitoring stations, and metrics used to estimate ozone concentrations.

  10. NASA Environmentally Responsible Aviation's Highly-Loaded Front Block Compressor Demonstration

    NASA Technical Reports Server (NTRS)

    Celestina, Mark

    2017-01-01

    The ERA project was created in 2009 as part of NASA's Aeronautics Research Mission Directorate (ARMD) Integrated Systems Aviation Program (IASP). The purpose of the ERA project was to explore and document the feasibility, benefit, and technical risk of vehicle concepts and enabling technologies to reduce aviation's impact on the environment. The metrics for this technology are given in Figure 1, with the N+2 metrics highlighted in green. It is anticipated that the United States air transportation system will continue to expand significantly over the next few decades, adversely impacting the environment unless new technology is incorporated to simultaneously reduce nitrogen oxides (NOx), noise, and fuel consumption. In order to achieve the overall goals and meet the technology insertion challenges, these goals were divided into technical challenges to be achieved during the execution of the ERA project. Technical challenges were accomplished through test campaigns conducted as Integrated Technology Demonstrations (ITDs). ERA's technical performance period ended in 2015.

  11. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Technical Performance Assessment

    PubMed Central

    2017-01-01

    Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to increased interest in using quantitative imaging biomarkers (QIBs) to measure changes in these features. Critical to the performance of a QIB in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in design, analysis methods, and the metrics used to assess a QIB for clinical use. It is therefore difficult or impossible to integrate results from different studies or to use reported results to design new studies. The Radiological Society of North America (RSNA) and the Quantitative Imaging Biomarker Alliance (QIBA), with technical, radiological, and statistical experts, developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of QIB performance studies so that results from multiple studies can be compared, contrasted, or combined. PMID:24919831
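One of the repeatability metrics standardized in this line of work, the repeatability coefficient (RC = 1.96 x sqrt(2) x within-subject SD, about 2.77 wSD), is straightforward to compute from test-retest data. A sketch; the replicate values below are invented:

```python
import math

def repeatability_coefficient(replicate_pairs):
    """RC = 1.96 * sqrt(2) * within-subject SD, from test-retest pairs.

    wSD^2 is estimated as mean(d_i^2) / 2 over the paired differences d_i.
    Two repeat measurements on the same subject are expected to differ by
    less than RC for about 95% of subjects.
    """
    d2 = [(a - b) ** 2 for a, b in replicate_pairs]
    wsd = math.sqrt(sum(d2) / (2 * len(d2)))
    return 1.96 * math.sqrt(2) * wsd

# Hypothetical test-retest measurements of a biomarker on three subjects
print(repeatability_coefficient([(10, 12), (20, 19), (30, 33)]))   # ~4.23
```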

  12. Identifying Seizure Onset Zone From the Causal Connectivity Inferred Using Directed Information

    NASA Astrophysics Data System (ADS)

    Malladi, Rakesh; Kalamangalam, Giridhar; Tandon, Nitin; Aazhang, Behnaam

    2016-10-01

    In this paper, we developed a model-based and a data-driven estimator for directed information (DI) to infer the causal connectivity graph between electrocorticographic (ECoG) signals recorded from the brain and to identify the seizure onset zone (SOZ) in epileptic patients. Directed information, an information-theoretic quantity, is a general metric for inferring causal connectivity between time series and is not restricted to a particular class of models, unlike the popular metrics based on Granger causality or transfer entropy. The proposed estimators are shown to be almost surely convergent. Causal connectivity between ECoG electrodes in five epileptic patients is inferred using the proposed DI estimators, after validating their performance on simulated data. We then proposed a model-based and a data-driven SOZ identification algorithm to identify the SOZ from the causal connectivity inferred using the model-based and data-driven DI estimators, respectively. The data-driven SOZ identification outperforms the model-based SOZ identification algorithm when benchmarked against visual analysis by a neurologist, the current clinical gold standard. The causal connectivity analysis presented here is a first step towards developing novel non-surgical treatments for epilepsy.
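For intuition, a first-order plug-in estimate of the directed-information rate between two binary sequences can be written directly from empirical counts. This is a simplified stand-in that assumes Y_t depends only on (X_t, Y_{t-1}); the estimators in the paper are more general:

```python
import math
from collections import Counter

def di_rate_order1(x, y):
    """Plug-in estimate of the first-order directed-information rate
    I(X_t ; Y_t | Y_{t-1}) for two equal-length binary sequences.

    Counts joint occurrences of (x_t, y_t, y_{t-1}) and evaluates the
    conditional mutual information of the empirical distribution, in bits.
    """
    triples = list(zip(x[1:], y[1:], y[:-1]))   # (x_t, y_t, y_{t-1})
    n = len(triples)
    pxyz = Counter(triples)
    pxz = Counter((a, c) for a, _, c in triples)
    pyz = Counter((b, c) for _, b, c in triples)
    pz = Counter(c for _, _, c in triples)
    di = 0.0
    for (a, b, c), k in pxyz.items():
        di += (k / n) * math.log2(k * pz[c] / (pxz[(a, c)] * pyz[(b, c)]))
    return di
```

When y simply copies a uniform random x, the estimate approaches 1 bit per step; for independent sequences it stays near zero.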

  13. Software metrics: The key to quality software on the NCC project

    NASA Technical Reports Server (NTRS)

    Burns, Patricia J.

    1993-01-01

    Network Control Center (NCC) Project metrics are captured during the implementation and testing phases of the NCCDS software development lifecycle. The metrics data collection and reporting function has interfaces with all elements of the NCC project. Close collaboration with all project elements has resulted in the development of a defined and repeatable set of metrics processes. The resulting data are used to plan and monitor release activities on a weekly basis. The use of graphical outputs facilitates the interpretation of progress and status. The successful application of metrics throughout the NCC project has been instrumental in the delivery of quality software. The use of metrics on the NCC Project supports the needs of the technical and managerial staff. This paper describes the project, the functions supported by metrics, the data that are collected and reported, how the data are used, and the improvements in the quality of deliverable software since the metrics processes and products have been in use.

  14. Evaluation of image deblurring methods via a classification metric

    NASA Astrophysics Data System (ADS)

    Perrone, Daniele; Humphreys, David; Lamb, Robert A.; Favaro, Paolo

    2012-09-01

    The performance of single image deblurring algorithms is typically evaluated via a certain discrepancy measure between the reconstructed image and the ideal sharp image. The choice of metric, however, has been a source of debate and has also led to alternative metrics based on human visual perception. While fixed metrics may fail to capture some small but visible artifacts, perception-based metrics may favor reconstructions with artifacts that are visually pleasant. To overcome these limitations, we propose to assess the quality of reconstructed images via a task-driven metric. In this paper we consider object classification as the task and therefore use the rate of classification as the metric to measure deblurring performance. In our evaluation we use data with different types of blur in two cases: Optical Character Recognition (OCR), where the goal is to recognise characters in a black and white image, and object classification with no restrictions on pose, illumination and orientation. Finally, we show how off-the-shelf classification algorithms benefit from working with deblurred images.

  15. Virtual reality simulator training for laparoscopic colectomy: what metrics have construct validity?

    PubMed

    Shanmugan, Skandan; Leblanc, Fabien; Senagore, Anthony J; Ellis, C Neal; Stein, Sharon L; Khan, Sadaf; Delaney, Conor P; Champagne, Bradley J

    2014-02-01

    Virtual reality simulation for laparoscopic colectomy has been used for training of surgical residents and has been considered as a model for technical skills assessment of board-eligible colorectal surgeons. However, construct validity (the ability to distinguish between skill levels) must be confirmed before widespread implementation. This study was designed to determine specifically which metrics for laparoscopic sigmoid colectomy have evidence of construct validity. General surgeons who had performed fewer than 30 laparoscopic colon resections and laparoscopic colorectal experts (>200 laparoscopic colon resections) performed laparoscopic sigmoid colectomy on the LAP Mentor model. All participants received a 15-minute instructional warm-up and had never used the simulator before the study. Performance was then compared between the groups for 21 metrics (14 procedural; 7 intraoperative errors) to determine specifically which measurements demonstrate construct validity. Performance was compared with the Mann-Whitney U-test (p < 0.05 was considered significant). Fifty-three surgeons enrolled in the study: 29 general surgeons and 24 colorectal surgeons. The virtual reality simulator for laparoscopic sigmoid colectomy demonstrated construct validity for 8 of 14 procedural metrics by distinguishing levels of surgical experience (p < 0.05). The most discriminatory procedural metrics (p < 0.01) favoring experts were reduced instrument path length, accuracy of the peritoneal/medial mobilization, and dissection of the inferior mesenteric artery. Intraoperative errors were not discriminatory for most metrics and favored general surgeons for colonic wall injury (general surgeons, 0.7; colorectal surgeons, 3.5; p = 0.045). Individual variability within the general surgeon and colorectal surgeon groups was not accounted for. The virtual reality simulator for laparoscopic sigmoid colectomy demonstrated construct validity for 8 procedure-specific metrics.
However, using virtual reality simulator metrics to detect intraoperative errors did not discriminate between groups. If the virtual reality simulator continues to be used for the technical assessment of trainees and board-eligible surgeons, the evaluation of performance should be limited to procedural metrics.

  16. Water Network Tool for Resilience v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-12-09

    WNTR is a Python package designed to simulate and analyze resilience of water distribution networks. The software includes:
    - Pressure-driven and demand-driven hydraulic simulation
    - Water quality simulation to track concentration, trace, and water age
    - Conditional controls to simulate power outages
    - Models to simulate pipe breaks
    - A wide range of resilience metrics
    - Analysis and visualization tools
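    A pressure-driven hydraulic state naturally yields a resilience metric such as water service availability (delivered over required demand). The sketch below does not use the WNTR API; it is a minimal illustration in which the Wagner pressure-demand relationship and the threshold pressures are assumptions:

```python
def wagner_delivered(demand, pressure, p_min=0.0, p_nom=20.0):
    """Pressure-driven delivered demand (Wagner model): full demand at or
    above p_nom, nothing at or below p_min, square-root interpolation between."""
    if pressure <= p_min:
        return 0.0
    if pressure >= p_nom:
        return demand
    return demand * ((pressure - p_min) / (p_nom - p_min)) ** 0.5

def water_service_availability(demands, pressures):
    """Resilience snapshot: fraction of required demand actually delivered."""
    delivered = sum(wagner_delivered(d, p) for d, p in zip(demands, pressures))
    return delivered / sum(demands)

# Hypothetical snapshot of three junctions during a power outage:
demands = [10.0, 5.0, 8.0]    # required demand (L/s)
pressures = [25.0, 5.0, 0.0]  # node pressure head (m)
print(round(water_service_availability(demands, pressures), 3))  # → 0.543
```

    Tracking this fraction over the timeline of a simulated pipe break or outage gives a time series of service degradation and recovery.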

  17. Threat driven modeling framework using petri nets for e-learning system.

    PubMed

    Khamparia, Aditya; Pandey, Babita

    2016-01-01

    Vulnerabilities at various levels are the main cause of security risks in e-learning systems. This paper presents a modified threat-driven modeling framework to identify, after risk assessment, the threats that require mitigation and to determine how to mitigate them. Aspect-oriented stochastic Petri nets are used to model the threat mitigations. The paper includes security metrics based on vulnerabilities present in the e-learning system. The Common Vulnerability Scoring System (CVSS), designed to provide a normalized method for rating vulnerabilities, is used as the basis for the metric definitions and calculations. A case study is also presented that shows the need for, and feasibility of, using aspect-oriented stochastic Petri net models for threat modeling, which improves the reliability, consistency, and robustness of the e-learning system.
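    To make the CVSS basis concrete, the sketch below implements the CVSS v2 base-score equation with its published constants; vector-string parsing and the temporal/environmental score groups are omitted:

```python
def cvss2_base_score(cia, av, ac, au):
    """CVSS v2 base score from numeric metric values.

    cia: (conf, integ, avail) impact values, e.g. none=0.0, partial=0.275,
    complete=0.660; av/ac/au are the exploitability values from the v2 spec."""
    c, i, a = cia
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# AV:N/AC:L/Au:N with complete C/I/A impact -- the classic worst case
print(cvss2_base_score((0.660, 0.660, 0.660), av=1.0, ac=0.71, au=0.704))  # → 10.0
# Same access vector, partial confidentiality/integrity impact only
print(cvss2_base_score((0.275, 0.275, 0.0), av=1.0, ac=0.71, au=0.704))   # → 6.4
```

    Normalized scores like these can then be aggregated across the vulnerabilities found in an e-learning deployment to drive the kind of security metrics the paper describes.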

  18. Federal Standardization Manual

    DTIC Science & Technology

    1994-01-01

    susceptible to categorizing in the Federal Supply Classification system. Examples are PACK (packaging, packing, preservation and transportability) and... system. This involves a technical review of supply items to identify duplicating or overlapping items. It leads to a reduction in a number of similar...firms engaged in producing, distributing and supporting such products. Metrication. Any act tending to increase the use of the metric system (SI

  19. Comparison of Collection Methods for Fecal Samples in Microbiome Studies

    PubMed Central

    Vogtmann, Emily; Chen, Jun; Amir, Amnon; Shi, Jianxin; Abnet, Christian C.; Nelson, Heidi; Knight, Rob; Chia, Nicholas; Sinha, Rashmi

    2017-01-01

    Prospective cohort studies are needed to assess the relationship between the fecal microbiome and human health and disease. To evaluate fecal collection methods, we determined technical reproducibility, stability at ambient temperature, and accuracy of 5 fecal collection methods (no additive, 95% ethanol, RNAlater Stabilization Solution, fecal occult blood test cards, and fecal immunochemical test tubes). Fifty-two healthy volunteers provided fecal samples at the Mayo Clinic in Rochester, Minnesota, in 2014. One set from each sample collection method was frozen immediately, and a second set was incubated at room temperature for 96 hours and then frozen. Intraclass correlation coefficients (ICCs) were calculated for the relative abundance of 3 phyla, 2 alpha diversity metrics, and 4 beta diversity metrics. Technical reproducibility was high, with ICCs for duplicate fecal samples between 0.64 and 1.00. Stability for most methods was generally high, although the ICCs were below 0.60 for 95% ethanol in metrics that were more sensitive to relative abundance. When compared with fecal samples that were frozen immediately, the ICCs were below 0.60 for the metrics that were sensitive to relative abundance; however, the remaining 2 alpha diversity and 3 beta diversity metrics were all relatively accurate, with ICCs above 0.60. In conclusion, all fecal sample collection methods appear relatively reproducible, stable, and accurate. Future studies could use these collection methods for microbiome analyses. PMID:27986704
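    For readers unfamiliar with the statistic, a one-way intraclass correlation for duplicate samples can be computed in a few lines. The abundance values below are hypothetical, and this is a minimal illustration rather than the study's exact estimator:

```python
def icc_oneway(pairs):
    """One-way random-effects ICC(1,1) for duplicate measurements (k = 2)."""
    k, n = 2, len(pairs)
    grand = sum(a + b for a, b in pairs) / (n * k)
    # Between-subject and within-subject mean squares
    msb = k * sum(((a + b) / k - grand) ** 2 for a, b in pairs) / (n - 1)
    msw = sum((a - b) ** 2 / 2 for a, b in pairs) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical relative abundances of one phylum in duplicate fecal samples
dups = [(0.52, 0.50), (0.31, 0.33), (0.45, 0.44), (0.28, 0.30), (0.60, 0.57)]
print(round(icc_oneway(dups), 2))  # high test-retest agreement (≈0.98)
```

    Values near 1 indicate that between-subject variation dominates technical noise, which is the sense in which the study calls ICCs above 0.60 "relatively reproducible."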

  20. Performance Metrics for Liquid Chromatography-Tandem Mass Spectrometry Systems in Proteomics Analyses*

    PubMed Central

    Rudnick, Paul A.; Clauser, Karl R.; Kilpatrick, Lisa E.; Tchekhovskoi, Dmitrii V.; Neta, Pedatsur; Blonder, Nikša; Billheimer, Dean D.; Blackman, Ronald K.; Bunk, David M.; Cardasis, Helene L.; Ham, Amy-Joan L.; Jaffe, Jacob D.; Kinsinger, Christopher R.; Mesri, Mehdi; Neubert, Thomas A.; Schilling, Birgit; Tabb, David L.; Tegeler, Tony J.; Vega-Montoto, Lorenzo; Variyath, Asokan Mulayath; Wang, Mu; Wang, Pei; Whiteaker, Jeffrey R.; Zimmerman, Lisa J.; Carr, Steven A.; Fisher, Susan J.; Gibson, Bradford W.; Paulovich, Amanda G.; Regnier, Fred E.; Rodriguez, Henry; Spiegelman, Cliff; Tempst, Paul; Liebler, Daniel C.; Stein, Stephen E.

    2010-01-01

    A major unmet need in LC-MS/MS-based proteomics analyses is a set of tools for quantitative assessment of system performance and evaluation of technical variability. Here we describe 46 system performance metrics for monitoring chromatographic performance, electrospray source stability, MS1 and MS2 signals, dynamic sampling of ions for MS/MS, and peptide identification. Applied to data sets from replicate LC-MS/MS analyses, these metrics displayed consistent, reasonable responses to controlled perturbations. The metrics typically displayed variations less than 10% and thus can reveal even subtle differences in performance of system components. Analyses of data from interlaboratory studies conducted under a common standard operating procedure identified outlier data and provided clues to specific causes. Moreover, interlaboratory variation reflected by the metrics indicates which system components vary the most between laboratories. Application of these metrics enables rational, quantitative quality assessment for proteomics and other LC-MS/MS analytical applications. PMID:19837981

  1. Collaborative Project: The problem of bias in defining uncertainty in computationally enabled strategies for data-driven climate model development. Final Technical Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huerta, Gabriel

    The objective of the project is to develop strategies for better representing scientific sensibilities within statistical measures of model skill that can then be used within a Bayesian statistical framework for data-driven climate model development and improved measures of model scientific uncertainty. One of the thorny issues in model evaluation is quantifying the effect of biases on climate projections. While any bias is undesirable, only those biases that affect feedbacks affect scatter in climate projections. The effort at the University of Texas is to analyze previously calculated ensembles of CAM3.1 with perturbed parameters to discover how biases affect projections of global warming. The hypothesis is that compensating errors in the control model can be identified by their effect on a combination of processes, and that developing metrics that are sensitive to dependencies among state variables would provide a way to select versions of climate models that may reduce scatter in climate projections. Gabriel Huerta at the University of New Mexico is responsible for developing statistical methods for evaluating these field dependencies. The UT effort will incorporate these developments into MECS, a set of Python scripts being developed at the University of Texas for managing the workflow associated with data-driven climate model development over HPC resources. This report reflects the main activities at the University of New Mexico, where the PI (Huerta) and the postdocs (Nosedal, Hattab, and Karki) worked on the project.

  2. David Malament and the Conventionality of Simultaneity: A Reply

    NASA Astrophysics Data System (ADS)

    Grünbaum, Adolf

    2010-10-01

    In 1977, David Malament proved the valuable technical result that the simultaneity relation of standard synchrony ɛ=1/2 with respect to an inertial observer O is uniquely definable in terms of the relation κ of causal connectibility. And he claimed that this definability undermines my own version of the conventionality of metrical simultaneity within an inertial frame. But Malament’s proof depends on the imposition of several supposedly “innocuous” constraints on any candidate for the simultaneity relation relative to O. Relying on Allen I. Janis’s 1983 challenge to one of these constraints, I argue that Malament’s technical result did not undermine my philosophical construal of the ontological status of relative metrical simultaneity. Furthermore, I show that (a) Michael Friedman’s peremptorily substantivalist critique of my conception, which Malament endorses, is ill-founded, and (b) if Malament had succeeded in discrediting my own conventionalist version of metrical simultaneity, he would likewise have invalidated Einstein’s pioneering version of it.

  3. Quantitative imaging biomarkers: a review of statistical methods for technical performance assessment.

    PubMed

    Raunig, David L; McShane, Lisa M; Pennello, Gene; Gatsonis, Constantine; Carson, Paul L; Voyvodic, James T; Wahl, Richard L; Kurland, Brenda F; Schwarz, Adam J; Gönen, Mithat; Zahlmann, Gudrun; Kondratovich, Marina V; O'Donnell, Kevin; Petrick, Nicholas; Cole, Patricia E; Garra, Brian; Sullivan, Daniel C

    2015-02-01

    Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers to measure changes in these features. Critical to the performance of a quantitative imaging biomarker in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in design, analysis methods, and the metrics used to assess a quantitative imaging biomarker for clinical use. It is therefore difficult, if not impossible, to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America and the Quantitative Imaging Biomarker Alliance, with technical, radiological, and statistical experts, developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of quantitative imaging biomarker performance studies so that results from multiple studies can be compared, contrasted, or combined. © The Author(s) 2014.
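    One of the repeatability metrics such a framework standardizes can be sketched directly. The test-retest values below are hypothetical; the 2.77 factor is 1.96·√2, the 95% limit for the difference of two replicates under a normality assumption:

```python
import math

def repeatability_coefficient(test_retest):
    """Repeatability coefficient RC = 1.96 * sqrt(2) * wSD (~2.77 * wSD):
    95% of test-retest differences are expected to fall within +/-RC."""
    n = len(test_retest)
    # Within-subject variance estimated from paired replicates
    wvar = sum((a - b) ** 2 for a, b in test_retest) / (2 * n)
    return 1.96 * math.sqrt(2) * math.sqrt(wvar)

# Hypothetical quantitative imaging biomarker values, scan vs. rescan
pairs = [(10.1, 10.4), (8.2, 8.0), (12.5, 12.1), (9.8, 10.0)]
print(round(repeatability_coefficient(pairs), 3))  # → 0.563
```

    A measured change larger than RC is then distinguishable from test-retest noise, which is what makes the coefficient useful when designing or comparing biomarker studies.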

  4. Cerebral pressure–flow relationship in lowlanders and natives at high altitude

    PubMed Central

    Smirl, Jonathan D; Lucas, Samuel J E; Lewis, Nia C S; duManior, Gregory R; Smith, Kurt J; Bakker, Akke; Basnyat, Aperna S; Ainslie, Philip N

    2014-01-01

    We investigated whether dynamic cerebral pressure–flow relationships in lowlanders are altered at high altitude (HA), whether they differ in HA natives, and whether they change after return to sea level (SL). Lowlanders were tested at SL (n=16), on arrival at 5,050 m, after 2-week acclimatization (with and without end-tidal PO2 normalization), and upon SL return. High-altitude natives (n=16) were tested at 5,050 m. Testing sessions involved resting spontaneous and driven (squat–stand maneuvers at very low (VLF, 0.05 Hz) and low (LF, 0.10 Hz) frequencies) measures to maximize blood pressure (BP) variability and improve assessment of the pressure–flow relationship using transfer function analysis (TFA). Blood flow velocity was assessed in the middle (MCAv) and posterior (PCAv) cerebral arteries. Spontaneous VLF and LF phases were reduced and coherence was elevated with acclimatization to HA (P<0.05), indicating impaired pressure–flow coupling. However, when BP was driven, both the frequency- and time-domain metrics were unaltered and comparable with HA natives. Acute mountain sickness was unrelated to TFA metrics. In conclusion, the driven cerebral pressure–flow relationship (in both frequency and time domains) is unaltered at 5,050 m in lowlanders and HA natives. Our findings indicate that spontaneous changes in TFA metrics do not necessarily reflect physiologically important alterations in the capacity of the brain to regulate BP. PMID:24169852

  5. Cerebral pressure-flow relationship in lowlanders and natives at high altitude.

    PubMed

    Smirl, Jonathan D; Lucas, Samuel J E; Lewis, Nia C S; duManoir, Gregory R; Smith, Kurt J; Bakker, Akke; Basnyat, Aperna S; Ainslie, Philip N

    2014-02-01

    We investigated whether dynamic cerebral pressure-flow relationships in lowlanders are altered at high altitude (HA), whether they differ in HA natives, and whether they change after return to sea level (SL). Lowlanders were tested at SL (n=16), on arrival at 5,050 m, after 2-week acclimatization (with and without end-tidal PO2 normalization), and upon SL return. High-altitude natives (n=16) were tested at 5,050 m. Testing sessions involved resting spontaneous and driven (squat-stand maneuvers at very low (VLF, 0.05 Hz) and low (LF, 0.10 Hz) frequencies) measures to maximize blood pressure (BP) variability and improve assessment of the pressure-flow relationship using transfer function analysis (TFA). Blood flow velocity was assessed in the middle (MCAv) and posterior (PCAv) cerebral arteries. Spontaneous VLF and LF phases were reduced and coherence was elevated with acclimatization to HA (P<0.05), indicating impaired pressure-flow coupling. However, when BP was driven, both the frequency- and time-domain metrics were unaltered and comparable with HA natives. Acute mountain sickness was unrelated to TFA metrics. In conclusion, the driven cerebral pressure-flow relationship (in both frequency and time domains) is unaltered at 5,050 m in lowlanders and HA natives. Our findings indicate that spontaneous changes in TFA metrics do not necessarily reflect physiologically important alterations in the capacity of the brain to regulate BP.

  6. Face, content, and construct validity of four, inanimate training exercises using the da Vinci® Si surgical system configured with Single-Site™ instrumentation.

    PubMed

    Jarc, Anthony M; Curet, Myriam

    2015-08-01

    Validated training exercises are essential tools for surgeons as they develop technical skills to use robot-assisted minimally invasive surgical systems. The purpose of this study was to show face, content, and construct validity of four inanimate training exercises using the da Vinci® Si surgical system configured with Single-Site™ instrumentation. New (N = 21) and experienced (N = 6) surgeons participated in the study. New surgeons (11 Gynecology [GYN] and 10 General Surgery [GEN]) had not completed any da Vinci Single-Site cases but may have completed multiport cases using the da Vinci system. They participated in this study prior to attending a certification course focused on da Vinci Single-Site instrumentation. Experienced surgeons (5 GYN and 1 GEN) had completed at least 25 da Vinci Single-Site cases. The surgeons completed four inanimate training exercises and then rated them with a questionnaire. Raw metrics and overall normalized scores were computed using both video recordings and kinematic data collected from the surgical system. The experienced surgeons significantly outperformed new surgeons for many raw metrics and the overall normalized scores derived from video review (p < 0.05). Only one exercise did not achieve a significant difference between new and experienced surgeons (p = 0.08) when calculating an overall normalized score using both video and advanced metrics derived from kinematic data. Both new and experienced surgeons rated the training exercises as appearing to train and measure the technical skills used during da Vinci Single-Site surgery and as actually testing those skills. In summary, the four training exercises showed face, content, and construct validity. Improved overall scores could be developed using additional metrics not included in this study.
The results suggest that the training exercises could be used in an overall training curriculum aimed at developing proficiency in technical skills for surgeons new to da Vinci Single-Site instrumentation.

  7. Adaptive Acquisitions: Maintaining Military Dominance By Managing Innovation

    DTIC Science & Technology

    2014-04-01

    for the relatively unknown disruptive technologies, even for the technical experts. For example, in the early years of rocket research Jerome Hunsaker...improve along existing performance metrics. Since disruptive technologies generally underperform along these old value metrics, customers tend to...since the actual value of the innovation is difficult, if not impossible, to determine a priori. In fact, most of the claimed potential disruptive

  8. An integrated, indicator framework for assessing large-scale variations and change in seasonal timing and phenology (Invited)

    NASA Astrophysics Data System (ADS)

    Betancourt, J. L.; Weltzin, J. F.

    2013-12-01

    As part of an effort to develop an Indicator System for the National Climate Assessment (NCA), the Seasonality and Phenology Indicators Technical Team (SPITT) proposed an integrated, continental-scale framework for understanding and tracking seasonal timing in physical and biological systems. The framework shares several metrics with the EPA's National Climate Change Indicators. The SPITT framework includes a comprehensive suite of national indicators to track conditions, anticipate vulnerabilities, and facilitate intervention or adaptation to the extent possible. Observed, modeled, and forecasted seasonal timing metrics can inform a wide spectrum of decisions on federal, state, and private lands in the U.S., and will be pivotal for international mitigation and adaptation efforts. Humans use calendars both to understand the natural world and to plan their lives. Although the seasons are familiar concepts, we lack a comprehensive understanding of how variability arises in the timing of seasonal transitions in the atmosphere, and how variability and change translate and propagate through hydrological, ecological, and human systems. For example, the contributions of greenhouse warming and natural variability to secular trends in seasonal timing are difficult to disentangle, including earlier spring transitions from winter (strong westerlies) to summer (weak easterlies) patterns of atmospheric circulation; shifts in annual phasing of daily temperature means and extremes; advanced timing of snow and ice melt and soil thaw at higher latitudes and elevations; and earlier start and longer duration of the growing and fire seasons. The SPITT framework aims to relate spatiotemporal variability in surface climate to (1) large-scale modes of natural climate variability and greenhouse gas-driven climatic change, and (2) spatiotemporal variability in hydrological, ecological, and human responses and impacts.
    The hierarchical framework relies on ground and satellite observations, and includes metrics of surface climate seasonality, seasonality of snow and ice, land surface phenology, ecosystem disturbance seasonality, and organismal phenology. Recommended metrics met the following requirements: (a) they are easily measured by day-of-year, number of days, or, in the case of species migrations, by the latitude of the observation on a given date; (b) they are observed or can be calculated across a high density of locations in many different regions of the U.S.; and (c) they unambiguously describe both spatial and temporal variability and trends in seasonal timing that are climatically driven. The SPITT framework is meant to provide climatic and strategic guidance for the growth and application of seasonal timing and phenological monitoring efforts. The hope is that additional national indicators based on observed phenology, or on evidence-based algorithms calibrated with observational data, will evolve with sustained and broad-scale monitoring of climatically sensitive species and ecological processes.

  9. Final priority. Rehabilitation Training: Job-Driven Vocational Rehabilitation Technical Assistance Center. Final priority.

    PubMed

    2014-08-19

    The Assistant Secretary for Special Education and Rehabilitative Services announces a priority under the Rehabilitation Training program to establish a Job-Driven Vocational Rehabilitation Technical Assistance Center (JDVRTAC). The Assistant Secretary may use this priority for competitions in fiscal year (FY) 2014 and later years. We take this action to focus on training in an area of national need. Specifically, this priority responds to the Presidential Memorandum directing Federal agencies to take action to address job-driven training for the Nation's workers. The JDVRTAC will provide technical assistance (TA) to State vocational rehabilitation (VR) agencies to help them develop training and employment opportunities for individuals with disabilities that meet the needs of today's employers.

  10. Field- and Remote Sensing-based Structural Attributes Measured at Multiple Scales Influence the Relationship Between Nitrogen and Reflectance of Forest Canopies

    NASA Astrophysics Data System (ADS)

    Sullivan, F.; Ollinger, S. V.; Palace, M. W.; Ouimette, A.; Sanders-DeMott, R.; Lepine, L. C.

    2017-12-01

    The correlation between near-infrared reflectance and forest canopy nitrogen concentration has been demonstrated at varying scales using a range of optical sensors on airborne and satellite platforms. Although the mechanism underpinning the relationship is unclear, at its basis are biologically-driven functional relationships of multiple plant traits that affect canopy chemistry and structure. The link between near-infrared reflectance and canopy nitrogen has been hypothesized to be partially driven by covariation of canopy nitrogen with canopy structure. In this study, we used a combination of airborne LiDAR data and field measured leaf and canopy chemical and structural traits to explore interrelationships between canopy nitrogen, near-infrared reflectance, and canopy structure on plots at Bartlett Experimental Forest in the White Mountain National Forest, New Hampshire. Over each plot, we developed a 1-meter resolution canopy height profile and a 1-meter resolution canopy height model. From canopy height profiles and canopy height models, we calculated a set of metrics describing the plot-level variability, breadth, depth, and arrangement of LiDAR returns. This combination of metrics was used to describe both vertical and horizontal variation in structure. In addition, we developed and measured several field-based metrics of leaf and canopy structure at the plot scale by directly measuring the canopy or by weighting leaf-level metrics by species leaf area contribution. We assessed relationships between leaf and structural metrics, near-infrared reflectance and canopy nitrogen concentration using multiple linear regression and mixed effects modeling. Consistent with our hypothesis, we found moderately strong links between both near-infrared reflectance and canopy nitrogen concentration with LiDAR-derived structural metrics, and we additionally found that leaf-level metrics scaled to the plot level share an important role in canopy reflectance. 
We suggest that canopy structure has a governing role in canopy reflectance, reducing maximum potential reflectance as structural complexity increases, and therefore also influences the relationship between canopy nitrogen and NIR reflectance.

  11. Test and Evaluation Metrics of Crew Decision-Making And Aircraft Attitude and Energy State Awareness

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Ellis, Kyle K. E.; Stephens, Chad L.

    2013-01-01

    NASA has established a technical challenge, under the Aviation Safety Program, Vehicle Systems Safety Technologies project, to improve crew decision-making and response in complex situations. The specific objective of this challenge is to develop data and technologies which may increase a pilot's (crew's) ability to avoid, detect, and recover from adverse events that could otherwise result in accidents/incidents. Within this technical challenge, a cooperative industry-government research program has been established to develop innovative flight deck-based countermeasures that can improve the crew's ability to avoid, detect, mitigate, and recover from unsafe loss of aircraft state awareness - specifically, the loss of attitude awareness (i.e., Spatial Disorientation, SD) or the loss of energy state awareness (LESA). A critical component of this research is to develop specific and quantifiable metrics which identify decision-making and the decision-making influences during simulation and flight testing. This paper reviews existing metrics and methods for SD testing and criteria for establishing visual dominance. The development of Crew State Monitoring technologies - eye tracking and other psychophysiological measures - is also discussed, as well as emerging new metrics for identifying channelized attention and excessive pilot workload, both of which have been shown to contribute to SD/LESA accidents or incidents.

  12. Optimal protocols for slowly driven quantum systems.

    PubMed

    Zulkowski, Patrick R; DeWeese, Michael R

    2015-09-01

    The design of efficient quantum information processing will rely on optimal nonequilibrium transitions of driven quantum systems. Building on a recently developed geometric framework for computing optimal protocols for classical systems driven in finite time, we construct a general framework for optimizing the average information entropy for driven quantum systems. Geodesics on the parameter manifold endowed with a positive semidefinite metric correspond to protocols that minimize the average information entropy production in finite time. We use this framework to explicitly compute the optimal entropy production for a simple two-state quantum system coupled to a heat bath of bosonic oscillators, which has applications to quantum annealing.

  13. Methodology to Calculate the ACE and HPQ Metrics Used in the Wave Energy Prize

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Driscoll, Frederick R; Weber, Jochem W; Jenne, Dale S

    The U.S. Department of Energy's Wave Energy Prize Competition encouraged the development of innovative deep-water wave energy conversion technologies that at least doubled device performance above the 2014 state of the art. Because levelized cost of energy (LCOE) metrics are challenging to apply equitably to new technologies where significant uncertainty exists in design and operation, the prize technical team developed a reduced metric as a proxy for LCOE, which provides an equitable comparison of low technology readiness level wave energy converter (WEC) concepts. The metric is called 'ACE', short for the ratio of the average climate capture width to the characteristic capital expenditure. The methodology and application of the ACE metric used to evaluate the performance of the technologies that competed in the Wave Energy Prize are explained in this report.
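    The ACE ratio itself is straightforward once its two inputs are estimated; the sketch below uses hypothetical numbers, and the meters-per-million-dollars unit convention is an assumption for illustration only:

```python
def ace(avg_climate_capture_width_m, characteristic_capex_usd):
    """ACE: average climate capture width (m) divided by the characteristic
    capital expenditure (here expressed per million USD)."""
    return avg_climate_capture_width_m / (characteristic_capex_usd / 1e6)

# Hypothetical WEC concept: 3.2 m average capture width, $2.1M characteristic CapEx
print(round(ace(3.2, 2.1e6), 3))  # → 1.524 (m per $M)
```

    The substance of the methodology lies in estimating the numerator (power capture averaged over a representative wave climate) and the denominator (a characteristic capital cost), which the report details.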

  14. Center to Advance Palliative Care palliative care clinical care and customer satisfaction metrics consensus recommendations.

    PubMed

    Weissman, David E; Morrison, R Sean; Meier, Diane E

    2010-02-01

    Data collection and analysis are vital for strategic planning, quality improvement, and demonstration of palliative care program impact to hospital administrators, private funders and policymakers. Since 2000, the Center to Advance Palliative Care (CAPC) has provided technical assistance to hospitals, health systems and hospices working to start, sustain, and grow nonhospice palliative care programs. CAPC convened a consensus panel in 2008 to develop recommendations for specific clinical and customer metrics that programs should track. The panel agreed on four key domains of clinical metrics and two domains of customer metrics. Clinical metrics include: daily assessment of physical/psychological/spiritual symptoms by a symptom assessment tool; establishment of patient-centered goals of care; support to patient/family caregivers; and management of transitions across care sites. For customer metrics, consensus was reached on two domains that should be tracked to assess satisfaction: patient/family satisfaction, and referring clinician satisfaction. In an effort to ensure access to reliably high-quality palliative care data throughout the nation, hospital palliative care programs are encouraged to collect and report outcomes for each of the metric domains described here.

  15. Environmentally Responsible Aviation (ERA) Project - N+2 Advanced Vehicle Concepts Study and Conceptual Design of Subscale Test Vehicle (STV) Final Report

    NASA Technical Reports Server (NTRS)

    Bonet, John T.; Schellenger, Harvey G.; Rawdon, Blaine K.; Elmer, Kevin R.; Wakayama, Sean R.; Brown, Derrell L.; Guo, Yueping

    2011-01-01

    NASA has set demanding goals for technology developments to meet national needs to improve fuel efficiency while improving the environment to enable air transportation growth. A figure shows NASA's subsonic transport system metrics. The results of the Boeing ERA N+2 Advanced Vehicle Concept Study show that the Blended Wing Body (BWB) vehicle with ultra-high-bypass propulsion systems has the potential to meet the combined NASA ERA N+2 goals. This study had three main activities: 1) development of an advanced vehicle concept that can meet the NASA system-level metrics; 2) identification of key enabling technologies and development of technology roadmaps and maturation plans; and 3) development of a subscale test vehicle that can demonstrate and mature the key enabling technologies needed to meet the NASA system-level metrics. Technology maturation plans are presented and include key performance parameters and technical performance measures. The plans describe the risks that will be reduced with technology development and the expected progression of technical maturity.

  16. Metrication and AIHA.

    PubMed

    Burnett, R D

    1977-05-01

    AIHA supports a planned orderly national program for conversion to the metric system and will cooperate with other technical societies and organizations in implementing this voluntary conversion. The Association will use the International System of Units (SI) as modified by the Secretary of Commerce for use in the United States in all official publications, papers and documents. U.S. customary units can be presented in parentheses following the appropriate SI unit, when it is necessary for clarity.

  17. A Single Conjunction Risk Assessment Metric: the F-Value

    NASA Technical Reports Server (NTRS)

    Frigm, Ryan Clayton; Newman, Lauri K.

    2009-01-01

    The Conjunction Assessment Team at NASA Goddard Space Flight Center provides conjunction risk assessment for many NASA robotic missions. These risk assessments are based on several figures of merit, such as miss distance, probability of collision, and orbit determination solution quality. However, these individual metrics do not singly capture the overall risk associated with a conjunction, making it difficult for someone without this complete understanding to take action, such as an avoidance maneuver. The goal of this analysis is to introduce a single risk index metric that can easily convey the level of risk without all of the technical details. The proposed index is called the conjunction "F-value." This paper presents the concept of the F-value and the tuning of the metric for use in routine Conjunction Assessment operations.

  18. Viewpoint matters: objective performance metrics for surgeon endoscope control during robot-assisted surgery.

    PubMed

    Jarc, Anthony M; Curet, Myriam J

    2017-03-01

    Effective visualization of the operative field is vital to surgical safety and education. However, additional metrics for visualization are needed to complement other common measures of surgeon proficiency, such as time or errors. Unlike other surgical modalities, robot-assisted minimally invasive surgery (RAMIS) enables data-driven feedback to trainees through measurement of camera adjustments. The purpose of this study was to validate and quantify the importance of novel camera metrics during RAMIS. New (n = 18), intermediate (n = 8), and experienced (n = 13) surgeons completed 25 virtual reality simulation exercises on the da Vinci Surgical System. Three camera metrics were computed for all exercises and compared to conventional efficiency measures. Both camera metrics and efficiency metrics showed construct validity (p < 0.05) across most exercises (camera movement frequency 23/25, camera movement duration 22/25, camera movement interval 19/25, overall score 24/25, completion time 25/25). Camera metrics differentiated new and experienced surgeons across all tasks as well as efficiency metrics. Finally, camera metrics significantly (p < 0.05) correlated with completion time (camera movement frequency 21/25, camera movement duration 21/25, camera movement interval 20/25) and overall score (camera movement frequency 20/25, camera movement duration 19/25, camera movement interval 20/25) for most exercises. We demonstrate construct validity of novel camera metrics and correlation between camera metrics and efficiency metrics across many simulation exercises. We believe camera metrics could be used to improve RAMIS proficiency-based curricula.
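
The three camera metrics named above (movement frequency, movement duration, and inter-movement interval) can be computed from timestamped camera-movement events. The sketch below assumes events are recorded as (start, end) intervals in seconds; this representation and the exact metric definitions are assumptions for illustration, not the da Vinci system's actual logging format.

```python
# Illustrative computation of camera-movement metrics from a list of
# (start_s, end_s) intervals recorded during one exercise. All three
# definitions here are plausible readings of the abstract, not the
# study's verified formulas.

def camera_metrics(events, exercise_duration_s):
    """Return frequency (moves/min), mean duration (s), mean interval (s)."""
    if not events:
        return {"frequency_per_min": 0.0,
                "mean_duration_s": 0.0,
                "mean_interval_s": exercise_duration_s}
    durations = [end - start for start, end in events]
    # Gap between the end of one camera move and the start of the next:
    intervals = [b[0] - a[1] for a, b in zip(events, events[1:])]
    return {
        "frequency_per_min": 60.0 * len(events) / exercise_duration_s,
        "mean_duration_s": sum(durations) / len(durations),
        "mean_interval_s": (sum(intervals) / len(intervals))
                           if intervals else exercise_duration_s,
    }

m = camera_metrics([(5.0, 7.0), (20.0, 21.0), (40.0, 43.0)], 120.0)
print(m["frequency_per_min"])  # 1.5
```

With metrics in this form, construct validity can be checked by comparing distributions across the new, intermediate, and experienced cohorts, as the study does.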

  19. Top 10 metrics for life science software good practices.

    PubMed

    Artaza, Haydee; Chue Hong, Neil; Corpas, Manuel; Corpuz, Angel; Hooft, Rob; Jimenez, Rafael C; Leskošek, Brane; Olivier, Brett G; Stourac, Jan; Svobodová Vařeková, Radka; Van Parys, Thomas; Vaughan, Daniel

    2016-01-01

    Metrics for assessing adoption of good development practices are a useful way to ensure that software is sustainable, reusable and functional. Sustainability means that the software used today will be available - and continue to be improved and supported - in the future. We report here an initial set of metrics that measure good practices in software development. This initiative differs from previously developed efforts in being a community-driven grassroots approach where experts from different organisations propose good software practices that have reasonable potential to be adopted by the communities they represent. We not only focus our efforts on understanding and prioritising good practices, we assess their feasibility for implementation and publish them here.

  20. Top 10 metrics for life science software good practices

    PubMed Central

    2016-01-01

    Metrics for assessing adoption of good development practices are a useful way to ensure that software is sustainable, reusable and functional. Sustainability means that the software used today will be available - and continue to be improved and supported - in the future. We report here an initial set of metrics that measure good practices in software development. This initiative differs from previously developed efforts in being a community-driven grassroots approach where experts from different organisations propose good software practices that have reasonable potential to be adopted by the communities they represent. We not only focus our efforts on understanding and prioritising good practices, we assess their feasibility for implementation and publish them here. PMID:27635232

  1. Compressing Test and Evaluation by Using Flow Data for Scalable Network Traffic Analysis

    DTIC Science & Technology

    2014-10-01

    test events, quality of service and other key metrics of military systems and networks are evaluated. Network data captured in standard flow formats...mentioned here. The Ozone Widget Framework (Next Century, n.d.) has proven to be very useful. Also, an extensive, clean, and optimized JavaScript ...library for visualizing many types of data can be found in D3–Data Driven Documents (Bostock, 2013). Quality of Service from Flow Two essential metrics of

  2. Construct validity of individual and summary performance metrics associated with a computer-based laparoscopic simulator.

    PubMed

    Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason

    2014-06-01

    Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.
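
The abstract does not reproduce the LapVR summary equations. One common way such task-level summary scores are built is to z-score each validated individual metric against a reference cohort, flip the sign of metrics where lower is better (e.g., time, path length), and average; the sketch below follows that pattern with invented normalization constants and is not the study's actual equation.

```python
import statistics

# Hypothetical task summary score: average of direction-corrected
# z-scores over the validated individual metrics. Norms (mean, sd) and
# example values are illustrative assumptions.

def summary_score(values, norms, lower_is_better):
    """values: one trial's metric values; norms: (mean, sd) per metric."""
    zs = []
    for v, (mean, sd), flip in zip(values, norms, lower_is_better):
        z = (v - mean) / sd
        zs.append(-z if flip else z)  # higher summary = better performance
    return statistics.mean(zs)

# e.g. completion time (s) and motion path length (cm), both
# lower-is-better, normalized against a novice cohort:
norms = [(120.0, 30.0), (400.0, 100.0)]
print(summary_score([90.0, 300.0], norms, [True, True]))  # 1.0
```

A single score per task makes it straightforward to test whether the summary discriminates between novices, intermediates, and experts, which is the validation the authors report.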

  3. Evaluation of an Integrated Framework for Biodiversity with a New Metric for Functional Dispersion

    PubMed Central

    Presley, Steven J.; Scheiner, Samuel M.; Willig, Michael R.

    2014-01-01

    Growing interest in understanding ecological patterns from phylogenetic and functional perspectives has driven the development of metrics that capture variation in evolutionary histories or ecological functions of species. Recently, an integrated framework based on Hill numbers was developed that measures three dimensions of biodiversity based on abundance, phylogeny and function of species. This framework is highly flexible, allowing comparison of those diversity dimensions, including different aspects of a single dimension and their integration into a single measure. The behavior of those metrics with regard to variation in data structure has not been explored in detail, yet is critical for ensuring an appropriate match between the concept and its measurement. We evaluated how each metric responds to particular data structures and developed a new metric for functional biodiversity. The phylogenetic metric is sensitive to variation in the topology of phylogenetic trees, including variation in the relative lengths of basal, internal and terminal branches. In contrast, the functional metric exhibited multiple shortcomings: (1) species that are functionally redundant contribute nothing to functional diversity and (2) a single highly distinct species causes functional diversity to approach the minimum possible value. We introduced an alternative, improved metric based on functional dispersion that solves both of these problems. In addition, the new metric exhibited more desirable behavior when based on multiple traits. PMID:25148103

  4. Evaluation of Investments in Science, Technology and Innovation: Applying Scientific and Technical Human Capital Framework for Assessment of Doctoral Students in Cooperative Research Centers

    ERIC Educational Resources Information Center

    Leonchuk, Olena

    2016-01-01

    This dissertation builds on an alternative framework for evaluation of science, technology and innovation (STI) outcomes--the scientific & technical (S&T) human capital which was developed by Bozeman, Dietz and Gaughan (2001). At its core, this framework looks beyond simple economic and publication metrics and instead focuses on…

  5. Accurate evaluation and analysis of functional genomics data and methods

    PubMed Central

    Greene, Casey S.; Troyanskaya, Olga G.

    2016-01-01

    The development of technology capable of inexpensively performing large-scale measurements of biological systems has generated a wealth of data. Integrative analysis of these data holds the promise of uncovering gene function, regulation, and, in the longer run, understanding complex disease. However, their analysis has proved very challenging, as it is difficult to quickly and effectively assess the relevance and accuracy of these data for individual biological questions. Here, we identify biases that present challenges for the assessment of functional genomics data and methods. We then discuss evaluation methods that, taken together, begin to address these issues. We also argue that the funding of systematic data-driven experiments and of high-quality curation efforts will further improve evaluation metrics so that they more-accurately assess functional genomics data and methods. Such metrics will allow researchers in the field of functional genomics to continue to answer important biological questions in a data-driven manner. PMID:22268703

  6. The Steinberg-Bernstein Centre for Minimally Invasive Surgery at McGill University.

    PubMed

    Fried, Gerald M

    2005-12-01

    Surgical skills and simulation centers have been developed in recent years to meet the educational needs of practicing surgeons, residents, and students. The rapid pace of innovation in surgical procedures and technology, as well as the overarching desire to enhance patient safety, have driven the development of simulation technology and new paradigms for surgical education. McGill University has implemented an innovative approach to surgical education in the field of minimally invasive surgery. The goal is to measure surgical performance in the operating room using practical, reliable, and valid metrics, which allow the educational needs of the learner to be established and enable feedback and performance to be tracked over time. The GOALS system and the MISTELS program have been developed to measure operative performance and minimally invasive surgical technical skills in the inanimate skills lab, respectively. The MISTELS laparoscopic simulation-training program has been incorporated as the manual skills education and evaluation component of the Fundamentals of Laparoscopic Surgery program distributed by the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES) and the American College of Surgeons.

  7. Tower cab metrics.

    DOT National Transportation Integrated Search

    2001-01-01

    This report is part of a continuing effort to develop human factors measures for different operational environments in the Federal Aviation Administration Air Traffic Control (ATC) system. Previous research at the William J. Hughes Technical Center R...

  8. Parametric Cost Analysis: A Design Function

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1989-01-01

    Parametric cost analysis uses equations to map measurable system attributes into cost. The measures of the system attributes are called metrics. The equations are called cost estimating relationships (CER's), and are obtained by the analysis of cost and technical metric data of products analogous to those to be estimated. Examples of system metrics include mass, power, failure_rate, mean_time_to_repair, energy_consumed, payload_to_orbit, pointing_accuracy, manufacturing_complexity, number_of_fasteners, and percent_of_electronics_weight. The basic assumption is that a measurable relationship exists between system attributes and the cost of the system. If a function exists, the attributes are cost drivers. Candidates for metrics include system requirement metrics and engineering process metrics. Requirements are constraints on the engineering process. From optimization theory we know that any active constraint generates cost by not permitting full optimization of the objective. Thus, requirements are cost drivers. Engineering processes reflect a projection of the requirements onto the corporate culture, engineering technology, and system technology. Engineering processes are an indirect measure of the requirements and, hence, are cost drivers.
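
The CER idea above can be made concrete with a small sketch: fitting a power-law relationship cost = a * metric^b to analogous-product data by ordinary least squares in log-log space. The power-law form is a common CER choice, and the data points below are invented for illustration.

```python
import math

# Fit a power-law CER, cost = a * metric^b, by linear least squares on
# log-transformed data. Analogous-product data here are hypothetical.

def fit_cer(metrics, costs):
    xs = [math.log(m) for m in metrics]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical analogous products: (mass in kg, cost in $M)
masses = [100.0, 200.0, 400.0, 800.0]
costs = [10.0, 17.0, 29.0, 50.0]
a, b = fit_cer(masses, costs)
estimate = a * 300.0 ** b  # cost estimate for a new 300 kg system
```

An exponent b < 1, as in this example, captures the economy of scale often seen when cost grows more slowly than the driving metric.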

  9. A Survey of Health Management User Objectives Related to Diagnostic and Prognostic Metrics

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Kurtoglu, Tolga; Poll, Scott D.

    2010-01-01

    One of the most prominent technical challenges to effective deployment of health management systems is the vast difference in user objectives with respect to engineering development. In this paper, a detailed survey on the objectives of different users of health management systems is presented. These user objectives are then mapped to the metrics typically encountered in the development and testing of two main systems health management functions: diagnosis and prognosis. Using this mapping, the gaps between user goals and the metrics associated with diagnostics and prognostics are identified and presented with a collection of lessons learned from previous studies that include both industrial and military aerospace applications.

  10. Improving agreement between static method and dynamic formula for driven cast-in-place piles : [technical brief].

    DOT National Transportation Integrated Search

    2013-08-01

    Many transportation facility structures in Wisconsin are founded on driven round, closed-end, steel, pipe piles. The piles are driven to capacity and then filled with concrete. The Wisconsin Department of Transportation (WisDOT) has designed and driv...

  11. Search Pathways: Modeling GeoData Search Behavior to Support Usable Application Development

    NASA Astrophysics Data System (ADS)

    Yarmey, L.; Rosati, A.; Tressel, S.

    2014-12-01

    Recent technical advances have enabled development of new scientific data discovery systems. Metadata brokering, linked data, and other mechanisms allow users to discover scientific data of interest across growing volumes of heterogeneous content. Matching this complex content with existing discovery technologies, people looking for scientific data are presented with an ever-growing array of features to sort, filter, subset, and scan through search returns to help them find what they are looking for. This paper examines the applicability of available technologies in connecting searchers with the data of interest. What metrics can be used to track success given shifting baselines of content and technology? How well do existing technologies map to steps in user search patterns? Taking a user-driven development approach, the team behind the Arctic Data Explorer interdisciplinary data discovery application invested heavily in usability testing and user search behavior analysis. Building on earlier library community search behavior work, models were developed to better define the diverse set of thought processes and steps users took to find data of interest, here called 'search pathways'. This research builds a deeper understanding of the user community that seeks to reuse scientific data. This approach ensures that development decisions are driven by clearly articulated user needs instead of ad hoc technology trends. Initial results from this research will be presented along with lessons learned for other discovery platform development and future directions for informatics research into search pathways.

  12. Environmental, Economic, and Scalability Considerations and Trends of Selected Fuel Economy-Enhancing Biomass-Derived Blendstocks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunn, Jennifer B.; Biddy, Mary; Jones, Susanne

    Twenty-four biomass-derived compounds and mixtures, identified based on their physical properties, which could be blended into fuels to improve spark ignition engine fuel economy, were assessed for their economic, technology readiness, and environmental viability. These bio-blendstocks were modeled to be produced biochemically, thermochemically, or through hybrid processes. To carry out the assessment, 17 metrics were developed for which each bio-blendstock was determined to be favorable, neutral, or unfavorable. Cellulosic ethanol was included as a reference case. Overall economic and, to some extent, environmental viability is driven by projected yields for each of these processes. The metrics used in this analysis methodology highlight the near-term potential to achieve these targeted yield estimates when considering data quality and current technical readiness for these conversion strategies. Key knowledge gaps included the degree of purity needed for use as a bio-blendstock. Less stringent purification requirements for fuels could cut processing costs and environmental impacts. Additionally, more information is needed on the blending behavior of many of these bio-blendstocks with gasoline to support the technology readiness evaluation. Altogether, the technology to produce many of these blendstocks from biomass is emerging, and as it matures, these assessments must be revisited. Importantly, considering economic, environmental, and technology readiness factors, in addition to physical properties of blendstocks that could be used to boost engine efficiency and fuel economy, in the early stages of project research and development can help spotlight those most likely to be viable in the near term.

  13. Environmental, Economic, and Scalability Considerations and Trends of Selected Fuel Economy-Enhancing Biomass-Derived Blendstocks

    DOE PAGES

    Dunn, Jennifer B.; Biddy, Mary; Jones, Susanne; ...

    2017-10-30

    Twenty-four biomass-derived compounds and mixtures, identified based on their physical properties, which could be blended into fuels to improve spark ignition engine fuel economy, were assessed for their economic, technology readiness, and environmental viability. These bio-blendstocks were modeled to be produced biochemically, thermochemically, or through hybrid processes. To carry out the assessment, 17 metrics were developed for which each bio-blendstock was determined to be favorable, neutral, or unfavorable. Cellulosic ethanol was included as a reference case. Overall economic and, to some extent, environmental viability is driven by projected yields for each of these processes. The metrics used in this analysis methodology highlight the near-term potential to achieve these targeted yield estimates when considering data quality and current technical readiness for these conversion strategies. Key knowledge gaps included the degree of purity needed for use as a bio-blendstock. Less stringent purification requirements for fuels could cut processing costs and environmental impacts. Additionally, more information is needed on the blending behavior of many of these bio-blendstocks with gasoline to support the technology readiness evaluation. Altogether, the technology to produce many of these blendstocks from biomass is emerging, and as it matures, these assessments must be revisited. Importantly, considering economic, environmental, and technology readiness factors, in addition to physical properties of blendstocks that could be used to boost engine efficiency and fuel economy, in the early stages of project research and development can help spotlight those most likely to be viable in the near term.

  14. Application-Driven No-Reference Quality Assessment for Dermoscopy Images With Multiple Distortions.

    PubMed

    Xie, Fengying; Lu, Yanan; Bovik, Alan C; Jiang, Zhiguo; Meng, Rusong

    2016-06-01

    Dermoscopy images often suffer from blur and uneven illumination distortions that occur during acquisition, which can adversely influence consequent automatic image analysis results on potential lesion objects. The purpose of this paper is to deploy an algorithm that can automatically assess the quality of dermoscopy images. Such an algorithm could be used to direct image recapture or correction. We describe an application-driven no-reference image quality assessment (IQA) model for dermoscopy images affected by possibly multiple distortions. For this purpose, we created a multiple distortion dataset of dermoscopy images impaired by varying degrees of blur and uneven illumination. The basis of this model is two single distortion IQA metrics that are sensitive to blur and uneven illumination, respectively. The outputs of these two metrics are combined to predict the quality of multiply distorted dermoscopy images using a fuzzy neural network. Unlike traditional IQA algorithms, which use human subjective score as ground truth, here ground truth is driven by the application, and generated according to the degree of influence of the distortions on lesion analysis. The experimental results reveal that the proposed model delivers accurate and stable quality prediction results for dermoscopy images impaired by multiple distortions. The proposed model is effective for quality assessment of multiple distorted dermoscopy images. An application-driven concept for IQA is introduced, and at the same time, a solution framework for the IQA of multiple distortions is proposed.

  15. Information risk and security modeling

    NASA Astrophysics Data System (ADS)

    Zivic, Predrag

    2005-03-01

    This research paper presentation will feature current frameworks for addressing risk and security modeling and metrics. The paper will analyze technical-level risk and security metrics of Common Criteria/ISO15408, Centre for Internet Security guidelines, NSA configuration guidelines, and the metrics used at this level. The view of IT operational standards on security metrics, such as GMITS/ISO13335 and ITIL/ITMS, and architectural guidelines such as ISO7498-2, will be explained. Business process level standards such as ISO17799, COSO and CobiT will be presented with their control approach to security metrics. At the top level, maturity standards such as SSE-CMM/ISO21827, NSA Infosec Assessment and CobiT will be explored and reviewed. For each defined level of security metrics, the research presentation will explore the appropriate usage of these standards. The paper will discuss standards approaches to conducting risk and security metrics. The research findings will demonstrate the need for a common baseline for both risk and security metrics. This paper will show the relation between the attribute-based common baseline and corporate assets and controls for risk and security metrics. It will be shown that such an approach spans all of the mentioned standards. The proposed approach's 3D visual presentation and the development of the Information Security Model will be analyzed and postulated. The presentation will clearly demonstrate the benefits of the proposed attribute-based approach and a defined risk and security space for modeling and measuring.

  16. Analytical procedures for determining the impacts of reliability mitigation strategies. [supporting datasets

    DOT National Transportation Integrated Search

    2012-11-30

    The objective of this project was to develop technical relationships between reliability improvement strategies and reliability performance metrics. This project defined reliability, explained the importance of travel time distributions for measuring...

  17. "They may be pixels, but they're MY pixels:" developing a metric of character attachment in role-playing video games.

    PubMed

    Lewis, Melissa L; Weber, René; Bowman, Nicholas David

    2008-08-01

    This paper proposes a new and reliable metric for measuring character attachment (CA), the connection felt by a video game player toward a video game character. Results of construct validity analyses indicate that the proposed CA scale has a significant relationship with self-esteem, addiction, game enjoyment, and time spent playing games; all of these relationships are predicted by theory. Additionally, CA levels for role-playing games differ significantly from CA levels of other character-driven games.

  18. A direct-gradient multivariate index of biotic condition

    USGS Publications Warehouse

    Miranda, Leandro E.; Aycock, J.N.; Killgore, K. J.

    2012-01-01

    Multimetric indexes constructed by summing metric scores have been criticized despite many of their merits. A leading criticism is the potential for investigator bias involved in metric selection and scoring. Often there is a large number of competing metrics equally well correlated with environmental stressors, requiring a judgment call by the investigator to select the most suitable metrics to include in the index and how to score them. Data-driven procedures for multimetric index formulation published during the last decade have reduced this limitation, yet apprehension remains. Multivariate approaches that select metrics with statistical algorithms may reduce the level of investigator bias and alleviate a weakness of multimetric indexes. We investigated the suitability of a direct-gradient multivariate procedure to derive an index of biotic condition for fish assemblages in oxbow lakes in the Lower Mississippi Alluvial Valley. Although this multivariate procedure also requires that the investigator identify a set of suitable metrics potentially associated with a set of environmental stressors, it is different from multimetric procedures because it limits investigator judgment in selecting a subset of biotic metrics to include in the index and because it produces metric weights suitable for computation of index scores. The procedure, applied to a sample of 35 competing biotic metrics measured at 50 oxbow lakes distributed over a wide geographical region in the Lower Mississippi Alluvial Valley, selected 11 metrics that adequately indexed the biotic condition of five test lakes. Because the multivariate index includes only metrics that explain the maximum variability in the stressor variables rather than a balanced set of metrics chosen to reflect various fish assemblage attributes, it is fundamentally different from multimetric indexes of biotic integrity with advantages and disadvantages. As such, it provides an alternative to multimetric procedures.

  19. Development and validation of trauma surgical skills metrics: Preliminary assessment of performance after training.

    PubMed

    Shackelford, Stacy; Garofalo, Evan; Shalin, Valerie; Pugh, Kristy; Chen, Hegang; Pasley, Jason; Sarani, Babak; Henry, Sharon; Bowyer, Mark; Mackenzie, Colin F

    2015-07-01

    Maintaining trauma-specific surgical skills is an ongoing challenge for surgical training programs. An objective assessment of surgical skills is needed. We hypothesized that a validated surgical performance assessment tool could detect differences following a training intervention. We developed surgical performance assessment metrics based on discussion with expert trauma surgeons, video review of 10 experts and 10 novice surgeons performing three vascular exposure procedures and lower extremity fasciotomy on cadavers, and validated the metrics with interrater reliability testing by five reviewers blinded to level of expertise and a consensus conference. We tested these performance metrics in 12 surgical residents (Year 3-7) before and 2 weeks after vascular exposure skills training in the Advanced Surgical Skills for Exposure in Trauma (ASSET) course. Performance was assessed in three areas as follows: knowledge (anatomic, management), procedure steps, and technical skills. Time to completion of procedures was recorded, and these metrics were combined into a single performance score, the Trauma Readiness Index (TRI). Wilcoxon matched-pairs signed-ranks test compared pretraining/posttraining effects. Mean time to complete procedures decreased by 4.3 minutes (from 13.4 minutes to 9.1 minutes). The performance component most improved by the 1-day skills training was procedure steps, completion of which increased by 21%. Technical skill scores improved by 12%. Overall knowledge improved by 3%, with 18% improvement in anatomic knowledge. TRI increased significantly from 50% to 64% with ASSET training. Interrater reliability of the surgical performance assessment metrics was validated with single intraclass correlation coefficient of 0.7 to 0.98. A trauma-relevant surgical performance assessment detected improvements in specific procedure steps and anatomic knowledge taught during a 1-day course, quantified by the TRI. 
ASSET training reduced time to complete vascular control by one third. Future applications include assessing specific skills in a larger surgeon cohort, assessing military surgical readiness, and quantifying skill degradation with time since training.
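The abstract describes combining knowledge, procedure-step, technical-skill, and time scores into a single Trauma Readiness Index (TRI), but does not give the weighting. The sketch below is a hypothetical equal-weight composite; the function name `trauma_readiness_index`, the time normalization, and all component values are illustrative assumptions, not the published scoring rule.

```python
# Hypothetical sketch of a composite readiness score in the spirit of the
# Trauma Readiness Index (TRI). The actual TRI weighting is not described
# in the abstract; equal weighting and the linear time normalization below
# are illustrative assumptions.

def trauma_readiness_index(knowledge, steps, technical, time_min,
                           time_best=5.0, time_worst=30.0):
    """Combine component scores (each 0..1) and procedure time into one 0..1 score."""
    # Map completion time onto 0..1: faster than time_best -> 1, slower than time_worst -> 0.
    time_score = (time_worst - time_min) / (time_worst - time_best)
    time_score = max(0.0, min(1.0, time_score))
    components = [knowledge, steps, technical, time_score]
    return sum(components) / len(components)

# Pre- vs post-training comparison with made-up component scores, mirroring
# only the direction of change reported in the study.
pre = trauma_readiness_index(knowledge=0.55, steps=0.50, technical=0.55, time_min=13.4)
post = trauma_readiness_index(knowledge=0.58, steps=0.71, technical=0.67, time_min=9.1)
print(round(pre, 2), round(post, 2))
```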

  20. A review of training research and virtual reality simulators for the da Vinci surgical system.

    PubMed

    Liu, May; Curet, Myriam

    2015-01-01

    PHENOMENON: Virtual reality simulators are the subject of several recent studies of skills training for robot-assisted surgery. Yet no consensus exists regarding what a core skill set comprises or how to measure skill performance. Defining a core skill set and relevant metrics would help surgical educators evaluate different simulators. This review draws from published research to propose a core technical skill set for using the da Vinci surgeon console. Publications on three commercial simulators were used to evaluate the simulators' content addressing these skills and associated metrics. An analysis of published research suggests that a core technical skill set for operating the surgeon console includes bimanual wristed manipulation, camera control, master clutching to manage hand position, use of third instrument arm, activating energy sources, appropriate depth perception, and awareness of forces applied by instruments. Validity studies of three commercial virtual reality simulators for robot-assisted surgery suggest that all three have comparable content and metrics. However, none have comprehensive content and metrics for all core skills. INSIGHTS: Virtual reality simulation remains a promising tool to support skill training for robot-assisted surgery, yet existing commercial simulator content is inadequate for performing and assessing a comprehensive basic skill set. The results of this evaluation help identify opportunities and challenges that exist for future developments in virtual reality simulation for robot-assisted surgery. Specifically, the inclusion of educational experts in the development cycle alongside clinical and technological experts is recommended.

  1. Intravascular US-Guided Portal Vein Access: Improved Procedural Metrics during TIPS Creation.

    PubMed

    Gipson, Matthew G; Smith, Mitchell T; Durham, Janette D; Brown, Anthony; Johnson, Thor; Ray, Charles E; Gupta, Rajan K; Kondo, Kimi L; Rochon, Paul J; Ryu, Robert K

    2016-08-01

    To evaluate transjugular intrahepatic portosystemic shunt (TIPS) outcomes and procedure metrics with the use of three different image guidance techniques for portal vein (PV) access during TIPS creation. A retrospective review of consecutive patients who underwent TIPS procedures for a range of indications during a 28-month study period identified a population of 68 patients. This was stratified by PV access techniques: fluoroscopic guidance with or without portography (n = 26), PV marker wire guidance (n = 18), or intravascular ultrasound (US) guidance (n = 24). Procedural outcomes and procedural metrics, including radiation exposure, contrast agent volume used, procedure duration, and PV access time, were analyzed. No differences in demographic or procedural characteristics were found among the three groups. Technical success, technical success of the primary planned approach, hemodynamic success, portosystemic gradient, and procedure-related complications were not significantly different among groups. Fluoroscopy time (P = .003), air kerma (P = .01), contrast agent volume (P = .003), and total procedural time (P = .02) were reduced with intravascular US guidance compared with fluoroscopic guidance. Fluoroscopy time (P = .01) and contrast agent volume (P = .02) were reduced with intravascular US guidance compared with marker wire guidance. Intravascular US guidance of PV access during TIPS creation not only facilitates successful TIPS creation in patients with challenging anatomy, as suggested by previous investigations, but also reduces important procedure metrics including radiation exposure, contrast agent volume, and overall procedure duration compared with fluoroscopically guided TIPS creation. Copyright © 2016 SIR. Published by Elsevier Inc. All rights reserved.

  2. The Consortium for Plant Biotechnology Research, Inc. Semi-Annual Technical Report for April 1, 2000 - September 30, 2000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2000-10-02

    Scientific progress reports submitted by university researchers conducting projects funded through CPBR and metrics reports submitted by industry sponsors that provided matching funds to the projects.

  3. Patient-driven hand hygiene audit process at a regional cancer center.

    PubMed

    Bow, E J; Bourrier, V; Trudel, J; Kostiuk, N; McLeod, J M

    2018-01-01

    A patient-driven hand hygiene compliance audit strategy was piloted in a Canadian provincial cancer agency during routine provision of cancer outpatient care by health care providers (physicians, nurses, and health care aides) under conditions where the deployment of an independent external auditor was not feasible. The results of the audit suggest the feasibility of this approach as a routine institutional performance metric. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  4. Measures and Metrics for Feasibility of Proof-of-Concept Studies With Human Immunodeficiency Virus Rapid Point-of-Care Technologies: The Evidence and the Framework.

    PubMed

    Pant Pai, Nitika; Chiavegatti, Tiago; Vijh, Rohit; Karatzas, Nicolaos; Daher, Jana; Smallwood, Megan; Wong, Tom; Engel, Nora

    2017-12-01

    Pilot (feasibility) studies form a vast majority of diagnostic studies with point-of-care technologies but often lack use of clear measures/metrics and a consistent framework for reporting and evaluation. To fill this gap, we systematically reviewed data to ( a ) catalog feasibility measures/metrics and ( b ) propose a framework. For the period January 2000 to March 2014, 2 reviewers searched 4 databases (MEDLINE, EMBASE, CINAHL, Scopus), retrieved 1441 citations, and abstracted data from 81 studies. We observed 2 major categories of measures, that is, implementation centered and patient centered, and 4 subcategories of measures, that is, feasibility, acceptability, preference, and patient experience. We defined and delineated metrics and measures for a feasibility framework. We documented impact measures for a comparison. We observed heterogeneity in reporting of metrics as well as misclassification and misuse of metrics within measures. Although we observed poorly defined measures and metrics for feasibility, preference, and patient experience, in contrast, acceptability measure was the best defined. For example, within feasibility, metrics such as consent, completion, new infection, linkage rates, and turnaround times were misclassified and reported. Similarly, patient experience was variously reported as test convenience, comfort, pain, and/or satisfaction. In contrast, within impact measures, all the metrics were well documented, thus serving as a good baseline comparator. With our framework, we classified, delineated, and defined quantitative measures and metrics for feasibility. Our framework, with its defined measures/metrics, could reduce misclassification and improve the overall quality of reporting for monitoring and evaluation of rapid point-of-care technology strategies and their context-driven optimization.

  5. Measures and Metrics for Feasibility of Proof-of-Concept Studies With Human Immunodeficiency Virus Rapid Point-of-Care Technologies

    PubMed Central

    Pant Pai, Nitika; Chiavegatti, Tiago; Vijh, Rohit; Karatzas, Nicolaos; Daher, Jana; Smallwood, Megan; Wong, Tom; Engel, Nora

    2017-01-01

    Objective Pilot (feasibility) studies form a vast majority of diagnostic studies with point-of-care technologies but often lack use of clear measures/metrics and a consistent framework for reporting and evaluation. To fill this gap, we systematically reviewed data to (a) catalog feasibility measures/metrics and (b) propose a framework. Methods For the period January 2000 to March 2014, 2 reviewers searched 4 databases (MEDLINE, EMBASE, CINAHL, Scopus), retrieved 1441 citations, and abstracted data from 81 studies. We observed 2 major categories of measures, that is, implementation centered and patient centered, and 4 subcategories of measures, that is, feasibility, acceptability, preference, and patient experience. We defined and delineated metrics and measures for a feasibility framework. We documented impact measures for a comparison. Findings We observed heterogeneity in reporting of metrics as well as misclassification and misuse of metrics within measures. Although we observed poorly defined measures and metrics for feasibility, preference, and patient experience, in contrast, acceptability measure was the best defined. For example, within feasibility, metrics such as consent, completion, new infection, linkage rates, and turnaround times were misclassified and reported. Similarly, patient experience was variously reported as test convenience, comfort, pain, and/or satisfaction. In contrast, within impact measures, all the metrics were well documented, thus serving as a good baseline comparator. With our framework, we classified, delineated, and defined quantitative measures and metrics for feasibility. Conclusions Our framework, with its defined measures/metrics, could reduce misclassification and improve the overall quality of reporting for monitoring and evaluation of rapid point-of-care technology strategies and their context-driven optimization. PMID:29333105

  6. Technical Interchange Meeting Guidelines Breakout

    NASA Technical Reports Server (NTRS)

    Fong, Rob

    2002-01-01

    Along with concept developers, the Systems Evaluation and Assessment (SEA) sub-element of VAMS will develop those scenarios and metrics required for testing the new concepts that reside within the System-Level Integrated Concepts (SLIC) sub-element in the VAMS project. These concepts will come from the NRA process, space act agreements, a university group, and other NASA researchers. The emphasis of those concepts is to increase capacity while at least maintaining the current safety level. The concept providers will initially develop their own scenarios and metrics for self-evaluation. In about a year, the SEA sub-element will become responsible for conducting initial evaluations of the concepts using a common scenario and metric set. This set may derive many components from the scenarios and metrics used by the concept providers. Ultimately, the common scenario/metric set will be used to help determine the most feasible and beneficial concepts. A set of 15 questions and issues, discussed below, pertaining to the scenario and metric set, and its use for assessing concepts, was submitted by the SEA sub-element for consideration during the breakout session. The questions were divided among the three breakout groups. Each breakout group deliberated on its set of questions and provided a report on its discussion.


  7. Learning Computing Topics in Undergraduate Information Systems Courses: Managing Perceived Difficulty

    ERIC Educational Resources Information Center

    Wall, Jeffrey D.; Knapp, Janice

    2014-01-01

    Learning technical computing skills is increasingly important in our technology driven society. However, learning technical skills in information systems (IS) courses can be difficult. More than 20 percent of students in some technical courses may dropout or fail. Unfortunately, little is known about students' perceptions of the difficulty of…

  8. The Interdisciplinarity of Collaborations in Cognitive Science.

    PubMed

    Bergmann, Till; Dale, Rick; Sattari, Negin; Heit, Evan; Bhat, Harish S

    2017-07-01

    We introduce a new metric for interdisciplinarity, based on co-author publication history. A published article that has co-authors with quite different publication histories can be deemed relatively "interdisciplinary," in that the article reflects a convergence of previous research in distinct sets of publication outlets. In recent work, we have shown that this interdisciplinarity metric can predict citations. Here, we show that the journal Cognitive Science tends to contain collaborations that are relatively high on this interdisciplinarity metric, at about the 80th percentile of all journals across both social and natural sciences. Following on Goldstone and Leydesdorff (2006), we describe how scientometric tools provide a valuable means of assessing the role of cognitive science in broader scientific work, and also as a tool to investigate teamwork and distributed cognition. We describe how data-driven metrics of this kind may facilitate this exploration without relying upon rapidly changing discipline and topic keywords associated with publications. Copyright © 2016 Cognitive Science Society, Inc.
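The abstract defines the metric only as being "based on co-author publication history." The exact published formula is not given here, so the sketch below is one plausible reading: represent each co-author as a vector of past publication-venue counts and score an article by the mean pairwise cosine distance among its authors' venue distributions. The function names and example venue labels are invented for illustration.

```python
# Hypothetical sketch of a co-author-history interdisciplinarity score.
# Each author is represented by counts of past publication venues; an
# article is scored by the mean pairwise cosine distance between its
# co-authors' venue distributions (0 = identical histories, 1 = disjoint).
import math
from itertools import combinations

def cosine_distance(a, b):
    venues = set(a) | set(b)
    dot = sum(a.get(v, 0) * b.get(v, 0) for v in venues)
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return 1.0 - dot / (na * nb)

def interdisciplinarity(author_histories):
    """Mean pairwise cosine distance over all co-author pairs."""
    pairs = list(combinations(author_histories, 2))
    return sum(cosine_distance(a, b) for a, b in pairs) / len(pairs)

# Two authors from the same venue pool vs. two from disjoint pools.
same_field = [{"CogSci": 5, "PsychRev": 2}, {"CogSci": 3, "PsychRev": 4}]
cross_field = [{"CogSci": 5}, {"PhysRevE": 6}]
print(interdisciplinarity(same_field) < interdisciplinarity(cross_field))
```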

  9. SI (Metric) handbook

    NASA Technical Reports Server (NTRS)

    Artusa, Elisa A.

    1994-01-01

    This guide provides information for an understanding of SI units, symbols, and prefixes; style and usage in documentation in both the US and in the international business community; conversion techniques; limits, fits, and tolerance data; and drawing and technical writing guidelines. Also provided is information of SI usage for specialized applications like data processing and computer programming, science, engineering, and construction. Related information in the appendixes include legislative documents, historical and biographical data, a list of metric documentation, rules for determining significant digits and rounding, conversion factors, shorthand notation, and a unit index.

  10. Integrated Resilient Aircraft Control Project Full Scale Flight Validation

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.

    2009-01-01

    Objective: Provide validation of adaptive control law concepts through full scale flight evaluation. Technical Approach: a) Engage failure mode - destabilizing or frozen surface. b) Perform formation flight and air-to-air tracking tasks. Evaluate adaptive algorithm: a) Stability metrics. b) Model following metrics. Full scale flight testing provides an ability to validate different adaptive flight control approaches. Full scale flight testing adds credence to NASA's research efforts. A sustained research effort is required to remove the road blocks and provide adaptive control as a viable design solution for increased aircraft resilience.

  11. A Case Study: Analyzing City Vitality with Four Pillars of Activity-Live, Work, Shop, and Play.

    PubMed

    Griffin, Matt; Nordstrom, Blake W; Scholes, Jon; Joncas, Kate; Gordon, Patrick; Krivenko, Elliott; Haynes, Winston; Higdon, Roger; Stewart, Elizabeth; Kolker, Natali; Montague, Elizabeth; Kolker, Eugene

    2016-03-01

    This case study evaluates and tracks vitality of a city (Seattle), based on a data-driven approach, using strategic, robust, and sustainable metrics. This case study was collaboratively conducted by the Downtown Seattle Association (DSA) and CDO Analytics teams. The DSA is a nonprofit organization focused on making the city of Seattle and its Downtown a healthy and vibrant place to Live, Work, Shop, and Play. DSA primarily operates through public policy advocacy, community and business development, and marketing. In 2010, the organization turned to CDO Analytics ( cdoanalytics.org ) to develop a process that can guide and strategically focus DSA efforts and resources for maximal benefit to the city of Seattle and its Downtown. CDO Analytics was asked to develop clear, easily understood, and robust metrics for a baseline evaluation of the health of the city, as well as for ongoing monitoring and comparisons of the vitality, sustainability, and growth. The DSA and CDO Analytics teams strategized on how to effectively assess and track the vitality of Seattle and its Downtown. The two teams filtered a variety of data sources, and evaluated the veracity of multiple diverse metrics. This iterative process resulted in the development of a small number of strategic, simple, reliable, and sustainable metrics across four pillars of activity: Live, Work, Shop, and Play. Data during the 5 years before 2010 were used for the development of the metrics and model and its training, and data during the 5 years from 2010 and on were used for testing and validation. This work enabled DSA to routinely track these strategic metrics, use them to monitor the vitality of Downtown Seattle, prioritize improvements, and identify new value-added programs. As a result, the four-pillar approach became an integral part of the data-driven decision-making and execution of the Seattle community's improvement activities. 
The approach described in this case study is actionable, robust, inexpensive, and easy to adopt and sustain. It can be applied to cities, districts, counties, regions, states, or countries, enabling cross-comparisons and improvements of vitality, sustainability, and growth.

  12. SU-F-J-84: Comparison of Quantitative Deformable Image Registration Evaluation Tools: Application to Prostate IGART

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dogan, N; Weiss, E; Sleeman, W

    Purpose: Errors in displacement vector fields (DVFs) generated by Deformable Image Registration (DIR) algorithms can give rise to significant uncertainties in contour propagation and dose accumulation in Image-Guided Adaptive Radiotherapy (IGART). The purpose of this work is to assess the accuracy of two DIR algorithms using a variety of quality metrics for prostate IGART. Methods: Pelvic CT images were selected from an anonymized database of nineteen prostate patients who underwent 8–12 serial scans during radiotherapy. Prostate, bladder, and rectum were contoured on 34 image-sets for three patients by the same physician. The planning CT was deformably-registered to daily CT using three variants of the Small deformation Inverse Consistent Linear Elastic (SICLE) algorithm: Grayscale-driven (G), Contour-driven (C, which utilizes segmented structures to drive DIR), combined (G+C); and also grayscale ITK demons (Gd). The accuracy of the G, C, G+C SICLE and Gd registrations was evaluated using a new metric, Edge Gradient Distance to Agreement (EGDTA), and other commonly-used metrics such as Pearson Correlation Coefficient (PCC), Dice Similarity Index (DSI) and Hausdorff Distance (HD). Results: C and G+C demonstrated much better performance at organ boundaries, revealing the lowest HD and highest DSI, in prostate, bladder and rectum. G+C demonstrated the lowest mean EGDTA (1.14 mm), which corresponds to highest registration quality, compared to G and C DVFs (1.16 and 2.34 mm). However, demons DIR showed the best overall performance, revealing lowest EGDTA (0.73 mm) and highest PCC (0.85). Conclusion: As expected, both C- and C+G SICLE more accurately reproduce manually-contoured target datasets than G-SICLE or Gd using HD and DSI metrics. In general, the Gd appears to have difficulty reproducing large daily position and shape changes in the rectum and bladder. However, Gd outperforms SICLE in terms of EGDTA and PCC metrics, possibly at the expense of topological quality of the estimated DVFs.
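Two of the contour-comparison metrics in this record, the Dice Similarity Index and the Hausdorff Distance, have standard definitions. A minimal pure-Python sketch on small 2-D point sets (standing in for segmented voxel masks; the example contours are invented) follows.

```python
# Minimal sketch of two standard contour-overlap metrics used in the study:
# Dice Similarity Index (DSI) and Hausdorff Distance (HD), computed here on
# small 2-D point sets standing in for segmented voxel masks.
import math

def dice(a, b):
    """DSI = 2|A ∩ B| / (|A| + |B|); 1 = perfect overlap, 0 = none."""
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(a, b), directed(b, a))

manual = {(0, 0), (0, 1), (1, 0), (1, 1)}       # reference contour
propagated = {(0, 0), (0, 1), (1, 0), (2, 2)}   # DIR-propagated contour
print(dice(manual, propagated))       # 0.75
print(hausdorff(manual, propagated))  # driven by the (2, 2) outlier point
```

Note how the two metrics disagree by design: DSI rewards bulk overlap, while HD is dominated by the single worst-matched point, which is why the study reports both.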

  13. Missing data and technical variability in single-cell RNA-sequencing experiments.

    PubMed

    Hicks, Stephanie C; Townes, F William; Teng, Mingxiang; Irizarry, Rafael A

    2017-11-06

    Until recently, high-throughput gene expression technology, such as RNA-Sequencing (RNA-seq) required hundreds of thousands of cells to produce reliable measurements. Recent technical advances permit genome-wide gene expression measurement at the single-cell level. Single-cell RNA-Seq (scRNA-seq) is the most widely used and numerous publications are based on data produced with this technology. However, RNA-seq and scRNA-seq data are markedly different. In particular, unlike RNA-seq, the majority of reported expression levels in scRNA-seq are zeros, which could be either biologically-driven, genes not expressing RNA at the time of measurement, or technically-driven, genes expressing RNA, but not at a sufficient level to be detected by sequencing technology. Another difference is that the proportion of genes reporting the expression level to be zero varies substantially across single cells compared to RNA-seq samples. However, it remains unclear to what extent this cell-to-cell variation is being driven by technical rather than biological variation. Furthermore, while systematic errors, including batch effects, have been widely reported as a major challenge in high-throughput technologies, these issues have received minimal attention in published studies based on scRNA-seq technology. Here, we use an assessment experiment to examine data from published studies and demonstrate that systematic errors can explain a substantial percentage of observed cell-to-cell expression variability. Specifically, we present evidence that some of these reported zeros are driven by technical variation by demonstrating that scRNA-seq produces more zeros than expected and that this bias is greater for lower expressed genes. In addition, this missing data problem is exacerbated by the fact that this technical variation varies cell-to-cell. Then, we show how this technical cell-to-cell variability can be confused with novel biological results. 
Finally, we demonstrate and discuss how batch-effects and confounded experiments can intensify the problem. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. NREL: International Activities - Working with Us

    Science.gov Websites

    Opportunities to develop technology partnerships and researcher-driven collaboration with NREL, drawing on the laboratory's expertise, including its energy analysis capabilities. NREL scientists collaborate through formal means, such as collaboration on specific technical topics, and NREL researchers also actively ...

  15. Connecting Seasonal Riparian Buffer Metrics and Nitrogen Concentrations in a Pulse-Driven Agricultural System

    EPA Science Inventory

    Riparian buffers have been well studied as best management practices for nutrient reduction at field scales yet their effectiveness for bettering water quality at watershed scales has been difficult to determine. Seasonal dynamics of the stream network are often overlooked when ...

  16. CONTACT: An Air Force technical report on military satellite control technology

    NASA Astrophysics Data System (ADS)

    Weakley, Christopher K.

    1993-07-01

    This technical report focuses on Military Satellite Control Technologies and their application to the Air Force Satellite Control Network (AFSCN). This report is a compilation of articles that provide an overview of the AFSCN and the Advanced Technology Program, and discusses relevant technical issues and developments applicable to the AFSCN. Among the topics covered are articles on Future Technology Projections; Future AFSCN Topologies; Modeling of the AFSCN; Wide Area Communications Technology Evolution; Automating AFSCN Resource Scheduling; Health & Status Monitoring at Remote Tracking Stations; Software Metrics and Tools for Measuring AFSCN Software Performance; Human-Computer Interface Working Group; Trusted Systems Workshop; and the University Technical Interaction Program. In addition, Key Technology Area points of contact are listed in the report.

  17. What does "Diversity" Mean for Public Engagement in Science? A New Metric for Innovation Ecosystem Diversity.

    PubMed

    Özdemir, Vural; Springer, Simon

    2018-03-01

    Diversity is increasingly at stake in the early 21st century. Diversity is often conceptualized across ethnicity, gender, socioeconomic status, sexual preference, and professional credentials, among other categories of difference. These are important and relevant considerations and yet, they are incomplete. Diversity also rests in the way we frame questions long before answers are sought. Such diversity in the framing (epistemology) of scientific and societal questions is important because it influences the types of data, results, and impacts produced by research. Errors in the framing of a research question, whether in technical science or social science, are known as type III errors, as opposed to the better known type I (false positives) and type II errors (false negatives). Kimball defined "error of the third kind" as giving the right answer to the wrong problem. Raiffa described the type III error as correctly solving the wrong problem. Type III errors are upstream or design flaws, often driven by unchecked human values and power, and can adversely impact an entire innovation ecosystem, waste money, time, careers, and precious resources by focusing on the wrong or incorrectly framed question and hypothesis. Decades may pass while technology experts, scientists, social scientists, funding agencies and management consultants continue to tackle questions that suffer from type III errors. We propose a new diversity metric, the Frame Diversity Index (FDI), based on the hitherto neglected diversities in knowledge framing. The FDI would be positively correlated with epistemological diversity and technological democracy, and inversely correlated with prevalence of type III errors in innovation ecosystems, consortia, and knowledge networks. We suggest that the FDI can usefully measure (and prevent) type III error risks in innovation ecosystems, and help broaden the concepts and practices of diversity and inclusion in science, technology, innovation and society.

  18. Development of the McGill simulator for endoscopic sinus surgery: a new high-fidelity virtual reality simulator for endoscopic sinus surgery.

    PubMed

    Varshney, Rickul; Frenkiel, Saul; Nguyen, Lily H P; Young, Meredith; Del Maestro, Rolando; Zeitouni, Anthony; Tewfik, Marc A

    2014-01-01

    The technical challenges of endoscopic sinus surgery (ESS) and the high risk of complications support the development of alternative modalities to train residents in these procedures. Virtual reality simulation is becoming a useful tool for training the skills necessary for minimally invasive surgery; however, there are currently no ESS virtual reality simulators available with valid evidence supporting their use in resident education. Our aim was to develop a new rhinology simulator, as well as to define potential performance metrics for trainee assessment. The McGill simulator for endoscopic sinus surgery (MSESS), a new sinus surgery virtual reality simulator with haptic feedback, was developed (a collaboration between the McGill University Department of Otolaryngology-Head and Neck Surgery, the Montreal Neurologic Institute Simulation Lab, and the National Research Council of Canada). A panel of experts in education, performance assessment, rhinology, and skull base surgery convened to identify core technical abilities that would need to be taught by the simulator, as well as performance metrics to be developed and captured. The MSESS allows the user to perform basic sinus surgery skills, such as an ethmoidectomy and sphenoidotomy, through the use of endoscopic tools in a virtual nasal model. The performance metrics were developed by an expert panel and include measurements of safety, quality, and efficiency of the procedure. The MSESS incorporates novel technological advancements to create a realistic platform for trainees. To our knowledge, this is the first simulator to combine novel tools such as the endonasal wash and elaborate anatomic deformity with advanced performance metrics for ESS.

  19. Numerical studies and metric development for validation of magnetohydrodynamic models on the HIT-SI experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, C., E-mail: hansec@uw.edu; Columbia University, New York, New York 10027; Victor, B.

    We present application of three scalar metrics derived from the Biorthogonal Decomposition (BD) technique to evaluate the level of agreement between macroscopic plasma dynamics in different data sets. BD decomposes large data sets, as produced by distributed diagnostic arrays, into principal mode structures without assumptions on spatial or temporal structure. These metrics have been applied to validation of the Hall-MHD model using experimental data from the Helicity Injected Torus with Steady Inductive helicity injection experiment. Each metric provides a measure of correlation between mode structures extracted from experimental data and simulations for an array of 192 surface-mounted magnetic probes. Numerical validation studies have been performed using the NIMROD code, where the injectors are modeled as boundary conditions on the flux conserver, and the PSI-TET code, where the entire plasma volume is treated. Initial results from a comprehensive validation study of high performance operation with different injector frequencies are presented, illustrating application of the BD method. Using a simplified (constant, uniform density and temperature) Hall-MHD model, simulation results agree with experimental observation for two of the three defined metrics when the injectors are driven with a frequency of 14.5 kHz.
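Biorthogonal decomposition of a probes-by-time data matrix is, in practice, a singular value decomposition. The paper's three specific metrics are not reproduced in the abstract, so the sketch below is an illustrative stand-in: extract the dominant spatial mode from synthetic "experiment" and "simulation" probe arrays and score their agreement by the absolute inner product of the modes. All signal shapes, the probe count, and the `mode_overlap` scoring are assumptions for illustration.

```python
# Sketch of the Biorthogonal Decomposition (BD) idea: an SVD of the
# (probes x time) data matrix yields spatial modes whose overlap between
# experiment and simulation can serve as a correlation metric. The
# synthetic data and the overlap score here are illustrative stand-ins,
# not the paper's metrics.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
probes = np.linspace(0, 2 * np.pi, 48, endpoint=False)  # ring of magnetic probes

def synthetic_data(phase, noise):
    # One standing mode oscillating at 14.5 kHz (in arbitrary time units), plus noise.
    signal = np.outer(np.cos(probes + phase), np.sin(2 * np.pi * 14.5 * t))
    return signal + noise * rng.standard_normal((48, 200))

def leading_mode(data):
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    return u[:, 0]  # dominant spatial structure (unit vector)

def mode_overlap(exp, sim):
    """Absolute inner product of leading spatial modes (1 = identical structure)."""
    return float(abs(np.dot(leading_mode(exp), leading_mode(sim))))

experiment = synthetic_data(phase=0.0, noise=0.05)
simulation = synthetic_data(phase=0.1, noise=0.05)
print(mode_overlap(experiment, simulation))  # near 1: structures closely agree
```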

  20. Quality assurance software inspections at NASA Ames: Metrics for feedback and modification

    NASA Technical Reports Server (NTRS)

    Wenneson, G.

    1985-01-01

    Software inspections, a set of formal technical review procedures held at selected key points during software development in order to find defects in software documents, are described in terms of history, participants, tools, procedures, statistics, and database analysis.

  1. New Roads for Patron-Driven E-Books: Collection Development and Technical Services Implications of a Patron-Driven Acquisitions Pilot at Rutgers

    ERIC Educational Resources Information Center

    De Fino, Melissa; Lo, Mei Ling

    2011-01-01

    Collection development librarians have long struggled to meet user demands for new titles. Too often, required resources are not purchased, whereas some purchased resources do not circulate. E-books selected through patron-driven plans are a solution but present new challenges for both selectors and catalogers. Radical changes to traditional…

  2. Can academic radiology departments become more efficient and cost less?

    PubMed

    Seltzer, S E; Saini, S; Bramson, R T; Kelly, P; Levine, L; Chiango, B F; Jordan, P; Seth, A; Elton, J; Elrick, J; Rosenthal, D; Holman, B L; Thrall, J H

    1998-11-01

    To determine how successful two large academic radiology departments have been in responding to market-driven pressures to reduce costs and improve productivity by downsizing their technical and support staffs while maintaining or increasing volume. A longitudinal study was performed in which benchmarking techniques were used to assess the changes in cost and productivity of the two departments for 5 years (fiscal years 1992-1996). Cost per relative value unit and relative value units per full-time equivalent employee were tracked. Substantial cost reduction and productivity enhancement were realized as linear improvements in two key metrics, namely, cost per relative value unit (decline of 19.0% [decline of $7.60 on a base year cost of $40.00] to 28.8% [$12.18 of $42.21]; P ≤ .001) and relative value unit per full-time equivalent employee (increase of 46.0% [increase of 759.55 units over a base year productivity of 1,651.45 units] to 55.8% [968.28 of 1,733.97 units]; P < .001), during the 5 years of study. Academic radiology departments have proved that they can "do more with less" over a sustained period.
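The percentage ranges in this abstract follow directly from the bracketed dollar and unit values; the short sketch below reproduces that arithmetic (the second cost figure comes out 28.9% rather than the reported 28.8%, a rounding difference in the source).

```python
# Reproduce the abstract's percentage changes from its bracketed values.

def pct_change(delta, base):
    return 100.0 * delta / base

# Cost per relative value unit (RVU): declines of $7.60 on $40.00 and
# $12.18 on $42.21 at the two departments.
cost_declines = [pct_change(7.60, 40.00), pct_change(12.18, 42.21)]

# RVU per full-time equivalent (FTE) employee: gains of 759.55 on 1,651.45
# and 968.28 on 1,733.97.
productivity_gains = [pct_change(759.55, 1651.45), pct_change(968.28, 1733.97)]

print([round(x, 1) for x in cost_declines])       # [19.0, 28.9]
print([round(x, 1) for x in productivity_gains])  # [46.0, 55.8]
```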

  3. Development of Management Metrics for Research and Technology

    NASA Technical Reports Server (NTRS)

    Sheskin, Theodore J.

    2003-01-01

    Professor Ted Sheskin from CSU will be tasked to research and investigate metrics that can be used to determine the technical progress for advanced development and research tasks. These metrics will be implemented in a software environment that hosts engineering design, analysis and management tools to be used to support power system and component research work at GRC. Professor Sheskin is an Industrial Engineer and has been involved in issues related to management of engineering tasks and will use his knowledge from this area to allow extrapolation into the research and technology management area. Over the course of the summer, Professor Sheskin will develop a bibliography of management papers covering current management methods that may be applicable to research management. At the completion of the summer work we expect to have him recommend a metric system to be reviewed prior to implementation in the software environment. This task has been discussed with Professor Sheskin and some review material has already been given to him.

  4. Critical insights for a sustainability framework to address integrated community water services: Technical metrics and approaches.

    PubMed

    Xue, Xiaobo; Schoen, Mary E; Ma, Xin Cissy; Hawkins, Troy R; Ashbolt, Nicholas J; Cashdollar, Jennifer; Garland, Jay

    2015-06-15

    Planning for sustainable community water systems requires a comprehensive understanding and assessment of the integrated source-drinking-wastewater systems over their life-cycles. Although traditional life cycle assessment and similar tools (e.g. footprints and emergy) have been applied to elements of these water services (i.e. water resources, drinking water, stormwater or wastewater treatment alone), we argue for the importance of developing and combining the system-based tools and metrics in order to holistically evaluate the complete water service system based on the concept of integrated resource management. We analyzed the strengths and weaknesses of key system-based tools and metrics, and discuss future directions to identify more sustainable municipal water services. Such efforts may include the need for novel metrics that address system adaptability to future changes and infrastructure robustness. Caution is also necessary when coupling fundamentally different tools so as to avoid misunderstanding and, consequently, misleading decision-making. Published by Elsevier Ltd.

  5. Engaging with Assessment: Increasing Student Engagement through Continuous Assessment

    ERIC Educational Resources Information Center

    Holmes, Naomi

    2018-01-01

    Student engagement is intrinsically linked to two important metrics in learning: student satisfaction and the quality of the student experience. One of the ways that engagement can be influenced is through careful curriculum design. Using the knowledge that many students are "assessment-driven," a low-stakes continuous weekly summative…

  6. Macroscale hydrologic modeling of ecologically relevant flow metrics

    Treesearch

    Seth J. Wenger; Charles H. Luce; Alan F. Hamlet; Daniel J. Isaak; Helen M. Neville

    2010-01-01

    Stream hydrology strongly affects the structure of aquatic communities. Changes to air temperature and precipitation driven by increased greenhouse gas concentrations are shifting timing and volume of streamflows potentially affecting these communities. The variable infiltration capacity (VIC) macroscale hydrologic model has been employed at regional scales to describe...

  7. Science Goal Monitor: Science Goal Driven Automation for NASA Missions

    NASA Technical Reports Server (NTRS)

    Koratkar, Anuradha; Grosvenor, Sandy; Jung, John; Pell, Melissa; Matusow, David; Bailyn, Charles

    2004-01-01

    Infusion of automation technologies into NASA's future missions will be essential because of the need to: (1) effectively handle an exponentially increasing volume of scientific data, (2) successfully meet dynamic, opportunistic scientific goals and objectives, and (3) substantially reduce mission operations staff and costs. While much effort has gone into automating routine spacecraft operations to reduce human workload and hence costs, applying intelligent automation to the science side, i.e., science data acquisition, data analysis and reactions to that data analysis in a timely and still scientifically valid manner, has been relatively under-emphasized. In order to introduce science-driven automation in missions, we must be able to capture and interpret the science goals of observing programs, represent those goals in machine-interpretable language, and allow a spacecraft's onboard systems to autonomously react to the scientist's goals. In short, we must teach our platforms to dynamically understand, recognize, and react to the scientists' goals. The Science Goal Monitor (SGM) project at NASA Goddard Space Flight Center is a prototype software tool being developed to determine the best strategies for implementing science goal driven automation in missions. The tools being developed in SGM improve the ability to monitor and react to the changing status of scientific events. The SGM system enables scientists to specify what to look for and how to react in descriptive rather than technical terms. The system monitors streams of science data to identify occurrences of key events previously specified by the scientist. When an event occurs, the system autonomously coordinates the execution of the scientist's desired reactions. Through SGM, we will improve our understanding of the capabilities needed onboard for success, develop metrics to understand the potential increase in science returns, and develop an operational prototype so that the perceived risks associated with increased use of automation can be reduced.
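The monitor-and-react pattern described above can be sketched as goals registered as (predicate, reaction) pairs that fire when an event appears in the data stream. All class and field names below are illustrative, not SGM's actual API:

```python
# Toy sketch of the goal-monitor pattern: a scientist registers goals as
# (predicate, reaction) pairs; the monitor scans incoming observations and
# executes the reaction whenever a key event occurs. Names are hypothetical.

class GoalMonitor:
    def __init__(self):
        self.goals = []

    def add_goal(self, predicate, reaction):
        """Register a key event (predicate) and the desired reaction."""
        self.goals.append((predicate, reaction))

    def process(self, observation):
        """Check one observation against all goals; return fired reactions."""
        fired = []
        for predicate, reaction in self.goals:
            if predicate(observation):
                fired.append(reaction(observation))
        return fired

sgm = GoalMonitor()
# Goal: "if the target brightens past a threshold, request a follow-up"
sgm.add_goal(lambda obs: obs["flux"] > 100.0,
             lambda obs: f"schedule follow-up of {obs['target']}")
actions = sgm.process({"target": "V404 Cyg", "flux": 130.0})
```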

  8. A Technical Analysis Information Fusion Approach for Stock Price Analysis and Modeling

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    In this paper, we address the problem of technical analysis information fusion in improving stock market index-level prediction. We present an approach for analyzing stock market price behavior based on different categories of technical analysis metrics and a multiple predictive system. Each category of technical analysis measures is used to characterize stock market price movements. The presented predictive system is based on an ensemble of neural networks (NN) coupled with particle swarm intelligence for parameter optimization where each single neural network is trained with a specific category of technical analysis measures. The experimental evaluation on three international stock market indices and three individual stocks show that the presented ensemble-based technical indicators fusion system significantly improves forecasting accuracy in comparison with single NN. Also, it outperforms the classical neural network trained with index-level lagged values and NN trained with stationary wavelet transform details and approximation coefficients. As a result, technical information fusion in NN ensemble architecture helps improving prediction accuracy.
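The fusion idea above, one predictor per category of technical indicators with predictions combined by the ensemble, can be sketched with plain least-squares "experts" standing in for the paper's PSO-tuned neural networks (a deliberate simplification; the data and category names are synthetic):

```python
import numpy as np

# Illustrative sketch, not the paper's implementation: each "expert" is a
# least-squares linear model trained on one category of technical
# indicators; the ensemble forecast averages the experts' predictions.
rng = np.random.default_rng(0)
n = 200
categories = {
    "trend":      rng.normal(size=(n, 3)),
    "momentum":   rng.normal(size=(n, 2)),
    "volatility": rng.normal(size=(n, 2)),
}
# Synthetic next-day return driven by a mix of all categories.
y = (categories["trend"][:, 0] + 0.5 * categories["momentum"][:, 1]
     + 0.2 * categories["volatility"][:, 0] + 0.1 * rng.normal(size=n))

experts = {}
for name, X in categories.items():
    Xb = np.column_stack([X, np.ones(n)])         # add intercept column
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    experts[name] = coef

def ensemble_predict(features):
    """Fuse the per-category expert forecasts by simple averaging."""
    preds = []
    for name, coef in experts.items():
        xb = np.append(features[name], 1.0)
        preds.append(xb @ coef)
    return float(np.mean(preds))
```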

  9. NASA Technical Standards Program

    NASA Technical Reports Server (NTRS)

    Gill, Paul S.; Vaughan, William W.; Parker, Nelson C. (Technical Monitor)

    2002-01-01

    The NASA Technical Standards Program was officially established in 1997 as a result of a directive issued by the Administrator. It is responsible for Agency-wide technical standards development, adoption (endorsement), and conversion of Center-unique standards for Agency-wide use. One major element of the Program is the review of NASA technical standards products and their replacement with non-Government Voluntary Consensus Standards in accordance with directions issued by the Office of Management and Budget. As part of the Program's function, it developed a NASA Integrated Technical Standards Initiative that consists of an Agency-wide full-text system, a standards update notification system, and a lessons learned-standards integration system. The Program maintains a 'one-stop shop' Website for technical standards and related information on aerospace materials, etc. This paper provides information on the development, current status, and plans for the NASA Technical Standards Program, along with metrics on the utility of the products provided to users both within the nasa.gov Domain and the Public Domain.

  10. NASA Technical Standards Program

    NASA Technical Reports Server (NTRS)

    Gill, Paul S.; Vaughan, William W.

    2003-01-01

    The NASA Technical Standards Program was officially established in 1997 as a result of a directive issued by the Administrator. It is responsible for Agency-wide technical standards development, adoption (endorsement), and conversion of Center-unique standards for Agency-wide use. One major element of the Program is the review of NASA technical standards products and their replacement with non-Government Voluntary Consensus Standards in accordance with directions issued by the Office of Management and Budget. As part of the Program's function, it developed a NASA Integrated Technical Standards Initiative that consists of an Agency-wide full-text system, a standards update notification system, and a lessons learned-standards integration system. The Program maintains a "one-stop shop" Website for technical standards and related information on aerospace materials, etc. This paper provides information on the development, current status, and plans for the NASA Technical Standards Program, along with metrics on the utility of the products provided to users both within the nasa.gov Domain and the Public Domain.

  11. 76 FR 48152 - Commercial Building Asset Rating Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-08

    ...: Occupancy schedule. HVAC system operation. Hot water use. Both the user-entered and the internally defined.... Technical Support Full documentation of the rating methodology would be available online for public review... welcome. Potential for Additional Supported Options While a national performance metric and rating system...

  12. Challenges for PISA

    ERIC Educational Resources Information Center

    Schleicher, Andreas

    2016-01-01

    The OECD Programme for International Student Assessment (PISA) provides a framework in which over 80 countries collaborate to build advanced global metrics to assess the knowledge, skills and character attributes of the students. The design of assessments poses major conceptual and technical challenges, as successful learning. Beyond a sound…

  13. 16 CFR 1209.2 - Definitions and measurements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) As used in this part 1209, Cellulose insulation means cellulosic fiber, loose fill, thermal... with the technical requirements of this standard, the figures are given in the metric system of measurement. The inch-pound system approximations of these figures are provided in parentheses for convenience...

  14. 16 CFR 1209.2 - Definitions and measurements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) As used in this part 1209, Cellulose insulation means cellulosic fiber, loose fill, thermal... with the technical requirements of this standard, the figures are given in the metric system of measurement. The inch-pound system approximations of these figures are provided in parentheses for convenience...

  15. Technical Guidance for Constructing a Human Well-Being ...

    EPA Pesticide Factsheets

    The U.S. Environmental Protection Agency (EPA) Office of Research and Development’s Sustainable and Healthy Communities Research Program (EPA 2015) developed the Human Well-being Index (HWBI) as an integrative measure of economic, social, and environmental contributions to well-being. The HWBI is composed of indicators and metrics representing eight domains of well-being: connection to nature, cultural fulfillment, education, health, leisure time, living standards, safety and security, and social cohesion. The domains and indicators in the HWBI were selected to provide a well-being framework that is broadly applicable to many different populations and communities, and can be customized using community-specific metrics. A primary purpose of this report is to adapt the US Human Well-Being Index (HWBI) to quantify human well-being for Puerto Rico. Additionally, our adaptation of the HWBI for Puerto Rico provides an example of how the HWBI can be adapted to different communities, and offers technical guidance on processing the data and calculating the index using R.
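The hierarchical roll-up described above (metrics into indicators, indicators into the eight domains, domains into the index) ends in a simple aggregation; a minimal sketch with made-up domain scores and an assumed equal weighting (the report's actual weighting scheme is not reproduced here):

```python
# Sketch of the HWBI's final aggregation step: eight domain scores are
# combined into one index. Scores are invented and the equal weighting
# is an assumption for illustration only.

domains = {
    "connection_to_nature": 62.0, "cultural_fulfillment": 55.0,
    "education": 70.0, "health": 68.0, "leisure_time": 49.0,
    "living_standards": 64.0, "safety_and_security": 58.0,
    "social_cohesion": 61.0,
}
hwbi = sum(domains.values()) / len(domains)  # equal-weight mean of domains
```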

  16. Meta-analysis of the technical performance of an imaging procedure: guidelines and statistical methodology.

    PubMed

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2015-02-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often causes violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes. © The Author(s) 2014.
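The pooling step at the heart of such a review can be sketched with a standard random-effects meta-analysis (DerSimonian-Laird), the conventional baseline whose small-study assumptions the paper scrutinizes; the estimates and variances below are invented:

```python
import numpy as np

# Standard DerSimonian-Laird random-effects pooling of per-study
# repeatability estimates. Inputs are made up for illustration; the paper
# discusses alternatives that behave better when studies are small.

def dersimonian_laird(estimates, variances):
    y, v = np.asarray(estimates, float), np.asarray(variances, float)
    w = 1.0 / v                                    # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)               # fixed-effect mean
    q = np.sum(w * (y - y_fe) ** 2)                # Cochran's Q statistic
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                        # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Hypothetical test-retest repeatability coefficients from four studies.
pooled, se, tau2 = dersimonian_laird([0.12, 0.18, 0.10, 0.25],
                                     [0.002, 0.004, 0.003, 0.006])
```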

  17. Meta-analysis of the technical performance of an imaging procedure: Guidelines and statistical methodology

    PubMed Central

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2017-01-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often causes violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test–retest repeatability data for illustrative purposes. PMID:24872353

  18. Gravity induced wave function collapse

    NASA Astrophysics Data System (ADS)

    Gasbarri, G.; Toroš, M.; Donadi, S.; Bassi, A.

    2017-11-01

    Starting from an idea of S. L. Adler [in Quantum Nonlocality and Reality: 50 Years of Bell's Theorem, edited by M. Bell and S. Gao (Cambridge University Press, Cambridge, England 2016)], we develop a novel model of gravity induced spontaneous wave function collapse. The collapse is driven by complex stochastic fluctuations of the spacetime metric. After deriving the fundamental equations, we prove the collapse and amplification mechanism, the two most important features of a consistent collapse model. Under reasonable simplifying assumptions, we constrain the strength ξ of the complex metric fluctuations with available experimental data. We show that ξ ≥ 10^-26 in order for the model to guarantee classicality of macro-objects, and at the same time ξ ≤ 10^-20 in order not to contradict experimental evidence. As a comparison, in the recent discovery of gravitational waves in the frequency range 35 to 250 Hz, the (real) metric fluctuations reach a peak of ξ ~ 10^-21.

  19. Article-level assessment of influence and translation in biomedical research

    PubMed Central

    Santangelo, George M.

    2017-01-01

    Given the vast scale of the modern scientific enterprise, it can be difficult for scientists to make judgments about the work of others through careful analysis of the entirety of the relevant literature. This has led to a reliance on metrics that are mathematically flawed and insufficiently diverse to account for the variety of ways in which investigators contribute to scientific progress. An urgent, critical first step in solving this problem is replacing the Journal Impact Factor with an article-level alternative. The Relative Citation Ratio (RCR), a metric that was designed to serve in that capacity, measures the influence of each publication on its respective area of research. RCR can serve as one component of a multifaceted metric that provides an effective data-driven supplement to expert opinion. Developing validated methods that quantify scientific progress can help to optimize the management of research investments and accelerate the acquisition of knowledge that improves human health. PMID:28559438
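The core idea of the RCR, benchmarking an article's citation rate against the expected rate of its field, reduces to a ratio; a deliberately simplified sketch (the real RCR defines the field via the article's co-citation network and benchmarks against NIH-funded papers, both elided here):

```python
# Highly simplified sketch of the Relative Citation Ratio idea: an
# article's citations per year divided by the expected citation rate of
# its field. The field-rate input here is just a given number; deriving
# it from a co-citation network is the metric's actual machinery.

def relative_citation_ratio(citations, years_since_pub, field_rate):
    article_rate = citations / years_since_pub
    return article_rate / field_rate

# An article with 40 citations over 5 years in a field averaging
# 4 citations/year/paper:
rcr = relative_citation_ratio(citations=40, years_since_pub=5, field_rate=4.0)
print(rcr)  # 2.0 -> twice the influence expected for its field
```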

  20. Critical insights for a sustainability framework to address integrated community water services: Technical metrics and approaches

    EPA Science Inventory

    Planning for sustainable community water systems requires a comprehensive understanding and assessment of the integrated source-drinking-wastewater systems over their life-cycles. Although traditional life cycle assessment and similar tools (e.g. footprints and emergy) have been ...

  1. Physiological Metrics of Mental Workload: A Review of Recent Progress

    DTIC Science & Technology

    1990-06-01

    been found to be more resistant to vigilance decrements than stabiles (Hastrup, 1979; Sostek, 1978; Vossel & Rossman, 1984), respond more quickly in... NASA workload ratings: A paper and pencil package (NASA Technical Report). Moffett Field, CA: Ames Research Center. Hastrup, J. (1979). Effects of

  2. Improving Space Project Cost Estimating with Engineering Management Variables

    NASA Technical Reports Server (NTRS)

    Hamaker, Joseph W.; Roth, Axel (Technical Monitor)

    2001-01-01

    Current space project cost models attempt to predict space flight project cost via regression equations, which relate the cost of projects to technical performance metrics (e.g. weight, thrust, power, pointing accuracy, etc.). This paper examines the introduction of engineering management parameters to the set of explanatory variables. A number of specific engineering management variables are considered and exploratory regression analysis is performed to determine if there is statistical evidence for cost effects apart from technical aspects of the projects. It is concluded that there are other non-technical effects at work and that further research is warranted to determine if it can be shown that these cost effects are definitely related to engineering management.
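The exploratory-regression step described above, testing whether engineering-management variables explain cost variance beyond the technical metrics, can be sketched as a nested-model comparison on synthetic data (all variable names and coefficients are invented for illustration):

```python
import numpy as np

# Sketch of the paper's exploratory regression: fit cost on technical
# performance metrics alone, then add a management variable and compare
# in-sample R^2. Data are synthetic; a real analysis would test the
# added variable's significance, not just the R^2 change.
rng = np.random.default_rng(7)
n = 60
weight = rng.uniform(100, 1000, n)            # technical metric (kg)
power = rng.uniform(1, 10, n)                 # technical metric (kW)
mgmt = rng.integers(0, 2, n).astype(float)    # hypothetical management flag
cost = 0.5 * weight + 20 * power + 150 * mgmt + rng.normal(0, 25, n)

def r_squared(X, y):
    """In-sample R^2 of an OLS fit with intercept."""
    Xb = np.column_stack([X, np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    resid = y - Xb @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_tech = r_squared(np.column_stack([weight, power]), cost)
r2_full = r_squared(np.column_stack([weight, power, mgmt]), cost)
# By construction the management variable carries real signal here, so
# the full model's R^2 should exceed the technical-only model's.
```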

  3. A condition metric for Eucalyptus woodland derived from expert evaluations.

    PubMed

    Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D

    2018-02-01

    The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by assessing the model's quality scores of expert-evaluated test sites not used to train the model. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
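The paper's model, an ensemble of 30 bagged regression trees mapping site variables to an expert quality score, can be sketched with scikit-learn on synthetic data (the site variables and the "expert" scores below are invented; the real model was trained on expert evaluations of hypothetical woodland sites):

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

# Illustrative sketch of the modelling approach: 30 bagged regression
# trees predicting a quality score from 13 site variables (e.g. cover of
# shrubs, richness of native forbs). Training data are synthetic.
rng = np.random.default_rng(42)
n_sites, n_vars = 300, 13
X = rng.uniform(0, 1, size=(n_sites, n_vars))
# Synthetic "expert score" on a 0-10 scale, driven by a few variables.
y = 10 * (0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.3 * X[:, 2]) \
    + rng.normal(0, 0.5, n_sites)

metric = BaggingRegressor(n_estimators=30, random_state=0).fit(X, y)
score = metric.predict(X[:1])[0]   # the metric's quality score for one site
```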

  4. The NASA Scientific and Technical Information Program: Exploring challenges, creating opportunities

    NASA Technical Reports Server (NTRS)

    Sepic, Ronald P.

    1993-01-01

    The NASA Scientific and Technical Information (STI) Program offers researchers access to the world's largest collection of aerospace information. An overview of Program activities, products and services, and new directions is presented. The R&D information cycle is outlined and specific examples of the NASA STI Program in practice are given. Domestic and international operations and technology transfer activities are reviewed and an agenda for the STI Program NASA-wide is presented. Finally, the incorporation of Total Quality Management and evaluation metrics into the STI Program is discussed.

  5. Technical Approach for In Situ Biological Treatment Research: Bench- Scale Experiments

    DTIC Science & Technology

    1993-08-01

    Front-matter excerpt: conversion factors (non-SI to SI metric units), contents, references, tables 1-4, and Appendix A (In Situ Implementation Case Studies). Part I, Introduction, Background: Many US Army installations have areas of contamination requiring…

  6. IT Metrics and Money: One Approach to Public Accountability

    ERIC Educational Resources Information Center

    Daigle, Stephen L.

    2004-01-01

    Performance measurement can be a difficult political as well as technical challenge for educational institutions at all levels. Performance-based budgeting can raise the stakes still higher by linking resource allocation to a public "report card." The 23-campus system of the California State University (CSU) accepted each of these…

  7. Developing Metrics for Effective Teaching in Agricultural Education

    ERIC Educational Resources Information Center

    Lawver, Rebecca G.; McKim, Billy R.; Smith, Amy R.; Aschenbrener, Mollie S.; Enns, Kellie

    2016-01-01

    Research on effective teaching has been conducted in a variety of settings for more than 40 years. This study offers direction for future effective teaching research in secondary agricultural education and has implications for career and technical education. Specifically, 142 items consisting of characteristics, behaviors, and/or techniques…

  8. What Philanthropy's Paradigm Shift Means for Higher Ed Fundraising

    ERIC Educational Resources Information Center

    McCully, George

    2015-01-01

    This is an unprecedented era of human history, in which simultaneous transformations of every technically advanced field are being driven by the powerful technological revolution in information and communications. Technically, these transformative changes are "paradigm shifts"--a distinct kind of historical change in which the governing…

  9. 76 FR 77565 - Biweekly Notice; Applications and Amendments to Facility Operating Licenses Involving No...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-13

    ... assure that the emergency diesel generator's diesel driven cooling water pumps perform their required... generators will provide required electrical power as assumed in the accident analyses and the cooling water... Technical Specifications to require an adequate emergency diesel generator and diesel driven cooling water...

  10. Development of NASA Technical Standards Program Relative to Enhancing Engineering Capabilities

    NASA Technical Reports Server (NTRS)

    Gill, Paul S.; Vaughan, William W.

    2003-01-01

    The enhancement of engineering capabilities is an important aspect of any organization, especially those engaged in aerospace development activities. Technical standards are one of the key elements of this endeavor. The NASA Technical Standards Program was formed in 1997 in response to the NASA Administrator's directive to develop an Agency-wide Technical Standards Program. The Program's principal objective involved converting Center-unique technical standards into Agency-wide standards and adopting/endorsing non-Government technical standards in lieu of government standards. In the process of these actions, the potential for further enhancement of the Agency's engineering capabilities was noted, namely the value of being able to access, Agency-wide, the necessary full-text technical standards, standards update notifications, and integration of lessons learned with technical standards, all available to the user from one Website. This was accomplished and is now being enhanced based on feedback from the Agency's engineering staff and supporting contractors. This paper addresses the development experiences with the NASA Technical Standards Program and the enhancement of the Agency's engineering capabilities provided by the Program's products. Metrics are provided on significant aspects of the Program.

  11. Diagram of the Saturn V Launch Vehicle in Metric

    NASA Technical Reports Server (NTRS)

    1971-01-01

    This is a good cutaway diagram of the Saturn V launch vehicle showing the three stages, the instrument unit, and the Apollo spacecraft. The chart on the right presents the basic technical data in clear metric detail. The Saturn V is the largest and most powerful launch vehicle in the United States. The towering, 111 meter, Saturn V was a multistage, multiengine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams. Development of the Saturn V was the responsibility of the Marshall Space Flight Center at Huntsville, Alabama, directed by Dr. Wernher von Braun.

  12. Earned Value Management Considering Technical Readiness Level and Its Application to New Space Launcher Program

    NASA Astrophysics Data System (ADS)

    Choi, Young-In; Ahn, Jaemyung

    2018-04-01

    Earned value management (EVM) is a methodology for monitoring and controlling the performance of a project based on a comparison between planned and actual cost/schedule. This study proposes a concept of hybrid earned value management (H-EVM) that integrates the traditional EVM metrics with information on the technology readiness level. The proposed concept can reflect the progress of a project in a sensitive way and provides short-term perspective complementary to the traditional EVM metrics. A two-dimensional visualization on the cost/schedule status of a project reflecting both of the traditional EVM (long-term perspective) and the proposed H-EVM (short-term perspective) indices is introduced. A case study on the management of a new space launch vehicle development program is conducted to demonstrate the effectiveness of the proposed H-EVM concept, associated metrics, and the visualization technique.
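The "comparison between planned and actual cost/schedule" underlying EVM rests on a handful of standard indices; a minimal sketch of the traditional metrics (the paper's TRL-weighted H-EVM variant is not reproduced here):

```python
# Standard earned-value indices: planned value (PV), earned value (EV),
# and actual cost (AC) combine into cost/schedule performance indices
# and variances. The H-EVM refinement weights EV by technology readiness
# level; only the traditional indices are shown.

def evm_indices(pv, ev, ac):
    """Return (CPI, SPI, cost variance, schedule variance)."""
    cpi = ev / ac   # cost performance index     (>1: under budget)
    spi = ev / pv   # schedule performance index (>1: ahead of schedule)
    return cpi, spi, ev - ac, ev - pv

# A project that planned $100k of work, earned $90k, and spent $120k:
cpi, spi, cv, sv = evm_indices(pv=100.0, ev=90.0, ac=120.0)
print(cpi, spi)  # 0.75 0.9 -> over budget and behind schedule
```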

  13. Crew and Thermal Systems Division Strategic Communications Initiatives in Support of NASA's Strategic Goals: Fiscal Year 2012 Summary and Initial Fiscal Year 2013 Metrics

    NASA Technical Reports Server (NTRS)

    Paul, Heather L.

    2013-01-01

    The NASA strategic plan includes overarching strategies to inspire students through interactions with NASA people and projects, and to expand partnerships with industry and academia around the world. The NASA Johnson Space Center Crew and Thermal Systems Division (CTSD) actively supports these NASA initiatives. At the end of fiscal year 2011, CTSD created a strategic communications team to communicate CTSD capabilities, technologies, and personnel to internal NASA and external technical audiences for collaborative and business development initiatives, and to students, educators, and the general public for education and public outreach efforts. The strategic communications initiatives implemented in fiscal year 2012 resulted in 707 in-reach, outreach, and commercialization events with 39,731 participant interactions. This paper summarizes the CTSD Strategic Communications metrics for fiscal year 2012 and provides metrics for the first nine months of fiscal year 2013.

  14. Bootstrapping Process Improvement Metrics: CMMI Level 4 Process Improvement Metrics in a Level 3 World

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus; Lewicki, Scott; Morgan, Scott

    2011-01-01

    The measurement techniques for organizations which have achieved the Software Engineering Institute's CMMI Maturity Levels 4 and 5 are well documented. On the other hand, how to effectively measure when an organization is at Maturity Level 3 is less well understood, especially when there is no consistency in tool use and there is extensive tailoring of the organizational software processes. Most organizations fail in their attempts to generate, collect, and analyze standard process improvement metrics under these conditions. But at JPL, NASA's prime center for deep space robotic exploration, we have a long history of proving there is always a solution: It just may not be what you expected. In this paper we describe the wide variety of qualitative and quantitative techniques we have been implementing over the last few years, including the various approaches used to communicate the results to both software technical managers and senior managers.

  15. The Use of Performance Metrics for the Assessment of Safeguards Effectiveness at the State Level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bachner, K. M.; Anzelon, George (Lawrence Livermore National Laboratory, Livermore, CA); Feldman, Yana (Lawrence Livermore National Laboratory, Livermore, CA); Goodman, Mark (Department of State, Washington, DC); Lockwood, Dunbar (National Nuclear Security Administration, Washington, DC); Sanborn, Jonathan B. (JBS Consulting, LLC, Arlington, VA)

    In the ongoing evolution of International Atomic Energy Agency (IAEA) safeguards at the state level, many safeguards implementation principles have been emphasized: effectiveness, efficiency, non-discrimination, transparency, focus on sensitive materials, centrality of material accountancy for detecting diversion, independence, objectivity, and grounding in technical considerations, among others. These principles are subject to differing interpretations and prioritizations and sometimes conflict. This paper is an attempt to develop metrics and address some of the potential tradeoffs inherent in choices about how various safeguards policy principles are implemented. The paper (1) carefully defines effective safeguards, including in the context of safeguards approaches that take account of the range of state-specific factors described by the IAEA Secretariat and taken note of by the Board in September 2014, and (2) makes use of performance metrics to help document, and to make transparent, how safeguards implementation would meet such effectiveness requirements.

  16. Chicago Manufacturing Tech Prep. Fiscal Year 1991 Final Report.

    ERIC Educational Resources Information Center

    Chicago City Colleges, IL.

    During its first year of development in 1991, the Chicago Manufacturing Technical Preparation (Tech Prep) Program established a plan for implementing an industry-driven, articulated 4-year manufacturing technology course of study that integrates applied academic courses with technical courses and meets industry hiring standards. The project…

  17. Comparing Resource Adequacy Metrics and Their Influence on Capacity Value: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibanez, E.; Milligan, M.

    2014-04-01

    Traditional probabilistic methods have been used to evaluate resource adequacy. The increasing presence of variable renewable generation in power systems presents a challenge to these methods because, unlike thermal units, variable renewable generation levels change over time, driven by meteorological events. Thus, capacity value calculations for these resources are often performed according to simple rules of thumb. This paper follows the recommendations of the North American Electric Reliability Corporation's Integration of Variable Generation Task Force to include variable generation in the calculation of resource adequacy and compares different reliability metrics. Examples are provided using the Western Interconnection footprint under different variable generation penetrations.
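    The traditional probabilistic adequacy calculation the abstract refers to can be illustrated with a toy capacity-outage enumeration. This is a generic sketch of loss-of-load probability (LOLP) for independent two-state units, not the NERC task force's actual method; the unit data and load level below are invented for illustration.

    ```python
    from itertools import product

    def lolp(units, load):
        """Loss-of-load probability for independent two-state units.

        units: list of (capacity_mw, forced_outage_rate) pairs
        load:  demand in MW
        Enumerates every on/off combination (fine for small systems)
        and sums the probability of states whose available capacity
        falls short of the load.
        """
        prob_short = 0.0
        for state in product([0, 1], repeat=len(units)):  # 1 = unit available
            p, cap = 1.0, 0.0
            for up, (c, f) in zip(state, units):
                p *= (1 - f) if up else f
                cap += c if up else 0.0
            if cap < load:
                prob_short += p
        return prob_short

    # Two 100-MW units, each with a 10% forced outage rate, serving 150 MW:
    # load is unserved whenever at least one unit is out.
    units = [(100, 0.10), (100, 0.10)]
    print(lolp(units, 150))  # 1 - 0.9 * 0.9 = 0.19
    ```

    Extending this to variable renewables is exactly where the rules of thumb come in: a wind plant is not a two-state unit, so its contribution must be represented by a time-varying capacity distribution rather than a single forced outage rate.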

  18. Gauging Skills of Hospital Security Personnel: a Statistically-driven, Questionnaire-based Approach.

    PubMed

    Rinkoo, Arvind Vashishta; Mishra, Shubhra; Rahesuddin; Nabi, Tauqeer; Chandra, Vidha; Chandra, Hem

    2013-01-01

    This study aims to gauge the technical and soft skills of hospital security personnel so as to enable prioritization of their training needs. A cross-sectional, questionnaire-based study was conducted in December 2011. Two separate predesigned and pretested questionnaires were used for gauging the soft skills and technical skills of the security personnel. Extensive statistical analysis, including multivariate analysis (Pillai-Bartlett trace along with multi-factorial ANOVA) and post-hoc tests (Bonferroni test), was applied. The 143 participants performed better on the soft skills front, with an average score of 6.43 and a standard deviation of 1.40. The average technical skills score was 5.09, with a standard deviation of 1.44. The study revealed a need for formal hands-on training, with greater emphasis on technical skills. Multivariate analysis of the available data further helped in identifying 20 security personnel who should be prioritized for soft skills training and a group of 36 who should receive maximum attention during technical skills training. This statistically driven approach can be used as a prototype by healthcare delivery institutions worldwide, after situation-specific customizations, to identify the training needs of any category of healthcare staff.

  19. Gauging Skills of Hospital Security Personnel: a Statistically-driven, Questionnaire-based Approach

    PubMed Central

    Rinkoo, Arvind Vashishta; Mishra, Shubhra; Rahesuddin; Nabi, Tauqeer; Chandra, Vidha; Chandra, Hem

    2013-01-01

    Objectives This study aims to gauge the technical and soft skills of hospital security personnel so as to enable prioritization of their training needs. Methodology A cross-sectional, questionnaire-based study was conducted in December 2011. Two separate predesigned and pretested questionnaires were used for gauging the soft skills and technical skills of the security personnel. Extensive statistical analysis, including multivariate analysis (Pillai-Bartlett trace along with multi-factorial ANOVA) and post-hoc tests (Bonferroni test), was applied. Results The 143 participants performed better on the soft skills front, with an average score of 6.43 and a standard deviation of 1.40. The average technical skills score was 5.09, with a standard deviation of 1.44. The study revealed a need for formal hands-on training, with greater emphasis on technical skills. Multivariate analysis of the available data further helped in identifying 20 security personnel who should be prioritized for soft skills training and a group of 36 who should receive maximum attention during technical skills training. Conclusion This statistically driven approach can be used as a prototype by healthcare delivery institutions worldwide, after situation-specific customizations, to identify the training needs of any category of healthcare staff. PMID:23559904
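    The Bonferroni post-hoc adjustment used in this methodology is simple to state in code. The sketch below shows the generic correction, not the study's actual analysis; the raw p-values are invented for illustration.

    ```python
    def bonferroni(p_values):
        """Bonferroni-adjusted p-values: multiply each raw p-value by the
        number of comparisons, capping at 1.0. This controls the
        family-wise error rate at the cost of statistical power."""
        m = len(p_values)
        return [min(1.0, p * m) for p in p_values]

    # Hypothetical raw p-values from three pairwise post-hoc comparisons:
    raw = [0.003, 0.020, 0.400]
    adjusted = bonferroni(raw)
    print(adjusted)  # ≈ [0.009, 0.06, 1.0]
    ```

    With three comparisons, a result must reach p < 0.0167 raw to remain significant at the conventional 0.05 level after adjustment.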

  20. Florida Atlantic University Work Plan Presentation for 2012-13 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2012

    2012-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  1. University of North Florida Work Plan Presentation for 2012-13 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2012

    2012-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  2. Florida State University Work Plan Presentation for 2013-14 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2013

    2013-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  3. Florida International University Work Plan Presentation for 2014-15 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2014

    2014-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  4. University of Central Florida Work Plan Presentation for 2012-13 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2012

    2012-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  5. Florida Gulf Coast University Work Plan Presentation for 2014-15 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2014

    2014-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  6. USF Sarasota-Manatee Work Plan Presentation for 2014-15 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2014

    2014-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  7. Florida Polytechnic University Work Plan Presentation for 2014-15 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2014

    2014-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  8. University of North Florida Work Plan Presentation for 2014-15 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2014

    2014-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  9. Florida International University Work Plan Presentation for 2012-13 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2012

    2012-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  10. University of West Florida Work Plan, 2013-2014

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2013

    2013-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new Strategic Plan 2012-2025 is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's Annual Accountability Report provides yearly tracking for how the System is…

  11. University of North Florida Work Plan Presentation for 2013-14 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2013

    2013-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  12. Florida Gulf Coast University Work Plan Presentation for 2012-13 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2012

    2012-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  13. Florida Polytechnic University Work Plan Presentation for 2013-14 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2013

    2013-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  14. University of West Florida Work Plan Presentation for 2012-13 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2012

    2012-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  15. Florida A&M University Work Plan Presentation for 2014-15 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2014

    2014-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  16. Florida Gulf Coast University Work Plan Presentation for 2013-14 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2013

    2013-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  17. Florida Atlantic University Work Plan Presentation for 2013-14 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2013

    2013-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  18. Florida A&M University Work Plan Presentation for 2013-14 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2014

    2014-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  19. Florida Atlantic University Work Plan Presentation for 2014-15 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2014

    2014-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  20. Florida State University Work Plan Presentation for 2014-15 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2014

    2014-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  1. University of Florida Work Plan Presentation for 2014-15 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2014

    2014-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  2. Florida International University Work Plan Presentation for 2013-14 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2013

    2013-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  3. University of Central Florida Work Plan Presentation for 2014-15 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2014

    2014-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  4. New College of Florida Work Plan Presentation for 2014-15 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2014

    2014-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  5. University of Florida Work Plan Presentation for 2013-14 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2013

    2013-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  6. New College of Florida Work Plan Presentation for 2013-14 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2013

    2013-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  7. Florida State University Work Plan Presentation for 2012-13 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2012

    2012-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  8. New College of Florida Work Plan Presentation for 2012-13 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2012

    2012-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  9. University of Florida Work Plan Presentation for 2012-13 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2012

    2012-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  10. University of Central Florida Work Plan Presentation for 2013-14 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2013

    2013-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  11. Ethanol accumulation during severe drought may signal tree vulnerability to detection and attack by bark beetles

    Treesearch

    Rick G. Kelsey; D. Gallego; F.J. Sánchez-Garcia; J.A. Pajares

    2014-01-01

    Tree mortality from temperature-driven drought is occurring in forests around the world, often in conjunction with bark beetle outbreaks when carbon allocation to tree defense declines. Physiological metrics for detecting stressed trees with enhanced vulnerability prior to bark beetle attacks remain elusive. Ethanol, water, monoterpene concentrations, and composition...

  12. University of West Florida Work Plan Presentation for 2014-15 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2014

    2014-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  13. Stream amphibians as metrics of ecosystem stress: a case study from California’s redwoods revisited

    Treesearch

    Hartwell H. Welsh Jr.; Adam K. Cummings; Garth R. Hodgson

    2017-01-01

    Highway construction of the Redwood National Park bypass resulted in a storm-driven accidental infusion of exposed sediments into pristine streams in Prairie Creek Redwoods State Park, California in October 1989. We evaluated impacts of this ecosystem stress on three amphibians, larval tailed frogs (Ascaphus truei), coastal giant salamanders (

  14. System Summary of University Annual Work Plans, 2014-15

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2014

    2014-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future; (1) The Board of Governors' new Strategic Plan 2012-2025 is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's Annual Accountability Report provides yearly tracking for how the System is…

  15. Let Them See It: A Project to Build Capacity by Raising Awareness of Teaching-Development Pathways

    ERIC Educational Resources Information Center

    Bolt, Susan; Fenn, Jody; Ohly, Christian

    2016-01-01

    In an ideal world, university teaching and research would be valued equally; however, this is not currently the case. The notion that university reputation should be judged predominantly by research metrics has been challenged by a global trend towards a demand-driven system that encourages widening participation, student choice and social…

  16. 2016 System Summary of University Work Plans. Revised

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2016

    2016-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' 2025 System Strategic Plan is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's Annual Accountability Report provides yearly tracking for how the System is progressing…

  17. Riemannian Metric Optimization on Surfaces (RMOS) for Intrinsic Brain Mapping in the Laplace-Beltrami Embedding Space

    PubMed Central

    Gahm, Jin Kyu; Shi, Yonggang

    2018-01-01

    Surface mapping methods play an important role in various brain imaging studies from tracking the maturation of adolescent brains to mapping gray matter atrophy patterns in Alzheimer’s disease. Popular surface mapping approaches based on spherical registration, however, have inherent numerical limitations when severe metric distortions are present during the spherical parameterization step. In this paper, we propose a novel computational framework for intrinsic surface mapping in the Laplace-Beltrami (LB) embedding space based on Riemannian metric optimization on surfaces (RMOS). Given a diffeomorphism between two surfaces, an isometry can be defined using the pullback metric, which in turn results in identical LB embeddings from the two surfaces. The proposed RMOS approach builds upon this mathematical foundation and achieves general feature-driven surface mapping in the LB embedding space by iteratively optimizing the Riemannian metric defined on the edges of triangular meshes. At the core of our framework is an optimization engine that converts an energy function for surface mapping into a distance measure in the LB embedding space, which can be effectively optimized using gradients of the LB eigen-system with respect to the Riemannian metrics. In the experimental results, we compare the RMOS algorithm with spherical registration using large-scale brain imaging data, and show that RMOS achieves superior performance in the prediction of hippocampal subfields and cortical gyral labels, and the holistic mapping of striatal surfaces for the construction of a striatal connectivity atlas from substantia nigra. PMID:29574399
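    The Laplace-Beltrami embedding at the heart of RMOS can be approximated, for intuition, by the eigendecomposition of a mesh's graph Laplacian. This is a deliberately simplified stand-in: RMOS optimizes Riemannian metrics on mesh edges with the cotangent-weighted Laplace-Beltrami operator, not the unweighted graph Laplacian sketched here.

    ```python
    import numpy as np

    def spectral_embedding(edges, n_vertices, k=2):
        """Embed a mesh graph using the first k non-constant eigenvectors
        of its (unweighted) graph Laplacian L = D - A. The smallest
        eigenvalue of L is always 0, with a constant eigenvector; the
        next eigenvectors give a smooth, intrinsic embedding in which
        nearby vertices on the surface land near each other."""
        A = np.zeros((n_vertices, n_vertices))
        for i, j in edges:
            A[i, j] = A[j, i] = 1.0
        L = np.diag(A.sum(axis=1)) - A
        eigvals, eigvecs = np.linalg.eigh(L)   # ascending eigenvalues
        return eigvals, eigvecs[:, 1:k + 1]    # skip the constant mode

    # A 4-cycle as a toy "mesh":
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    vals, emb = spectral_embedding(edges, 4)
    print(vals[0])  # ~0: the Laplacian always has a zero eigenvalue
    ```

    Reweighting the entries of A (the per-edge metric) changes the eigenvectors, which is the lever RMOS turns: it iterates on those weights until the two surfaces' embeddings agree.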

  18. Human-centric predictive model of task difficulty for human-in-the-loop control tasks

    PubMed Central

    Majewicz Fey, Ann

    2018-01-01

    Quantitatively measuring the difficulty of a manipulation task in human-in-the-loop control systems is ill-defined. Currently, systems are typically evaluated through task-specific performance measures and post-experiment user surveys; however, these methods do not capture the real-time experience of human users. In this study, we propose to analyze and predict the difficulty of a bivariate pointing task, with a haptic device interface, using human-centric measurement data in terms of cognition, physical effort, and motion kinematics. Noninvasive sensors were used to record the multimodal responses of 14 subjects performing the task. A data-driven approach for predicting task difficulty was implemented based on several task-independent metrics. We compare four possible models for predicting task difficulty to evaluate the roles of the various types of metrics: (I) a movement time model, (II) a fusion model using both physiological and kinematic metrics, (III) a model with only kinematic metrics, and (IV) a model with only physiological metrics. The results show significant correlation between task difficulty and the user sensorimotor response. The fusion model, integrating user physiology and motion kinematics, provided the best estimate of task difficulty (R2 = 0.927), followed by the model using only kinematic metrics (R2 = 0.921). Both models were better predictors of task difficulty than the movement time model (R2 = 0.847), derived from Fitts' law, a well-studied difficulty model for human psychomotor control. PMID:29621301
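    The model comparison in this abstract (fusion vs. kinematics-only) amounts to fitting nested linear regressions and comparing R². A synthetic sketch of that comparison follows; the feature names, coefficients, and data are invented, not the study's.

    ```python
    import numpy as np

    def r_squared(X, y):
        """Ordinary least squares fit (with intercept) and training R^2."""
        Xd = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
        resid = y - Xd @ beta
        return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

    rng = np.random.default_rng(0)
    n = 200
    kinematic = rng.normal(size=(n, 3))      # e.g. velocity, jerk, path length
    physiological = rng.normal(size=(n, 2))  # e.g. heart rate, skin conductance
    # Simulated difficulty depends on both feature groups plus noise:
    difficulty = (kinematic @ np.array([1.0, 0.5, 0.2])
                  + physiological @ np.array([0.8, 0.3])
                  + rng.normal(scale=0.5, size=n))

    r2_kin = r_squared(kinematic, difficulty)
    r2_fusion = r_squared(np.hstack([kinematic, physiological]), difficulty)
    print(r2_kin, r2_fusion)  # the fusion model fits at least as well
    ```

    On training data, the nested model can never out-fit the fusion model, so the interesting question (as in the paper) is how large the gap is and whether it survives on held-out subjects.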

  19. The MDE Diploma: First International Postgraduate Specialization in Model-Driven Engineering

    ERIC Educational Resources Information Center

    Cabot, Jordi; Tisi, Massimo

    2011-01-01

    Model-Driven Engineering (MDE) is changing the way we build, operate, and maintain our software-intensive systems. Several projects using MDE practices are reporting significant improvements in quality and performance but, to be able to handle these projects, software engineers need a set of technical and interpersonal skills that are currently…

  20. Pedagogy and Japanese Culture in a Distance Learning Environment

    ERIC Educational Resources Information Center

    Anderson, Bodi O.

    2012-01-01

    Current theoretical models of distance learning are driven by two impetuses: a technical CMC element, and a pedagogical foundation rooted strongly in the Western world, and driven by social constructivism. By and large these models have been exported throughout the world as-is. However, previous research has hinted at potential problems with these…

  1. Uniform data system standardizes technical computations and the purchasing of commercially important gases

    NASA Technical Reports Server (NTRS)

    Johnson, V. J.; Mc Carty, R. D.; Roder, H. M.

    1970-01-01

    Integrated tables of pressure, volume, and temperature for the saturated liquid, from the triple point to the critical point of the gases, have been developed. Tables include definition of saturated liquid curve. Values are presented in metric and practical units. Advantages of the new tables are discussed.

  2. Comparing Two CBM Maze Selection Tools: Considering Scoring and Interpretive Metrics for Universal Screening

    ERIC Educational Resources Information Center

    Ford, Jeremy W.; Missall, Kristen N.; Hosp, John L.; Kuhle, Jennifer L.

    2016-01-01

    Advances in maze selection curriculum-based measurement have led to several published tools with technical information for interpretation (e.g., norms, benchmarks, cut-scores, classification accuracy) that have increased their usefulness for universal screening. A range of scoring practices have emerged for evaluating student performance on maze…

  3. FRAGSTATS: spatial pattern analysis program for quantifying landscape structure.

    Treesearch

    Kevin McGarigal; Barbara J. Marks

    1995-01-01

    This report describes a program, FRAGSTATS, developed to quantify landscape structure. FRAGSTATS offers a comprehensive choice of landscape metrics and was designed to be as versatile as possible. The program is almost completely automated and thus requires little technical training. Two separate versions of FRAGSTATS exist: one for vector images and one for raster...

  4. 78 FR 76289 - Request for Information To Gather Technical Expertise Pertaining to Data Elements, Metrics, Data...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-17

    ... information. To assist us in making a determination on your request, we encourage you to identify any specific... consumer decision-making. Organizations that have developed, or are developing, ratings systems for.... The Department is interested in a PIRS that takes into account information important to the Federal...

  5. INVESTIGATION OF THE POTENTIAL RELEASE OF POLYCHLORINATED DIOXINS AND FURANS FROM PCP-TREATED UTILITY POLES

    EPA Science Inventory

    The United States (US) Environmental Protection Agency (EPA) estimated that the use of technical grade pentachlorophenol (PCP) between 1970 and 1995 to treat wood was approximately 400,000 metric tons in the US, and that between 4,800 and 36,000 grams of 2,3,7,8-tetrachlorodiben...

  6. Complete to Compete: Common College Completion Metrics. Technical Guide

    ERIC Educational Resources Information Center

    Reyna, Ryan; Reindl, Travis; Witham, Keith; Stanley, Jeff

    2010-01-01

    Improved college completion rates are critical to the future of the United States, and states must have better data to understand the nature of the challenges they confront or target areas for policy change. The 2010-2011 National Governors Association (NGA) Chair's initiative, "Complete to Compete", recommends that all states collect data from…

  7. The Round Table on Computer Performance Metrics for Export Control: Discussions and Results

    DTIC Science & Technology

    1997-12-01

    eligibility, use the CTP parameter to the exclusion of other technical parameters for computers classified under ECCN 4A003.a, .b and .c, except of...parameters specified as Missile Technology (MT) concerns or 4A003.e (equipment performing analog-to-digital conversions exceeding the limits in ECCN

  8. Simulation for the training of human performance and technical skills: the intersection of how we will train health care professionals in the future.

    PubMed

    Hamman, William R; Beaubien, Jeffrey M; Beaudin-Seiler, Beth M

    2009-12-01

    The aims of this research are to begin to understand health care teams in their operational environment, establish metrics of performance for these teams, and validate a series of scenarios in simulation that elicit team and technical skills. The focus is on defining the team model that will function in the operational environment in which health care professionals work. Simulations were performed across the United States in 70- to 1000-bed hospitals. Multidisciplinary health care teams analyzed more than 300 hours of videos of health care professionals performing simulations of team-based medical care in several different disciplines. Raters were trained to enhance inter-rater reliability. The study validated event sets that trigger team dynamics and established metrics for team-based care. Team skills were identified and modified using simulation scenarios that employed the event-set-design process. Specific skills (technical and team) were identified by criticality measurement and task analysis methodology. In situ simulation, which includes a purposeful and Socratic Method of debriefing, is a powerful intervention that can overcome inertia found in clinician behavior and latent environmental systems that present a challenge to quality and patient safety. In situ simulation can increase awareness of risks, personalize the risks, and encourage the reflection, effort, and attention needed to make changes to both behaviors and to systems.

  9. Multimedia

    NASA Technical Reports Server (NTRS)

    Kaye, Karen

    1993-01-01

    Multimedia initiative objectives for the NASA Scientific and Technical Information (STI) program are described. A multimedia classification scheme was developed and the types of non-print media currently in use are inventoried. The NASA STI Program multimedia initiative is driven by a changing user population and technical requirements in the areas of publications, dissemination, and user and management support.

  10. Measuring School Effectiveness: Technical Report on the 2011 Value-Added Model. Technical Report

    ERIC Educational Resources Information Center

    National Center on Scaling Up Effective Schools, 2014

    2014-01-01

    High school dropout, enrollment, and graduation rates are important indicators of students' college and career readiness, which in turn significantly impact both individual income levels and the overall knowledge-driven economy. Despite the long-term benefits of a high school education, much of the current literature on raising school…

  11. Technical and Further Education (TAFE) Head Teachers: Their Changing Role

    ERIC Educational Resources Information Center

    Rice, Ann

    2005-01-01

    Change in Technical and Further Education (TAFE) in Australia is being driven by government initiatives that must be operationalised at the college level. Head teachers are the frontline managers in these colleges, who have to ensure their sections are responsive to the changes while also meeting the educational requirements of their traditional…

  12. Developing Knowledge Creating Technical Education Institutions through the Voice of Teachers: Content Analysis Approach

    ERIC Educational Resources Information Center

    Song, Ji Hoon; Kim, Hye Kyoung; Park, Sunyoung; Bae, Sang Hoon

    2014-01-01

    The purpose of this study was to develop an empirical data-driven model for a knowledge creation school system in career technical education (CTE) by identifying supportive and hindering factors influencing knowledge creation practices in CTE schools. Nonaka and colleagues' (Nonaka & Konno, 1998; Nonaka & Takeuchi, 1995) knowledge…

  13. Development and Application of an Integrated Approach toward NASA Airspace Systems Research

    NASA Technical Reports Server (NTRS)

    Barhydt, Richard; Fong, Robert K.; Abramson, Paul D.; Koenke, Ed

    2008-01-01

    The National Aeronautics and Space Administration's (NASA) Airspace Systems Program is contributing air traffic management research in support of the 2025 Next Generation Air Transportation System (NextGen). Contributions support research and development needs provided by the interagency Joint Planning and Development Office (JPDO). These needs generally call for integrated technical solutions that improve system-level performance and work effectively across multiple domains and planning time horizons. In response, the Airspace Systems Program is pursuing an integrated research approach and has adapted systems engineering best practices for application in a research environment. Systems engineering methods aim to enable researchers to methodically compare different technical approaches, consider system-level performance, and develop compatible solutions. Systems engineering activities are performed iteratively as the research matures. Products of this approach include a demand and needs analysis, system-level descriptions focusing on NASA research contributions, system assessment and design studies, and common system-level metrics, scenarios, and assumptions. Results from the first systems engineering iteration include a preliminary demand and needs analysis; a functional modeling tool; and initial system-level metrics, scenario characteristics, and assumptions. Demand and needs analysis results suggest that several advanced concepts can mitigate demand/capacity imbalances for NextGen, but fall short of enabling three-times current-day capacity at the nation's busiest airports and airspace. Current activities are focusing on standardizing metrics, scenarios, and assumptions, conducting system-level performance assessments of integrated research solutions, and exploring key system design interfaces.

  14. The Medical Education Partnership Initiative (MEPI), a collaborative paradigm for institutional and human resources capacity building between high- and low- and middle-income countries: the Mozambique experience

    PubMed Central

    Noormahomed, Emilia Virginia; Carrilho, Carla; Ismail, Mamudo; Noormahomed, Sérgio; Nguenha, Alcido; Benson, Constance A.; Mocumbi, Ana Olga; Schooley, Robert T.

    2017-01-01

    ABSTRACT Background: Collaborations among researchers based in lower and middle income countries (LMICs) and high income countries (HICs) have made major discoveries related to diseases disproportionately affecting LMICs and have been vital to the development of research communities in LMICs. Such collaborations have generally been scientifically and structurally driven by HICs. Objectives: In this report we outline a paradigm shift in collaboration, exemplified by the Medical Education Partnership Initiative (MEPI), in which the formulation of priorities and administrative infrastructure reside in the LMIC. Methods: This descriptive report outlines the critical features of the MEPI partnership. Results: In the MEPI, LMIC program partners translate broad program goals and define metrics into priorities that are tailored to local conditions. Program funds flow to a LMIC-based leadership group that contracts with peers from HICs to provide technical and scientific advice and consultation in a 'reverse funds flow' model. Emphasis is also placed on strengthening administrative capacity within LMIC institutions. A rigorous monitoring and evaluation process modifies program priorities on the basis of evolving opportunities to maximize program impact. Conclusions: Vesting LMIC partners with the responsibility for program leadership, and building administrative and fiscal capacity in LMIC institutions substantially enhances program relevance, impact and sustainability. PMID:28452653

  15. Transradial access: lessons learned from cardiology.

    PubMed

    Snelling, Brian M; Sur, Samir; Shah, Sumedh Subodh; Marlow, Megan M; Cohen, Mauricio G; Peterson, Eric C

    2018-05-01

    Innovations in interventional cardiology historically predate those in neuro-intervention. As such, studying trends in interventional cardiology can be useful in exploring avenues to optimise neuro-interventional techniques. One such cardiology innovation has been the steady conversion of arterial puncture sites from transfemoral access (TFA) to transradial access (TRA), a paradigm shift supported by safety benefits for patients. While neuro-intervention has unique anatomical challenges, the access itself is identical. As such, examining the extensive cardiology literature on the radial approach has the potential to offer valuable lessons for the neuro-interventionalist audience who may be unfamiliar with this body of work. Therefore, we present here a report, particularly for neuro-interventionalists, regarding the best practices for TRA by reviewing the relevant cardiology literature. We focused our review on the data most relevant to our audience, namely that surrounding the access itself. By reviewing the cardiology literature on metrics such as safety profiles, cost and patient satisfaction differences between TFA and TRA, as well as examining the technical nuances of the procedure and post-procedural care, we hope to give physicians treating complex cerebrovascular disease a broader data-driven understanding of TRA. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  16. Bimanual Psychomotor Performance in Neurosurgical Resident Applicants Assessed Using NeuroTouch, a Virtual Reality Simulator.

    PubMed

    Winkler-Schwartz, Alexander; Bajunaid, Khalid; Mullah, Muhammad A S; Marwa, Ibrahim; Alotaibi, Fahad E; Fares, Jawad; Baggiani, Marta; Azarnoush, Hamed; Zharni, Gmaan Al; Christie, Sommer; Sabbagh, Abdulrahman J; Werthner, Penny; Del Maestro, Rolando F

    Current selection methods for neurosurgical residents fail to include objective measurements of bimanual psychomotor performance. Advancements in computer-based simulation provide opportunities to assess cognitive and psychomotor skills in surgically naive populations during complex simulated neurosurgical tasks in risk-free environments. This pilot study was designed to answer 3 questions: (1) What are the differences in bimanual psychomotor performance among neurosurgical residency applicants using NeuroTouch? (2) Are there exceptionally skilled medical students in the applicant cohort? and (3) Is there an influence of previous surgical exposure on surgical performance? Participants were instructed to remove 3 simulated brain tumors with identical visual appearance, stiffness, and random bleeding points. Validated tier 1, tier 2, and advanced tier 2 metrics were used to assess bimanual psychomotor performance. Demographic data included weeks of neurosurgical elective and prior operative exposure. This pilot study was carried out at the McGill Neurosurgical Simulation Research and Training Center immediately following neurosurgical residency interviews at McGill University, Montreal, Canada. All 17 medical students interviewed were asked to participate, of which 16 agreed. Performances were clustered in definable top, middle, and bottom groups with significant differences for all metrics. Increased time spent playing music, increased applicant self-evaluated technical skills, high self-ratings of confidence, and increased skin closures statistically influenced performance on univariate analysis. A trend for both self-rated increased operating room confidence and increased weeks of neurosurgical exposure to increased blood loss was seen in multivariate analysis. Simulation technology identifies neurosurgical residency applicants with differing levels of technical ability. 
These results provide baseline information for longitudinal studies on the acquisition, development, and maintenance of psychomotor skills. Customized technical-skills training programs that maximize individual resident bimanual psychomotor training, dependent on continuously updated and validated metrics from virtual reality simulation studies, should be explored. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  17. Accomplishments and challenges of surgical simulation.

    PubMed

    Satava, R M

    2001-03-01

    For nearly a decade, advanced computer technologies have created extraordinary educational tools using three-dimensional (3D) visualization and virtual reality. Pioneering efforts in surgical simulation with these tools have resulted in a first generation of simulators for surgical technical skills. Accomplishments include simulations with 3D models of anatomy for practice of surgical tasks, initial assessment of student performance in technical skills, and awareness by professional societies of potential in surgical education and certification. However, enormous challenges remain, which include improvement of technical fidelity, standardization of accurate metrics for performance evaluation, integration of simulators into a robust educational curriculum, stringent evaluation of simulators for effectiveness and value added to surgical training, determination of simulation application to certification of surgical technical skills, and a business model to implement and disseminate simulation successfully throughout the medical education community. This review looks at the historical progress of surgical simulators, their accomplishments, and the challenges that remain.

  18. DEVELOPMENT OF METRICS FOR PROTOCOLS AND OTHER TECHNICAL PRODUCTS.

    PubMed

    Veiga, Daniela Francescato; Ferreira, Lydia Masako

    2015-01-01

    To develop a proposal for metrics for protocols and other technical products to be applied in assessing the Postgraduate Programs of Medicine III - Capes. The 2013 area documents of all 48 Capes areas were read. From an analysis of the criteria used by the areas at the 2013 Triennial Assessment, a proposal for metrics for protocols and other technical products was developed to be applied in assessing the Postgraduate Programs of Medicine III. This proposal was based on the criteria of the Biological Sciences I and Interdisciplinary areas. Only seven areas had described a scoring system for technical products, and the products considered and their scoring varied widely. Because a wide range of different technical products could be considered relevant, and these would not be scored unless previously specified, a proposal for metrics was developed for Medicine III in which five specific criteria are analyzed for each product: Demand, Relevance/Impact, Scope, Complexity and Adherence to the Program. Based on these criteria, each product can receive 10 to 100 points. This proposal can be applied to the item Intellectual Production of the evaluation form, in the subsection "Technical production, patents and other relevant production". A program will be scored as Very Good when it reaches a mean ≥150 points/permanent professor/quadrennium; Good, mean between 100 and 149 points; Regular, mean between 60 and 99 points; Weak, mean between 30 and 59 points; and Insufficient, up to 29 points/permanent professor/quadrennium.
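
    The proposed rating bands reduce to a simple threshold lookup. A minimal sketch of the mean-points-to-rating mapping described above (the function name is illustrative):

```python
def medicine_iii_rating(mean_points: float) -> str:
    """Map mean technical-production points per permanent professor
    per quadrennium to the proposed Medicine III rating bands."""
    if mean_points >= 150:
        return "Very Good"
    if mean_points >= 100:
        return "Good"
    if mean_points >= 60:
        return "Regular"
    if mean_points >= 30:
        return "Weak"
    return "Insufficient"

for pts in (180, 120, 75, 40, 20):
    print(pts, "->", medicine_iii_rating(pts))
```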

  19. A Decision Fusion Framework for Treatment Recommendation Systems.

    PubMed

    Mei, Jing; Liu, Haifeng; Li, Xiang; Xie, Guotong; Yu, Yiqin

    2015-01-01

    Treatment recommendation is a nontrivial task: it requires not only domain knowledge from evidence-based medicine, but also data insights from descriptive, predictive and prescriptive analysis. A single treatment recommendation system is usually trained or modeled with a limited source (in size or quality). This paper proposes a decision fusion framework, combining both knowledge-driven and data-driven decision engines for treatment recommendation. End users (e.g. using the clinician workstation or mobile apps) could have a comprehensive view of the various engines' opinions, as well as the final decision after fusion. For implementation, we leverage several well-known fusion algorithms, such as decision templates and meta-classifiers (e.g., logistic regression and SVM). Using an outcome-driven evaluation metric, we compare the fusion engine with the base engines, and our experimental results show that decision fusion is a promising way toward more valuable treatment recommendations.
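
    The fusion idea can be sketched with a fixed weighted average over the engines' score vectors; the engine names, weights, and scores below are illustrative only, and the paper's decision templates and meta-classifiers are learned from data rather than fixed:

```python
import numpy as np

# Hypothetical engine outputs: each engine scores three candidate treatments
# (probabilities of benefit, each vector summing to 1).
engine_scores = {
    "knowledge_driven": np.array([0.70, 0.20, 0.10]),
    "data_driven_lr":   np.array([0.55, 0.35, 0.10]),
    "data_driven_svm":  np.array([0.60, 0.25, 0.15]),
}
# Weights could be tuned against an outcome-driven metric; fixed here.
weights = {"knowledge_driven": 0.4, "data_driven_lr": 0.3, "data_driven_svm": 0.3}

def fuse(scores, weights):
    """Weighted-average fusion of the engines' score vectors."""
    total = sum(weights[name] * s for name, s in scores.items())
    return total / sum(weights.values())

fused = fuse(engine_scores, weights)
recommended = int(np.argmax(fused))
print("fused scores:", np.round(fused, 3), "-> treatment", recommended)
```

    An end user would see both the per-engine vectors and the fused vector, matching the "comprehensive view" the abstract describes.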

  20. Broad phonetic class definition driven by phone confusions

    NASA Astrophysics Data System (ADS)

    Lopes, Carla; Perdigão, Fernando

    2012-12-01

    Intermediate representations between the speech signal and phones may be used to improve discrimination among phones that are often confused. These representations are usually found according to broad phonetic classes, which are defined by a phonetician. This article proposes an alternative data-driven method to generate these classes. Phone confusion information from the analysis of the output of a phone recognition system is used to find clusters at high risk of mutual confusion. A metric is defined to compute the distance between phones. The results, using TIMIT data, show that the proposed confusion-driven phone clustering method is an attractive alternative to the approaches based on human knowledge. A hierarchical classification structure to improve phone recognition is also proposed using a discriminative weight training method. Experiments show improvements in phone recognition on the TIMIT database compared to a baseline system.
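
    The confusion-driven clustering idea can be sketched as follows: symmetrize a phone confusion matrix into a distance, then agglomeratively merge the phones at highest risk of mutual confusion. The confusion counts below are toy values, not the paper's TIMIT results:

```python
import numpy as np

# Toy phone confusion counts (rows: true phone, cols: recognized phone).
phones = ["p", "b", "t", "d", "s", "z"]
C = np.array([
    [90,  8,  1,  1,  0,  0],
    [ 7, 88,  1,  4,  0,  0],
    [ 2,  1, 89,  7,  1,  0],
    [ 1,  5,  6, 87,  0,  1],
    [ 0,  0,  1,  0, 92,  7],
    [ 0,  0,  0,  1,  8, 91],
], dtype=float)

# Symmetric confusability: how often i and j are mistaken for each other,
# normalized by row totals; distance = 1 - confusability.
P = C / C.sum(axis=1, keepdims=True)
conf = (P + P.T) / 2
np.fill_diagonal(conf, 0)
dist = 1.0 - conf

def single_linkage(dist, n_clusters):
    """Naive agglomerative clustering: repeatedly merge the two clusters
    whose closest members are most confusable."""
    clusters = [{i} for i in range(len(dist))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dist[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] |= clusters.pop(b)
    return clusters

for cluster in single_linkage(dist, 3):
    print(sorted(phones[i] for i in cluster))
```

    On this toy matrix the merges recover the voiced/voiceless pairs (p/b, t/d, s/z) as broad classes, purely from confusion data rather than phonetician-defined features.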

  1. Substrate-Driven Mapping of the Degradome by Comparison of Sequence Logos

    PubMed Central

    Fuchs, Julian E.; von Grafenstein, Susanne; Huber, Roland G.; Kramer, Christian; Liedl, Klaus R.

    2013-01-01

    Sequence logos are frequently used to illustrate substrate preferences and specificity of proteases. Here, we employed the compiled substrates of the MEROPS database to introduce a novel metric for comparison of protease substrate preferences. The constructed similarity matrix of 62 proteases can be used to intuitively visualize similarities in protease substrate readout via principal component analysis and construction of protease specificity trees. Since our new metric is solely based on substrate data, we can engraft the protease tree including proteolytic enzymes of different evolutionary origin. Thereby, our analyses confirm pronounced overlaps in substrate recognition not only between proteases closely related on sequence basis but also between proteolytic enzymes of different evolutionary origin and catalytic type. To illustrate the applicability of our approach we analyze the distribution of targets of small molecules from the ChEMBL database in our substrate-based protease specificity trees. We observe a striking clustering of annotated targets in tree branches even though these grouped targets do not necessarily share similarity on protein sequence level. This highlights the value and applicability of knowledge acquired from peptide substrates in drug design of small molecules, e.g., for the prediction of off-target effects or drug repurposing. Consequently, our similarity metric allows to map the degradome and its associated drug target network via comparison of known substrate peptides. The substrate-driven view of protein-protein interfaces is not limited to the field of proteases but can be applied to any target class where a sufficient amount of known substrate data is available. PMID:24244149
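
    The similarity-matrix-to-2D-map step can be sketched with plain PCA on a small hypothetical similarity matrix; the protease names and values below are illustrative stand-ins, not the MEROPS-derived matrix:

```python
import numpy as np

# Illustrative substrate-preference similarity matrix for four proteases.
proteases = ["trypsin", "thrombin", "caspase-3", "caspase-7"]
S = np.array([
    [1.00, 0.80, 0.10, 0.15],
    [0.80, 1.00, 0.12, 0.10],
    [0.10, 0.12, 1.00, 0.85],
    [0.15, 0.10, 0.85, 1.00],
])

# PCA on the similarity profiles: center the rows, eigendecompose the
# covariance, and project onto the top two principal components.
X = S - S.mean(axis=0)
cov = X.T @ X / (len(S) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
coords = X @ eigvecs[:, order[:2]]   # 2-D map of substrate readout

for name, (pc1, pc2) in zip(proteases, coords):
    print(f"{name}: PC1={pc1:+.2f} PC2={pc2:+.2f}")
```

    Proteases with overlapping substrate readout land close together in the 2-D map regardless of sequence-level relatedness, which is the property the specificity trees build on.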

  2. University of South Florida St. Petersburg Work Plan Presentation for 2013-14 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2013

    2013-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  3. University of South Florida--System Work Plan Presentation for 2012-13 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2012

    2012-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  4. University of South Florida Sarasota-Manatee Work Plan Presentation for 2013-14 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2013

    2013-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  5. University of South Florida Tampa Work Plan Presentation for 2013-14 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2013

    2013-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  6. University of South Florida System Work Plan Presentation for 2014-15 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2014

    2014-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  7. University of South Florida St. Petersburg Work Plan Presentation for 2014-15 Board of Governors Review

    ERIC Educational Resources Information Center

    Board of Governors, State University System of Florida, 2014

    2014-01-01

    The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…

  8. The Progression of Podcasting/Vodcasting in a Technical Physics Class

    NASA Astrophysics Data System (ADS)

    Glanville, Y. J.

    2010-11-01

    Technology such as Microsoft PowerPoint presentations, clickers, podcasting, and learning management suites is becoming prevalent in classrooms. Instructors are using these media in both large lecture hall settings and small classrooms with just a handful of students. Traditionally, each of these media is instructor driven. For instance, podcasting (audio recordings) provided my technical physics course with supplemental notes to accompany a traditional algebra-based physics lecture. Podcasting is an ideal tool for this mode of instruction, but podcasting/vodcasting is also an ideal technique for student projects and student-driven learning. I present here the various podcasting/vodcasting projects my students and I have undertaken over the last few years.

  9. Data-Driven Learning: Taking the Computer out of the Equation

    ERIC Educational Resources Information Center

    Boulton, Alex

    2010-01-01

    Despite considerable research interest, data-driven learning (DDL) has not become part of mainstream teaching practice. It may be that technical aspects are too daunting for teachers and students, but there seems to be no reason why DDL in its early stages should not eliminate the computer from the equation by using prepared materials on…

  10. Health impact metrics for air pollution management strategies

    PubMed Central

    Martenies, Sheena E.; Wilkins, Donele; Batterman, Stuart A.

    2015-01-01

    Health impact assessments (HIAs) inform policy and decision making by providing information regarding future health concerns, and quantitative HIAs now are being used for local and urban-scale projects. HIA results can be expressed using a variety of metrics that differ in meaningful ways, and guidance is lacking with respect to best practices for the development and use of HIA metrics. This study reviews HIA metrics pertaining to air quality management and presents evaluative criteria for their selection and use. These are illustrated in a case study where PM2.5 concentrations are lowered from 10 to 8 µg/m3 in an urban area of 1.8 million people. Health impact functions are used to estimate the number of premature deaths, unscheduled hospitalizations and other morbidity outcomes. The most common metric in recent quantitative HIAs has been the number of cases of adverse outcomes avoided. Other metrics include time-based measures, e.g., disability-adjusted life years (DALYs), monetized impacts, functional-unit based measures, e.g., benefits per ton of emissions reduced, and other economic indicators, e.g., cost-benefit ratios. These metrics are evaluated by considering their comprehensiveness, the spatial and temporal resolution of the analysis, how equity considerations are facilitated, and the analysis and presentation of uncertainty. In the case study, the greatest number of avoided cases occurs for low severity morbidity outcomes, e.g., asthma exacerbations (n=28,000) and minor-restricted activity days (n=37,000); while DALYs and monetized impacts are driven by the severity, duration and value assigned to a relatively low number of premature deaths (n=190 to 230 per year). The selection of appropriate metrics depends on the problem context and boundaries, the severity of impacts, and community values regarding health. 
The number of avoided cases provides an estimate of the number of people affected, and monetized impacts facilitate additional economic analyses useful to policy analysis. DALYs are commonly used as an aggregate measure of health impacts and can be used to compare impacts across studies. Benefits per ton metrics may be appropriate when changes in emissions rates can be estimated. To address community concerns and HIA objectives, a combination of metrics is suggested. PMID:26372694
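
    The avoided-cases arithmetic in such HIAs typically follows a log-linear health impact function. A sketch for the PM2.5 case study's 10 to 8 µg/m³ reduction, using illustrative baseline rate and concentration-response coefficients (these are assumptions, not the study's actual values):

```python
import math

def avoided_cases(baseline_rate, beta, delta_c, population):
    """Log-linear health impact function commonly used in air-quality HIAs:
    avoided cases = y0 * (1 - exp(-beta * delta_C)) * population."""
    return baseline_rate * (1.0 - math.exp(-beta * delta_c)) * population

population = 1_800_000
delta_c = 10 - 8   # ug/m3 PM2.5 reduction, as in the case study
# baseline_rate: deaths per person-year; beta: per ug/m3 (both illustrative)
deaths_avoided = avoided_cases(baseline_rate=0.008, beta=0.006,
                               delta_c=delta_c, population=population)
print(f"premature deaths avoided per year: {deaths_avoided:.0f}")
```

    The same function, fed with outcome-specific baseline rates and coefficients, yields the counts that time-based metrics (DALYs) and monetized metrics then weight by severity, duration, and value.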

  11. Quantification of Behavioral Stereotypy in Flies

    NASA Astrophysics Data System (ADS)

    Manley, Jason; Berman, Gordon; Shaevitz, Joshua

    A commonly accepted assumption in the study of behavior is that an organism's behavioral repertoire can be represented by a relatively small set of stereotyped actions. Here, "stereotypy" is defined as a measure of the similarity of repetitions of a behavior. Our group utilizes data-driven analyses on videos of ground-based Drosophila to organize the set of spontaneous behaviors into a two-dimensional map, or behavioral space. We utilize this framework to define a metric for behavioral stereotypy. This measure quantifies the variance in a given behavior's periodic trajectory through a space representing its postural degrees of freedom. This newly developed behavioral metric has confirmed a high degree of stereotypy among most behaviors, and we correlate stereotypy with various physiological effects.

  12. Riemannian metric optimization on surfaces (RMOS) for intrinsic brain mapping in the Laplace-Beltrami embedding space.

    PubMed

    Gahm, Jin Kyu; Shi, Yonggang

    2018-05-01

    Surface mapping methods play an important role in various brain imaging studies from tracking the maturation of adolescent brains to mapping gray matter atrophy patterns in Alzheimer's disease. Popular surface mapping approaches based on spherical registration, however, have inherent numerical limitations when severe metric distortions are present during the spherical parameterization step. In this paper, we propose a novel computational framework for intrinsic surface mapping in the Laplace-Beltrami (LB) embedding space based on Riemannian metric optimization on surfaces (RMOS). Given a diffeomorphism between two surfaces, an isometry can be defined using the pullback metric, which in turn results in identical LB embeddings from the two surfaces. The proposed RMOS approach builds upon this mathematical foundation and achieves general feature-driven surface mapping in the LB embedding space by iteratively optimizing the Riemannian metric defined on the edges of triangular meshes. At the core of our framework is an optimization engine that converts an energy function for surface mapping into a distance measure in the LB embedding space, which can be effectively optimized using gradients of the LB eigen-system with respect to the Riemannian metrics. In the experimental results, we compare the RMOS algorithm with spherical registration using large-scale brain imaging data, and show that RMOS achieves superior performance in the prediction of hippocampal subfields and cortical gyral labels, and the holistic mapping of striatal surfaces for the construction of a striatal connectivity atlas from substantia nigra. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. 77 FR 54648 - Seventh Meeting: RTCA NextGen Advisory Committee (NAC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-05

    ...' license/State-issued ID Card Number and State of Issuance Company Phone number contact Non-U.S. Citizens... can be used for NextGen Metrics Data Sources for Measuring NextGen Fuel Impact A discussion of a preliminary report on a critical data source to track and analyze the impact of NextGen Non-Technical Barriers...

  14. Technical interventions to increase adenoma detection rate in colonoscopy.

    PubMed

    Rondonotti, Emanuele; Andrealli, Alida; Amato, Arnaldo; Paggi, Silvia; Conti, Clara Benedetta; Spinzi, Giancarlo; Radaelli, Franco

    2016-12-01

    Adenoma detection rate (ADR) is the most robust colonoscopy quality metric and clinical studies have adopted it as the ideal method to assess the impact of technical interventions. Areas covered: We reviewed papers focusing on the impact of colonoscopy technical issues on ADR, including withdrawal time and technique, second evaluation of the right colon, patient positional changes, gastrointestinal assistant participation during colonoscopy, water-aided technique, optimization of bowel preparation and antispasmodic administration. Expert commentary: Overall, technical interventions are inexpensive, available worldwide and easy to implement. Some of them, such as the adoption of split dose regimen and slow scope withdrawal to allow a careful inspection, have been demonstrated to significantly improve ADR. Emerging data support the use of water-exchange colonoscopy. According to published studies, other technical interventions seem to provide only marginal benefit to ADR. Unfortunately, the available evidence has methodological limitations, such as small sample sizes, the inclusion of expert endoscopists only and the evaluation of single technical interventions. Additionally, larger studies are needed to clarify whether these interventions might have a higher benefit on low adenoma detectors and whether the implementation of a bundle of them, instead of a single technical maneuver, might have a greater impact on ADR.
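ADR itself is a simple proportion, and the small-sample concern raised in the commentary can be made concrete by attaching a confidence interval to it. A hedged sketch with hypothetical counts, using a Wilson score interval:

```python
import math

def adenoma_detection_rate(with_adenoma, total, z=1.96):
    """ADR = colonoscopies with >= 1 adenoma / total colonoscopies,
    with a Wilson score interval to reflect sample-size uncertainty."""
    p = with_adenoma / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return p, (center - half, center + half)

# Hypothetical endoscopist: 62 adenoma-positive exams out of 200.
adr, (lo, hi) = adenoma_detection_rate(62, 200)
```

The width of the interval is what makes single-intervention studies with small samples hard to interpret.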

  15. The Devil is in the Details: Using X-Ray Computed Tomography to Develop Accurate 3D Grain Characteristics and Bed Structure Metrics for Gravel Bed Rivers

    NASA Astrophysics Data System (ADS)

    Voepel, H.; Hodge, R. A.; Leyland, J.; Sear, D. A.; Ahmed, S. I.

    2014-12-01

Uncertainty for bedload estimates in gravel bed rivers is largely driven by our inability to characterize the arrangement and orientation of the sediment grains within the bed. The characteristics of the surface structure are produced by the water working of grains, which leads to structural differences in bedforms through differential patterns of grain sorting, packing, imbrication, mortaring and degree of bed armoring. Until recently the technical and logistical difficulties of characterizing the arrangement of sediment in 3D have prohibited a full understanding of how grains interact with stream flow and the feedback mechanisms that exist. Micro-focus X-ray CT has been used for non-destructive 3D imaging of grains within a series of intact sections of river bed taken from key morphological units (see Figure 1). Volume, center of mass, points of contact, protrusion and spatial orientation of individual surface grains are derived from these 3D images, which, in turn, facilitate estimates of 3D static force properties at the grain-scale such as pivoting angles, buoyancy and gravity forces, and grain exposure. By aggregating representative samples of grain-scale properties of localized interacting sediment into overall metrics, we can compare and contrast bed stability at a macro-scale with respect to stream bed morphology. Understanding differences in bed stability through representative metrics derived at the grain-scale will ultimately lead to improved bedload estimates with reduced uncertainty and increased understanding of the effects of grain-scale properties on channel morphology. Figure 1. CT scans of a water-worked gravel-filled pot: (a) 3D rendered scan showing the outer mesh; (b) the same pot with the mesh removed; (c) vertical change in porosity of the gravels sampled in 5 mm volumes (values are typical of those measured in the field and lab); (d) 2D slice through the gravels at 20% depth from the surface (porosity = 0.35) and (e) at 75% depth (porosity = 0.24), showing the presence of fine sediments 'mortaring' the larger gravels; (f) a longitudinal slice from which pivot angle measurements can be determined for contact points between particles; (g) example of two-particle extraction from the CT scan showing how particle contact areas can be measured (dark area).

  16. Metrics for Energy Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul E. Roege; Zachary A. Collier; James Mancillas

    2014-09-01

Energy forms the backbone of any advanced society and constitutes an essential prerequisite for economic growth, social order and national defense. However, there is an Achilles' heel in today's energy and technology relationship: namely, a precarious intimacy between energy and the fiscal, social, and technical systems it supports. Recently, widespread and persistent disruptions in energy systems have highlighted the extent of this dependence and the vulnerability of increasingly optimized systems to changing conditions. Resilience is an emerging concept that offers to reconcile considerations of performance under dynamic environments and across multiple time frames by supplementing traditionally static system performance measures to consider behaviors under changing conditions and complex interactions among physical, information and human domains. This paper identifies metrics useful to implement guidance for energy-related planning, design, investment, and operation. Recommendations are presented using a matrix format to provide a structured and comprehensive framework of metrics relevant to a system's energy resilience. The study synthesizes previously proposed metrics and emergent resilience literature to provide a multi-dimensional model intended for use by leaders and practitioners as they transform our energy posture from one of stasis and reaction to one that is proactive and fosters sustainable growth.

  17. Uncertainty quantification metrics for whole product life cycle cost estimates in aerospace innovation

    NASA Astrophysics Data System (ADS)

    Schwabe, O.; Shehab, E.; Erkoyuncu, J.

    2015-08-01

The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates based on a literature review, an evaluation of publicly funded projects such as those within the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, the ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art) and those suggested for future exploration (state-of-future). Insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and number of whole product life cycle phases can hereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of guidance grounded in theory for the selection of uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem.
An innovative uncertainty quantification framework consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model is put forward to address this research gap as the basis for future work in this field.

  18. Security Vulnerability and Patch Management in Electric Utilities: A Data-Driven Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Qinghua; Zhang, Fengli

This paper explores a real security vulnerability and patch management dataset from an electric utility in order to shed light on the characteristics of the vulnerabilities that electric utility assets have and how they are remediated in practice. Specifically, it first analyzes the distribution of vulnerabilities over software, assets, and other metrics. It then analyzes how vulnerability features affect remediation actions.

  19. Measuring the software process and product: Lessons learned in the SEL

    NASA Technical Reports Server (NTRS)

    Basili, V. R.

    1985-01-01

    The software development process and product can and should be measured. The software measurement process at the Software Engineering Laboratory (SEL) has taught a major lesson: develop a goal-driven paradigm (also characterized as a goal/question/metric paradigm) for data collection. Project analysis under this paradigm leads to a design for evaluating and improving the methodology of software development and maintenance.
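The goal/question/metric paradigm described above can be illustrated as a small hierarchy. The goal, questions, and metric names below are invented for illustration and are not drawn from the SEL:

```python
# A minimal GQM sketch: measurement starts from a goal, is refined into
# questions, and each question is answered by concrete metrics.
gqm = {
    "goal": "Improve reliability of delivered software",
    "questions": [
        {
            "question": "How many defects escape to operations?",
            "metrics": ["post-release defect count",
                        "defect density (defects/KLOC)"],
        },
        {
            "question": "Where are defects introduced?",
            "metrics": ["defects by development phase",
                        "defects by module"],
        },
    ],
}

def metrics_for_goal(tree):
    """Collect the data-collection plan implied by a GQM tree."""
    return [m for q in tree["questions"] for m in q["metrics"]]
```

Walking the tree top-down yields exactly the data items worth collecting, which is the paradigm's point: metrics are justified by the questions they answer.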

  20. Efficiency disparities among community hospitals in Tennessee: do size, location, ownership, and network matter?

    PubMed

    Roh, Chul-Young; Moon, M Jae; Jung, Kwangho

    2013-11-01

This study examined the impact of ownership, size, location, and network on the relative technical efficiency of community hospitals in Tennessee for the 2002-2006 period, by applying data envelopment analysis (DEA) to measure technical efficiency (decomposed into scale efficiency and pure technical efficiency). Data envelopment analysis results indicate that medium-size hospitals (126-250 beds) are more efficient than their counterparts. Interestingly, public hospitals are significantly more efficient than private and nonprofit hospitals in Tennessee, and rural hospitals are more efficient than urban hospitals. This is the first study to investigate whether hospital networks with other health care providers affect hospital efficiency. Results indicate that community hospitals with networks are more efficient than non-network hospitals. From a management and policy perspective, this study suggests that public policies should induce hospitals to downsize or upsize toward their optimal size, and that private and nonprofit hospitals should change their organizational objectives from profit-driven to quality-driven.
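Full DEA scores each hospital by solving a linear program over multiple inputs and outputs; in the degenerate one-input/one-output case the score reduces to a normalized output/input ratio. The sketch below shows that reduced case only, with hypothetical hospitals:

```python
def relative_efficiency(units):
    """Relative technical efficiency in the simplest one-input/one-output
    setting: each unit's output/input ratio divided by the best observed
    ratio, so the frontier unit scores 1.0. Full DEA generalizes this to
    multiple inputs and outputs via a linear program per unit."""
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

# (beds, annual discharges) for three hypothetical hospitals
scores = relative_efficiency({
    "rural_public": (150, 6000),      # ratio 40.0 -> frontier
    "urban_private": (400, 12000),    # ratio 30.0
    "medium_nonprofit": (200, 7000),  # ratio 35.0
})
```

A score below 1.0 is read as the proportion of best-practice output the unit achieves from its inputs.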

  1. Development and Application of Skill Standards for Security Practitioners

    DTIC Science & Technology

    2006-07-01

Development and Application of Skill Standards for Security Practitioners. Henry K. Simpson, Northrop Grumman Technical Services; Lynn F. Fischer, Defense... The work described in the present report was driven by a JSTC tasking to develop skill standards for security practitioners in seven different security

  2. Basic Concepts and Definitions for Privacy and Confidentiality in Student Education Records. SLDS Technical Brief 1. NCES 2011-601

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2010

    2010-01-01

    The National Center for Education Statistics (NCES) is launching a new series of Technical Briefs on various aspects of the protection of personally identifiable information in students' education records. While driven by recent events, the principles and practices that are outlined in this series can be applied more generally to personally…

  3. Measuring mental disorders: The failed commensuration project of DSM-5.

    PubMed

    Whooley, Owen

    2016-10-01

    Commensuration - the comparison of entities according to a common quantitative metric - is a key process in efforts to rationalize medicine. The push toward evidence-based medicine and quantitative assessment has led to the proliferation of metrics in healthcare. While social scientific attention has revealed the effects of these metrics once institutionalized - on clinical practice, on medical expertise, on outcome assessment, on valuations of medical services, and on experiences of illness - less attention has been paid to the process of developing metrics. This article examines the attempt to create severity scales during the revision to the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) as a case of failed commensuration. Using data from interviews with participants in the DSM-5 revision (n = 30), I reconstruct the problems that emerged in the DSM-5 Task Force's effort to develop viable psychometric instruments to measure severity. Framed as a part of a "paradigm shift" in psychiatry, the revision produced ad hoc, heterogeneous severity scales with divergent logics. I focus on two significant issues of metric construction in this case - diagnostic validity and clinical utility. Typically perceived as technical and conceptual challenges of design, I show how these issues were infused with, and undermined by, professional political dynamics, specifically tensions between medical researchers and clinicians. This case reveals that, despite its association with objectivity and transparency, commensuration encompasses more than identifying, operationalizing, and measuring an entity; it demands the negotiation of extra-scientific, non-empirical concerns that get written into medical metrics themselves. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Mutual Information in Frequency and Its Application to Measure Cross-Frequency Coupling in Epilepsy

    NASA Astrophysics Data System (ADS)

    Malladi, Rakesh; Johnson, Don H.; Kalamangalam, Giridhar P.; Tandon, Nitin; Aazhang, Behnaam

    2018-06-01

We define a metric, mutual information in frequency (MI-in-frequency), to detect and quantify the statistical dependence between different frequency components in the data, referred to as cross-frequency coupling, and apply it to electrophysiological recordings from the brain to infer cross-frequency coupling. The current metrics used to quantify cross-frequency coupling in neuroscience cannot detect whether two frequency components in non-Gaussian brain recordings are statistically independent. Our MI-in-frequency metric, based on Shannon's mutual information between the Cramér representations of stochastic processes, overcomes this shortcoming and can detect statistical dependence in frequency between non-Gaussian signals. We then describe two data-driven estimators of MI-in-frequency, one based on kernel density estimation and the other based on the nearest-neighbor algorithm, and validate their performance on simulated data. We then use MI-in-frequency to estimate mutual information between two data streams that are dependent across time, without making any parametric model assumptions. Finally, we use the MI-in-frequency metric to investigate cross-frequency coupling in the seizure onset zone from electrocorticographic recordings during seizures. The inferred cross-frequency coupling characteristics are essential to optimize the spatial and spectral parameters of electrical-stimulation-based treatments of epilepsy.
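The kernel-density and nearest-neighbor estimators described in this abstract operate on continuous frequency-domain data; the underlying quantity is ordinary Shannon mutual information. A crude stand-in, a plug-in estimator for paired discrete samples, illustrates the idea in miniature:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of Shannon mutual information (bits) from paired
    discrete samples: I(X;Y) = sum p(x,y) log2[p(x,y) / (p(x)p(y))].
    This is a toy substitute for the continuous estimators in the paper."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Perfectly coupled components carry maximal information...
xs = [i % 4 for i in range(400)]
mi_dep = mutual_information(xs, xs)          # log2(4) = 2 bits
# ...while a constant second signal is independent of the first.
mi_ind = mutual_information(xs, [0] * 400)   # 0 bits
```

Unlike correlation-based coupling measures, this quantity is zero exactly when the two components are statistically independent, which is the property the paper exploits.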

  5. Technical information report: Plasma melter operation, reliability, and maintenance analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrickson, D.W.

    1995-03-14

This document provides a technical report of operability, reliability, and maintenance of a plasma melter for low-level waste vitrification, in support of the Hanford Tank Waste Remediation System (TWRS) Low-Level Waste (LLW) Vitrification Program. A process description is provided that minimizes maintenance and downtime and includes material and energy balances, equipment sizes and arrangement, startup/operation/maintenance/shutdown cycle descriptions, and basis for scale-up to a 200 metric ton/day production facility. Operational requirements are provided including utilities, feeds, labor, and maintenance. Equipment reliability estimates and maintenance requirements are provided, including a list of failure modes, responses, and consequences.

  6. Towards more effective robotic gait training for stroke rehabilitation: a review

    PubMed Central

    2012-01-01

    Background Stroke is the most common cause of disability in the developed world and can severely degrade walking function. Robot-driven gait therapy can provide assistance to patients during training and offers a number of advantages over other forms of therapy. These potential benefits do not, however, seem to have been fully realised as of yet in clinical practice. Objectives This review determines ways in which robot-driven gait technology could be improved in order to achieve better outcomes in gait rehabilitation. Methods The literature on gait impairments caused by stroke is reviewed, followed by research detailing the different pathways to recovery. The outcomes of clinical trials investigating robot-driven gait therapy are then examined. Finally, an analysis of the literature focused on the technical features of the robot-based devices is presented. This review thus combines both clinical and technical aspects in order to determine the routes by which robot-driven gait therapy could be further developed. Conclusions Active subject participation in robot-driven gait therapy is vital to many of the potential recovery pathways and is therefore an important feature of gait training. Higher levels of subject participation and challenge could be promoted through designs with a high emphasis on robotic transparency and sufficient degrees of freedom to allow other aspects of gait such as balance to be incorporated. PMID:22953989

  7. Solutions for Coding Societal Events

    DTIC Science & Technology

    2016-12-01

develop a prototype system for civil unrest event extraction, and (3) engineer BBN ACCENT (ACCurate Events from Natural Text) to support broad use by... Table 1: Features in similarity metric. Abbreviations are as follows. TG: text graph...extraction of a stream of events (e.g. protests, attacks, etc.) from unstructured text (e.g. news, social media). This technical report presents results

  8. You Can't Kill a Wasp with a Postage Stamp, or How to Teach 'Em to Pass Element Nine.

    ERIC Educational Resources Information Center

    Harden, Heather

    For student radio broadcasters to acquire a third class operators permit, they must pass Element 9 of the Federal Communications Commission exam. A course was designed to help these amateurs acquire such technical competencies as meter reading, metric conversions, and familiarity with directional antennas. This course description includes a list…

  9. Developing the Systems Engineering Experience Accelerator (SEEA) Prototype and Roadmap

    DTIC Science & Technology

    2012-10-24

    system attributes. These metrics track non-requirements performance, typically relate to production cost per unit, maintenance costs, training costs...immediately implement lessons learned from the training experience to the job, assuming the culture allows this. 1.3 MANAGEMENT PLAN/TECHNICAL OVERVIEW...resolving potential conflicts as they arise. Incrementally implement and continuously integrate capability in priority order, to ensure that final system

  10. Metrics, Dollars, and Systems Change: Learning from Washington State's Student Achievement Initiative to Design Effective Postsecondary Performance Funding Policies. A State Policy Brief

    ERIC Educational Resources Information Center

    Jenkins, Davis; Shulock, Nancy

    2013-01-01

    The Student Achievement Initiative (SAI), adopted by the Washington State Board for Community and Technical Colleges in 2007, is one of a growing number of performance funding programs that have been dubbed "performance funding 2.0." Unlike previous performance funding models, the SAI rewards colleges for students' intermediate…

  11. Machinery health prognostics: A systematic review from data acquisition to RUL prediction

    NASA Astrophysics Data System (ADS)

    Lei, Yaguo; Li, Naipeng; Guo, Liang; Li, Ningbo; Yan, Tao; Lin, Jing

    2018-05-01

Machinery prognostics is one of the major tasks in condition-based maintenance (CBM), which aims to predict the remaining useful life (RUL) of machinery based on condition information. A machinery prognostic program generally consists of four technical processes, i.e., data acquisition, health indicator (HI) construction, health stage (HS) division, and RUL prediction. Over recent years, a significant amount of research work has been undertaken in each of the four processes, and much of the literature has provided excellent overviews of the last process, i.e., RUL prediction. However, there has not been a systematic review that covers the four technical processes comprehensively. To fill this gap, this paper provides a review of machinery prognostics following its whole program, i.e., from data acquisition to RUL prediction. First, in data acquisition, several prognostic datasets widely used in academic literature are introduced systematically. Then, commonly used HI construction approaches and metrics are discussed. After that, the HS division process is summarized by introducing its major tasks and existing approaches. Afterwards, the advancements of RUL prediction are reviewed, including the popular approaches and metrics. Finally, the paper provides discussions on the current situation, upcoming challenges, and possible future trends for researchers in this field.

  12. The fractured landscape of RNA-seq alignment: the default in our STARs.

    PubMed

    Ballouz, Sara; Dobin, Alexander; Gingeras, Thomas R; Gillis, Jesse

    2018-06-01

Many tools are available for RNA-seq alignment and expression quantification, but their comparative value is hard to establish. Benchmarking assessments often highlight methods' good performance, but either focus on model data or fail to explain variation in performance. This leaves us to ask: what is the most meaningful way to assess different alignment choices? And importantly, where is there room for progress? In this work, we explore the answers to these two questions by performing an exhaustive assessment of the STAR aligner. We assess STAR's performance across a range of alignment parameters using common metrics, and then on biologically focused tasks. We find technical metrics such as fraction mapping or expression profile correlation to be uninformative, capturing properties unlikely to have any role in biological discovery. Surprisingly, we find that changes in alignment parameters within a wide range have little impact on both technical and biological performance. Yet, when performance finally does break, it happens in difficult regions, such as X-Y paralogs and MHC genes. We believe improved reporting by developers will help establish where results are likely to be robust or fragile, providing a better baseline to establish where methodological progress can still occur.

  13. Value of information in natural resource management: technical developments and application to pink-footed geese

    USGS Publications Warehouse

    Williams, Byron K.; Johnson, Fred A.

    2015-01-01

    The “value of information” (VOI) is a generic term for the increase in value resulting from better information to guide management, or alternatively, the value foregone under uncertainty about the impacts of management (Yokota and Thompson, Medical Decision Making 2004;24: 287). The value of information can be characterized in terms of several metrics, including the expected value of perfect information and the expected value of partial information. We extend the technical framework for the value of information by further developing the relationship between value metrics for partial and perfect information and describing patterns of their performance. We use two different expressions for the expected value of partial information to highlight its relationship to the expected value of perfect information. We also develop the expected value of partial information for hierarchical uncertainties. We highlight patterns in the value of information for the Svalbard population of the pink-footed goose (Anser brachyrhynchus), a population that is subject to uncertainty in both reproduction and survival functions. The framework for valuing information is seen as having widespread potential in resource decision making, and serves as a motivation for resource monitoring, assessment, and collaboration.
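The expected value of perfect information cited in this abstract has a compact definition: EVPI = E_s[max_a u(a, s)] − max_a E_s[u(a, s)]. The sketch below uses an invented two-action, two-state example, not the pink-footed goose model:

```python
def evpi(p_states, utilities):
    """Expected value of perfect information.
    p_states        -- probability of each state of nature
    utilities[a][s] -- payoff of action a if state s is true
    """
    n_states = len(p_states)
    # Value under uncertainty: commit to the single best action in expectation.
    ev_no_info = max(sum(p_states[s] * u[s] for s in range(n_states))
                     for u in utilities)
    # Value with perfect information: choose the best action in each state.
    ev_perfect = sum(p_states[s] * max(u[s] for u in utilities)
                     for s in range(n_states))
    return ev_perfect - ev_no_info

# Two harvest policies under two equally likely survival hypotheses.
value = evpi([0.5, 0.5], [[10, 2], [4, 8]])
```

A large EVPI signals that resolving the uncertainty (e.g., through monitoring) is worth substantial effort, which is the paper's motivation for monitoring and assessment.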

  14. Application of Support Vector Machine to Forex Monitoring

    NASA Astrophysics Data System (ADS)

    Kamruzzaman, Joarder; Sarker, Ruhul A.

    Previous studies have demonstrated superior performance of artificial neural network (ANN) based forex forecasting models over traditional regression models. This paper applies support vector machines to build a forecasting model from the historical data using six simple technical indicators and presents a comparison with an ANN based model trained by scaled conjugate gradient (SCG) learning algorithm. The models are evaluated and compared on the basis of five commonly used performance metrics that measure closeness of prediction as well as correctness in directional change. Forecasting results of six different currencies against Australian dollar reveal superior performance of SVM model using simple linear kernel over ANN-SCG model in terms of all the evaluation metrics. The effect of SVM parameter selection on prediction performance is also investigated and analyzed.
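The two families of evaluation metrics this abstract mentions, closeness of prediction and correctness in directional change, can be sketched as follows (toy series, not real AUD exchange-rate data):

```python
import math

def forecast_metrics(actual, predicted):
    """Closeness of prediction (MAE, RMSE) and correctness in
    directional change: the fraction of steps where the predicted
    move has the same sign as the actual move."""
    n = len(actual)
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    hits = sum(
        1 for i in range(1, n)
        if (actual[i] - actual[i - 1]) * (predicted[i] - predicted[i - 1]) > 0
    )
    return mae, rmse, hits / (n - 1)

# Toy exchange-rate series for illustration.
actual    = [1.00, 1.02, 1.01, 1.03, 1.05]
predicted = [1.00, 1.01, 1.02, 1.04, 1.04]
mae, rmse, directional = forecast_metrics(actual, predicted)
```

For trading, directional correctness often matters more than closeness, which is why such studies report both families side by side.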

  15. Identification of User Facility Related Publications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, Robert M; Stahl, Christopher G; Wells, Jack C

    2012-01-01

Scientific user facilities provide physical resources and technical support that enable scientists to conduct experiments or simulations pertinent to their respective research. One metric for evaluating the scientific value or impact of a facility is the number of publications by users as a direct result of using that facility. Unfortunately, for a variety of reasons, capturing accurate values for this metric proves time consuming and error-prone. This work describes a new approach that leverages automated browser technology combined with text analytics to reduce the time and error involved in identifying publications related to user facilities. With this approach, scientific user facilities gain more accurate measures of their impact as well as insight into policy revisions for user access.

  16. Sustainable hydropower in Lower Mekong Countries: Technical assessment and training travel report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadjerioua, Boualem; Witt, Adam M.

The U.S. Agency for International Development (USAID), through their partnership with the U.S. Department of the Interior (DOI), requested the support of Oak Ridge National Laboratory (ORNL) to provide specialized technical assistance as part of the Smart Infrastructure for the Mekong (SIM) Program in Thailand. Introduced in July 2013 by U.S. Secretary of State John Kerry, SIM is a U.S. Government Inter-Agency program that provides Lower Mekong partner countries with targeted, demand-driven technical and scientific assistance to support environmentally sound, climate conscious and socially equitable infrastructure, clean energy development, and water resources optimization. The U.S. Government is committed to supporting sustainable economic development within the region by providing tools, best practices, technical assistance, and lessons learned for the benefit of partner countries. In response to a request from the Electricity Generating Authority of Thailand (EGAT), a SIM project was developed with two main activities: 1) to promote hydropower sustainability and efficiency through technical assessment training at two existing hydropower assets in Thailand, and 2) the design and implementation of one national and two or three regional science and policy workshops, to be co-hosted with EGAT, to build common understanding of and commitment to environmental and social safeguards for Mekong Basin hydropower projects. The U.S. Department of Energy (DOE) is leading the technical assessment (Activity 1), and has contracted ORNL to provide expert technical assistance focused on increasing efficiency at existing projects, with the goal of increasing renewable energy generation at little to no capital cost.
ORNL is the leading national laboratory in hydropower analysis, with a nationally recognized and highly qualified team of scientists addressing small to large-scale systems (basin-, regional-, and national-scale) energy generation optimization analysis for DOE. The mission of the ORNL Water Power Program is to develop technologies, decision-support tools, and methods of analysis that enable holistic management of water-dependent energy infrastructure and natural resources in support of the DOE Energy Efficiency and Renewable Energy Office (DOE-EERE), Federal hydropower agencies, Federal Energy Regulatory Commission (FERC), Nuclear Regulatory Commission (NRC), energy producers, and other entities. In support of SIM, ORNL completed technical assessments of two hydropower plants owned and operated by the Electricity Generating Authority of Thailand (EGAT): Vajiralongkorn (VRK), with an installed capacity of 300 MW, and Rajjaprabha (RPB), with an installed capacity of 240 MW. Technical assessment is defined as the assessment of hydropower operation and performance, and the identification of potential opportunities for performance improvement through plant optimization. At each plant, the assessment included an initial analysis of hydropower operating and performance metrics, provided by dam owners. After this analysis, ORNL engaged with the plant management team in a skills exchange, where best practices, operational methods, and technical challenges were discussed. The technical assessment process was outlined to plant management followed by a presentation of preliminary results and analysis based on 50 days of operational data. EGAT has agreed to provide a full year of operational data so a complete and detailed assessment that captures seasonal variability can be completed. The results of these assessments and discussions will be used to develop a set of best practices, training, and procedure recommendations to improve the efficiency of the two assessed plants.

  17. Physiological correlates of mental workload

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.

    1980-01-01

A literature review was conducted to assess the basis of and techniques for physiological assessment of mental workload. The studies reviewed had shortcomings involving one or more of the following basic problems: (1) physiologic arousal can easily be driven by non-workload factors, confounding any proposed metric; (2) the profound absence of underlying physiologic models has promoted a multiplicity of seemingly arbitrary signal processing techniques; (3) the unspecified multidimensional nature of physiological "state" has given rise to a broad spectrum of competing, noncommensurate metrics; and (4) the lack of an adequate definition of workload compels physiologic correlations to suffer either from the vagueness of implicit workload measures or from the variance of explicit subjective assessments. Using specific studies as examples, two basic signal processing/data reduction techniques in current use, time averaging and ensemble averaging, are discussed.
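
    The two data-reduction techniques named above can be contrasted in a short sketch (the synthetic data and parameters are illustrative, not drawn from the reviewed studies):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical evoked response: 40 trials of a decaying sinusoid buried in noise
t = np.linspace(0.0, 1.0, 200)
signal = np.sin(2.0 * np.pi * t) * np.exp(-3.0 * t)
trials = signal + rng.normal(0.0, 1.0, size=(40, t.size))

# Ensemble averaging: average across trials at each time point; time-locked
# structure survives while uncorrelated noise shrinks roughly as 1/sqrt(N)
ensemble_avg = trials.mean(axis=0)

# Time averaging: smooth a single trial with a sliding window (20 samples)
kernel = np.ones(20) / 20.0
time_avg = np.convolve(trials[0], kernel, mode="same")

print(np.std(trials[0] - signal), np.std(ensemble_avg - signal))
```

    Ensemble averaging presumes a repeatable, time-locked response across trials, which is exactly the kind of physiologic model the review notes is often missing.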

  18. Bar Mode Instability in Relativistic Rotating Stars: A Post-Newtonian Treatment

    NASA Astrophysics Data System (ADS)

    Shapiro, Stuart L.; Zane, Silvia

    1998-08-01

    We construct analytic models of incompressible, uniformly rotating stars in post-Newtonian (PN) gravity and evaluate their stability against nonaxisymmetric bar modes. We model the PN configurations by homogeneous triaxial ellipsoids and employ an energy variational principle to determine their equilibrium shape and stability. The spacetime metric is obtained by solving Einstein's equations of general relativity in 3 + 1 ADM form. We use an approximate subset of these equations well suited to numerical integration in the case of strong-field, three-dimensional configurations in quasi equilibrium. However, the adopted equations are exact at PN order, where they admit an analytic solution for homogeneous ellipsoids. We obtain this solution for the metric, as well as analytic functionals for the conserved global quantities, M, M0, and J. We present sequences of axisymmetric, rotating equilibria of constant density and rest mass parametrized by their eccentricity. These configurations represent the PN generalization of Newtonian Maclaurin spheroids, which we compare to other PN and full relativistic incompressible equilibrium sequences constructed by previous investigators. We employ the variational principle to consider nonaxisymmetric ellipsoidal deformations of the configurations, holding the angular momentum constant and the rotation uniform. We locate the point along each sequence at which these Jacobi-like bar modes will be driven secularly unstable by the presence of a dissipative agent such as viscosity. We find that the value of the eccentricity, as well as related ratios such as Ω2/(πρ0) and T/|W| (=rotational kinetic energy/gravitational potential energy), defined invariantly, all increase at the onset of instability as the stars become more relativistic. 
Since higher degrees of rotation are required to trigger a viscosity-driven bar mode instability as the stars become more compact, the effect of general relativity is to weaken the instability, at least to PN order. This behavior is in stark contrast to that found recently for secular instability via nonaxisymmetric, Dedekind-like modes driven by gravitational radiation. These findings support the suggestion that in general relativity nonaxisymmetric modes driven unstable by viscosity no longer coincide with those driven unstable by gravitational radiation.
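
    As a Newtonian point of reference for these results, the Maclaurin-spheroid value of T/|W| at the secular bar-mode point can be recovered from the classical closed-form expression (a sketch; the post-Newtonian corrections discussed in the abstract are not included):

```python
import math

def t_over_w(e):
    # T/|W| for a Newtonian Maclaurin spheroid of eccentricity e
    # (classical closed form, cf. Chandrasekhar's ellipsoidal-figures results)
    return (3.0 / (2.0 * e * e)) * (1.0 - e * math.sqrt(1.0 - e * e) / math.asin(e)) - 1.0

# Bisect for the eccentricity where T/|W| reaches the classical secular
# bar-mode threshold 0.1375 (t_over_w is monotonic on this interval)
lo, hi = 0.5, 0.95
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if t_over_w(mid) < 0.1375:
        lo = mid
    else:
        hi = mid

print(round(lo, 4))  # ~0.8127 in the Newtonian limit
```

    The abstract's finding is that relativistic corrections push both this onset eccentricity and the corresponding T/|W| upward.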

  19. Gas-heat-pump development

    NASA Astrophysics Data System (ADS)

    Creswick, F. A.

    Incentives for the development of gas heat pumps are discussed. Technical progress made on several promising technologies was reviewed. The status of development of gas-engine-driven heat pumps, the absorption cycle for the near- and long-term gas heat pump systems, the Stirling engine, the small Rankine-cycle engines, and gas-turbine-driven heat pump systems were briefly reviewed. Progress in the US, Japan, and Europe is noted.

  20. Categorization of hyperspectral information (HSI) based on the distribution of spectra in hyperspace

    NASA Astrophysics Data System (ADS)

    Resmini, Ronald G.

    2003-09-01

    Hyperspectral information (HSI) data are commonly categorized by a description of the dominant physical geographic background captured in the image cube. In other words, HSI categorization is commonly based on a cursory, visual assessment of whether the data are of desert, forest, urban, littoral, jungle, alpine, etc., terrains. Additionally, often the design of HSI collection experiments is based on the acquisition of data of the various backgrounds or of objects of interest within the various terrain types. These data are for assessing and quantifying algorithm performance as well as for algorithm development activities. Here, results of an investigation into the validity of the backgrounds-driven mode of characterizing the diversity of hyperspectral data are presented. HSI data are described quantitatively, in the space where most algorithms operate: n-dimensional (n-D) hyperspace, where n is the number of bands in an HSI data cube. Nineteen metrics designed to probe hyperspace are applied to 14 HYDICE HSI data cubes that represent nine different backgrounds. Each of the 14 sets (one for each HYDICE cube) of 19 metric values was analyzed for clustering. With the present set of data and metrics, there is no clear, unambiguous break-out of metrics based on the nine different geographic backgrounds. The break-outs clump seemingly unrelated data types together; e.g., littoral and urban/residential. Most metrics are normally distributed and indicate no clustering; one metric is one outlier away from normal (i.e., two clusters); and five are comprised of two distributions (i.e., two clusters). Overall, there are three different break-outs that do not correspond to conventional background categories. Implications of these preliminary results are discussed as are recommendations for future work.
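
    Two simple examples of the kind of hyperspace-probing metrics described above can be sketched as follows (both metrics and the data are illustrative assumptions; the abstract does not enumerate the nineteen metrics actually used):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical HSI cube flattened to (pixels x bands), n = 50 bands
pixels = rng.random((1000, 50))

# Metric 1: mean pairwise spectral angle over a random pixel subset,
# a rough measure of how spread out the spectra are in hyperspace
sub = pixels[rng.choice(len(pixels), 100, replace=False)]
unit = sub / np.linalg.norm(sub, axis=1, keepdims=True)
cosines = np.clip(unit @ unit.T, -1.0, 1.0)
mean_angle = np.arccos(cosines[np.triu_indices(100, k=1)]).mean()

# Metric 2: effective dimensionality, i.e. the number of principal
# components needed to explain 99% of the variance
centered = pixels - pixels.mean(axis=0)
sv = np.linalg.svd(centered, compute_uv=False)
frac = np.cumsum(sv**2) / np.sum(sv**2)
eff_dim = int(np.searchsorted(frac, 0.99) + 1)

print(mean_angle, eff_dim)
```

    A vector of such metric values per cube is what would then be fed to a clustering step to test whether cubes group by geographic background.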

  1. A Study of Consistency in Design Selection and the Rank Ordering of Alternatives Using a Value Driven Design Approach

    NASA Astrophysics Data System (ADS)

    Subramanian, Tenkasi R.

In the current day, with the rapid advancement of technology, engineering design is growing in complexity. Engineers now have to deal with design problems that are large, complex, and involve multi-level decision analyses. With the increase in complexity and size of systems, production and development costs tend to overshoot the allocated budget and resources, often resulting in project delays and cancellations. This is particularly true for aerospace systems. Value Driven Design (VDD) proves to be a means to strengthen the design process and help counter such trends. VDD is a novel framework for optimization that puts stakeholder preferences at the forefront of the design process, capturing those preferences in order to present system alternatives that are consistent with the stakeholder's expectations. Traditional systems engineering techniques promote communication of stakeholder preferences in the form of requirements, which confines the design space by imposing additional constraints on it. This results in a design that does not capture the true preferences of the stakeholder. Value Driven Design provides an alternate approach wherein a value function is created that corresponds to the true preferences of the stakeholder. The applicability of VDD is broad, but it is imperative to first explore its feasibility to ensure the development of an efficient, robust, and elegant system design. The key to understanding the usability of VDD is to investigate the formation, propagation, and use of a value function. This research investigates the use of rank correlation metrics to ensure consistent rank ordering of design alternatives while investigating the fidelity of the value function, as well as the impact of design uncertainties on rank ordering. A satellite design system consisting of a satellite, ground station, and launch vehicle is used to demonstrate the use of the metrics to aid decision support during the design process.
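
    The rank correlation metrics mentioned above can be illustrated with a minimal sketch (the six alternatives and their value-function scores are hypothetical, not the satellite system's):

```python
import numpy as np

def rankdata(x):
    # Simple ranking (no tie handling needed for this illustration)
    r = np.empty(len(x), dtype=float)
    r[np.argsort(x)] = np.arange(1, len(x) + 1)
    return r

def spearman(a, b):
    # Spearman's rho from rank differences: 1 - 6*sum(d^2)/(n(n^2-1))
    d = rankdata(a) - rankdata(b)
    n = len(a)
    return 1.0 - 6.0 * np.sum(d**2) / (n * (n**2 - 1))

def kendall(a, b):
    # Kendall's tau: (concordant - discordant pairs) / total pairs
    n, s = len(a), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(a[i] - a[j]) * np.sign(b[i] - b[j])
    return s / (n * (n - 1) / 2)

# Value-function scores for six design alternatives under a baseline
# value model and a perturbed (uncertain) one
baseline = np.array([4.2, 3.9, 5.1, 2.8, 4.7, 3.1])
perturbed = np.array([4.0, 4.1, 5.3, 2.6, 4.5, 3.0])

print(spearman(baseline, perturbed), kendall(baseline, perturbed))
```

    Values near 1 indicate the rank ordering of alternatives is preserved under the perturbation; here a single swapped pair drops Kendall's tau more than Spearman's rho.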

  2. Data-Driven Rightsizing: Integrating Preservation Into the Legacy Cities Landscape

    NASA Astrophysics Data System (ADS)

    Evans, E.; Grosicki, B.

    2017-08-01

    Legacy cities, whose built environments are undergoing transformations due to population loss, are at a critical juncture in their urban history and the historic preservation field has an important role to play. Rapid mobile surveys provide an opportunity for data collection that expands beyond traditional historic criteria, and positions preservationists to be proactive decision-makers and to align with multi-disciplinary partners. Rapid mobile surveys are being utilized in conjunction with in-depth data analysis of comprehensive livability metrics at the parcel, neighborhood, and citywide levels to develop recommendations for reactivating vacant properties. Historic preservationists are spearheading these efforts through a tool called Relocal, which uses 70-85 distinct metrics and a community priority survey to generate parcel-level recommendations for every vacant lot and vacant building in the areas in which it is applied. Local volunteer-led rapid mobile surveys are key to gathering on-the-ground, real-time metrics that serve as Relocal's foundation. These new survey techniques generate usable data sets for historic preservation practitioners, land banks, planners, and other entities to inform strategic rightsizing decisions across legacy cities.

  3. Analysis-Driven Design of Representations For Sensing-Action Systems

    DTIC Science & Technology

    2017-10-01

    from the Defense Technical Information Center (DTIC) (http://www.dtic.mil). AFRL-RY-WP-TR-2017-0196 HAS BEEN REVIEWED AND IS APPROVED FOR...Layered Sensing Exploitation Division This report is published in the interest of scientific and technical information exchange, and its...collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources

  4. Development of Landscape Metrics to Support Process-Driven Ecological Modeling

    DTIC Science & Technology

    2014-04-01

    channel experiences shoaling due to strong tidal currents transporting sediments and has a symmetrical north-south, tide-dominant ebb delta. A 350...quantitative relationships can be established between landscape pattern formation and environmental or geomorphic processes, then those relationships could...should be aware that notwithstanding any other provision of law , no person shall be subject to any penalty for failing to comply with a collection of

  5. Health impact metrics for air pollution management strategies.

    PubMed

    Martenies, Sheena E; Wilkins, Donele; Batterman, Stuart A

    2015-12-01

    Health impact assessments (HIAs) inform policy and decision making by providing information regarding future health concerns, and quantitative HIAs now are being used for local and urban-scale projects. HIA results can be expressed using a variety of metrics that differ in meaningful ways, and guidance is lacking with respect to best practices for the development and use of HIA metrics. This study reviews HIA metrics pertaining to air quality management and presents evaluative criteria for their selection and use. These are illustrated in a case study where PM2.5 concentrations are lowered from 10 to 8μg/m(3) in an urban area of 1.8 million people. Health impact functions are used to estimate the number of premature deaths, unscheduled hospitalizations and other morbidity outcomes. The most common metric in recent quantitative HIAs has been the number of cases of adverse outcomes avoided. Other metrics include time-based measures, e.g., disability-adjusted life years (DALYs), monetized impacts, functional-unit based measures, e.g., benefits per ton of emissions reduced, and other economic indicators, e.g., cost-benefit ratios. These metrics are evaluated by considering their comprehensiveness, the spatial and temporal resolution of the analysis, how equity considerations are facilitated, and the analysis and presentation of uncertainty. In the case study, the greatest number of avoided cases occurs for low severity morbidity outcomes, e.g., asthma exacerbations (n=28,000) and minor-restricted activity days (n=37,000); while DALYs and monetized impacts are driven by the severity, duration and value assigned to a relatively low number of premature deaths (n=190 to 230 per year). The selection of appropriate metrics depends on the problem context and boundaries, the severity of impacts, and community values regarding health. 
The number of avoided cases provides an estimate of the number of people affected, and monetized impacts facilitate additional economic analyses useful to policy analysis. DALYs are commonly used as an aggregate measure of health impacts and can be used to compare impacts across studies. Benefits per ton metrics may be appropriate when changes in emissions rates can be estimated. To address community concerns and HIA objectives, a combination of metrics is suggested. Copyright © 2015 Elsevier Ltd. All rights reserved.
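
    The health impact functions referred to above commonly take a log-linear form; a minimal sketch (all parameter values are illustrative assumptions, not those used in the case study):

```python
import math

def avoided_cases(y0, population, beta, delta_c):
    # Log-linear health impact function widely used in air-quality HIAs:
    # delta_y = y0 * population * (1 - exp(-beta * delta_c))
    return y0 * population * (1.0 - math.exp(-beta * delta_c))

# Hypothetical inputs: baseline all-cause mortality rate 0.008 per person-year,
# population of 1.8 million, beta = 0.006 per ug/m3, PM2.5 reduced by 2 ug/m3
deaths_avoided = avoided_cases(0.008, 1_800_000, 0.006, 2.0)
print(round(deaths_avoided))
```

    Downstream metrics such as DALYs or monetized impacts are then computed by weighting each avoided case by severity, duration, or an assigned economic value.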

  6. Perceptual full-reference quality assessment of stereoscopic images by considering binocular visual characteristics.

    PubMed

    Shao, Feng; Lin, Weisi; Gu, Shanbo; Jiang, Gangyi; Srikanthan, Thambipillai

    2013-05-01

Perceptual quality assessment is a challenging issue in 3D signal processing research. It is important to study the 3D signal directly, rather than simply extending 2D metrics to the 3D case as in some previous studies. In this paper, we propose a new perceptual full-reference quality assessment metric for stereoscopic images that considers binocular visual characteristics. The major technical contribution of this paper is that binocular perception and combination properties are considered in quality assessment. To be more specific, we first perform left-right consistency checks and compare matching errors between corresponding pixels in the binocular disparity calculation, and classify the stereoscopic images into non-corresponding, binocular fusion, and binocular suppression regions. Local phase and local amplitude maps are also extracted from the original and distorted stereoscopic images as features for quality assessment. Then, each region is evaluated independently according to its binocular perception property, and all evaluation results are integrated into an overall score. In addition, a binocular just-noticeable-difference model is used to reflect visual sensitivity in the binocular fusion and suppression regions. Experimental results show that, compared with relevant existing metrics, the proposed metric achieves higher consistency with subjective assessment of stereoscopic images.

  7. Performance metrics for Inertial Confinement Fusion implosions: aspects of the technical framework for measuring progress in the National Ignition Campaign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spears, B K; Glenzer, S; Edwards, M J

The National Ignition Campaign (NIC) uses non-igniting 'THD' capsules to study and optimize the hydrodynamic assembly of the fuel without burn. These capsules are designed to simultaneously reduce DT neutron yield and to maintain hydrodynamic similarity with the DT ignition capsule. We will discuss nominal THD performance and the associated experimental observables. We will show the results of large ensembles of numerical simulations of THD and DT implosions and their simulated diagnostic outputs. These simulations cover a broad range of both nominal and off-nominal implosions. We will focus on the development of an experimental implosion performance metric called the experimental ignition threshold factor (ITFX). We will discuss the relationship between ITFX and other integrated performance metrics, including the ignition threshold factor (ITF), the generalized Lawson criterion (GLC), and the hot spot pressure (HSP). We will then consider the experimental results of the recent NIC THD campaign. We will show that we can observe the key quantities for producing a measured ITFX and for inferring the other performance metrics. We will discuss trends in the experimental data, improvement in ITFX, and briefly the upcoming tuning campaign aimed at taking the next steps in performance improvement on the path to ignition on NIF.

  8. Guiding bioprocess design by microbial ecology.

    PubMed

    Volmer, Jan; Schmid, Andreas; Bühler, Bruno

    2015-06-01

    Industrial bioprocess development is driven by profitability and eco-efficiency. It profits from an early stage definition of process and biocatalyst design objectives. Microbial bioprocess environments can be considered as synthetic technical microbial ecosystems. Natural systems follow Darwinian evolution principles aiming at survival and reproduction. Technical systems objectives are eco-efficiency, productivity, and profitable production. Deciphering technical microbial ecology reveals differences and similarities of natural and technical systems objectives, which are discussed in this review in view of biocatalyst and process design and engineering strategies. Strategies for handling opposing objectives of natural and technical systems and for exploiting and engineering natural properties of microorganisms for technical systems are reviewed based on examples. This illustrates the relevance of considering microbial ecology for bioprocess design and the potential for exploitation by synthetic biology strategies. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Evaluation Statistics Computed for the Wave Information Studies (WIS)

    DTIC Science & Technology

    2016-07-01

Studies (WIS) by Mary A. Bryant, Tyler J. Hesser, and Robert E. Jensen PURPOSE: This Coastal and Hydraulics Engineering Technical Note (CHETN...describes the statistical metrics used by the Wave Information Studies (WIS) and produced as part of the model evaluation process. INTRODUCTION: The...gauge locations along the Pacific, Great Lakes, Gulf of Mexico, Atlantic, and Western Alaska coasts. Estimates of wave climatology produced by ocean

  10. Development of Metrics for Trust in Automation

    DTIC Science & Technology

    2010-06-01

Systems Literature Review Defence Research and Development Canada Toronto No. CR-2003-096 Ajzen, I., & Fishbein, M. (1980). Understanding attitudes...theory and research (pp. 261–287). Thousand Oaks, CA: Sage. Moray, N., Inagaki, T., Itoh, M., 2000. Adaptive automation, trust, and self-confidence...Assurance Technical Framework document (2000), the term 'trust' is used 352 times, ranging from reference to the trustworthiness of technology, to

  11. Dropouts from the Great City Schools Vol. 1. Technical Analyses of Dropout Statistics in Selected Districts.

    ERIC Educational Resources Information Center

    Stevens, Floraline, Comp.

    To address the important issue of dropouts from their schools, the Council of Great City Schools undertook a major research effort to make sense of the disparate ways in which cities keep their dropout data, and to advise various policy makers on the development of common metrics for measuring the problem. A survey of Council member schools…

  12. Screen Fingerprints as a Novel Modality for Active Authentication

    DTIC Science & Technology

    2014-03-01

    and mouse dynamics [9]. Some other examples of the computational behavior metrics of the cognitive fingerprint include eye tracking, how Approved...SCREEN FINGERPRINTS AS A NOVEL MODALITY FOR ACTIVE AUTHENTICATION UNIVERSITY OF MARYLAND MARCH 2014 FINAL TECHNICAL REPORT APPROVED FOR PUBLIC...COVERED (From - To) MAY 2012 – OCT 2013 4. TITLE AND SUBTITLE SCREEN FINGERPRINTS AS A NOVEL MODALITY FOR ACTIVE AUTHENTICATION 5a. CONTRACT

  13. Self-Metric Software. Volume I. Summary of Technical Progress.

    DTIC Science & Technology

    1980-04-01

Development: A CSDL Project History, RADC-TR-77-213, pp. 33-41. A-42186. [3] Goodenough, J. B. and Zara, R. V., "The Effect of Software Structure on Software...1979. **Visiting assistant professor.

  14. F-35 Joint Strike Fighter Aircraft (F-35)

    DTIC Science & Technology

    2013-12-01

    Critical Design Review; announcing the decision to terminate development of an alternate Helmet Mounted Display System (HMDS); completing the 2nd F-35B...the 100th aircraft from the production facility at Fort Worth, Texas; and resolving lingering technical design shortfalls to include the F-35C...emphasis on: regular design reviews, systems engineering discipline, software development planning with baseline review boards, and focused metrics

  15. JPRS Report Africa (Sub-Sahara)

    DTIC Science & Technology

    1987-10-16

be increased to over 1,000 units annually. Fabrica Nacional de Condutores Electricos, S.A. (Cel-Cat), a Portuguese firm, was contracted to carry...out the first phase of the technical rehabilitation of Fabrica de Condutores Electricos de Mocambique (CELMOQUE). The Portuguese company will...entire undertaking, Fabrica de Condutores Electricos de Mozambique will have to handle 4 metric kilotons of aluminum steel annually, which will

  16. The Prodiguer Messaging Platform

    NASA Astrophysics Data System (ADS)

    Denvil, S.; Greenslade, M. A.; Carenton, N.; Levavasseur, G.; Raciazek, J.

    2015-12-01

CONVERGENCE is a French multi-partner national project designed to gather HPC and informatics expertise to innovate in the context of running French global climate models with differing grids and at differing resolutions. Efficient and reliable execution of these models and the management and dissemination of model output are some of the complexities that CONVERGENCE aims to resolve. At any one moment in time, researchers affiliated with the Institut Pierre Simon Laplace (IPSL) climate modeling group are running hundreds of global climate simulations. These simulations execute upon a heterogeneous set of French High Performance Computing (HPC) environments. The IPSL's simulation execution runtime libIGCM (library for IPSL Global Climate Modeling group) has recently been enhanced to support hitherto impossible real-time use cases such as simulation monitoring, data publication, metrics collection, simulation control, and visualization. At the core of this enhancement is Prodiguer: an AMQP (Advanced Message Queuing Protocol) based, event-driven, asynchronous distributed messaging platform. libIGCM now dispatches copious amounts of information, in the form of messages, to the platform for remote processing by Prodiguer software agents at IPSL servers in Paris. Such processing takes several forms: persisting message content to database(s); launching rollback jobs upon simulation failure; notifying downstream applications; and automating visualization pipelines. We will describe and/or demonstrate the platform's technical implementation, its inherent ease of scalability, its inherent adaptiveness with respect to supervising simulations, and a web portal receiving simulation notifications in real time.
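
    A monitoring message of the kind libIGCM might dispatch can be sketched as follows (the field names and event types are hypothetical, not the Prodiguer platform's actual schema; the serialized bytes would then be handed to an AMQP client, e.g. pika's channel.basic_publish):

```python
import datetime
import json
import uuid

def make_message(simulation_id, event_type, payload):
    # Hypothetical monitoring-message envelope; field names are illustrative
    return {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "simulation_id": simulation_id,
        "type": event_type,
        "payload": payload,
    }

msg = make_message("ipsl-cm-run-042", "simulation.heartbeat", {"step": 1200})
body = json.dumps(msg).encode("utf-8")  # bytes ready for an AMQP publisher
print(sorted(json.loads(body)))
```

    Keeping the envelope small and self-describing is what lets independent server-side agents (persistence, rollback, notification) consume the same stream.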

  17. Full immersion simulation: validation of a distributed simulation environment for technical and non-technical skills training in Urology.

    PubMed

    Brewin, James; Tang, Jessica; Dasgupta, Prokar; Khan, Muhammad S; Ahmed, Kamran; Bello, Fernando; Kneebone, Roger; Jaye, Peter

    2015-07-01

    To evaluate the face, content and construct validity of the distributed simulation (DS) environment for technical and non-technical skills training in endourology. To evaluate the educational impact of DS for urology training. DS offers a portable, low-cost simulated operating room environment that can be set up in any open space. A prospective mixed methods design using established validation methodology was conducted in this simulated environment with 10 experienced and 10 trainee urologists. All participants performed a simulated prostate resection in the DS environment. Outcome measures included surveys to evaluate the DS, as well as comparative analyses of experienced and trainee urologist's performance using real-time and 'blinded' video analysis and validated performance metrics. Non-parametric statistical methods were used to compare differences between groups. The DS environment demonstrated face, content and construct validity for both non-technical and technical skills. Kirkpatrick level 1 evidence for the educational impact of the DS environment was shown. Further studies are needed to evaluate the effect of simulated operating room training on real operating room performance. This study has shown the validity of the DS environment for non-technical, as well as technical skills training. DS-based simulation appears to be a valuable addition to traditional classroom-based simulation training. © 2014 The Authors BJU International © 2014 BJU International Published by John Wiley & Sons Ltd.

  18. Software Quality Assurance Metrics

    NASA Technical Reports Server (NTRS)

    McRae, Kalindra A.

    2004-01-01

Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements with a degree of excellence and refinement of a project or product. Software quality is a set of attributes of a software product by which its quality is described and evaluated. The set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software metrics help us understand the technical process that is used to develop a product. The process is measured to improve it, and the product is measured to increase quality throughout the software life cycle. Software metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of the software development can be measured. If software metrics are implemented in software development, they can save time and money, and allow the organization to identify the causes of the defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA metrics that have been used in other projects but are not currently being used by the SA team, and report them to the Software Assurance team to see if any metrics can be implemented in their software assurance life cycle process.

  19. Contract Source Selection: An Analysis of Lowest Price Technically Acceptable and Tradeoff Strategies

    DTIC Science & Technology

    2016-06-01

thorough market research, acquisition professionals must decide at an early stage which source selection strategy (lowest price technically...minimizing risk and ensuring best value for all stakeholders. On the basis of thorough market research, acquisition professionals must decide at an early...price-based, market-driven environment from requirements development through proper disposal. Source selection must be made on a 'best value

  20. Novel Biomarker for Evaluating Ischemic Stress Using an Electrogram Derived Phase Space

    PubMed Central

    Good, Wilson W.; Erem, Burak; Coll-Font, Jaume; Brooks, Dana H.; MacLeod, Rob S.

    2017-01-01

    The underlying pathophysiology of ischemia is poorly understood, resulting in unreliable clinical diagnosis of this disease. This limited knowledge of underlying mechanisms suggested a data driven approach, which seeks to identify patterns in the ECG data that can be linked statistically to underlying behavior and conditions of ischemic tissue. Previous studies have suggested that an approach known as Laplacian eigenmaps (LE) can identify trajectories, or manifolds, that are sensitive to different spatiotemporal consequences of ischemic stress, and thus serve as potential clinically relevant biomarkers. We applied the LE approach to measured transmural potentials in several canine preparations, recorded during control and ischemic conditions, and discovered regions on an approximated QRS-derived manifold that were sensitive to ischemia. By identifying a vector pointing to ischemia-associated changes to the manifold and measuring the shift in trajectories along that vector during ischemia, which we denote as Mshift, it was possible to also pull that vector back into signal space and determine which electrodes were responsible for driving the observed changes in the manifold. We refer to the signal space change as the manifold differential (Mdiff). Both the Mdiff and Mshift metrics show a similar degree of sensitivity to ischemic changes as standard metrics applied during the ST segment in detecting ischemic regions. The new metrics also were able to distinguish between sub-types of ischemia. Thus our results indicate that it may be possible to use the Mshift and Mdiff metrics along with ST derived metrics to determine whether tissue within the myocardium is ischemic or not. PMID:28451594
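
    The Laplacian eigenmaps (LE) step described above can be sketched in a minimal form (random stand-in data replaces the canine electrogram recordings; the neighborhood size and embedding dimension are arbitrary choices, not the study's):

```python
import numpy as np

def laplacian_eigenmaps(X, n_neighbors=10, n_components=2):
    # Minimal LE: build a symmetric kNN graph, form the graph Laplacian,
    # and solve the generalized eigenproblem L y = lambda D y, keeping
    # the smallest nontrivial eigenvectors as the low-dimensional manifold
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:n_neighbors + 1]  # skip self
        W[i, idx] = 1.0
    W = np.maximum(W, W.T)                          # symmetrize
    deg = W.sum(1)
    L = np.diag(deg) - W
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    L_sym = d_inv_sqrt[:, None] * L * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L_sym)
    # Map back to generalized eigenvectors y = D^{-1/2} u; drop the trivial one
    return (vecs * d_inv_sqrt[:, None])[:, 1:n_components + 1]

# Embed 200 hypothetical QRS-segment feature vectors (32 features each)
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 32))
Y = laplacian_eigenmaps(X)
print(Y.shape)
```

    Metrics such as Mshift would then be measured as displacements of trajectories along an ischemia-associated direction in this embedded space.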

  1. Novel Biomarker for Evaluating Ischemic Stress Using an Electrogram Derived Phase Space.

    PubMed

    Good, Wilson W; Erem, Burak; Coll-Font, Jaume; Brooks, Dana H; MacLeod, Rob S

    2016-09-01

    The underlying pathophysiology of ischemia is poorly understood, resulting in unreliable clinical diagnosis of this disease. This limited knowledge of underlying mechanisms suggested a data driven approach, which seeks to identify patterns in the ECG data that can be linked statistically to underlying behavior and conditions of ischemic tissue. Previous studies have suggested that an approach known as Laplacian eigenmaps (LE) can identify trajectories, or manifolds, that are sensitive to different spatiotemporal consequences of ischemic stress, and thus serve as potential clinically relevant biomarkers. We applied the LE approach to measured transmural potentials in several canine preparations, recorded during control and ischemic conditions, and discovered regions on an approximated QRS-derived manifold that were sensitive to ischemia. By identifying a vector pointing to ischemia-associated changes to the manifold and measuring the shift in trajectories along that vector during ischemia, which we denote as Mshift, it was possible to also pull that vector back into signal space and determine which electrodes were responsible for driving the observed changes in the manifold. We refer to the signal space change as the manifold differential (Mdiff). Both the Mdiff and Mshift metrics show a similar degree of sensitivity to ischemic changes as standard metrics applied during the ST segment in detecting ischemic regions. The new metrics also were able to distinguish between sub-types of ischemia. Thus our results indicate that it may be possible to use the Mshift and Mdiff metrics along with ST derived metrics to determine whether tissue within the myocardium is ischemic or not.

  2. The widest practicable dissemination: The NASA technical report server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Gottlich, Gretchen L.; Bianco, David J.; Binkley, Robert L.; Kellogg, Yvonne D.; Paulson, Sharon S.; Beaumont, Chris J.; Schmunk, Robert B.; Kurtz, Michael; Accomazzi, Alberto

    1995-01-01

    The search for innovative methods to distribute NASA's information led a grass-roots team to create the NASA Technical Report Server (NTRS), which uses the World Wide Web and other popular Internet-based information systems as search engines. The NTRS is an inter-center effort which provides uniform access to various distributed publication servers residing on the Internet. Users have immediate desktop access to technical publications from NASA centers and institutes. This paper presents the NTRS architecture, usage metrics, and the lessons learned while implementing and maintaining the services over the initial 6-month period. The NTRS is largely constructed with freely available software running on existing hardware. NTRS builds upon existing hardware and software, and the resulting additional exposure for the body of literature contained will allow NASA to ensure that its institutional knowledge base will continue to receive the widest practicable and appropriate dissemination.

  3. Initial Readability Assessment of Clinical Trial Eligibility Criteria

    PubMed Central

    Kang, Tian; Elhadad, Noémie; Weng, Chunhua

    2015-01-01

    Various search engines are available to clinical trial seekers. However, it remains unknown how comprehensible clinical trial eligibility criteria used for recruitment are to a lay audience. This study initially investigated this problem. Readability of eligibility criteria was assessed according to (i) shallow and lexical characteristics through the use of an established, generic readability metric; (ii) syntactic characteristics through natural language processing techniques; and (iii) health terminological characteristics through an automated comparison to technical and lay health texts. We further stratified clinical trials according to various study characteristics (e.g., source country or study type) to understand potential factors influencing readability. Mainly caused by frequent use of technical jargons, a college reading level was found to be necessary to understand eligibility criteria text, a level much higher than the average literacy level of the general American population. The use of technical jargons should be minimized to simplify eligibility criteria text. PMID:26958204
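    The abstract does not name the "established, generic readability metric" used; the Flesch-Kincaid grade level is one common choice. A rough sketch with a heuristic syllable counter (all details assumptions, not the study's code):

```python
import re

def count_syllables(word):
    """Crude syllable count: number of vowel groups (heuristic)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) \
        + 11.8 * syllables / len(words) - 15.59

simple = "The cat sat on the mat. It was warm."
jargon = ("Participants must demonstrate histologically confirmed adenocarcinoma "
          "with measurable metastatic involvement per radiographic evaluation.")
```

    Jargon-heavy criteria text scores a far higher grade level than plain prose, which is the effect the study reports.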

  4. Measuring the Value of Public Health Systems: The Disconnect Between Health Economists and Public Health Practitioners

    PubMed Central

    Jacobson, Peter D.; Palmer, Jennifer A.

    2008-01-01

    We investigated ways of defining and measuring the value of services provided by governmental public health systems. Our data sources included literature syntheses and qualitative interviews of public health professionals. Our examination of the health economic literature revealed growing attempts to measure value of public health services explicitly, but few studies have addressed systems or infrastructure. Interview responses demonstrated no consensus on metrics and no connection to the academic literature. Key challenges for practitioners include developing rigorous, data-driven methods and skilled staff; being politically willing to base allocation decisions on economic evaluation; and developing metrics to capture “intangibles” (e.g., social justice and reassurance value). Academic researchers evaluating the economics of public health investments should increase focus on the working needs of public health professionals. PMID:18923123

  5. Bounded Linear Stability Margin Analysis of Nonlinear Hybrid Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Boskovic, Jovan D.

    2008-01-01

    This paper presents a bounded linear stability analysis for a hybrid adaptive control that blends both direct and indirect adaptive control. Stability and convergence of nonlinear adaptive control are analyzed using an approximate linear equivalent system. A stability margin analysis shows that a large adaptive gain can lead to a reduced phase margin. This method can enable metrics-driven adaptive control whereby the adaptive gain is adjusted to meet stability margin requirements.
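    The gain/phase-margin trade-off noted above can be illustrated numerically (this is a generic second-order loop, not the paper's BLSA procedure; the transfer function and values are illustrative assumptions):

```python
import numpy as np

def phase_margin(k, a=2.0):
    """Phase margin (degrees) of the loop L(s) = k / (s*(s + a)),
    found numerically on a dense frequency grid."""
    w = np.logspace(-3, 3, 200000)
    L = k / (1j * w * (1j * w + a))
    idx = int(np.argmax(np.abs(L) < 1.0))   # gain-crossover index (|L| = 1)
    return 180.0 + np.degrees(np.angle(L[idx]))

pm_low  = phase_margin(k=1.0)    # modest loop gain
pm_high = phase_margin(k=50.0)   # large "adaptive" gain
```

    Raising the gain pushes the crossover to higher frequency, where the loop has accumulated more phase lag, so the phase margin shrinks; a metrics-driven scheme would cap the gain to keep the margin above a requirement.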

  6. Noise Levels and Data Analyses for Small Prop-Driven Aircraft

    DTIC Science & Technology

    1983-08-01

    assumption is that the acoustical emission characteristics of the test aircraft remain constant over the 3000 feet between sites. 7.1 Intensity metric...assumed that acoustical emission characteristics of the aircraft are nominally the same as the aircraft passes over the two measurement locations. As...associated with the emission of AIM. Table 12-2 lists the aircraft tested, number of samples, and the mean and standard deviation of the acoustical angle. The

  7. A Distributed Value of Information (VoI)-Based Approach for Mission-Adaptive Context-Aware Information Management and Presentation

    DTIC Science & Technology

    2016-05-16

    metrics involve regulating automation of complex systems, such as aircraft.12 Additionally, adaptive management of content in user interfaces has also...both the user and environmental context would aid in deciding how to present the information to the Warfighter. The prototype system currently...positioning system, and rate sensors can provide user-specific context to disambiguate physiologic data. The consumer “quantified self” market has driven

  8. Fronto-temporal connectivity predicts cognitive empathy deficits and experiential negative symptoms in schizophrenia.

    PubMed

    Abram, Samantha V; Wisner, Krista M; Fox, Jaclyn M; Barch, Deanna M; Wang, Lei; Csernansky, John G; MacDonald, Angus W; Smith, Matthew J

    2017-03-01

    Impaired cognitive empathy is a core social cognitive deficit in schizophrenia associated with negative symptoms and social functioning. Cognitive empathy and negative symptoms have also been linked to medial prefrontal and temporal brain networks. While shared behavioral and neural underpinnings are suspected for cognitive empathy and negative symptoms, research is needed to test these hypotheses. In two studies, we evaluated whether resting-state functional connectivity between data-driven networks, or components (referred to as, inter-component connectivity), predicted cognitive empathy and experiential and expressive negative symptoms in schizophrenia subjects. Study 1: We examined associations between cognitive empathy and medial prefrontal and temporal inter-component connectivity at rest using a group-matched schizophrenia and control sample. We then assessed whether inter-component connectivity metrics associated with cognitive empathy were also related to negative symptoms. Study 2: We sought to replicate the connectivity-symptom associations observed in Study 1 using an independent schizophrenia sample. Study 1 results revealed that while the groups did not differ in average inter-component connectivity, a medial-fronto-temporal metric and an orbito-fronto-temporal metric were related to cognitive empathy. Moreover, the medial-fronto-temporal metric was associated with experiential negative symptoms in both schizophrenia samples. These findings support recent models that link social cognition and negative symptoms in schizophrenia. Hum Brain Mapp 38:1111-1124, 2017. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.

  9. Customer Driven Uniform Manufacture (CDUM) Program. Customer Driven Uniform Management Apparel Research

    DTIC Science & Technology

    2008-11-13

    Final Technical Report 4 consumption patterns, and production status. The current version of the AAVS DataMart contains apparel and textile data...which stores the summary of the activity by item; Daily Issues which contains all the issues for the day; Daily Receipts which contains all receipts...entered for the day; and, Open Requisitions which contains all open DSCP Requisitions and Local Purchase Orders. Supply and financial transactions are

  10. Socioeconomic Forecasting: [Technical Summary]

    DOT National Transportation Integrated Search

    2012-01-01

    Because the traffic forecasts produced by the Indiana Statewide Travel Demand Model (ISTDM) are driven by the demographic and socioeconomic inputs to the model, particular attention must be given to obtaining the most accurate demographic and...

  11. USACM Thematic Workshop On Uncertainty Quantification And Data-Driven Modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, James R.

    The USACM Thematic Workshop on Uncertainty Quantification and Data-Driven Modeling was held on March 23-24, 2017, in Austin, TX. The organizers of the technical program were James R. Stewart of Sandia National Laboratories and Krishna Garikipati of University of Michigan. The administrative organizer was Ruth Hengst, who serves as Program Coordinator for the USACM. The organization of this workshop was coordinated through the USACM Technical Thrust Area on Uncertainty Quantification and Probabilistic Analysis. The workshop website (http://uqpm2017.usacm.org) includes the presentation agenda as well as links to several of the presentation slides (permission to access the presentations was granted by each of those speakers, respectively). Herein, this final report contains the complete workshop program that includes the presentation agenda, the presentation abstracts, and the list of posters.

  12. Task-Driven Optimization of Fluence Field and Regularization for Model-Based Iterative Reconstruction in Computed Tomography.

    PubMed

    Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster

    2017-12-01

    This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all strategies, the task-driven FFM strategy not only improved minimum detectability by at least 17.8% but also yielded higher detectability over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on computed detectability. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose or, equivalently, to provide a similar level of performance at reduced dose.

  13. Seeking Balance in Cyber Education

    DTIC Science & Technology

    2015-02-01

    properties that can be applied to computer systems, networks, and software. For example, in our Introduction to Cyber Security Course, given to...Below is the submittal schedule for the areas of emphasis we are looking for: Data Mining in Metrics? Jul/Aug 2015 Issue Submission Deadline: Feb...Phone Arena. PhoneArena.com, 12 Nov. 2013. Web. 08 Aug. 2014. 8. Various. “SI110: Introduction to Cyber Security, Technical Foundations.” SI110

  14. Evaluating the effectiveness of the Minamata Convention on Mercury: Principles and recommendations for next steps.

    PubMed

    Evers, David C; Keane, Susan Egan; Basu, Niladri; Buck, David

    2016-11-01

    The Minamata Convention on Mercury is a multilateral environmental agreement that obligates Parties to reduce or control sources of mercury pollution in order to protect human health and the environment. The Convention includes provisions on providing technical assistance and capacity building, particularly for developing countries and countries with economies in transition, to promote its effective implementation. Evaluating the effectiveness of the Convention (as required by Article 22) is a crucial component to ensure that it meets this objective. We describe an approach to measure effectiveness, which includes a suite of short-, medium-, and long-term metrics related to five major mercury control Articles in the Convention, as well as metrics derived from monitoring of mercury in the environment using select bioindicators, including people. The use of existing biotic Hg data will define spatial gradients (e.g., biological mercury hotspots), baselines to develop relevant temporal trends, and an ability to assess risk to taxa and human communities of greatest concern. We also recommend the development of a technical document that describes monitoring options for the Conference of Parties, to provide science-based standardized guidelines for collecting relevant monitoring information, as guided by Article 19. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. No Evidence That Gratitude Enhances Neural Performance Monitoring or Conflict-Driven Control

    PubMed Central

    Saunders, Blair; He, Frank F. H.; Inzlicht, Michael

    2015-01-01

    It has recently been suggested that gratitude can benefit self-regulation by reducing impulsivity during economic decision making. We tested if comparable benefits of gratitude are observed for neural performance monitoring and conflict-driven self-control. In a pre-post design, 61 participants were randomly assigned to either a gratitude or happiness condition, and then performed a pre-induction flanker task. Subsequently, participants recalled an autobiographical event where they had felt grateful or happy, followed by a post-induction flanker task. Despite closely following existing protocols, participants in the gratitude condition did not report elevated gratefulness compared to the happy group. In regard to self-control, we found no association between gratitude—operationalized by experimental condition or as a continuous predictor—and any control metric, including flanker interference, post-error adjustments, or neural monitoring (the error-related negativity, ERN). Thus, while gratitude might increase economic patience, such benefits may not generalize to conflict-driven control processes. PMID:26633830

  16. No Evidence That Gratitude Enhances Neural Performance Monitoring or Conflict-Driven Control.

    PubMed

    Saunders, Blair; He, Frank F H; Inzlicht, Michael

    2015-01-01

    It has recently been suggested that gratitude can benefit self-regulation by reducing impulsivity during economic decision making. We tested if comparable benefits of gratitude are observed for neural performance monitoring and conflict-driven self-control. In a pre-post design, 61 participants were randomly assigned to either a gratitude or happiness condition, and then performed a pre-induction flanker task. Subsequently, participants recalled an autobiographical event where they had felt grateful or happy, followed by a post-induction flanker task. Despite closely following existing protocols, participants in the gratitude condition did not report elevated gratefulness compared to the happy group. In regard to self-control, we found no association between gratitude--operationalized by experimental condition or as a continuous predictor--and any control metric, including flanker interference, post-error adjustments, or neural monitoring (the error-related negativity, ERN). Thus, while gratitude might increase economic patience, such benefits may not generalize to conflict-driven control processes.

  17. Space station pressurized laboratory safety guidelines

    NASA Technical Reports Server (NTRS)

    Mcgonigal, Les

    1990-01-01

    Before technical safety guidelines and requirements are established, a common understanding of their origin and importance must be shared between Space Station Program Management, the User Community, and the Safety organizations involved. Safety guidelines and requirements are driven by the nature of the experiments, and the degree of crew interaction. Hazard identification; development of technical safety requirements; operating procedures and constraints; provision of training and education; conduct of reviews and evaluations; and emergency preplanning are briefly discussed.

  18. Comparison of Fixed Dental Prostheses with Zirconia and Metal Frameworks: Five-Year Results of a Randomized Controlled Clinical Trial.

    PubMed

    Sailer, Irena; Balmer, Marc; Hüsler, Jürg; Hämmerle, Christoph Hans Franz; Känel, Sarah; Thoma, Daniel Stefan

    The aim of this study was to test whether posterior zirconia-ceramic (ZC) and metal-ceramic (MC) fixed dental prostheses (FDPs) exhibit similar survival and technical/biologic complication rates. A total of 58 patients in need of 76 posterior FDPs were randomly assigned to receive 40 ZC and 36 MC FDPs. The restorations were examined at baseline (cementation) and yearly for 5 years. Technical and biologic outcomes were compared. The independent treatment groups were compared with nonparametric Mann-Whitney test for metric variables and with Fisher exact test for categoric data. A total of 52 patients with 40 ZC and 29 MC FDPs were examined at 5 years. No FDP failed during the 5 years; 2 ZC FDPs failed at 65.4 and 73.3 months. Debonding occurred at 3 ZC FDPs. Technical outcomes (modified US Public Health Service criteria) and general periodontal parameters did not show significant differences between ZC and MC FDPs. ZC FDPs exhibited similar outcomes to MC FDPs based on 5-year survival estimates. The majority of technical and biologic outcome measures were not significantly different.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Gary E.; Hennen, Matthew J.; Zimmerman, Shon A.

    The study reported herein was conducted by the Pacific Northwest National Laboratory (PNNL) and the University of Washington (UW) for the U.S. Army Corps of Engineers, Portland District (USACE). The PNNL and UW project managers were Drs. Thomas J. Carlson and John R. Skalski, respectively. The USACE technical lead was Mr. Brad Eppard. The study was designed to estimate dam passage survival and other performance measures at The Dalles Dam as stipulated by the 2008 Federal Columbia River Power System Biological Opinion (BiOp) and the 2008 Columbia Basin Fish Accords. The study is being documented in two types of reports: compliance and technical. A compliance report is delivered within 6 months of the completion of the field season and focuses on results of the performance metrics outlined in the 2008 BiOp and Fish Accords. A technical report is produced within the 18 months after field work, providing comprehensive documentation of a given study and results on route-specific survival estimates and fish passage distributions, which are not included in compliance reports. This technical report concerns the 2011 acoustic telemetry study at The Dalles Dam.

  20. Robustness of remote stress detection from visible spectrum recordings

    NASA Astrophysics Data System (ADS)

    Kaur, Balvinder; Moses, Sophia; Luthra, Megha; Ikonomidou, Vasiliki N.

    2016-05-01

    In our recent work, we have shown that it is possible to extract high fidelity timing information of the cardiac pulse wave from visible spectrum videos, which can then be used as a basis for stress detection. In that approach, we used both heart rate variability (HRV) metrics and the differential pulse transit time (dPTT) as indicators of the presence of stress. One of the main concerns in this analysis is its robustness in the presence of noise, as the remotely acquired signal that we call blood wave (BW) signal is degraded with respect to the signal acquired using contact sensors. In this work, we discuss the robustness of our metrics in the presence of multiplicative noise. Specifically, we study the effects of subtle motion due to respiration and changes in illumination levels due to light flickering on the BW signal, the HRV-driven features, and the dPTT. Our sensitivity study involved both Monte Carlo simulations and experimental data from human facial videos, and indicates that our metrics are robust even under moderate amounts of noise. Generated results will help the remote stress detection community with developing requirements for visual spectrum based stress detection systems.
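    The abstract does not list the specific HRV metrics used; RMSSD is one standard time-domain HRV metric and serves here only as an illustrative sketch (the RR values are made up):

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive differences of beat-to-beat
    (RR) intervals, a standard time-domain HRV metric, in ms."""
    d = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(d**2)))

# Toy RR series (ms): rest typically shows more beat-to-beat
# variability than acute stress
rest   = [800, 820, 790, 830, 805, 825]
stress = [700, 702, 699, 701, 700, 703]
```

    A drop in RMSSD between baseline and task recordings is one of the signatures such stress-detection pipelines look for.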

  1. Quantitative and qualitative research across cultures and languages: cultural metrics and their application.

    PubMed

    Wagner, Wolfgang; Hansen, Karolina; Kronberger, Nicole

    2014-12-01

    Growing globalisation of the world draws attention to cultural differences between people from different countries or from different cultures within the countries. Notwithstanding the diversity of people's worldviews, current cross-cultural research still faces the challenge of how to avoid ethnocentrism; comparing Western-driven phenomena with like variables across countries without checking their conceptual equivalence clearly is highly problematic. In the present article we argue that simple comparison of measurements (in the quantitative domain) or of semantic interpretations (in the qualitative domain) across cultures easily leads to inadequate results. Questionnaire items or text produced in interviews or via open-ended questions have culturally laden meanings and cannot be mapped onto the same semantic metric. We call the culture-specific space and relationship between variables or meanings a 'cultural metric', that is a set of notions that are inter-related and that mutually specify each other's meaning. We illustrate the problems and their possible solutions with examples from quantitative and qualitative research. The suggested methods make it possible to respect the semantic space of notions in cultures and language groups, so that the resulting similarities or differences between cultures can be better understood and interpreted.

  2. New Methods for the Computational Fabrication of Appearance

    DTIC Science & Technology

    2015-06-01

    disadvantage is that it does not model phenomena such as retro-reflection and grazing-angle effects. We find that previously proposed BRDF metrics performed well...Figure 3.15-right shows the mean BRDF in blue and the corresponding error bars. In order to interpret our data, we fit a parametric model to slices of the...and Wojciech Matusik. Image-driven navigation of analytical brdf models. In Eurographics Symposium on Rendering, 2006. 107 [40] F. E. Nicodemus, J. C

  3. Classical Statistics and Statistical Learning in Imaging Neuroscience

    PubMed Central

    Bzdok, Danilo

    2017-01-01

    Brain-imaging research has predominantly generated insight by means of classical statistics, including regression-type analyses and null-hypothesis testing using t-test and ANOVA. Throughout recent years, statistical learning methods enjoy increasing popularity especially for applications in rich and complex data, including cross-validated out-of-sample prediction using pattern classification and sparsity-inducing regression. This concept paper discusses the implications of inferential justifications and algorithmic methodologies in common data analysis scenarios in neuroimaging. It is retraced how classical statistics and statistical learning originated from different historical contexts, build on different theoretical foundations, make different assumptions, and evaluate different outcome metrics to permit differently nuanced conclusions. The present considerations should help reduce current confusion between model-driven classical hypothesis testing and data-driven learning algorithms for investigating the brain with imaging techniques. PMID:29056896

  4. Describing Directional Cell Migration with a Characteristic Directionality Time

    PubMed Central

    Loosley, Alex J.; O’Brien, Xian M.; Reichner, Jonathan S.; Tang, Jay X.

    2015-01-01

    Many cell types can bias their direction of locomotion by coupling to external cues. Characteristics such as how fast a cell migrates and the directedness of its migration path can be quantified to provide metrics that determine which biochemical and biomechanical factors affect directional cell migration, and by how much. To be useful, these metrics must be reproducible from one experimental setting to another. However, most are not reproducible because their numerical values depend on technical parameters like sampling interval and measurement error. To address the need for a reproducible metric, we analytically derive a metric called directionality time, the minimum observation time required to identify motion as directionally biased. We show that the corresponding fit function is applicable to a variety of ergodic, directionally biased motions. A motion is ergodic when the underlying dynamical properties such as speed or directional bias do not change over time. Measuring the directionality of nonergodic motion is less straightforward but we also show how this class of motion can be analyzed. Simulations are used to show the robustness of directionality time measurements and its decoupling from measurement errors. As a practical example, we demonstrate the measurement of directionality time, step-by-step, on noisy, nonergodic trajectories of chemotactic neutrophils. Because of its inherent generality, directionality time ought to be useful for characterizing a broad range of motions including intracellular transport, cell motility, and animal migration. PMID:25992908
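    The paper's directionality time is a fitted quantity describing how directedness estimates depend on observation time; the underlying directedness measure itself is simpler. A minimal sketch of the basic directionality ratio (net displacement over path length), with toy trajectories, might be:

```python
import numpy as np

def directionality_ratio(path):
    """Net displacement divided by total path length for a 2-D
    trajectory: 1.0 = perfectly straight, near 0 = meandering."""
    p = np.asarray(path, dtype=float)
    steps = np.diff(p, axis=0)
    path_len = float(np.sum(np.linalg.norm(steps, axis=1)))
    net = float(np.linalg.norm(p[-1] - p[0]))
    return net / path_len

straight = [(0, 0), (1, 0), (2, 0), (3, 0)]
zigzag   = [(0, 0), (1, 1), (2, 0), (3, 1)]
```

    The paper's point is that this ratio depends on sampling interval and noise, which is exactly what the directionality-time fit is designed to factor out.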

  5. The relationships between lost work time and duration of absence spells: proposal for a payroll driven measure of absenteeism.

    PubMed

    Hill, James J; Slade, Martin D; Cantley, Linda; Vegso, Sally; Fiellin, Martha; Cullen, Mark R

    2008-07-01

    To propose a standard measure of absenteeism (the work lost rate [WLR]) be included in future research to facilitate understanding and allow for translation of findings between scientific disciplines. Hourly payroll data derived from "punch clock" reports was used to compare various measures of absenteeism used in the literature and the application of the proposed metric (N = 4000 workers). Unpaid hours and full absent days were highly correlated with the WLR (r = 0.896 to 0.898). The highest percentage of unpaid hours (lost work time) is captured by absence spells of 1 and 2 days duration. The proposed WLR metric captures: 1) The range and distribution of the individual WLRs, 2) the percentage of subjects with no unpaid hours, and 3) the population WLR and should be included whenever payroll data is used to measure absenteeism.

  6. The Relationships Between Lost Work Time and Duration of Absence Spells: Proposal for a Payroll Driven Measure of Absenteeism

    PubMed Central

    Hill, James J.; Slade, Martin D.; Cantley, Linda; Vegso, Sally; Fiellin, Martha; Cullen, Mark R.

    2011-01-01

    Objective To propose a standard measure of absenteeism (the work lost rate [WLR]) be included in future research to facilitate understanding and allow for translation of findings between scientific disciplines. Methods Hourly payroll data derived from “punch clock” reports was used to compare various measures of absenteeism used in the literature and the application of the proposed metric (N = 4000 workers). Results Unpaid hours and full absent days were highly correlated with the WLR (r = 0.896 to 0.898). The highest percentage of unpaid hours (lost work time) is captured by absence spells of 1 and 2 days duration. Conclusion The proposed WLR metric captures: 1) The range and distribution of the individual WLRs, 2) the percentage of subjects with no unpaid hours, and 3) the population WLR and should be included whenever payroll data is used to measure absenteeism. PMID:18617841
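    A minimal sketch of the proposed WLR, assuming it is defined as unpaid (lost) hours divided by scheduled hours (the abstract does not give the exact formula):

```python
def work_lost_rate(unpaid_hours, scheduled_hours):
    """Work lost rate: fraction of scheduled work time lost to absence.
    (Definition assumed from the abstract; the paper's exact
    formula may differ.)"""
    if scheduled_hours <= 0:
        raise ValueError("scheduled_hours must be positive")
    return unpaid_hours / scheduled_hours

# A worker scheduled for 2000 h/yr who missed 80 unpaid hours
wlr = work_lost_rate(80, 2000)   # 4% of scheduled time lost
```

    Computed per worker from payroll records, the individual WLRs give the distribution, the share of workers with zero unpaid hours, and (pooled) the population WLR the authors propose to report.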

  7. Cosmological perturbations and noncommutative tachyon inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Daojun; Li Xinzhou

    2004-12-15

    The motivation for studying the rolling tachyon and noncommutative inflation comes from string theory. In the tachyon inflation scenario, metric perturbations are created by tachyon field fluctuations during inflation. We derive the exact mode equation for scalar perturbations of the metric and investigate the cosmological perturbations in the commutative and noncommutative inflationary spacetime driven by the tachyon field, which has a Born-Infeld Lagrangian. Although at lowest order the predictions of tachyon inflation are no different from those of standard slow-roll inflation, due to the modified inflationary dynamics there exist modifications to the power spectra of fluctuations generated during inflation. In the noncommutative tachyon inflation scenario, the stringy noncommutativity of spacetime results in corrections to the primordial power spectrum that lead to a spectral index that is greater than 1 on large scales and less than 1 on small scales, as the first-year results of the Wilkinson Microwave Anisotropy Probe indicate.

  8. Ocean Carbon Cycle Feedbacks Under Negative Emissions

    NASA Astrophysics Data System (ADS)

    Schwinger, Jörg; Tjiputra, Jerry

    2018-05-01

    Negative emissions will most likely be needed to achieve ambitious climate targets, such as limiting global warming to 1.5°. Here we analyze the ocean carbon-concentration and carbon-climate feedback in an Earth system model under an idealized strong CO2 peak and decline scenario. We find that the ocean carbon-climate feedback is not reversible by means of negative emissions on decadal to centennial timescales. When preindustrial surface climate is restored, the oceans, due to the carbon-climate feedback, still contain about 110 Pg less carbon compared to a simulation without climate change. This result is unsurprising but highlights an issue with a widely used carbon cycle feedback metric. We show that this metric can be greatly improved by using ocean potential temperature as a proxy for climate change. The nonlinearity (nonadditivity) of climate and CO2-driven feedbacks continues to grow after the atmospheric CO2 peak.
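    The "widely used carbon cycle feedback metric" is not written out in the abstract; in the carbon-cycle literature the linear feedback decomposition is commonly expressed as below (a sketch of the standard notation only, with the proxy substitution the abstract proposes noted in comments):

```latex
% Standard linear decomposition of the ocean carbon inventory change:
%   beta  = carbon-concentration feedback parameter
%   gamma = carbon-climate feedback parameter
\Delta C_{\mathrm{ocean}} \approx \beta\,\Delta\mathrm{CO}_2 + \gamma\,\Delta T
% The abstract's proposed improvement replaces the surface-climate
% proxy \Delta T with ocean potential temperature \Delta\theta when
% estimating the carbon-climate feedback.
```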

  9. A cross-validation package driving Netica with python

    USGS Publications Warehouse

    Fienen, Michael N.; Plant, Nathaniel G.

    2014-01-01

    Bayesian networks (BNs) are powerful tools for probabilistically simulating natural systems and emulating process models. Cross validation is a technique to avoid overfitting resulting from overly complex BNs. Overfitting reduces predictive skill. Cross-validation for BNs is known but rarely implemented due partly to a lack of software tools designed to work with available BN packages. CVNetica is open-source, written in Python, and extends the Netica software package to perform cross-validation and read, rebuild, and learn BNs from data. Insights gained from cross-validation and implications on prediction versus description are illustrated with: a data-driven oceanographic application; and a model-emulation application. These examples show that overfitting occurs when BNs become more complex than allowed by supporting data and overfitting incurs computational costs as well as causing a reduction in prediction skill. CVNetica evaluates overfitting using several complexity metrics (we used level of discretization) and its impact on performance metrics (we used skill).
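    CVNetica itself is Netica-specific, but the reason cross-validation exposes overfitting is generic. A small numpy sketch comparing held-out error of a simple versus an overly complex model (polynomial fits stand in for BNs of increasing complexity; all names illustrative):

```python
import numpy as np

def kfold_mse(x, y, degree, k=5):
    """Mean held-out squared error of a polynomial fit via k-fold CV."""
    idx = np.arange(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        coef = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coef, x[fold])
        errs.append(np.mean((y[fold] - pred) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 60)
y = 2.0 * x + 0.1 * rng.standard_normal(60)   # truly linear signal + noise
cv_simple  = kfold_mse(x, y, degree=1)        # matches the data complexity
cv_complex = kfold_mse(x, y, degree=12)       # more complex than the data supports
```

    The overly complex model fits the training noise and predicts held-out folds badly, the same loss of predictive skill CVNetica's complexity metrics are meant to flag.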

  10. An evaluation of data-driven motion estimation in comparison to the usage of external-surrogates in cardiac SPECT imaging

    PubMed Central

    Mukherjee, Joyeeta Mitra; Hutton, Brian F; Johnson, Karen L; Pretorius, P Hendrik; King, Michael A

    2014-01-01

    Motion estimation methods in single photon emission computed tomography (SPECT) can be classified into methods that depend only on the emission data (data-driven) and those that use some other source of information, such as an external surrogate. Surrogate-based methods estimate the motion exhibited externally, which may not correlate exactly with the movement of organs inside the body. The accuracy of data-driven strategies, on the other hand, is affected by the type and timing of motion occurrence during acquisition, the source distribution, and various degrading factors such as attenuation, scatter, and system spatial resolution. The goal of this paper is to investigate the performance of two data-driven motion estimation schemes based on the rigid-body registration of projections of motion-transformed source distributions to the acquired projection data for cardiac SPECT studies. Six intensity-based registration metrics are also compared against an external surrogate-based method. In the data-driven schemes, a partially reconstructed heart is used as the initial source distribution. The partially reconstructed heart has inaccuracies due to limited-angle artifacts resulting from using only a part of the SPECT projections acquired while the patient maintained the same pose. The performance of different cost functions in quantifying consistency with the SPECT projection data in the data-driven schemes was compared for clinically realistic patient motion occurring as discrete pose changes one or two times during acquisition. The six intensity-based metrics studied were mean-squared difference (MSD), mutual information (MI), normalized mutual information (NMI), pattern intensity (PI), normalized cross-correlation (NCC) and entropy of the difference (EDI).
Quantitative and qualitative analysis of the performance is reported using Monte-Carlo simulations of a realistic heart phantom including degradation factors such as attenuation, scatter and system spatial resolution. Further the visual appearance of motion-corrected images using data-driven motion estimates was compared to that obtained using the external motion-tracking system in patient studies. Pattern intensity and normalized mutual information cost functions were observed to have the best performance in terms of lowest average position error and stability with degradation of image quality of the partial reconstruction in simulations. In all patients, the visual quality of PI-based estimation was either significantly better or comparable to NMI-based estimation. Best visual quality was obtained with PI-based estimation in 1 of the 5 patient studies, and with external-surrogate based correction in 3 out of 5 patients. In the remaining patient study there was little motion and all methods yielded similar visual image quality. PMID:24107647
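
    Two of the six registration metrics named above are simple enough to sketch directly. The following toy functions (illustrative only; they operate on flattened intensity lists rather than real SPECT projections) show mean-squared difference and entropy of the difference:

```python
import math
from collections import Counter

def mean_squared_difference(a, b):
    """MSD between two equal-length intensity lists (0 for a perfect match)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def entropy_of_difference(a, b, bin_width=1.0):
    """EDI: Shannon entropy (bits) of the binned difference image.

    A well-registered pair gives a peaked difference histogram and hence
    low entropy; misregistration spreads the histogram and raises it.
    """
    diffs = [round((x - y) / bin_width) for x, y in zip(a, b)]
    n = len(diffs)
    return -sum(c / n * math.log2(c / n) for c in Counter(diffs).values())
```

    A perfectly aligned pair yields MSD = 0 and EDI = 0; in the schemes studied here, such metrics are evaluated between projections of the motion-transformed source distribution and the acquired projection data.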

  11. Unraveling Landscape Complexity: Land Use/Land Cover Changes and Landscape Pattern Dynamics (1954-2008) in Contrasting Peri-Urban and Agro-Forest Regions of Northern Italy.

    PubMed

    Smiraglia, D; Ceccarelli, T; Bajocco, S; Perini, L; Salvati, L

    2015-10-01

    This study implements an exploratory data analysis of landscape metrics and a change detection analysis of land use and population density to assess landscape dynamics (1954-2008) in two physiographic zones (plain and hilly-mountain area) of Emilia Romagna, northern Italy. The two areas are characterized by different landscape types: a mixed urban-rural landscape dominated by arable land and peri-urban settlements in the plain and a traditional agro-forest landscape in the hilly-mountain area with deciduous and conifer forests, scrublands, meadows, and crop mosaic. Urbanization and, to a lesser extent, agricultural intensification were identified as the processes underlying landscape change in the plain. Land abandonment, which promotes natural forestation, together with man-driven re-forestation, was identified as the process of change most representative of the hilly-mountain area. Trends in landscape metrics indicate a shift toward more fragmented and convoluted patterns in both areas. Number of patches, the interspersion and juxtaposition index, and the large patch index are the metrics discriminating the two areas in terms of landscape patterns in 1954. In 2008, mean patch size, edge density, interspersion and juxtaposition index, and mean Euclidean nearest neighbor distance were the metrics with the most different spatial patterns in the two areas. The exploratory data analysis of landscape metrics helped link changes over time in both landscape composition and configuration, providing a comprehensive picture of landscape transformations in a wealthy European region. The evidence from this study is expected to inform sustainable land management designed for homogeneous landscape units in similar socioeconomic contexts.

  12. Comparing alternative and traditional dissemination metrics in medical education.

    PubMed

    Amath, Aysah; Ambacher, Kristin; Leddy, John J; Wood, Timothy J; Ramnanan, Christopher J

    2017-09-01

    The impact of academic scholarship has traditionally been measured using citation-based metrics. However, citations may not be the only measure of impact. In recent years, other platforms (e.g. Twitter) have provided new tools for promoting scholarship to both academic and non-academic audiences. Alternative metrics (altmetrics) can capture non-traditional dissemination data such as attention generated on social media platforms. The aims of this exploratory study were to characterise the relationships among altmetrics, access counts and citations in an international and pre-eminent medical education journal, and to clarify the roles of these metrics in assessing the impact of medical education academic scholarship. A database study was performed (September 2015) for all papers published in Medical Education in 2012 (n = 236) and 2013 (n = 246). Citation, altmetric and access (HTML views and PDF downloads) data were obtained from Scopus, the Altmetric Bookmarklet tool and the journal Medical Education, respectively. Pearson coefficients (r-values) between metrics of interest were then determined. Twitter and Mendeley (an academic bibliography tool) were the only altmetric-tracked platforms frequently (> 50%) utilised in the dissemination of articles. Altmetric scores (composite measures of all online attention) were driven by Twitter mentions. For short and full-length articles in 2012 and 2013, both access counts and citation counts were most strongly correlated with one another, as well as with Mendeley downloads. By comparison, Twitter metrics and altmetric scores demonstrated weak to moderate correlations with both access and citation counts. 
Whereas most altmetrics showed limited correlations with readership (access counts) and impact (citations), Mendeley downloads correlated strongly with both readership and impact indices for articles published in the journal Medical Education and may therefore have potential use that is complementary to that of citations in assessment of the impact of medical education scholarship. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
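
    The Pearson coefficients (r-values) reported in this study are plain product-moment correlations; a minimal sketch of the computation, applied to two hypothetical per-article metric series (stdlib only), looks like this:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical per-article metrics: Mendeley downloads vs. citation counts.
mendeley = [12, 40, 7, 55, 23]
citations = [3, 11, 2, 14, 6]
r = pearson_r(mendeley, citations)
```

    An r near 1 for Mendeley downloads versus citations would mirror the strong correlation the study reports, whereas Twitter mentions showed only weak to moderate r-values.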

  13. JPRS Report Environmental Issues.

    DTIC Science & Technology

    1990-04-10


  14. Proceedings of the Augmented VIsual Display (AVID) Research Workshop

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K. (Editor); Sweet, Barbara T. (Editor)

    1993-01-01

    The papers, abstracts, and presentations were presented at a three day workshop focused on sensor modeling and simulation, and image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.

  15. United States Air Force Summer Faculty Research Program (1983). Technical Report. Volume 1

    DTIC Science & Technology

    1983-12-01


  16. Department of Defense Software Factbook

    DTIC Science & Technology

    2017-07-07

    parameters, these rules of thumb may not provide a lot of value to project managers estimating their software efforts. To get the information useful to them...organization determine the total cost of a particular project , but it is a useful metric to technical managers when they are required to submit an annual...outcome. It is most likely a combination of engineering, management , and funding factors. Although a project may resist planning a schedule slip, this

  17. A bio-hybrid anaerobic treatment of papaya processing wastes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, P.Y.; Chou, C.Y.

    1987-01-01

    Hybrid anaerobic treatment of papaya processing wastes is technically feasible. At 30°C, the optimal organic loading rates for maximizing organic removal efficiency and methane production are 1.3 and 4.8 g TCOD/l/day, respectively. Elimination of post-handling and treatment of digested effluent can also be achieved. The system is more suitable for those processing plants with a waste amount of more than 3,000 metric tons per year.

  18. Metrics of Software Quality.

    DTIC Science & Technology

    1980-11-01


  19. Roadway Traffic Data Collection from Mobile Platforms, Technical Summary

    DOT National Transportation Integrated Search

    2017-07-28

    This project empirically investigates the traffic flow estimations from different types of data collected from two types of mobile platforms: transit buses in service operations, and a van driven to emulate bus coverage, that repeatedly traverse...

  20. Model-Driven Configuration of SELinux Policies

    NASA Astrophysics Data System (ADS)

    Agreiter, Berthold; Breu, Ruth

    The need for access control in computer systems is inherent. However, the complexity of configuring such systems is constantly increasing, which negatively affects the overall security of a system. We think that it is important to define security requirements on a non-technical level, while taking the application domain into account, in order to have a clear and separated view on security configuration (i.e. unblurred by technical details). On the other hand, security functionality has to be tightly integrated with the system and its development process in order to provide comprehensive means of enforcement. In this paper, we propose a systematic approach based on model-driven security configuration to leverage existing operating system security mechanisms (SELinux) for realising access control. We use UML models and develop a UML profile to satisfy these needs. Our goal is to exploit a comprehensive protection mechanism while rendering its security policy manageable by a domain specialist.

  1. Virtual Preoperative Planning and Intraoperative Navigation in Facial Prosthetic Reconstruction: A Technical Note.

    PubMed

    Verma, Suzanne; Gonzalez, Marianela; Schow, Sterling R; Triplett, R Gilbert

    This technical protocol outlines the use of computer-assisted image-guided technology for the preoperative planning and intraoperative procedures involved in implant-retained facial prosthetic treatment. A contributing factor for a successful prosthetic restoration is accurate preoperative planning to identify prosthetically driven implant locations that maximize bone contact and enhance cosmetic outcomes. Navigational systems virtually transfer precise digital planning into the operative field for placing implants to support prosthetic restorations. In this protocol, there is no need to construct a physical, and sometimes inaccurate, surgical guide. The report addresses treatment workflow, radiologic data specifications, and special considerations in data acquisition, virtual preoperative planning, and intraoperative navigation for the prosthetic reconstruction of unilateral, bilateral, and midface defects. Utilization of this protocol for the planning and surgical placement of craniofacial bone-anchored implants allows positioning of implants to be prosthetically driven, accurate, precise, and efficient, and leads to a more predictable treatment outcome.

  2. Fundamentals of neurosurgery: virtual reality tasks for training and evaluation of technical skills.

    PubMed

    Choudhury, Nusrat; Gélinas-Phaneuf, Nicholas; Delorme, Sébastien; Del Maestro, Rolando

    2013-11-01

    Technical skills training in neurosurgery is mostly done in the operating room. New educational paradigms are encouraging the development of novel training methods for surgical skills. Simulation could answer some of these needs. This article presents the development of a conceptual training framework for use on a virtual reality neurosurgical simulator. Appropriate tasks were identified by reviewing neurosurgical oncology curricula requirements and performing cognitive task analyses of basic techniques and representative surgeries. The tasks were then elaborated into training modules by including learning objectives, instructions, levels of difficulty, and performance metrics. Surveys and interviews were iteratively conducted with subject matter experts to delimitate, review, discuss, and approve each of the development stages. Five tasks were selected as representative of basic and advanced neurosurgical skill. These tasks were: 1) ventriculostomy, 2) endoscopic nasal navigation, 3) tumor debulking, 4) hemostasis, and 5) microdissection. The complete training modules were structured into easy, intermediate, and advanced settings. Performance metrics were also integrated to provide feedback on outcome, efficiency, and errors. The subject matter experts deemed the proposed modules as pertinent and useful for neurosurgical skills training. The conceptual framework presented here, the Fundamentals of Neurosurgery, represents a first attempt to develop standardized training modules for technical skills acquisition in neurosurgical oncology. The National Research Council Canada is currently developing NeuroTouch, a virtual reality simulator for cranial microneurosurgery. The simulator presently includes the five Fundamentals of Neurosurgery modules at varying stages of completion. A first pilot study has shown that neurosurgical residents obtained higher performance scores on the simulator than medical students. 
Further work will validate its components and use in a training curriculum. Copyright © 2013 N. Choudhury. Published by Elsevier Inc. All rights reserved.

  3. Closed-form solutions in stress-driven two-phase integral elasticity for bending of functionally graded nano-beams

    NASA Astrophysics Data System (ADS)

    Barretta, Raffaele; Fabbrocino, Francesco; Luciano, Raimondo; Sciarra, Francesco Marotti de

    2018-03-01

    Strain-driven and stress-driven integral elasticity models are formulated for the analysis of the structural behaviour of functionally graded nano-beams. An innovative stress-driven two-phase constitutive mixture, defined by a convex combination of local and nonlocal phases, is presented. The analysis reveals that the Eringen strain-driven fully nonlocal model cannot be used in Structural Mechanics since it is ill-posed, and that local-nonlocal mixtures based on the Eringen integral model only partially resolve the ill-posedness of the model. In fact, a singular behaviour of continuous nano-structures appears if the local fraction tends to vanish, so that the ill-posedness of the Eringen integral model is not eliminated. On the contrary, local-nonlocal mixtures based on the stress-driven theory are mathematically and mechanically appropriate for nanosystems. Exact solutions of inflected functionally graded nanobeams of technical interest are established by adopting the new local-nonlocal mixture stress-driven integral relation. Effectiveness of the new nonlocal approach is tested by comparing the contributed results with the ones corresponding to the mixture Eringen theory.
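
    For orientation, a stress-driven two-phase mixture has the general form of a convex combination of a local term and a nonlocal convolution term. The sketch below is our reconstruction from the abstract, not the paper's exact notation, written for the elastic curvature of a bent beam:

```latex
% Stress-driven local/nonlocal mixture (sketch): the elastic curvature
% \chi is a convex combination, with mixture parameter \alpha \in [0,1],
% of the local response and a nonlocal convolution of the bending moment M,
% using (for example) the bi-exponential averaging kernel \phi_\lambda.
\chi(x) = \alpha \, \frac{M(x)}{E I}
        + (1 - \alpha) \int_0^L \phi_\lambda(x - \xi) \, \frac{M(\xi)}{E I} \, d\xi ,
\qquad
\phi_\lambda(x) = \frac{1}{2\lambda} \, e^{-|x|/\lambda}
```

    Taking \alpha toward 0 recovers a fully nonlocal model, which is the limit in which the Eringen strain-driven mixture exhibits the singular behaviour noted above; the stress-driven mixture remains well-posed.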

  4. Solar Type II Radio Bursts and IP Type II Events

    NASA Technical Reports Server (NTRS)

    Cane, H. V.; Erickson, W. C.

    2005-01-01

    We have examined radio data from the WAVES experiment on the Wind spacecraft in conjunction with ground-based data in order to investigate the relationship between the shocks responsible for metric type II radio bursts and the shocks in front of coronal mass ejections (CMEs). The bow shocks of fast, large CMEs are strong interplanetary (IP) shocks, and the associated radio emissions often consist of single broad bands starting below approx. 4 MHz; such emissions were previously called IP type II events. In contrast, metric type II bursts are usually narrowbanded and display two harmonically related bands. In addition to displaying complete dynamic spectra for a number of events, we also analyze the 135 WAVES 1 - 14 MHz slow-drift time periods in 2001-2003. We find that most of the periods contain multiple phenomena, which we divide into three groups: metric type II extensions, IP type II events, and blobs and bands. About half of the WAVES listings include probable extensions of metric type II radio bursts, but in more than half of these events, there were also other slow-drift features. In the 3 yr study period, there were 31 IP type II events; these were associated with the very fastest CMEs. The most common form of activity in the WAVES events, blobs and bands in the frequency range between 1 and 8 MHz, fall below an envelope consistent with the early signatures of an IP type II event. However, most of this activity lasts only a few tens of minutes, whereas IP type II events last for many hours. In this study we find many examples in the radio data of two shock-like phenomena with different characteristics that occur simultaneously in the metric and decametric/hectometric bands, and no clear example of a metric type II burst that extends continuously down in frequency to become an IP type II event. The simplest interpretation is that metric type II bursts, unlike IP type II events, are not caused by shocks driven in front of CMEs.

  5. When do drilling alliances add value? The alliance value model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brett, J.F.; Craig, V.B.; Wadsworth, D.B.

    1996-12-31

    A recent GRI report details three previously unstudied aspects of alliances: specific measurable factors that improve alliance success, how a successful alliance should be structured, and when an alliance makes economic sense. The most innovative tool to emerge from the report, the Alliance Value Model, addresses the third aspect. The theory behind the Alliance Value Model is that the long-term viability of any drilling relationship hinges on its ability to create real value and achieve stability. Based upon the report's findings, the most effective way to form such an alliance is through a detailed description and integration of the technical processes involved. This new type of process-driven alliance is characterized by a value chain which links together a common set of technical processes, mutually defined bottom-line goals, and shared benefits. Building a process-driven alliance requires time and people and therefore has an associated cost. The real value generated by an alliance must exceed this start-up cost. The Alliance Value Model computes the net present value (NPV) of the cash flows for four different operating arrangements: (1) Business As Usual (conventional competitive bidding process), (2) Process-Driven Alliance (linking technical processes to accelerate production and reduce expenses), (3) Incentivized Process-Driven Alliance (linked technical processes with performance incentives to promote stability), and (4) No Drill Case (primarily used to gauge the market value of services). These arrangements test different degrees of process integration between an operator and its suppliers. They can also help determine if the alliance can add enough value to exceed startup costs and if the relationship will be stable. Each partner can test the impact of the relational structure on its own profitability. When an alliance is warranted, all participants can benefit from real value generated in a stable relationship.

  6. Using Landscape Analysis to Test Hypotheses about Drivers of Tick Abundance and Infection Prevalence with Borrelia burgdorferi.

    PubMed

    Ferrell, A Michelle; Brinkerhoff, R Jory

    2018-04-12

    Patterns of vector-borne disease risk are changing globally in space and time, and elevated risk of vector-borne infection can be driven by anthropogenic modification of the environment. Incidence of Lyme disease, caused by the bacterium Borrelia burgdorferi sensu stricto, has risen in a number of locations in North America, and this increase may be driven by spatially or numerically expanding populations of the primary tick vector, Ixodes scapularis. We used a model selection approach to identify habitat fragmentation and land-use/land cover variables to test the hypothesis that the amount and configuration of forest cover at spatial scales relevant to deer, the primary hosts of adult ticks, would be the predominant determinants of tick abundance. We expected that land cover heterogeneity and amount of forest edge, a habitat thought to facilitate deer foraging and survival, would be the strongest driver of tick density and that larger spatial scales (5-10 km) would be more important than smaller scales (1 km). We generated metrics of deciduous and mixed forest fragmentation using Fragstats 4.4 implemented in ArcMap 10.3 and found, after adjusting for multicollinearity, that total forest edge within a 5 km buffer had a significant negative effect on tick density and that the proportion of forested land cover within a 10 km buffer was positively associated with density of I. scapularis nymphs. None of the 1 km fragmentation metrics were found to significantly improve the fit of the model. Elevation, previously associated with increased density of I. scapularis nymphs in Virginia, while significantly predictive in univariate analysis, was not an important driver of nymph density relative to fragmentation metrics. Our results suggest that amount of forest cover (i.e., lack of fragmentation) is the most important driver of I. scapularis density in our study system.

  7. Methodological comparison of active- and passive-driven oscillations in blood pressure; implications for the assessment of cerebral pressure-flow relationships.

    PubMed

    Smirl, Jonathan D; Hoffman, Keegan; Tzeng, Yu-Chieh; Hansen, Alex; Ainslie, Philip N

    2015-09-01

    We examined the between-day reproducibility of active (squat-stand maneuvers)- and passive [oscillatory lower-body negative pressure (OLBNP) maneuvers]-driven oscillations in blood pressure. These relationships were examined in both younger (n = 10; 25 ± 3 yr) and older (n = 9; 66 ± 4 yr) adults. Each testing protocol incorporated rest (5 min), followed by driven maneuvers at 0.05 (5 min) and 0.10 (5 min) Hz to increase blood-pressure variability and improve assessment of the pressure-flow dynamics using linear transfer function analysis. Beat-to-beat blood pressure, middle cerebral artery velocity, and end-tidal partial pressure of CO2 were monitored. The pressure-flow relationship was quantified in the very low (0.02-0.07 Hz) and low (0.07-0.20 Hz) frequencies (LF; spontaneous data) and at 0.05 and 0.10 Hz (driven maneuvers point estimates). Although there were no between-age differences, very few spontaneous and OLBNP transfer function metrics met the criteria for acceptable reproducibility, as reflected in a between-day, within-subject coefficient of variation (CoV) of <20%. Combined CoV data consist of LF coherence (15.1 ± 12.2%), LF gain (15.1 ± 12.2%), and LF normalized gain (18.5 ± 10.9%); OLBNP data consist of 0.05 (12.1 ± 15.%) and 0.10 (4.7 ± 7.8%) Hz coherence. In contrast, the squat-stand maneuvers revealed that all metrics (coherence: 0.6 ± 0.5 and 0.3 ± 0.5%; gain: 17.4 ± 12.3 and 12.7 ± 11.0%; normalized gain: 16.7 ± 10.9 and 15.7 ± 11.0%; and phase: 11.6 ± 10.2 and 17.3 ± 10.8%) at 0.05 and 0.10 Hz, respectively, were considered biologically acceptable for reproducibility. These findings have important implications for the reliable assessment and interpretation of cerebral pressure-flow dynamics in humans. Copyright © 2015 the American Physiological Society.
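
    The reproducibility criterion used above, a between-day within-subject coefficient of variation below 20%, can be sketched for two repeated visits per subject. This is one common formulation of the within-subject CoV, shown with hypothetical data:

```python
import math

def within_subject_cov(day1, day2):
    """Between-day within-subject coefficient of variation (%).

    For two repeats per subject, the within-subject variance is
    (d1 - d2)^2 / 2; the CoV is the square root of its mean over
    subjects, expressed as a percentage of the grand mean.
    """
    n = len(day1)
    ms_within = sum((a - b) ** 2 / 2 for a, b in zip(day1, day2)) / n
    grand_mean = (sum(day1) + sum(day2)) / (2 * n)
    return 100 * math.sqrt(ms_within) / grand_mean

# Hypothetical day-1/day-2 transfer-function gains for five subjects:
cov = within_subject_cov([0.95, 1.10, 0.88, 1.02, 0.97],
                         [1.01, 1.04, 0.92, 0.99, 1.05])
acceptable = cov < 20  # the study's reproducibility criterion
```

    Metrics whose CoV stays under the 20% threshold across days, as the squat-stand point estimates did here, are the ones considered biologically acceptable for reproducibility.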

  8. Methodological comparison of active- and passive-driven oscillations in blood pressure; implications for the assessment of cerebral pressure-flow relationships

    PubMed Central

    Hoffman, Keegan; Tzeng, Yu-Chieh; Hansen, Alex; Ainslie, Philip N.

    2015-01-01

    We examined the between-day reproducibility of active (squat-stand maneuvers)- and passive [oscillatory lower-body negative pressure (OLBNP) maneuvers]-driven oscillations in blood pressure. These relationships were examined in both younger (n = 10; 25 ± 3 yr) and older (n = 9; 66 ± 4 yr) adults. Each testing protocol incorporated rest (5 min), followed by driven maneuvers at 0.05 (5 min) and 0.10 (5 min) Hz to increase blood-pressure variability and improve assessment of the pressure-flow dynamics using linear transfer function analysis. Beat-to-beat blood pressure, middle cerebral artery velocity, and end-tidal partial pressure of CO2 were monitored. The pressure-flow relationship was quantified in the very low (0.02-0.07 Hz) and low (0.07–0.20 Hz) frequencies (LF; spontaneous data) and at 0.05 and 0.10 Hz (driven maneuvers point estimates). Although there were no between-age differences, very few spontaneous and OLBNP transfer function metrics met the criteria for acceptable reproducibility, as reflected in a between-day, within-subject coefficient of variation (CoV) of <20%. Combined CoV data consist of LF coherence (15.1 ± 12.2%), LF gain (15.1 ± 12.2%), and LF normalized gain (18.5 ± 10.9%); OLBNP data consist of 0.05 (12.1 ± 15.%) and 0.10 (4.7 ± 7.8%) Hz coherence. In contrast, the squat-stand maneuvers revealed that all metrics (coherence: 0.6 ± 0.5 and 0.3 ± 0.5%; gain: 17.4 ± 12.3 and 12.7 ± 11.0%; normalized gain: 16.7 ± 10.9 and 15.7 ± 11.0%; and phase: 11.6 ± 10.2 and 17.3 ± 10.8%) at 0.05 and 0.10 Hz, respectively, were considered biologically acceptable for reproducibility. These findings have important implications for the reliable assessment and interpretation of cerebral pressure-flow dynamics in humans. PMID:26183476

  9. Nutrient density: addressing the challenge of obesity.

    PubMed

    Drewnowski, Adam

    2017-10-30

    Obesity rates are increasing worldwide. Potential reasons include excessive consumption of sugary beverages and energy-dense foods instead of more nutrient-rich options. On a per kJ basis, energy-dense grains, added sugars and fats cost less, whereas lean meats, seafood, leafy greens and whole fruit generally cost more. Given that consumer food choices are often driven by price, the observed social inequities in diet quality and health can be explained, in part, by nutrition economics. Achieving a nutrient-rich diet at an affordable cost has become progressively more difficult within the constraints of global food supply. However, given the necessary metrics and educational tools, it may be possible to eat better for less. New metrics of nutrient density help consumers identify foods, processed and unprocessed, that are nutrient-rich, affordable and appealing. Affordability metrics, created by adding food prices to food composition data, permit calculations of both kJ and nutrients per penny, allowing for new studies on the economic drivers of food choice. Merging dietary intake data with local or national food prices permits the estimation of individual-level diet costs. New metrics of nutrient balance can help identify those food patterns that provide optimal nutritional value. Behavioural factors, including cooking at home, have been associated with nutrition resilience, defined as healthier diets at lower cost. Studies of the energy and nutrient costs of the global food supply and diverse food patterns will permit a better understanding of the socioeconomic determinants of health. Dietary advice ought to be accompanied by economic feasibility studies.
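
    The affordability metric described above, nutrients per unit cost obtained by merging food composition data with food prices, reduces to a simple ratio. A toy sketch with hypothetical composition and price figures:

```python
def nutrients_per_cost(nutrients_per_100g, price_per_100g):
    """Amount of each nutrient delivered per unit of currency spent
    (an affordability metric: composition data divided by price)."""
    return {name: amount / price_per_100g
            for name, amount in nutrients_per_100g.items()}

# Hypothetical composition (per 100 g) and prices (currency per 100 g):
lentils = nutrients_per_cost({"protein_g": 9.0, "fibre_g": 8.0},
                             price_per_100g=0.20)
soda = nutrients_per_cost({"protein_g": 0.0, "fibre_g": 0.0},
                          price_per_100g=0.10)
```

    The same division applied to energy content gives kJ per penny, the other quantity the abstract mentions; comparing the two ratios across foods is what separates cheap energy-dense options from affordable nutrient-rich ones.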

  10. Twitter predicts citation rates of ecological research

    USGS Publications Warehouse

    Peoples, Brandon K.; Midway, Stephen R.; Sackett, Dana K.; Lynch, Abigail; Cooney, Patrick B.

    2016-01-01

    The relationship between traditional metrics of research impact (e.g., number of citations) and alternative metrics (altmetrics) such as Twitter activity are of great interest, but remain imprecisely quantified. We used generalized linear mixed modeling to estimate the relative effects of Twitter activity, journal impact factor, and time since publication on Web of Science citation rates of 1,599 primary research articles from 20 ecology journals published from 2012–2014. We found a strong positive relationship between Twitter activity (i.e., the number of unique tweets about an article) and number of citations. Twitter activity was a more important predictor of citation rates than 5-year journal impact factor. Moreover, Twitter activity was not driven by journal impact factor; the ‘highest-impact’ journals were not necessarily the most discussed online. The effect of Twitter activity was only about a fifth as strong as time since publication; accounting for this confounding factor was critical for estimating the true effects of Twitter use. Articles in impactful journals can become heavily cited, but articles in journals with lower impact factors can generate considerable Twitter activity and also become heavily cited. Authors may benefit from establishing a strong social media presence, but should not expect research to become highly cited solely through social media promotion. Our research demonstrates that altmetrics and traditional metrics can be closely related, but not identical. We suggest that both altmetrics and traditional citation rates can be useful metrics of research impact.

  11. Twitter Predicts Citation Rates of Ecological Research.

    PubMed

    Peoples, Brandon K; Midway, Stephen R; Sackett, Dana; Lynch, Abigail; Cooney, Patrick B

    2016-01-01

The relationship between traditional metrics of research impact (e.g., number of citations) and alternative metrics (altmetrics) such as Twitter activity is of great interest, but remains imprecisely quantified. We used generalized linear mixed modeling to estimate the relative effects of Twitter activity, journal impact factor, and time since publication on Web of Science citation rates of 1,599 primary research articles from 20 ecology journals published from 2012-2014. We found a strong positive relationship between Twitter activity (i.e., the number of unique tweets about an article) and number of citations. Twitter activity was a more important predictor of citation rates than 5-year journal impact factor. Moreover, Twitter activity was not driven by journal impact factor; the 'highest-impact' journals were not necessarily the most discussed online. The effect of Twitter activity was only about a fifth as strong as time since publication; accounting for this confounding factor was critical for estimating the true effects of Twitter use. Articles in impactful journals can become heavily cited, but articles in journals with lower impact factors can generate considerable Twitter activity and also become heavily cited. Authors may benefit from establishing a strong social media presence, but should not expect research to become highly cited solely through social media promotion. Our research demonstrates that altmetrics and traditional metrics can be closely related, but not identical. We suggest that both altmetrics and traditional citation rates can be useful metrics of research impact.

  12. Modeling Hawaiian ecosystem degradation due to invasive plants under current and future climates

    USGS Publications Warehouse

    Vorsino, Adam E.; Fortini, Lucas B.; Amidon, Fred A.; Miller, Stephen E.; Jacobi, James D.; Price, Jonathan P.; `Ohukani`ohi`a Gon, Sam; Koob, Gregory A.

    2014-01-01

Occupation of native ecosystems by invasive plant species alters their structure and/or function. In Hawaii, a subset of introduced plants is regarded as extremely harmful due to competitive ability, ecosystem modification, and biogeochemical habitat degradation. By controlling this subset of highly invasive ecosystem modifiers, conservation managers could significantly reduce native ecosystem degradation. To assess the invasibility of vulnerable native ecosystems, we selected a proxy subset of these invasive plants and developed robust ensemble species distribution models to define their respective potential distributions. The combinations of all species models using both binary and continuous habitat suitability projections resulted in estimates of species richness and diversity that were subsequently used to define an invasibility metric. The invasibility metric was defined from species distribution models that met performance thresholds (>0.8; True Skill Statistic >0.75) as evaluated per species. Invasibility was further projected onto a 2100 Hawaii regional climate change scenario to assess the change in potential habitat degradation. The distribution defined by the invasibility metric delineates areas of known and potential invasibility under current climate conditions and, when projected into the future, estimates potential reductions in native ecosystem extent due to climate-driven invasive incursion. We have provided the code used to develop these metrics to facilitate their wider use (Code S1). This work will help determine the vulnerability of native-dominated ecosystems to the combined threats of climate change and invasive species, and thus help prioritize ecosystem and species management actions.
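The richness-based invasibility calculation can be sketched with a toy stack of binary habitat-suitability maps. The study itself used ensemble species distribution models screened by performance thresholds; all values below are made up for illustration:

```python
import numpy as np

# Hypothetical binary habitat-suitability maps (1 = suitable) for three
# invasive proxy species over a tiny 3x3 landscape.
suitability = np.array([
    [[1, 0, 0], [1, 1, 0], [0, 0, 0]],   # species A
    [[1, 1, 0], [0, 1, 0], [0, 0, 0]],   # species B
    [[0, 1, 0], [0, 1, 1], [0, 0, 0]],   # species C
])

# Stacking the per-species maps gives invasive species richness per pixel...
richness = suitability.sum(axis=0)

# ...which can be rescaled to [0, 1] as a simple invasibility index.
invasibility = richness / suitability.shape[0]
print(richness)
print(invasibility)
```

Pixels suitable for more of the modeled invaders score higher, delineating the areas most exposed to invasive incursion.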

  13. Plutonium inventories for stabilization and stabilized materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, A.K.

    1996-05-01

The objective of the breakout session was to identify characteristics of materials containing plutonium, the need to stabilize these materials for storage, and plans to accomplish the stabilization activities. All current stabilization activities are driven by the Defense Nuclear Facilities Safety Board Recommendation 94-1 (May 26, 1994) and by the recently completed Plutonium ES&H Vulnerability Assessment (DOE-EH-0415). The Implementation Plan for accomplishing stabilization of plutonium-bearing residues in response to the Recommendation and the Assessment was published by DOE on February 28, 1995. This Implementation Plan (IP) commits to stabilizing problem materials within 3 years, and stabilizing all other materials within 8 years. The IP identifies approximately 20 metric tons of plutonium requiring stabilization and/or repackaging. A further breakdown shows this material to consist of 8.5 metric tons of plutonium metal and alloys, 5.5 metric tons of plutonium as oxide, and 6 metric tons of plutonium as residues. Stabilization of the metal and oxide categories containing greater than 50 weight percent plutonium is covered by the DOE Standard "Criteria for Safe Storage of Plutonium Metals and Oxides," December 1994 (DOE-STD-3013-94). This standard establishes criteria for safe storage of stabilized plutonium metals and oxides for up to 50 years. Each of the DOE sites and contractors with large plutonium inventories has either started or is preparing to start stabilization activities to meet these criteria.

  14. Data Driven Performance Evaluation of Wireless Sensor Networks

    PubMed Central

    Frery, Alejandro C.; Ramos, Heitor S.; Alencar-Neto, José; Nakamura, Eduardo; Loureiro, Antonio A. F.

    2010-01-01

    Wireless Sensor Networks are presented as devices for signal sampling and reconstruction. Within this framework, the qualitative and quantitative influence of (i) signal granularity, (ii) spatial distribution of sensors, (iii) sensors clustering, and (iv) signal reconstruction procedure are assessed. This is done by defining an error metric and performing a Monte Carlo experiment. It is shown that all these factors have significant impact on the quality of the reconstructed signal. The extent of such impact is quantitatively assessed. PMID:22294920
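The abstract's design pairs an error metric with a Monte Carlo experiment over sensor placements. In the sketch below, nearest-neighbor reconstruction of a 1-D field stands in for the paper's reconstruction procedures, which are not specified here:

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" 1-D field to be sampled by the sensor network.
x = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 3 * x)

def reconstruct_rmse(n_sensors):
    """Place sensors uniformly at random, reconstruct the field by
    nearest-neighbor interpolation, and return the RMSE error metric."""
    pos = np.sort(rng.uniform(0, 1, n_sensors))
    readings = np.sin(2 * np.pi * 3 * pos)
    nearest = np.abs(x[:, None] - pos[None, :]).argmin(axis=1)
    estimate = readings[nearest]
    return float(np.sqrt(np.mean((estimate - signal) ** 2)))

# Monte Carlo experiment: error distribution for two network densities.
for n in (5, 50):
    errs = [reconstruct_rmse(n) for _ in range(200)]
    print(n, np.mean(errs))
```

Repeating the trial many times yields a distribution of the error metric for each configuration, which is how the influence of factors such as sensor density can be assessed quantitatively.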

  15. Standardization of methods of expressing lengths and weights of fish

    USGS Publications Warehouse

    Hile, Ralph

    1948-01-01

    Fishery workers in the United States and Canada are unable to think readily in terms of the metric system of weights and measurements. Even long experience does not make it possible to form a clear idea as to the actual size of fish for which lengths and weights are given in metric units, without first converting to the English system. A more general adoption of the English system of weights and measurements in fishery work is recommended. The use of English units exclusively is suggested for articles of a popular or semi-popular nature, but in more formal publications the key information, at least, should be recorded in both systems. In highly technical papers metric units alone may prove satisfactory. Agreement is also lacking as to which length measurement of fish is suited best for uniform adoption. The total length is recommended here for the reason that it is the only measurement that includes all of the fish. This length is defined as the distance from the tip of the head (jaws closed) to the tip of the tail with the lobes compressed so as to give the maximum possible measurement.

  16. Identifying Potential Weapon Systems That Can Be Divested

    DTIC Science & Technology

    2016-04-08

List of Figures: Figure 1.1 – TACOM LCMC Sustainment Systems Technical Support (SSTS) Operation Maintenance Army (OMA) Requirements Tracking System (TORTS) process used to develop ... Force operational concepts (Peltz, 2003). The Army's ability to keep systems operational from a maintenance standpoint is driven by two factors

  17. Status and Trend of Automotive Power Packaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Zhenxian

    2012-01-01

    Comprehensive requirements in aspects of cost, reliability, efficiency, form factor, weight, and volume for power electronics modules in modern electric drive vehicles have driven the development of automotive power packaging technology intensively. Innovation in materials, interconnections, and processing techniques is leading to enormous improvements in power modules. In this paper, the technical development of and trends in power module packaging are evaluated by examining technical details with examples of industrial products. The issues and development directions for future automotive power module packaging are also discussed.

  18. Seismic Data Archive Quality Assurance -- Analytics Adding Value at Scale

    NASA Astrophysics Data System (ADS)

    Casey, R. E.; Ahern, T. K.; Sharer, G.; Templeton, M. E.; Weertman, B.; Keyson, L.

    2015-12-01

Since the emergence of real-time delivery of seismic data over the last two decades, solutions for near-real-time quality analysis and station monitoring have been developed by data producers and data stewards. This has allowed for a nearly constant awareness of the quality of the incoming data and the general health of the instrumentation around the time of data capture. Modern quality assurance systems are evolving to provide ready access to a large variety of metrics, a rich and self-correcting history of measurements, and more importantly the ability to access these quality measurements en masse through a programmatic interface. The MUSTANG project at the IRIS Data Management Center is working to achieve 'total archival data quality', where a large number of standardized metrics, some computationally expensive, are generated and stored for all data from decades past to the near present. To perform this on a 300 TB archive of compressed time series requires considerable resources in network I/O, disk storage, and CPU capacity to achieve scalability, not to mention the technical expertise to develop and maintain it. In addition, staff scientists are necessary to develop the system metrics and employ them to produce comprehensive and timely data quality reports to assist seismic network operators in maintaining their instrumentation. All of these metrics must be available to the scientist 24/7. We will present an overview of the MUSTANG architecture including the development of its standardized metrics code in R. We will show examples of the metrics values that we make publicly available to scientists and educators and show how we are sharing the algorithms used. We will also discuss the development of a capability that will enable scientific researchers to specify data quality constraints on their requests for data, providing only the data that is best suited to their area of study.
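For flavor, here is a sketch of one simple archive-quality metric of the kind such systems compute: a percent-availability measure over an expected recording window. This is illustrative Python with hypothetical segment data, not MUSTANG code (MUSTANG's metrics are written in R):

```python
# Hypothetical sketch of a percent-availability data-quality metric:
# fraction of an expected recording window actually covered by data segments.

def percent_availability(segments, window_start, window_end):
    """segments: list of non-overlapping (start, end) times of recorded
    data, in seconds. Returns coverage of the window as a percentage."""
    covered = 0.0
    for start, end in segments:
        s = max(start, window_start)
        e = min(end, window_end)
        if e > s:
            covered += e - s
    return 100.0 * covered / (window_end - window_start)

# One day-long window with two 1000 s gaps in recording.
day = 86400.0
segments = [(0.0, 30000.0), (31000.0, 60000.0), (61000.0, 86400.0)]
print(percent_availability(segments, 0.0, day))
```

Real systems compute dozens of such metrics (gap counts, noise levels, timing quality) per channel per day and store them for programmatic retrieval.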

  19. DEVELOPMENT OF METRICS FOR TECHNICAL PRODUCTION: QUALIS BOOKS AND BOOK CHAPTERS.

    PubMed

    Ribas-Filho, Jurandir Marcondes; Malafaia, Osvaldo; Czeczko, Nicolau Gregori; Ribas, Carmen A P Marcondes; Nassif, Paulo Afonso Nunes

    2015-01-01

To propose metrics to qualify publication in books and book chapters and, from there, to establish guidance for the evaluation of the Medicine III programs. Analysis of some of the 2013 area documents focusing on this issue. The following areas were analyzed: Computer Science; Biotechnology; Biological Sciences I; Public Health; Medicine I. Except for Medicine I, which had not adopted a metric for books and chapters, all other areas established metrics within their intellectual production, although with unequal percentages. It is desirable to include metrics for books and book chapters in the intellectual production of post-graduate programs in the Area Document, with a value of 5% of the qualified publications of the Medicine III programs.

  20. Implementing assessments of robot-assisted technical skill in urological education: a systematic review and synthesis of the validity evidence.

    PubMed

    Goldenberg, Mitchell G; Lee, Jason Y; Kwong, Jethro C C; Grantcharov, Teodor P; Costello, Anthony

    2018-03-31

    To systematically review and synthesise the validity evidence supporting intraoperative and simulation-based assessments of technical skill in urological robot-assisted surgery (RAS), and make evidence-based recommendations for the implementation of these assessments in urological training. A literature search of the Medline, PsycINFO and Embase databases was performed. Articles using technical skill and simulation-based assessments in RAS were abstracted. Only studies involving urology trainees or faculty were included in the final analysis. Multiple tools for the assessment of technical robotic skill have been published, with mixed sources of validity evidence to support their use. These evaluations have been used in both the ex vivo and in vivo settings. Performance evaluations range from global rating scales to psychometrics, and assessments are carried out through automation, expert analysts, and crowdsourcing. There have been rapid expansions in approaches to RAS technical skills assessment, both in simulated and clinical settings. Alternative approaches to assessment in RAS, such as crowdsourcing and psychometrics, remain under investigation. Evidence to support the use of these metrics in high-stakes decisions is likely insufficient at present. © 2018 The Authors BJU International © 2018 BJU International Published by John Wiley & Sons Ltd.

  1. Cloud-based Computing and Applications of New Snow Metrics for Societal Benefit

    NASA Astrophysics Data System (ADS)

    Nolin, A. W.; Sproles, E. A.; Crumley, R. L.; Wilson, A.; Mar, E.; van de Kerk, M.; Prugh, L.

    2017-12-01

    Seasonal and interannual variability in snow cover affects socio-environmental systems including water resources, forest ecology, freshwater and terrestrial habitat, and winter recreation. We have developed two new seasonal snow metrics: snow cover frequency (SCF) and snow disappearance date (SDD). These metrics are calculated at 500-m resolution using NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover data (MOD10A1). SCF is the number of times snow is observed in a pixel over the user-defined observation period. SDD is the last date of observed snow in a water year. These pixel-level metrics are calculated rapidly and globally in the Google Earth Engine cloud-based environment. SCF and SDD can be interactively visualized in a map-based interface, allowing users to explore spatial and temporal snowcover patterns from 2000-present. These metrics are especially valuable in regions where snow data are sparse or non-existent. We have used these metrics in several ongoing projects. When SCF was linked with a simple hydrologic model in the La Laguna watershed in northern Chile, it successfully predicted summer low flows with a Nash-Sutcliffe value of 0.86. SCF has also been used to help explain changes in Dall sheep populations in Alaska where sheep populations are negatively impacted by late snow cover and low snowline elevation during the spring lambing season. In forest management, SCF and SDD appear to be valuable predictors of post-wildfire vegetation growth. We see a positive relationship between winter SCF and subsequent summer greening for several years post-fire. For western US winter recreation, we are exploring trends in SDD and SCF for regions where snow sports are economically important. In a world with declining snowpacks and increasing uncertainty, these metrics extend across elevations and fill data gaps to provide valuable information for decision-making. SCF and SDD are being produced so that anyone with Internet access and a Google account can access, visualize, and download the data with a minimum of technical expertise and no need for proprietary software.
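Per pixel, SCF is a count of snow observations and SDD a last-date lookup over the daily time series. A minimal sketch over a made-up binary snow stack (cloud masking, which real MOD10A1 processing must handle, is ignored here):

```python
import numpy as np

# Hypothetical daily binary snow observations (days x rows x cols):
# 1 = snow observed, 0 = snow-free, over a tiny 2x2 landscape.
snow = np.array([
    [[1, 1], [0, 1]],
    [[1, 0], [0, 1]],
    [[0, 0], [0, 1]],
    [[1, 0], [0, 0]],
])
days = np.arange(1, snow.shape[0] + 1)  # day-of-water-year of each layer

# Snow cover frequency (SCF): number of snow observations per pixel.
scf = snow.sum(axis=0)

# Snow disappearance date (SDD): last day snow was observed per pixel
# (0 where snow was never observed).
sdd = np.where(snow.any(axis=0),
               (snow * days[:, None, None]).max(axis=0), 0)
print(scf)
print(sdd)
```

Google Earth Engine applies exactly this kind of per-pixel reduction across the full MODIS archive, which is what makes the metrics fast to compute globally.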

  2. Fiber Based Optical Amplifier for High Energy Laser Pulses Final Report CRADA No. TC02100.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messerly, M.; Cunningham, P.

    This was a collaborative effort between Lawrence Livermore National Security, LLC (formerly The Regents of the University of California)/Lawrence Livermore National Laboratory (LLNL), and The Boeing Company to develop an optical fiber-based laser amplifier capable of producing and sustaining very high-energy, nanosecond-scale optical pulses. The overall technical objective of this CRADA was to research, design, and develop an optical fiber-based amplifier that would meet specific metrics.

  3. Business Case Analysis of the Towed Glider Air Launched System (TGALS)

    NASA Technical Reports Server (NTRS)

    Webb, Darryl W.; Nguyen, McLinton B.; Seibold, Robert W.; Wong, Frank C.; Budd, Gerald D.

    2017-01-01

    The Aerospace Corporation developed an integrated Business Case Analysis (BCA) model on behalf of the NASA Armstrong Flight Research Center (AFRC). This model evaluated the potential profitability of the Towed Glider Air Launched System (TGALS) concept, under development at AFRC, identifying potential technical, programmatic, and business decisions that could improve its business viability. The model addressed system performance metrics; development, production and operation cost estimates; market size and product service positioning; pricing alternatives; and market share.

  4. Mixture model normalization for non-targeted gas chromatography/mass spectrometry metabolomics data.

    PubMed

    Reisetter, Anna C; Muehlbauer, Michael J; Bain, James R; Nodzenski, Michael; Stevens, Robert D; Ilkayeva, Olga; Metzger, Boyd E; Newgard, Christopher B; Lowe, William L; Scholtens, Denise M

    2017-02-02

    Metabolomics offers a unique integrative perspective for health research, reflecting genetic and environmental contributions to disease-related phenotypes. Identifying robust associations in population-based or large-scale clinical studies demands large numbers of subjects and therefore sample batching for gas-chromatography/mass spectrometry (GC/MS) non-targeted assays. When run over weeks or months, technical noise due to batch and run-order threatens data interpretability. Application of existing normalization methods to metabolomics is challenged by unsatisfied modeling assumptions and, notably, failure to address batch-specific truncation of low abundance compounds. To curtail technical noise and make GC/MS metabolomics data amenable to analyses describing biologically relevant variability, we propose mixture model normalization (mixnorm) that accommodates truncated data and estimates per-metabolite batch and run-order effects using quality control samples. Mixnorm outperforms other approaches across many metrics, including improved correlation of non-targeted and targeted measurements and superior performance when metabolite detectability varies according to batch. For some metrics, particularly when truncation is less frequent for a metabolite, mean centering and median scaling demonstrate comparable performance to mixnorm. When quality control samples are systematically included in batches, mixnorm is uniquely suited to normalizing non-targeted GC/MS metabolomics data due to explicit accommodation of batch effects, run order and varying thresholds of detectability. Especially in large-scale studies, normalization is crucial for drawing accurate conclusions from non-targeted GC/MS metabolomics data.
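A greatly simplified stand-in for the batch-correction idea is per-metabolite median scaling of log abundances against quality-control samples, which the abstract notes performs comparably to mixnorm when truncation is infrequent. This sketch is not the paper's truncated mixture model, and all values are made up:

```python
import numpy as np

# Hypothetical log abundances of one metabolite across two batches; the
# second batch runs systematically high. QC samples are replicates of a
# pooled sample, so their medians estimate each batch's offset.
log_ab = {
    "batch1": np.log(np.array([100.0, 120.0, 90.0])),
    "batch2": np.log(np.array([210.0, 230.0, 190.0])),
}
qc = {
    "batch1": np.log(np.array([105.0, 95.0])),
    "batch2": np.log(np.array([205.0, 215.0])),
}

# Shift each batch so its QC median matches the grand QC median.
grand_qc_median = np.median(np.concatenate(list(qc.values())))
normalized = {
    b: vals - (np.median(qc[b]) - grand_qc_median)
    for b, vals in log_ab.items()
}
print({b: np.exp(v).round(1) for b, v in normalized.items()})
```

After correction the batch-level offsets largely cancel; what this simple scheme cannot do, and what motivates mixnorm, is account for low-abundance compounds truncated differently in each batch.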

  5. Use of the Nintendo Wii Balance Board for Studying Standing Static Balance Control: Technical Considerations, Force-Plate Congruency, and the Effect of Battery Life.

    PubMed

    Weaver, Tyler B; Ma, Christine; Laing, Andrew C

    2017-02-01

The Nintendo Wii Balance Board (WBB) has become popular as a low-cost alternative to research-grade force plates. The purposes of this study were to characterize a series of technical specifications for the WBB, to compare balance control metrics derived from time-varying center of pressure (COP) signals collected simultaneously from a WBB and a research-grade force plate, and to investigate the effects of battery life. Drift, linearity, hysteresis, mass accuracy, uniformity of response, and COP accuracy were assessed from a WBB. In addition, 6 participants completed an eyes-closed quiet standing task on the WBB (at 3 battery life levels) mounted on a force plate while sway was simultaneously measured by both systems. Characterization results were all associated with less than 1% error. R² values reflecting WBB sensor linearity were >.99. Known and measured COP differences were lowest at the center of the WBB and greatest at the corners. Between-device differences in quiet stance COP summary metrics were of limited clinical significance. Lastly, battery life did not affect WBB COP accuracy, but did influence 2 of 8 quiet stance WBB parameters. This study provides general support for the WBB as a low-cost alternative to research-grade force plates for quantifying COP movement during standing.
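COP coordinates from a board's four corner load cells follow from a simple moment balance. The sketch below uses hypothetical board dimensions and sensor readings; the WBB's actual sensor geometry and calibration are not given here:

```python
# COP from four corner vertical-force sensors, as a simple moment balance.
# Board dimensions and readings are hypothetical illustration values.

W, L = 0.43, 0.24  # assumed board width (x) and depth (y) in metres

def cop(tl, tr, bl, br):
    """Return (cop_x, cop_y) in metres from the board centre, given
    top-left, top-right, bottom-left, bottom-right vertical forces (N)."""
    total = tl + tr + bl + br
    cop_x = (W / 2) * ((tr + br) - (tl + bl)) / total
    cop_y = (L / 2) * ((tl + tr) - (bl + br)) / total
    return cop_x, cop_y

print(cop(100, 100, 100, 100))   # balanced load: COP at the centre
print(cop(150, 150, 50, 50))     # weight shifted toward the top edge
```

Sway metrics such as path length or sway area are then computed from the time series of these COP coordinates, which is what the study compared between the WBB and the force plate.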

  6. Development of Technology Transfer Economic Growth Metrics

    NASA Technical Reports Server (NTRS)

    Mastrangelo, Christina M.

    1998-01-01

    The primary objective of this project is to determine the feasibility of producing technology transfer metrics that answer the question: Do NASA/MSFC technical assistance activities impact economic growth? The data for this project resides in a 7800-record database maintained by Tec-Masters, Incorporated. The technology assistance data results from survey responses from companies and individuals who have interacted with NASA via a Technology Transfer Agreement, or TTA. The goal of this project was to determine if the existing data could provide indications of increased wealth. This work demonstrates that there is evidence that companies that used NASA technology transfer have a higher job growth rate than the rest of the economy. It also shows that the jobs being supported are jobs in higher wage SIC codes, and this indicates improvements in personal wealth. Finally, this work suggests that with correct data, the wealth issue may be addressed.

  7. Barriers to the implementation of green chemistry in the United States.

    PubMed

    Matus, Kira J M; Clark, William C; Anastas, Paul T; Zimmerman, Julie B

    2012-10-16

    This paper investigates the conditions under which firms are able to develop and implement innovations with sustainable development benefits. In particular, we examine "green chemistry" innovations in the United States. Via interviews with green chemistry leaders from industry, academia, nongovernmental institutions (NGOs), and government, we identified six major categories of challenges commonly confronted by innovators: (1) economic and financial, (2) regulatory, (3) technical, (4) organizational, (5) cultural, and (6) definition and metrics. Further analysis of these barriers shows that in the United States, two elements of these that are particular to the implementation of green chemistry innovations are the absence of clear definitions and metrics for use by researchers and decision makers, as well as the interdisciplinary demands of these innovations on researchers and management. Finally, we conclude with some of the strategies that have been successful thus far in overcoming these barriers, and the types of policies which could have positive impacts moving forward.

  8. Feeling lucky? Using search engines to assess perceptions of urban sustainability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keirstead, James

    2009-02-15

The sustainability of urban environments is an important issue at both local and international scales. Indicators are frequently used by decision-makers seeking to improve urban performance but these metrics can be dependent on sparse quantitative data. This paper explores the potential of an alternative approach, using an internet search engine to quickly gather qualitative data on the key attributes of cities. The method is applied to 21 world cities and the results indicate that, while the technique does shed light on direct and indirect aspects of sustainability, the validity of derived metrics as objective indicators of long-term sustainability is questionable. However, the method's ability to provide subjective short-term assessments is more promising and it could therefore play an important role in participatory policy exercises such as public consultations. A number of promising technical improvements to the method's performance are also highlighted.

  9. What do we know and when do we know it?

    NASA Astrophysics Data System (ADS)

    Nicholls, Anthony

    2008-03-01

Two essential aspects of virtual screening are considered: experimental design and performance metrics. In the design of any retrospective virtual screen, choices have to be made as to the purpose of the exercise. Is the goal to compare methods? Is the interest in a particular type of target or all targets? Are we simulating a 'real-world' setting, or teasing out distinguishing features of a method? What are the confidence limits for the results? What should be reported in a publication? In particular, what criteria should be used to decide between different performance metrics? Comparing the field of molecular modeling to other endeavors, such as medical statistics, criminology, or computer hardware evaluation indicates some clear directions. Taken together these suggest the modeling field has a long way to go to provide effective assessment of its approaches, either to itself or to a broader audience, but that there are no technical reasons why progress cannot be made.
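Two performance metrics commonly weighed in this debate, ROC AUC and the early enrichment factor, can be computed directly from a ranked score list. A self-contained sketch with toy screening data:

```python
def roc_auc(actives, decoys):
    """AUC via the rank-sum (Mann-Whitney) identity; higher score = better."""
    wins = ties = 0
    for a in actives:
        for d in decoys:
            if a > d:
                wins += 1
            elif a == d:
                ties += 1
    return (wins + 0.5 * ties) / (len(actives) * len(decoys))

def enrichment_factor(scores, labels, fraction=0.1):
    """EF at a screened fraction: hit rate in the top-ranked slice
    divided by the overall hit rate."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    n_top = max(1, int(len(ranked) * fraction))
    hits_top = sum(lab for _, lab in ranked[:n_top])
    return (hits_top / n_top) / (sum(labels) / len(labels))

# Toy screen: 3 actives among 10 compounds, labels 1 = active.
scores = [9.1, 8.7, 8.5, 7.2, 6.9, 6.1, 5.8, 5.0, 4.4, 3.9]
labels = [1,   0,   1,   0,   1,   0,   0,   0,   0,   0]
print(roc_auc([9.1, 8.5, 6.9], [8.7, 7.2, 6.1, 5.8, 5.0, 4.4, 3.9]))
print(enrichment_factor(scores, labels, fraction=0.2))
```

The choice between such metrics is exactly the kind of design decision the abstract argues should be made explicitly: AUC summarizes the whole ranking, whereas EF rewards only early recognition.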

  10. Taming the nonlinearity of the Einstein equation.

    PubMed

    Harte, Abraham I

    2014-12-31

    Many of the technical complications associated with the general theory of relativity ultimately stem from the nonlinearity of Einstein's equation. It is shown here that an appropriate choice of dynamical variables may be used to eliminate all such nonlinearities beyond a particular order: Both Landau-Lifshitz and tetrad formulations of Einstein's equation are obtained that involve only finite products of the unknowns and their derivatives. Considerable additional simplifications arise in physically interesting cases where metrics become approximately Kerr or, e.g., plane waves, suggesting that the variables described here can be used to efficiently reformulate perturbation theory in a variety of contexts. In all cases, these variables are shown to have simple geometrical interpretations that directly relate the local causal structure associated with the metric of interest to the causal structure associated with a prescribed background. A new method to search for exact solutions is outlined as well.

  11. Efficient GIS-based model-driven method for flood risk management and its application in central China

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Zhou, J.; Song, L.; Zou, Q.; Guo, J.; Wang, Y.

    2014-02-01

In recent years, an important development in flood management has been the focal shift from flood protection towards flood risk management. This change greatly promoted the progress of flood control research in a multidisciplinary way. Moreover, given the growing complexity and uncertainty in many decision situations of flood risk management, traditional methods, e.g., tight-coupling integration of one or more quantitative models, are not enough to provide decision support for managers. Within this context, this paper presents a beneficial methodological framework to enhance the effectiveness of decision support systems, through the dynamic adaptation of support regarding the needs of the decision-maker. In addition, we illustrate a loose-coupling technical prototype for integrating heterogeneous elements, such as multi-source data, multidisciplinary models, GIS tools and existing systems. The main innovation is the application of model-driven concepts, which put the system in a state of continuous iterative optimization. We define the new system as a model-driven decision support system (MDSS). Two characteristics that differentiate the MDSS are as follows: (1) it is made accessible to non-technical specialists; and (2) it has a higher level of adaptability and compatibility. Furthermore, the MDSS was employed to manage the flood risk in the Jingjiang flood diversion area, located in central China near the Yangtze River. Compared with traditional solutions, we believe that this model-driven method is efficient, adaptable and flexible, and thus holds strong promise for application to comprehensive flood risk management.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jansen, F.

The use of integrated PET/MRI systems in clinical applications can best benefit from understanding their technological advances and limitations. The currently available clinical PET/MRI systems have their own characteristics. Thorough analyses of existing technical data and evaluation of necessary performance metrics for quality assurance could be conducted to optimize application-specific PET/MRI protocols. This Symposium will focus on technical advances and limitations of clinical PET/MRI systems, and how this exciting imaging modality can be utilized in applications that can benefit from both PET and MRI. Learning Objectives: To understand the technological advances of clinical PET/MRI systems; to correctly identify clinical applications that can benefit from PET/MRI; to understand ongoing work to further improve the current PET/MRI technology. Floris Jansen is a GE Healthcare employee.

  13. Automated grading of lumbar disc degeneration via supervised distance metric learning

    NASA Astrophysics Data System (ADS)

    He, Xiaoxu; Landis, Mark; Leung, Stephanie; Warrington, James; Shmuilovich, Olga; Li, Shuo

    2017-03-01

Lumbar disc degeneration (LDD) is a common age-associated condition related to low back pain, and its consequences are responsible for over 90% of spine surgical procedures. In clinical practice, grading of LDD by inspecting MRI is a necessary step in making a suitable treatment plan. This step relies purely on physicians' manual inspection, making it tedious and inefficient, so an automated method for grading LDD is highly desirable. However, the technical implementation faces a big challenge from class ambiguity, which is typical in medical image classification problems with a large number of classes. This challenge derives from the complexity and diversity of medical images, which lead to serious class overlap and make it difficult to discriminate between classes. To solve this problem, we proposed an automated grading approach based on supervised distance metric learning that classifies input discs into four class labels (0: normal, 1: slight, 2: marked, 3: severe). By learning distance metrics from labeled instances, an optimal distance metric is modeled with two attractive advantages: (1) it keeps images from the same classes close, and (2) it keeps images from different classes far apart. The experiments, performed on 93 subjects, demonstrated the superiority of our method, with accuracy 0.9226, sensitivity 0.9655, specificity 0.9083, and F-score 0.8615. With our approach, physicians will be freed from this tedium and patients will be provided an effective treatment.
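As a toy illustration of the supervised metric-learning idea, the sketch below weights each feature by its between-class to within-class variance ratio and then classifies with the weighted distance. This is a crude stand-in for the learned Mahalanobis-style metrics used in such work, with made-up data:

```python
import numpy as np

# Toy data: feature 0 separates the two classes; feature 1 is noise.
X = np.array([[1.0, 5.0], [1.2, 9.0], [0.9, 2.0],     # class 0
              [3.0, 6.0], [3.2, 1.0], [2.9, 8.0]])    # class 1
y = np.array([0, 0, 0, 1, 1, 1])

# "Learn" a diagonal metric: weight each feature by how well it separates
# the classes (between-class variance over within-class variance).
weights = []
for j in range(X.shape[1]):
    within = np.mean([X[y == c, j].var() for c in np.unique(y)])
    between = np.var([X[y == c, j].mean() for c in np.unique(y)])
    weights.append(between / (within + 1e-9))
w = np.array(weights)

def dist(a, b):
    """Weighted (diagonal-metric) squared distance."""
    return float(((a - b) ** 2 * w).sum())

query = np.array([2.8, 9.5])           # near class 1 on the useful feature
nearest = min(range(len(X)), key=lambda i: dist(query, X[i]))
print(y[nearest])
```

Under the learned weighting, distances along the discriminative feature dominate, so same-class samples end up close and different-class samples far apart, which is exactly the property the abstract attributes to the optimal metric.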

  14. Embedding Agile Practices within a Plan-Driven Hierarchical Project Life Cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Millard, W. David; Johnson, Daniel M.; Henderson, John M.

    2014-07-28

    Organizations use structured, plan-driven approaches to provide continuity, direction, and control to large, multi-year programs. Projects within these programs vary greatly in size, complexity, level of maturity, technical risk, and clarity of the development objectives. Organizations that perform exploratory research, evolutionary development, and other R&D activities can obtain the benefits of Agile practices without losing the benefits of their program’s overarching plan-driven structure. This paper describes application of Agile development methods on a large plan-driven sensor integration program. While the client employed plan-driven, requirements flow-down methodologies, tight project schedules and complex interfaces called for frequent end-to-end demonstrations to provide feedback during system development. The development process maintained the many benefits of plan-driven project execution with the rapid prototyping, integration, demonstration, and client feedback possible through Agile development methods. This paper also describes some of the tools and implementing mechanisms used to transition between and take advantage of each methodology, and presents lessons learned from the project management, system engineering, and developer’s perspectives.

  15. Large-scale distribution patterns of mangrove nematodes: A global meta-analysis.

    PubMed

    Brustolin, Marco C; Nagelkerken, Ivan; Fonseca, Gustavo

    2018-05-01

    Mangroves harbor diverse invertebrate communities, suggesting that macroecological distribution patterns of habitat-forming foundation species drive the associated faunal distribution. Whether these patterns are driven by mangrove biogeography is still ambiguous. For small-bodied taxa, local factors and landscape metrics might be as important as macroecology. We performed a meta-analysis to address the following questions: (1) can richness of mangrove trees explain macroecological patterns of nematode richness? and (2) do local landscape attributes have equal or higher importance than biogeography in structuring nematode richness? The study spanned mangrove areas of the Caribbean-Southwest Atlantic, Western Indian, Central Indo-Pacific, and Southwest Pacific biogeographic regions. We used random-effects meta-analyses based on the natural logarithm of the response ratio (lnRR) to assess the importance of macroecology (i.e., biogeographic regions, latitude, longitude), local factors (i.e., aboveground mangrove biomass and tree richness), and landscape metrics (forest area and shape) in structuring nematode richness across 34 mangrove sites around the world. Latitude, mangrove forest area, and forest shape index explained 19% of the heterogeneity across studies. Richness was higher at low latitudes, closer to the equator. At local scales, richness increased slightly with landscape complexity and decreased with forest shape index. Our results contrast with biogeographic diversity patterns of mangrove-associated taxa. Global-scale nematode diversity may have evolved independently of mangrove tree richness, and the diversity of small-bodied metazoans is probably driven more by latitude and associated climates than by local, landscape, or global biogeographic patterns.
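The log response ratio used in such meta-analyses has a standard closed form (Hedges, Gurevitch & Curtis 1999): lnRR = ln(mean_t / mean_c), with a delta-method sampling variance, and study effects are pooled by inverse-variance weighting. A minimal sketch with hypothetical function names:

```python
import math

def ln_rr(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Log response ratio lnRR = ln(mean_t / mean_c) and its sampling
    variance via the delta method (Hedges, Gurevitch & Curtis 1999)."""
    effect = math.log(mean_t / mean_c)
    var = sd_t**2 / (n_t * mean_t**2) + sd_c**2 / (n_c * mean_c**2)
    return effect, var

def pooled_mean(effects, variances, tau2=0.0):
    """Inverse-variance weighted mean across studies; tau2 is the
    between-study variance in a random-effects model."""
    weights = [1.0 / (v + tau2) for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)
```

For example, a treatment mean twice the control mean gives lnRR = ln 2 ≈ 0.693, regardless of the measurement scale.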

  16. Evaluating Model-Driven Development for large-scale EHRs through the openEHR approach.

    PubMed

    Christensen, Bente; Ellingsen, Gunnar

    2016-05-01

    In healthcare, the openEHR standard is a promising Model-Driven Development (MDD) approach for electronic healthcare records. This paper aims to identify key socio-technical challenges when the openEHR approach is put to use in Norwegian hospitals. More specifically, key fundamental assumptions are investigated empirically. These assumptions promise a clear separation of technical and domain concerns, users being in control of the modelling process, and widespread user commitment. Finally, these assumptions promise an easy way to model and map complex organizations. This longitudinal case study is based on an interpretive approach, whereby data were gathered through 440h of participant observation, 22 semi-structured interviews and extensive document studies over 4 years. The separation of clinical and technical concerns seemed to be aspirational, because both designing the technical system and modelling the domain required technical and clinical competence. Hence developers and clinicians found themselves working together in both arenas. User control and user commitment seemed not to apply in large-scale projects, as modelling the domain turned out to be too complicated and hence to appeal only to especially interested users worldwide, not the local end-users. Modelling proved to be a complex standardization process that shaped both the actual modelling and healthcare practice itself. A broad assemblage of contributors seems to be needed for developing an archetype-based system, in which roles, responsibilities and contributions cannot be clearly defined and delimited. The way MDD occurs has implications for medical practice per se in the form of the need to standardize practices to ensure that medical concepts are uniform across practices. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. A Verification-Driven Approach to Control Analysis and Tuning

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2008-01-01

    This paper proposes a methodology for the analysis and tuning of controllers using control verification metrics. These metrics, which are introduced in a companion paper, measure the size of the largest uncertainty set of a given class for which the closed-loop specifications are satisfied. This framework integrates deterministic and probabilistic uncertainty models into a setting that enables the deformation of sets in the parameter space, the control design space, and the union of these two spaces. In regard to control analysis, we propose strategies for bounding regions of the design space where the specifications are satisfied by all the closed-loop systems associated with a prescribed uncertainty set. When this is infeasible, we bound regions where the probability of satisfying the requirements exceeds a prescribed value. In regard to control tuning, we propose strategies for improving the robust characteristics of a baseline controller. Some of these strategies use multi-point approximations to the control verification metrics in order to alleviate the numerical burden of solving a min-max problem. Since this methodology targets non-linear systems having an arbitrary, possibly implicit, functional dependency on the uncertain parameters and for which high-fidelity simulations are available, it is applicable to realistic engineering problems.

  18. A Flexible and Non-intrusive Approach for Computing Complex Structural Coverage Metrics

    NASA Technical Reports Server (NTRS)

    Whalen, Michael W.; Person, Suzette J.; Rungta, Neha; Staats, Matt; Grijincu, Daniela

    2015-01-01

    Software analysis tools and techniques often leverage structural code coverage information to reason about the dynamic behavior of software. Existing techniques instrument the code with the required structural obligations and then monitor the execution of the compiled code to report coverage. Instrumentation-based approaches often incur considerable runtime overhead for complex structural coverage metrics such as Modified Condition/Decision Coverage (MC/DC). Code instrumentation, in general, has to be approached with great care to ensure it does not modify the behavior of the original code. Furthermore, instrumented code cannot be used in conjunction with other analyses that reason about the structure and semantics of the code under test. In this work, we introduce a non-intrusive preprocessing approach for computing structural coverage information. It uses a static partial evaluation of the decisions in the source code and a source-to-bytecode mapping to generate the information necessary to efficiently track structural coverage metrics during execution. Our technique is flexible; the results of the preprocessing can be used by a variety of coverage-driven software analysis tasks, including automated analyses that are not possible for instrumented code. Experimental results in the context of symbolic execution show the efficiency and flexibility of our non-intrusive approach for computing code coverage information.
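To make the MC/DC obligation concrete: each condition in a decision must be shown to independently affect the decision's outcome. A minimal unique-cause checker (illustrative only; the function name and test representation are hypothetical, and this ignores the bytecode-level machinery the paper describes):

```python
def mcdc_satisfied(decision, n_conditions, tests):
    """Unique-cause MC/DC check: for every condition index, the suite must
    contain a pair of tests that differ only in that condition and flip
    the decision outcome."""
    covered = set()
    for t1 in tests:
        for t2 in tests:
            changed = [k for k in range(n_conditions) if t1[k] != t2[k]]
            if len(changed) == 1 and decision(*t1) != decision(*t2):
                covered.add(changed[0])
    return covered == set(range(n_conditions))
```

For the decision `a and (b or c)`, four well-chosen tests achieve MC/DC, whereas exhaustive truth-table coverage would need eight; this is why MC/DC is tractable yet expensive to track at runtime.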

  19. A bibliometric analysis of digestive health research in Canada.

    PubMed

    Tuitt, Desiree; Knight, Frank; Lipman, Tara

    2011-11-01

    Measurement of the impact and influence of medical/scientific journals, and of individual researchers, has become more widely practiced in recent decades. This is driven, in part, by the increased availability of data regarding citations of research articles, and by increased competition for research funding. Digestive disease research has been identified as a particularly strong discipline in Canada. The authors collected quantitative data on the impact and influence of Canadian digestive health research. The present study involved an analysis of the research impact (Hirsch factor) and research influence (Influence factor) of 106 digestive health researchers in Canada. Rankings of the top 25 researchers on the basis of the two metrics were dominated by the larger research groups at the University of Toronto (Toronto, Ontario), McMaster University (Hamilton, Ontario), and the Universities of Calgary (Calgary, Alberta) and Alberta (Edmonton, Alberta), but with representation by other research groups at the Universities of Manitoba (Winnipeg, Manitoba), Western Ontario (London, Ontario) and McGill University (Montreal, Quebec). Female and male researchers had similar scores for the two metrics, as did basic scientists versus clinical investigators. Strategic recruitment, particularly of established investigators, can have a major impact on the ranking of research groups. Comparing these metrics over different time frames can provide insights into the vulnerabilities and strengths of research groups.
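The Hirsch index underlying the "research impact" metric above has a simple definition: the largest h such that h of a researcher's papers each have at least h citations. A minimal sketch:

```python
def h_index(citations):
    """Hirsch index: sort citation counts descending and find the largest
    rank h at which the h-th paper still has >= h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h
```

For example, citation counts [10, 8, 5, 4, 3] give h = 4: four papers have at least 4 citations each, but not five papers with at least 5.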

  20. Article-level assessment of influence and translation in biomedical research.

    PubMed

    Santangelo, George M

    2017-06-01

    Given the vast scale of the modern scientific enterprise, it can be difficult for scientists to make judgments about the work of others through careful analysis of the entirety of the relevant literature. This has led to a reliance on metrics that are mathematically flawed and insufficiently diverse to account for the variety of ways in which investigators contribute to scientific progress. An urgent, critical first step in solving this problem is replacing the Journal Impact Factor with an article-level alternative. The Relative Citation Ratio (RCR), a metric that was designed to serve in that capacity, measures the influence of each publication on its respective area of research. RCR can serve as one component of a multifaceted metric that provides an effective data-driven supplement to expert opinion. Developing validated methods that quantify scientific progress can help to optimize the management of research investments and accelerate the acquisition of knowledge that improves human health. © 2017 Santangelo. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
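The Relative Citation Ratio is, at its core, an article's citation rate normalized by an expected rate derived from its co-citation network. The sketch below is a deliberate simplification: it omits the benchmarking regression against a cohort of NIH-funded papers that the real iCite implementation performs, and the function name is hypothetical:

```python
def rcr_sketch(article_cites_per_year, cocited_cites_per_year):
    """Toy Relative Citation Ratio: the article's citations per year
    divided by the mean citation rate of its co-citation network,
    which serves as a proxy for the field citation rate."""
    field_rate = sum(cocited_cites_per_year) / len(cocited_cites_per_year)
    return article_cites_per_year / field_rate
```

Under this simplification, an article cited 6 times per year in a field averaging 3 citations per year would score 2.0, i.e. twice the influence expected for its research area.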

  1. Quantifying urban growth patterns in Hanoi using landscape expansion modes and time series spatial metrics.

    PubMed

    Nong, Duong H; Lepczyk, Christopher A; Miura, Tomoaki; Fox, Jefferson M

    2018-01-01

    Urbanization has been driven by various social, economic, and political factors around the world for centuries. Because urbanization continues unabated in many places, it is crucial to understand patterns of urbanization and their potential ecological and environmental impacts. Given this need, the objectives of our study were to quantify urban growth rates, growth modes, and resultant changes in the landscape pattern of urbanization in Hanoi, Vietnam from 1993 to 2010 and to evaluate the extent to which the process of urban growth in Hanoi conformed to the diffusion-coalescence theory. We analyzed the spatiotemporal patterns and dynamics of the built-up land in Hanoi using landscape expansion modes, spatial metrics, and a gradient approach. Urbanization was most pronounced in the periods of 2001-2006 and 2006-2010 at a distance of 10 to 35 km around the urban center. Over the 17-year period, urban expansion in Hanoi was dominated by infilling and edge-expansion growth modes. Our findings support the diffusion-coalescence theory of urbanization. The shift of the urban growth areas over time and the dynamic nature of the spatial metrics revealed important information about the urban growth process and cycle. Furthermore, our findings can be used to evaluate urban planning policies and to aid in addressing urbanization issues in rapidly urbanizing countries.

  2. The ISACA Business Model for Information Security: An Integrative and Innovative Approach

    NASA Astrophysics Data System (ADS)

    von Roessing, Rolf

    In recent years, information security management has matured into a professional discipline that covers both technical and managerial aspects in an organisational environment. Information security is increasingly dependent on business-driven parameters and interfaces to a variety of organisational units and departments. In contrast, common security models and frameworks have remained largely technical. A review of extant models ranging from [LaBe73] to more recent models shows that technical aspects are covered in great detail, while the managerial aspects of security are often neglected. Likewise, the business view on organisational security is frequently at odds with the demands of information security personnel or information technology management. In practice, senior and executive level management remain comparatively distant from technical requirements. As a result, information security is generally regarded as a cost factor rather than a benefit to the organisation.

  3. The Structural Consequences of Big Data-Driven Education.

    PubMed

    Zeide, Elana

    2017-06-01

    Educators and commenters who evaluate big data-driven learning environments focus on specific questions: whether automated education platforms improve learning outcomes, invade student privacy, and promote equality. This article puts aside separate unresolved-and perhaps unresolvable-issues regarding the concrete effects of specific technologies. It instead examines how big data-driven tools alter the structure of schools' pedagogical decision-making, and, in doing so, change fundamental aspects of America's education enterprise. Technological mediation and data-driven decision-making have a particularly significant impact in learning environments because the education process primarily consists of dynamic information exchange. In this overview, I highlight three significant structural shifts that accompany school reliance on data-driven instructional platforms that perform core school functions: teaching, assessment, and credentialing. First, virtual learning environments create information technology infrastructures featuring constant data collection, continuous algorithmic assessment, and possibly infinite record retention. This undermines the traditional intellectual privacy and safety of classrooms. Second, these systems displace pedagogical decision-making from educators serving public interests to private, often for-profit, technology providers. They constrain teachers' academic autonomy, obscure student evaluation, and reduce parents' and students' ability to participate or challenge education decision-making. Third, big data-driven tools define what "counts" as education by mapping the concepts, creating the content, determining the metrics, and setting desired learning outcomes of instruction. These shifts cede important decision-making to private entities without public scrutiny or pedagogical examination. In contrast to the public and heated debates that accompany textbook choices, schools often adopt education technologies ad hoc. 
Given education's crucial impact on individual and collective success, educators and policymakers must consider the implications of data-driven education proactively and explicitly.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Overview of NREL's work in Alaska. NREL provides objective, data-driven support to aid decision-makers in Alaska as they deploy advanced energy technologies and reduce energy burdens across the nation's largest state. NREL's technical assistance, research, and outreach activities are providing the catalyst for transforming the way Alaska uses energy.

  5. 16 CFR 1407.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... GENERATORS: REQUIREMENTS TO PROVIDE PERFORMANCE AND TECHNICAL DATA BY LABELING § 1407.2 Definitions. (a) The... portable generator is an internal combustion engine-driven electric generator rated no higher than 15..., and may have alternating- or direct-current (DC) sections for supplying energy to battery charging...

  6. Embedded data collector (EDC) evaluation, phase II - comparison with instrumented static load tests : [technical summary].

    DOT National Transportation Integrated Search

    2013-12-01

    Monitoring installation of driven pile foundations is critically important to ensure adequate safety of structures with piles, such as the many bridges which are maintained by the Florida Department of Transportation (FDOT). Dynamic load test...

  7. CURRENT TECHNICAL PROBLEMS IN EMERGY ANALYSIS

    EPA Science Inventory

    Emergy Analysis has been a rapidly evolving assessment methodology for the past 30 years. This process of development was primarily driven by the inquiring mind and ceaseless activity of its founder, H.T. Odum, his students, and colleagues. Historically, as new kinds of proble...

  8. Personalized mortality prediction driven by electronic medical data and a patient similarity metric.

    PubMed

    Lee, Joon; Maslove, David M; Dubin, Joel A

    2015-01-01

    Clinical outcome prediction normally employs static, one-size-fits-all models that perform well for the average patient but are sub-optimal for individual patients with unique characteristics. In the era of digital healthcare, it is feasible to dynamically personalize decision support by identifying and analyzing similar past patients, in a way that is analogous to personalized product recommendation in e-commerce. Our objectives were: 1) to prove that analyzing only similar patients leads to better outcome prediction performance than analyzing all available patients, and 2) to characterize the trade-off between training data size and the degree of similarity between the training data and the index patient for whom prediction is to be made. We deployed a cosine-similarity-based patient similarity metric (PSM) to an intensive care unit (ICU) database to identify patients that are most similar to each patient and subsequently to custom-build 30-day mortality prediction models. Rich clinical and administrative data from the first day in the ICU from 17,152 adult ICU admissions were analyzed. The results confirmed that using data from only a small subset of most similar patients for training improves predictive performance in comparison with using data from all available patients. The results also showed that when too few similar patients are used for training, predictive performance degrades due to the effects of small sample sizes. Our PSM-based approach outperformed well-known ICU severity of illness scores. Although the improved prediction performance is achieved at the cost of increased computational burden, Big Data technologies can help realize personalized data-driven decision support at the point of care. The present study provides crucial empirical evidence for the promising potential of personalized data-driven decision support systems. 
With the increasing adoption of electronic medical record (EMR) systems, our novel medical data analytics contributes to meaningful use of EMR data.
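The cosine-similarity PSM and similar-patient selection described above can be sketched as follows. The feature vectors and function names are illustrative, not the authors' implementation; in the study, each row would hold first-day ICU clinical and administrative variables:

```python
import numpy as np

def cosine_psm(X, x):
    """Cosine similarity between each past patient (rows of X) and the
    index patient x: normalize all vectors to unit length, then take
    dot products."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ (x / np.linalg.norm(x))

def most_similar(X, x, k):
    """Indices of the k past patients most similar to x; a mortality
    model would then be trained on just these rows rather than on the
    full cohort."""
    return np.argsort(-cosine_psm(X, x))[:k]
```

The key design trade-off the study characterizes falls out of `k`: a small `k` gives highly relevant but scarce training data, while a large `k` approaches the one-size-fits-all model trained on everyone.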

  9. Personalized Mortality Prediction Driven by Electronic Medical Data and a Patient Similarity Metric

    PubMed Central

    Lee, Joon; Maslove, David M.; Dubin, Joel A.

    2015-01-01

    Background Clinical outcome prediction normally employs static, one-size-fits-all models that perform well for the average patient but are sub-optimal for individual patients with unique characteristics. In the era of digital healthcare, it is feasible to dynamically personalize decision support by identifying and analyzing similar past patients, in a way that is analogous to personalized product recommendation in e-commerce. Our objectives were: 1) to prove that analyzing only similar patients leads to better outcome prediction performance than analyzing all available patients, and 2) to characterize the trade-off between training data size and the degree of similarity between the training data and the index patient for whom prediction is to be made. Methods and Findings We deployed a cosine-similarity-based patient similarity metric (PSM) to an intensive care unit (ICU) database to identify patients that are most similar to each patient and subsequently to custom-build 30-day mortality prediction models. Rich clinical and administrative data from the first day in the ICU from 17,152 adult ICU admissions were analyzed. The results confirmed that using data from only a small subset of most similar patients for training improves predictive performance in comparison with using data from all available patients. The results also showed that when too few similar patients are used for training, predictive performance degrades due to the effects of small sample sizes. Our PSM-based approach outperformed well-known ICU severity of illness scores. Although the improved prediction performance is achieved at the cost of increased computational burden, Big Data technologies can help realize personalized data-driven decision support at the point of care. Conclusions The present study provides crucial empirical evidence for the promising potential of personalized data-driven decision support systems. 
With the increasing adoption of electronic medical record (EMR) systems, our novel medical data analytics contributes to meaningful use of EMR data. PMID:25978419

  10. Subsonic aircraft: Evolution and the matching of size to performance

    NASA Technical Reports Server (NTRS)

    Loftin, L. K., Jr.

    1980-01-01

    Methods for estimating the approximate size, weight, and power of aircraft intended to meet specified performance requirements are presented for both jet-powered and propeller-driven aircraft. The methods are simple and require only the use of a pocket computer for rapid application to specific sizing problems. Application of the methods is illustrated by means of sizing studies of a series of jet-powered and propeller-driven aircraft with varying design constraints. Some aspects of the technical evolution of the airplane from 1918 to the present are also briefly discussed.

  11. Engineering concepts for the placement of wastes on the abyssal seafloor

    NASA Astrophysics Data System (ADS)

    Valent, Philip J.; Palowitch, Andrew W.; Young, David K.

    1998-05-01

    The Naval Research Laboratory (NRL), with industry and academic participation, has completed a study of the concept of isolating industrial wastes (i.e., sewage sludge, fly ash from municipal incinerators, and dredged material) on the abyssal seafloor. This paper presents results of the technical and economic assessment of this waste management concept. The results of the environmental impacts portion of the study are presented in a companion paper. The technical assessment began with identification of 128 patents addressing waste disposal in the ocean. From these 128 patents, five methods for transporting wastes through the water column and emplacing wastes within an easily monitored area on the abyssal seafloor were synthesized for technical assessment. In one method waste is lowered to the seafloor in a bucket of 190 m 3. In a second method waste is pumped down to the seafloor in pipes, 1.37 m in diameter and 6100 m in length. In a third method waste is free-fallen from the ocean surface in 380-m 3 geosynthetic fabric containers (GFCs). In the fourth and fifth methods, waste is carried to near the seafloor in GFCs transported in (a) a 20,000 metric ton displacement (loaded), unpowered, unmanned submersible glider, or (b) a 2085 metric ton displacement (loaded) disk-shaped transporter traversing to and from the seafloor much like an untethered elevator. In the last two methods the transporter releases the GFCs to free-fall the last few hundred meters to the seafloor. Two reliability analyses, a Fault Tree Analysis (FTA), and a Failure Modes, Effects, and Criticality Analysis (FMECA), showed that the free-fall GFC method posed the least overall relative risk, provided that fabric container and transporter designs eliminate the potential for tearing of the containers on release from the surface transporter. 
Of the five methods, the three GFC methods were shown to offer cost-effective waste management options when compared with present-day waste management techniques in higher-priced areas, such as the New York-New Jersey area. In conclusion, the abyssal seafloor waste isolation concept is technically feasible and cost-effective for many waste sources.

  12. Hospital-Based Clinical Pharmacy Services to Improve Ambulatory Management of Chronic Obstructive Pulmonary Disease

    PubMed Central

    Smith, Amber Lanae; Palmer, Valerie; Farhat, Nada; Kalus, James S.; Thavarajah, Krishna; DiGiovine, Bruno; MacDonald, Nancy C.

    2016-01-01

    Background: No systematic evaluations of comprehensive clinical pharmacy process measures currently exist to determine an optimal ambulatory care collaboration model for chronic obstructive pulmonary disease (COPD) patients. Objective: Describe the impact of a pharmacist-provided clinical COPD bundle on the management of COPD in a hospital-based ambulatory care clinic. Methods: This retrospective cohort analysis evaluated patients with COPD managed in an outpatient pulmonary clinic. The primary objective of this study was to assess the completion of 4 metrics known to improve the management of COPD: (1) medication therapy management, (2) quality measures including smoking cessation and vaccines, (3) patient adherence, and (4) patient education. The secondary objective was to evaluate the impact of the clinical COPD bundle on clinical and economic outcomes at 30 and 90 days post–initial visit. Results: A total of 138 patients were included in the study; 70 patients served as controls and 68 patients received the COPD bundle from the clinical pharmacist. No patients from the control group had all 4 metrics completed as documented, compared to 66 patients in the COPD bundle group (P < .0001). Additionally, a statistically significant difference was found in all 4 metrics when evaluated individually. Clinical pharmacy services reduced the number of phone call consults at 90 days (P = .04) but did not have a statistically significant impact on any additional pre-identified clinical outcomes. Conclusion: A pharmacist-driven clinical COPD bundle was associated with significant increases in the completion and documentation of 4 metrics known to improve the outpatient management of COPD.

  13. Modeling Hawaiian Ecosystem Degradation due to Invasive Plants under Current and Future Climates

    PubMed Central

    Vorsino, Adam E.; Fortini, Lucas B.; Amidon, Fred A.; Miller, Stephen E.; Jacobi, James D.; Price, Jonathan P.; Gon, Sam 'Ohukani'ohi'a; Koob, Gregory A.

    2014-01-01

    Occupation of native ecosystems by invasive plant species alters their structure and/or function. In Hawaii, a subset of introduced plants is regarded as extremely harmful due to competitive ability, ecosystem modification, and biogeochemical habitat degradation. By controlling this subset of highly invasive ecosystem modifiers, conservation managers could significantly reduce native ecosystem degradation. To assess the invasibility of vulnerable native ecosystems, we selected a proxy subset of these invasive plants and developed robust ensemble species distribution models to define their respective potential distributions. The combinations of all species models using both binary and continuous habitat suitability projections resulted in estimates of species richness and diversity that were subsequently used to define an invasibility metric. The invasibility metric was defined from species distribution models with <0.7 niche overlap (Warren's I) and relatively discriminative distributions (Area Under the Curve >0.8; True Skill Statistic >0.75) as evaluated per species. Invasibility was further projected onto a 2100 Hawaii regional climate change scenario to assess the change in potential habitat degradation. The distribution defined by the invasibility metric delineates areas of known and potential invasibility under current climate conditions and, when projected into the future, estimates potential reductions in native ecosystem extent due to climate-driven invasive incursion. We have provided the code used to develop these metrics to facilitate their wider use (Code S1). This work will help determine the vulnerability of native-dominated ecosystems to the combined threats of climate change and invasive species, and thus help prioritize ecosystem and species management actions. PMID:24805254
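Warren's I, the niche-overlap statistic used above to screen the species distribution models, compares two habitat-suitability surfaces after normalizing each to sum to one. A minimal sketch (function and argument names are hypothetical):

```python
import numpy as np

def warrens_i(s1, s2):
    """Warren's I niche-overlap statistic between two habitat-suitability
    surfaces: I = 1 - 0.5 * sum((sqrt(p1) - sqrt(p2))^2), where p1 and p2
    are the surfaces normalized to sum to 1. Ranges from 0 (no overlap)
    to 1 (identical niches)."""
    p1 = np.asarray(s1, dtype=float)
    p2 = np.asarray(s2, dtype=float)
    p1, p2 = p1 / p1.sum(), p2 / p2.sum()
    return 1.0 - 0.5 * np.sum((np.sqrt(p1) - np.sqrt(p2)) ** 2)
```

Identical surfaces give I = 1, fully disjoint surfaces give I = 0; the study's <0.7 threshold thus retains only species whose modeled niches are reasonably distinct.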

  14. Critical analysis of commonly used fluorescence metrics to characterize dissolved organic matter.

    PubMed

    Korak, Julie A; Dotson, Aaron D; Summers, R Scott; Rosario-Ortiz, Fernando L

    2014-02-01

    The use of fluorescence spectroscopy for the analysis and characterization of dissolved organic matter (DOM) has gained widespread interest over the past decade, in part because of its ease of use and ability to provide bulk DOM chemical characteristics. However, the lack of standard approaches for analysis and data evaluation has complicated its use. This study utilized comparative statistics to systematically evaluate commonly used fluorescence metrics for DOM characterization to provide insight into the implications for data analysis and interpretation, including peak-picking methods, carbon-normalized metrics, and the fluorescence index (FI). The uncertainty associated with peak-picking methods was evaluated, including the reporting of peak intensity and peak position. The linear relationship between fluorescence intensity and dissolved organic carbon (DOC) concentration was found to deviate from linearity at environmentally relevant concentrations, and not simultaneously across all peak regions. Comparative analysis suggests that the loss of linearity is composition specific and likely due to non-ideal intermolecular interactions of the DOM rather than inner filter effects. For some DOM sources, Peak A deviated from linearity at optical densities a factor of 2 higher than that of Peak C. For carbon-normalized fluorescence intensities, the error associated with DOC measurements significantly decreases the ability to distinguish compositional differences. An in-depth analysis of FI determined that the metric is driven mostly by peak emission wavelength and less by emission spectrum slope. This study also demonstrates that fluorescence intensity follows property balance principles, but the fluorescence index does not. Copyright © 2013 Elsevier Ltd. All rights reserved.
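The fluorescence index discussed above is conventionally computed as the ratio of emission intensities at 470 nm and 520 nm under 370 nm excitation (McKnight et al. 2001). A minimal sketch that reads the two intensities off an emission scan by linear interpolation; the function name and example values are hypothetical:

```python
import numpy as np

def fluorescence_index(em_nm, intensity):
    """Fluorescence index (McKnight et al. 2001): emission intensity at
    470 nm divided by intensity at 520 nm, for 370 nm excitation, with
    both values linearly interpolated from the emission scan."""
    return (np.interp(470.0, em_nm, intensity) /
            np.interp(520.0, em_nm, intensity))
```

Because FI is a ratio of two points on the emission spectrum, the study's finding follows naturally: it is far more sensitive to where the emission peak sits than to the slope of the spectrum between the two wavelengths.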

  15. Development and implementation of a balanced scorecard in an academic hospitalist group.

    PubMed

    Hwa, Michael; Sharpe, Bradley A; Wachter, Robert M

    2013-03-01

Academic hospitalist groups (AHGs) are often expected to excel in multiple domains: quality improvement, patient safety, education, research, administration, and clinical care. To be successful, AHGs must develop strategies to balance their energies, resources, and performance. The balanced scorecard (BSC) is a strategic management system that enables organizations to translate their mission and vision into specific objectives and metrics across multiple domains. To date, no hospitalist group has reported on BSC implementation. We set out to develop a BSC as part of a strategic planning initiative. Based on a needs assessment of the University of California, San Francisco, Division of Hospital Medicine, mission and vision statements were developed. We engaged representative faculty to develop strategic objectives and determine performance metrics across 4 BSC perspectives. There were 41 metrics identified, and 16 were chosen for the initial BSC. It allowed us to achieve several goals: 1) present a broad view of performance, 2) create transparency and accountability, 3) communicate goals and engage faculty, and 4) ensure we use data to guide strategic decisions. Several lessons were learned, including the need to build faculty consensus, the need to establish metrics with reliable, measurable data, and the power of the BSC to drive goals across the division. We successfully developed and implemented a BSC in an AHG as part of a strategic planning initiative. The BSC has been instrumental in allowing us to achieve balanced success in multiple domains. Academic groups should consider employing the BSC as it allows for a data-driven strategic planning and assessment process. Copyright © 2013 Society of Hospital Medicine.

  16. Contrasting changes in the abundance and diversity of North American bird assemblages from 1971 to 2010.

    PubMed

    Schipper, Aafke M; Belmaker, Jonathan; de Miranda, Murilo Dantas; Navarro, Laetitia M; Böhning-Gaese, Katrin; Costello, Mark J; Dornelas, Maria; Foppen, Ruud; Hortal, Joaquín; Huijbregts, Mark A J; Martín-López, Berta; Pettorelli, Nathalie; Queiroz, Cibele; Rossberg, Axel G; Santini, Luca; Schiffers, Katja; Steinmann, Zoran J N; Visconti, Piero; Rondinini, Carlo; Pereira, Henrique M

    2016-12-01

    Although it is generally recognized that global biodiversity is declining, few studies have examined long-term changes in multiple biodiversity dimensions simultaneously. In this study, we quantified and compared temporal changes in the abundance, taxonomic diversity, functional diversity, and phylogenetic diversity of bird assemblages, using roadside monitoring data of the North American Breeding Bird Survey from 1971 to 2010. We calculated 12 abundance and diversity metrics based on 5-year average abundances of 519 species for each of 768 monitoring routes. We did this for all bird species together as well as for four subgroups based on breeding habitat affinity (grassland, woodland, wetland, and shrubland breeders). The majority of the biodiversity metrics increased or remained constant over the study period, whereas the overall abundance of birds showed a pronounced decrease, primarily driven by declines of the most abundant species. These results highlight how stable or even increasing metrics of taxonomic, functional, or phylogenetic diversity may occur in parallel with substantial losses of individuals. We further found that patterns of change differed among the species subgroups, with both abundance and diversity increasing for woodland birds and decreasing for grassland breeders. The contrasting changes between abundance and diversity and among the breeding habitat groups underscore the relevance of a multifaceted approach to measuring biodiversity change. Our findings further stress the importance of monitoring the overall abundance of individuals in addition to metrics of taxonomic, functional, or phylogenetic diversity, thus confirming the importance of population abundance as an essential biodiversity variable. © 2016 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
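    The abstract's central contrast, between total abundance and diversity metrics, can be illustrated with the Shannon index, one standard taxonomic diversity measure (the paper's full set of 12 metrics is not reproduced here); the species counts below are invented.

```python
# Hedged sketch: total abundance vs. Shannon taxonomic diversity for a
# hypothetical monitoring route. Counts are invented illustration data.
import math

def shannon_diversity(abundances):
    """Shannon index H' from a list of per-species abundances."""
    total = sum(abundances)
    props = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in props)

counts = [50, 30, 15, 5]                     # invented per-species counts
print(sum(counts))                           # total abundance -> 100
print(round(shannon_diversity(counts), 3))   # -> 1.142
```

    The two numbers can move independently: halving every count leaves H' unchanged while total abundance drops by half, which is the pattern of "stable diversity, declining abundance" the study reports.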

  17. Prioritizing material recovery for end-of-life printed circuit boards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Xue, E-mail: xxw6590@rit.edu; Gaustad, Gabrielle, E-mail: gabrielle.gaustad@rit.edu

    2012-10-15

Highlights: • Material recovery driven by composition, choice of ranking, and weighting. • Economic potential for new recycling technologies quantified for several metrics. • Indicators developed for materials incurring high eco-toxicity costs. • Methodology useful for a variety of stakeholders, particularly policy-makers. - Abstract: The increasing growth in generation of electronic waste (e-waste) motivates a variety of waste reduction research. Printed circuit boards (PCBs) are an important sub-set of the overall e-waste stream due to the high value of the materials contained within them and potential toxicity. This work explores several environmental and economic metrics for prioritizing the recovery of materials from end-of-life PCBs. A weighted sum model is used to investigate the trade-offs among economic value, energy saving potentials, and eco-toxicity. Results show that given equal weights for these three sustainability criteria gold has the highest recovery priority, followed by copper, palladium, aluminum, tin, lead, platinum, nickel, zinc, and silver. However, recovery priority will change significantly due to variation in the composition of PCBs, choice of ranking metrics, and weighting factors when scoring multiple metrics. These results can be used by waste management decision-makers to quantify the value and environmental savings potential for recycling technology development and infrastructure. They can also be extended by policy-makers to inform possible penalties for land-filling PCBs or exporting to the informal recycling sector. The importance of weighting factors when examining recovery trade-offs, particularly for policies regarding PCB collection and recycling, is explored further.
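    The weighted sum model described here is straightforward to sketch. The scores below are invented, normalized 0-1 stand-ins for the three criteria (the paper's actual per-material data are not reproduced), and equal weights mirror the equal-weight case reported in the abstract.

```python
# Hedged sketch of a weighted-sum recovery prioritization. Scores and the
# three-material subset are invented; only the method mirrors the abstract.
criteria_weights = {"economic": 1 / 3, "energy": 1 / 3, "toxicity": 1 / 3}

materials = {
    "gold":   {"economic": 1.00, "energy": 0.90, "toxicity": 0.40},
    "copper": {"economic": 0.60, "energy": 0.70, "toxicity": 0.50},
    "lead":   {"economic": 0.10, "energy": 0.20, "toxicity": 0.95},
}

def priority(scores, weights):
    """Weighted sum across the sustainability criteria."""
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(materials,
                key=lambda m: priority(materials[m], criteria_weights),
                reverse=True)
print(ranked)  # -> ['gold', 'copper', 'lead']
```

    Changing `criteria_weights` reorders the ranking, which is exactly the sensitivity to weighting factors the authors emphasize.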

  18. Modeling Hawaiian ecosystem degradation due to invasive plants under current and future climates.

    PubMed

    Vorsino, Adam E; Fortini, Lucas B; Amidon, Fred A; Miller, Stephen E; Jacobi, James D; Price, Jonathan P; Gon, Sam 'ohukani'ohi'a; Koob, Gregory A

    2014-01-01

Occupation of native ecosystems by invasive plant species alters their structure and/or function. In Hawaii, a subset of introduced plants is regarded as extremely harmful due to competitive ability, ecosystem modification, and biogeochemical habitat degradation. By controlling this subset of highly invasive ecosystem modifiers, conservation managers could significantly reduce native ecosystem degradation. To assess the invasibility of vulnerable native ecosystems, we selected a proxy subset of these invasive plants and developed robust ensemble species distribution models to define their respective potential distributions. The combinations of all species models using both binary and continuous habitat suitability projections resulted in estimates of species richness and diversity that were subsequently used to define an invasibility metric. The invasibility metric was defined from species distribution models with <0.7 niche overlap (Warren's I) and relatively discriminative distributions (Area Under the Curve >0.8; True Skill Statistic >0.75) as evaluated per species. Invasibility was further projected onto a 2100 Hawaii regional climate change scenario to assess the change in potential habitat degradation. The distribution defined by the invasibility metric delineates areas of known and potential invasibility under current climate conditions and, when projected into the future, estimates potential reductions in native ecosystem extent due to climate-driven invasive incursion. We have provided the code used to develop these metrics to facilitate their wider use (Code S1). This work will help determine the vulnerability of native-dominated ecosystems to the combined threats of climate change and invasive species, and thus help prioritize ecosystem and species management actions.

  19. Amino acid distribution in meteorites: diagenesis, extraction methods, and standard metrics in the search for extraterrestrial biosignatures.

    PubMed

    McDonald, Gene D; Storrie-Lombardi, Michael C

    2006-02-01

    The relative abundance of the protein amino acids has been previously investigated as a potential marker for biogenicity in meteoritic samples. However, these investigations were executed without a quantitative metric to evaluate distribution variations, and they did not account for the possibility of interdisciplinary systematic error arising from inter-laboratory differences in extraction and detection techniques. Principal component analysis (PCA), hierarchical cluster analysis (HCA), and stochastic probabilistic artificial neural networks (ANNs) were used to compare the distributions for nine protein amino acids previously reported for the Murchison carbonaceous chondrite, Mars meteorites (ALH84001, Nakhla, and EETA79001), prebiotic synthesis experiments, and terrestrial biota and sediments. These techniques allowed us (1) to identify a shift in terrestrial amino acid distributions secondary to diagenesis; (2) to detect differences in terrestrial distributions that may be systematic differences between extraction and analysis techniques in biological and geological laboratories; and (3) to determine that distributions in meteoritic samples appear more similar to prebiotic chemistry samples than they do to the terrestrial unaltered or diagenetic samples. Both diagenesis and putative interdisciplinary differences in analysis complicate interpretation of meteoritic amino acid distributions. We propose that the analysis of future samples from such diverse sources as meteoritic influx, sample return missions, and in situ exploration of Mars would be less ambiguous with adoption of standardized assay techniques, systematic inclusion of assay standards, and the use of a quantitative, probabilistic metric. We present here one such metric determined by sequential feature extraction and normalization (PCA), information-driven automated exploration of classification possibilities (HCA), and prediction of classification accuracy (ANNs).
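    Of the three techniques the abstract names (PCA, HCA, ANNs), the hierarchical-cluster step is the easiest to sketch. Below is a minimal single-linkage agglomerative clustering on invented 2-D feature vectors standing in for normalized amino acid distributions; the study's actual features, linkage choice, and data are not reproduced.

```python
# Hedged sketch of hierarchical cluster analysis (single linkage) on
# invented feature vectors. Two samples mimic a "biotic-like" profile and
# two a "prebiotic-like" profile; real amino acid data would be ~9-D.

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def single_linkage(points, n_clusters):
    """Agglomerate until n_clusters remain; merge the closest pair each step."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

samples = [(0.9, 0.1), (0.85, 0.15), (0.2, 0.8), (0.25, 0.75)]
print(sorted(sorted(c) for c in single_linkage(samples, 2)))  # -> [[0, 1], [2, 3]]
```

    In the paper's setting, the question is whether meteoritic samples cluster with the prebiotic-synthesis group or the terrestrial groups.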

  20. A Literature Review and Experimental Plan for Research on the Display of Information on Matrix-Addressable Displays.

    DTIC Science & Technology

    1987-02-01

Factors Laboratory, Department of Industrial Engineering and Operations Research, Virginia Polytechnic Institute & State Univ...Symbolic Research...Experiment 14: Multichromatic Optimum Character Symbolic...Summary...Quality Metrics Analysis...An analysis of variance was performed on accuracy and response time data. For accuracy data there was a significant

  1. Earned Value-Added

    NASA Technical Reports Server (NTRS)

    Jansen, Michael

    2005-01-01

Earned value management [EVM]...either you swear by it, or swear at it. Either way, there's no getting around the fact that EVM can be one of the most efficient and insightful methods of synthesizing cost, schedule, and technical status information into a single set of program health metrics. Is there a way of implementing EVM that allows a program to reap its early warning benefits while avoiding the pitfalls that make it infamous to its detractors? That's the question recently faced by the International Space Station [ISS] program.

  2. Adapting an Agent-Based Model of Socio-Technical Systems to Analyze System and Security Failures

    DTIC Science & Technology

    2016-05-09

statistically significant amount, which it did with a p-value < 0.0003 on a simulation of 3125 iterations; the data is shown in the Delegation 1 column of...Blackout metric to a statistically significant amount, with a p-value < 0.0003 on a simulation of 3125 iterations; the data is shown in the Delegation 2...Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: Volume 1, pp. 1007-1014. International Foundation

  3. A Correlation Between Quality Management Metrics and Technical Performance Measurement

    DTIC Science & Technology

    2007-03-01

Engineering Working Group SME Subject Matter Expert SoS System of Systems SPI Schedule Performance Index SSEI System of Systems Engineering and...and stated as such [Q, M, M&G]. The QMM equation is given by: QMM = 0.92RQM + 0.67EPM + 0.55RKM + 1.86PM, where: RQM is the requirements management...schedule. Now if corrective action is not taken, the project/task will be completed behind schedule and over budget. m. As well as the derived
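    The QMM equation quoted in this snippet is a fixed-coefficient weighted sum, so it can be evaluated directly. The subscores below are invented for illustration; only the coefficients (0.92, 0.67, 0.55, 1.86) come from the snippet itself.

```python
# Hedged sketch of the Quality Management Metric from the snippet's equation:
# QMM = 0.92*RQM + 0.67*EPM + 0.55*RKM + 1.86*PM.
# The subscore values passed in below are invented.

def qmm(rqm, epm, rkm, pm):
    """Quality Management Metric, per the report's stated equation."""
    return 0.92 * rqm + 0.67 * epm + 0.55 * rkm + 1.86 * pm

print(round(qmm(rqm=0.8, epm=0.7, rkm=0.6, pm=0.9), 3))  # -> 3.209
```

    The 1.86 coefficient on PM implies project-management maturity dominates the composite, roughly double the weight of any other factor.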

  4. Conceptual design and issues of the laser inertial fusion test (LIFT) reactor—targets and chamber systems

    NASA Astrophysics Data System (ADS)

    Norimatsu, T.; Kozaki, Y.; Shiraga, H.; Fujita, H.; Okano, K.; Members of LIFT Design Team

    2017-11-01

    We present the conceptual design of an experimental laser fusion plant known as the laser inertial fusion test (LIFT) reactor. The conceptual design aims at technically connecting a single-shot experiment and a commercial power plant. The LIFT reactor is designed on a three-phase scheme, where each phase has specific goals and the dedicated chambers of each phase are driven by the same laser. Technical issues related to the chamber technology including radiation safety to repeat burst mode operation are discussed in this paper.

  5. Installed Base as a Facilitator for User-Driven Innovation: How Can User Innovation Challenge Existing Institutional Barriers?

    PubMed Central

    Andersen, Synnøve Thomassen; Jansen, Arild

    2012-01-01

The paper addresses an ICT-based, user-driven innovation process in the health sector in rural areas in Norway. The empirical base is the introduction of a new model for psychiatric health provision. This model is supported by a technical solution based on mobile phones that is aimed to help the communication between professional health personnel and patients. This innovation was made possible through the use of standard mobile technology rather than more sophisticated systems. The users were heavily involved in the development work. Our analysis shows that thinking in terms of simple, small-scale solutions, and taking the user's needs and premises as the point of departure rather than focusing on advanced technology, made the implementation process possible. We show that by combining theory on information infrastructures, user-oriented system development, and innovation in a three-layered analytical framework, we can explain the interrelationship between technical, organizational, and health professional factors that made this innovation a success. PMID:23304134

  6. Future Change to Tide-Influenced Deltas

    NASA Astrophysics Data System (ADS)

    Nienhuis, Jaap H.; Hoitink, A. J. F. (Ton); Törnqvist, Torbjörn E.

    2018-04-01

    Tides tend to widen deltaic channels and shape delta morphology. Here we present a predictive approach to assess a priori the effect of fluvial discharge and tides on deltaic channels. We show that downstream channel widening can be quantified by the ratio of the tide-driven discharge and the fluvial discharge, along with a second metric representing flow velocities. A test of our new theory on a selection of 72 deltas globally shows good correspondence to a wide range of environments, including wave-dominated deltas, river-dominated deltas, and alluvial estuaries. By quantitatively relating tides and fluvial discharge to delta morphology, we offer a first-order prediction of deltaic change that may be expected from altered delta hydrology. For example, we expect that reduced fluvial discharge in response to dam construction will lead to increased tidal intrusion followed by enhanced tide-driven sediment import into deltas, with implications for navigation and other human needs.
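    The abstract's first metric, the ratio of tide-driven discharge to fluvial discharge, is a simple quotient; the sketch below uses invented discharge values, and the interpretation threshold is an illustrative assumption rather than the paper's calibrated boundary.

```python
# Hedged sketch of the tide-vs-river discharge ratio described in the
# abstract. Discharge values are invented; the ">1 means tide-dominated"
# reading is an illustrative assumption, not the paper's exact criterion.

def tidal_dominance_ratio(q_tide, q_river):
    """Ratio of tide-driven discharge to fluvial discharge (same units)."""
    return q_tide / q_river

print(tidal_dominance_ratio(q_tide=12000.0, q_river=4000.0))  # -> 3.0
```

    The dam-construction scenario at the end of the abstract corresponds to shrinking `q_river`, which raises the ratio and predicts increased tidal intrusion.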

  7. Novel Hybrid Scheduling Technique for Sensor Nodes with Mixed Criticality Tasks.

    PubMed

    Micea, Mihai-Victor; Stangaciu, Cristina-Sorina; Stangaciu, Valentin; Curiac, Daniel-Ioan

    2017-06-26

    Sensor networks become increasingly a key technology for complex control applications. Their potential use in safety- and time-critical domains has raised the need for task scheduling mechanisms specially adapted to sensor node specific requirements, often materialized in predictable jitter-less execution of tasks characterized by different criticality levels. This paper offers an efficient scheduling solution, named Hybrid Hard Real-Time Scheduling (H²RTS), which combines a static, clock driven method with a dynamic, event driven scheduling technique, in order to provide high execution predictability, while keeping a high node Central Processing Unit (CPU) utilization factor. From the detailed, integrated schedulability analysis of the H²RTS, a set of sufficiency tests are introduced and demonstrated based on the processor demand and linear upper bound metrics. The performance and correct behavior of the proposed hybrid scheduling technique have been extensively evaluated and validated both on a simulator and on a sensor mote equipped with ARM7 microcontroller.
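    The processor-demand analysis mentioned for H²RTS can be illustrated with the classic demand-bound check for deadline-driven scheduling. The task set below is invented, and this is the generic sufficient condition only, not the paper's specific H²RTS sufficiency tests.

```python
# Hedged sketch of a processor-demand schedulability check. Tasks are
# (C, T, D): worst-case execution time, period, relative deadline, all in
# the same time unit. The task set is invented illustration data.
import math

def demand_bound(tasks, t):
    """Processor demand h(t): total execution that must finish in [0, t]."""
    h = 0
    for c, period, deadline in tasks:
        if t >= deadline:
            h += (math.floor((t - deadline) / period) + 1) * c
    return h

def edf_demand_test(tasks, horizon):
    """Sufficient check up to `horizon`: demand never exceeds elapsed time
    at any absolute deadline."""
    deadlines = sorted({d + k * p for c, p, d in tasks
                        for k in range(int((horizon - d) // p) + 1)})
    return all(demand_bound(tasks, t) <= t for t in deadlines)

tasks = [(1, 4, 4), (2, 6, 6)]   # invented (C, T, D) triples
print(edf_demand_test(tasks, horizon=24))  # -> True
```

    A full analysis would check up to a proper bound (e.g. the hyperperiod or a busy-period bound) rather than an arbitrary horizon; the fixed `horizon` here keeps the sketch short.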

  8. Reliable Adaptive Video Streaming Driven by Perceptual Semantics for Situational Awareness

    PubMed Central

    Pimentel-Niño, M. A.; Saxena, Paresh; Vazquez-Castro, M. A.

    2015-01-01

A novel cross-layer optimized video adaptation driven by perceptual semantics is presented. The design target is streamed live video to enhance situational awareness in challenging communications conditions. Conventional solutions for recreational applications are inadequate, so a novel quality of experience (QoE) framework is proposed which allows fully controlled adaptation and enables perceptual semantic feedback. The framework relies on temporal/spatial abstraction for video applications serving beyond recreational purposes. An underlying cross-layer optimization technique takes into account feedback on network congestion (time) and erasures (space) to best distribute available (scarce) bandwidth. Systematic random linear network coding (SRNC) adds reliability while preserving perceptual semantics. Objective metrics of the perceptual features in QoE show homogeneous high performance when using the proposed scheme. Finally, the proposed scheme is in line with content-aware trends, by complying with information-centric-networking philosophy and architecture. PMID:26247057

  9. Task-Driven Comparison of Topic Models.

    PubMed

    Alexander, Eric; Gleicher, Michael

    2016-01-01

    Topic modeling, a method of statistically extracting thematic content from a large collection of texts, is used for a wide variety of tasks within text analysis. Though there are a growing number of tools and techniques for exploring single models, comparisons between models are generally reduced to a small set of numerical metrics. These metrics may or may not reflect a model's performance on the analyst's intended task, and can therefore be insufficient to diagnose what causes differences between models. In this paper, we explore task-centric topic model comparison, considering how we can both provide detail for a more nuanced understanding of differences and address the wealth of tasks for which topic models are used. We derive comparison tasks from single-model uses of topic models, which predominantly fall into the categories of understanding topics, understanding similarity, and understanding change. Finally, we provide several visualization techniques that facilitate these tasks, including buddy plots, which combine color and position encodings to allow analysts to readily view changes in document similarity.
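    The "small set of numerical metrics" the authors contrast with their visual approach often includes distributional distances between topics. A common example, sketched here with invented word distributions, is Jensen-Shannon divergence between two topics' word probabilities.

```python
# Hedged sketch: Jensen-Shannon divergence between two topic-word
# distributions, one typical numeric metric for topic model comparison.
# The two three-word "topics" are invented illustration data.
import math

def kl(p, q):
    """Kullback-Leibler divergence (natural log), skipping zero-mass terms."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Symmetric, bounded divergence: average KL to the midpoint mixture."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

topic_a = [0.5, 0.3, 0.2]
topic_b = [0.2, 0.3, 0.5]
print(round(js_divergence(topic_a, topic_b), 4))  # -> 0.0664
```

    A single number like this says the topics differ but not where or why, which is precisely the diagnostic gap the paper's task-driven visualizations aim to fill.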

  10. Effects of two classification strategies on a Benthic Community Index for streams in the Northern Lakes and Forests Ecoregion

    USGS Publications Warehouse

    Butcher, Jason T.; Stewart, Paul M.; Simon, Thomas P.

    2003-01-01

    Ninety-four sites were used to analyze the effects of two different classification strategies on the Benthic Community Index (BCI). The first, a priori classification, reflected the wetland status of the streams; the second, a posteriori classification, used a bio-environmental analysis to select classification variables. Both classifications were examined by measuring classification strength and testing differences in metric values with respect to group membership. The a priori (wetland) classification strength (83.3%) was greater than the a posteriori (bio-environmental) classification strength (76.8%). Both classifications found one metric that had significant differences between groups. The original index was modified to reflect the wetland classification by re-calibrating the scoring criteria for percent Crustacea and Mollusca. A proposed refinement to the original Benthic Community Index is suggested. This study shows the importance of using hypothesis-driven classifications, as well as exploratory statistical analysis, to evaluate alternative ways to reveal environmental variability in biological assessment tools.

  11. Evaluating an accelerated nursing program: a dashboard for diversity.

    PubMed

    Schmidt, Bonnie J; MacWilliams, Brent R

    2015-01-01

    Diversity is a topic of increasing attention in higher education and the nursing workforce. Experts have called for a nursing workforce that mirrors the population it serves. Students in nursing programs in the United States do not reflect our country's diverse population; therefore, much work is needed before that goal can be reached. Diversity cannot be successfully achieved in nursing education without inclusion and attention to quality. The Inclusive Excellence framework can be used by nurse educators to promote inclusion, diversity, and excellence. In this framework, excellence and diversity are linked in an intentional metric-driven process. Accelerated programs offer a possible venue to promote diversity, and one accelerated program is examined using a set of metrics and a dashboard approach commonly used in business settings. Several recommendations were made for future assessment, interventions, and monitoring. Nurse educators are called to examine and adopt a diversity dashboard in all nursing programs. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Biasogram: Visualization of Confounding Technical Bias in Gene Expression Data

    PubMed Central

    Krzystanek, Marcin; Szallasi, Zoltan; Eklund, Aron C.

    2013-01-01

    Gene expression profiles of clinical cohorts can be used to identify genes that are correlated with a clinical variable of interest such as patient outcome or response to a particular drug. However, expression measurements are susceptible to technical bias caused by variation in extraneous factors such as RNA quality and array hybridization conditions. If such technical bias is correlated with the clinical variable of interest, the likelihood of identifying false positive genes is increased. Here we describe a method to visualize an expression matrix as a projection of all genes onto a plane defined by a clinical variable and a technical nuisance variable. The resulting plot indicates the extent to which each gene is correlated with the clinical variable or the technical variable. We demonstrate this method by applying it to three clinical trial microarray data sets, one of which identified genes that may have been driven by a confounding technical variable. This approach can be used as a quality control step to identify data sets that are likely to yield false positive results. PMID:23613961
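    The projection the abstract describes can be approximated by computing, for each gene, its correlation with the clinical variable and with the technical nuisance variable, and using the pair as plot coordinates. The toy data below (four samples, two genes) are invented; the paper's actual plotting details may differ.

```python
# Hedged sketch of biasogram-style coordinates: each gene maps to
# (correlation with clinical variable, correlation with technical variable).
# All sample data are invented illustration values.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def biasogram_coords(expression, clinical, technical):
    """Project each gene onto the (clinical, technical) correlation plane."""
    return {gene: (pearson(vals, clinical), pearson(vals, technical))
            for gene, vals in expression.items()}

clinical  = [0, 0, 1, 1]      # e.g. patient outcome
technical = [1, 0, 1, 0]      # e.g. hybridization batch
expression = {
    "geneA": [1.0, 1.1, 2.0, 2.1],   # tracks the clinical variable
    "geneB": [2.0, 1.0, 2.1, 1.1],   # tracks the technical variable
}
coords = biasogram_coords(expression, clinical, technical)
print({g: (round(r1, 2), round(r2, 2)) for g, (r1, r2) in coords.items()})
```

    A gene like `geneB`, lying near the technical axis, is the kind of false-positive candidate the quality-control step is meant to flag.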

  13. Neural processing of musical meter in musicians and non-musicians.

    PubMed

    Zhao, T Christina; Lam, H T Gloria; Sohi, Harkirat; Kuhl, Patricia K

    2017-11-01

Musical sounds, along with speech, are the most prominent sounds in our daily lives. They are highly dynamic, yet well structured in the temporal domain in a hierarchical manner. The temporal structures enhance the predictability of musical sounds. Western music provides an excellent example: while time intervals between musical notes are highly variable, underlying beats can be realized. The beat-level temporal structure provides a sense of regular pulses. Beats can be further organized into units, giving the percept of alternating strong and weak beats (i.e. metrical structure or meter). Examining neural processing at the meter level offers a unique opportunity to understand how the human brain extracts temporal patterns, predicts future stimuli and optimizes neural resources for processing. The present study addresses two important questions regarding meter processing, using the mismatch negativity (MMN) obtained with electroencephalography (EEG): 1) how tempo (fast vs. slow) and type of metrical structure (duple: two beats per unit vs. triple: three beats per unit) affect the neural processing of metrical structure in non-musically trained individuals, and 2) how early music training modulates the neural processing of metrical structure. Metrical structures were established by patterns of consecutive strong and weak tones (Standard) with occasional violations that disrupted and reset the structure (Deviant). Twenty non-musicians listened passively to these tones while their neural activities were recorded. MMN indexed the neural sensitivity to the meter violations. Results suggested that MMNs were larger for fast tempo and for triple meter conditions. Further, 20 musically trained individuals were tested using the same methods and the results were compared to the non-musicians. While tempo and meter type similarly influenced MMNs in both groups, musicians overall exhibited significantly reduced MMNs, compared to their non-musician counterparts. Further analyses indicated that the reduction was driven by responses to sounds that defined the structure (Standard), not by responses to Deviants. We argue that musicians maintain a more accurate and efficient mental model for metrical structures, which incorporates occasional disruptions using significantly fewer neural resources. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Future Midwest Heat Waves in WRF

    NASA Astrophysics Data System (ADS)

    Huber, M.; Buzan, J. R.; Yoo, J.

    2017-12-01

    We present heat stress results for the upper Midwest derived from convection resolving Weather Research and Forecasting (WRF) model simulations carried out for the RCP 8.5 Scenario and driven by Community Earth System Model (CESM) boundary conditions as part of the Indiana Climate Change Assessment. Using this modeling system we find widespread and severe increases in moist heat stress metrics in the Midwest by end of century. We detail scaling arguments that suggest our results are robust and not model dependent and describe potential health, welfare, and productivity implications of these results.

  15. Earthdata Search: Scaling, Assessing and Improving Relevancy

    NASA Technical Reports Server (NTRS)

    Reese, Mark

    2016-01-01

NASA's Earthdata Search (https://search.earthdata.nasa.gov) application allows users to search, discover, visualize, and access NASA and international interagency data about the Earth. As a client to NASA's Common Metadata Repository (CMR), its catalog of data collections grew 700% in late 2015. This massive expansion brought improved search and discovery to the forefront of the client's usability needs. During this talk, we will give a brief overview of the application, the challenges that arose during this period of growth, the metrics-driven way we addressed them, and the latest outcomes.

  16. Right Brain: The E-lephant in the room: One resident's challenge in transitioning to modern electronic medicine.

    PubMed

    Strowd, Roy E

    2014-09-23

The electronic medical record (EMR) is changing the landscape of medical practice in the modern age. Increasing emphasis on quality metric reporting, data-driven documentation, and timely coding and billing are pressuring institutions across the country to adopt the latest EMR technology. The impact of these systems on the patient-physician relationship is profound. One year following the latest EMR transition, one resident reviews his experience and provides a personal perspective on the impact of the EMR on patient-physician communication. © 2014 American Academy of Neurology.

  17. Discovering Central Practitioners in a Medical Discussion Forum Using Semantic Web Analytics.

    PubMed

    Rajabi, Enayat; Abidi, Syed Sibte Raza

    2017-01-01

    The aim of this paper is to investigate semantic web based methods to enrich and transform a medical discussion forum in order to perform semantics-driven social network analysis. We use the centrality measures as well as semantic similarity metrics to identify the most influential practitioners within a discussion forum. The centrality results of our approach are in line with centrality measures produced by traditional SNA methods, thus validating the applicability of semantic web based methods for SNA, particularly for analyzing social networks for specialized discussion forums.
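    The simplest of the centrality measures such an analysis combines with semantic similarity is degree centrality. The sketch below runs it over an invented practitioner reply graph (edges meaning "replied to"); the forum data, practitioner names, and the paper's specific centrality choices are not reproduced.

```python
# Hedged sketch: normalized degree centrality over an invented reply graph.
# Node labels and edges are made up for illustration only.

edges = [("drA", "drB"), ("drA", "drC"), ("drB", "drC"), ("drD", "drA")]

def degree_centrality(edges):
    """Degree of each node divided by the maximum possible degree, n - 1."""
    nodes = {n for e in edges for n in e}
    deg = {n: 0 for n in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return {n: d / (len(nodes) - 1) for n, d in deg.items()}

central = degree_centrality(edges)
print(max(central, key=central.get))  # -> drA
```

    The semantic-web angle in the paper goes further: edges are weighted by semantic similarity of posts, so an influential practitioner must be both well connected and topically relevant.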

  18. Depictive and metric body size estimation in anorexia nervosa and bulimia nervosa: A systematic review and meta-analysis.

    PubMed

    Mölbert, Simone Claire; Klein, Lukas; Thaler, Anne; Mohler, Betty J; Brozzo, Chiara; Martus, Peter; Karnath, Hans-Otto; Zipfel, Stephan; Giel, Katrin Elisabeth

    2017-11-01

    A distorted representation of one's own body is a diagnostic criterion and core psychopathology of both anorexia nervosa (AN) and bulimia nervosa (BN). Despite recent technical advances in research, it is still unknown whether this body image disturbance is characterized by body dissatisfaction and a low ideal weight and/or includes a distorted perception or processing of body size. In this article, we provide an update and meta-analysis of 42 articles summarizing measures and results for body size estimation (BSE) from 926 individuals with AN, 536 individuals with BN and 1920 controls. We replicate findings that individuals with AN and BN overestimate their body size as compared to controls (ES=0.63). Our meta-regression shows that metric methods (BSE by direct or indirect spatial measures) yield larger effect sizes than depictive methods (BSE by evaluating distorted pictures), and that effect sizes are larger for patients with BN than for patients with AN. To interpret these results, we suggest a revised theoretical framework for BSE that accounts for differences between depictive and metric BSE methods regarding the underlying body representations (conceptual vs. perceptual, implicit vs. explicit). We also discuss clinical implications and argue for the importance of multimethod approaches to investigate body image disturbance. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Performance metrics for inertial confinement fusion implosions: Aspects of the technical framework for measuring progress in the National Ignition Campaign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spears, Brian K.; Glenzer, S.; Edwards, M. J.

The National Ignition Campaign (NIC) uses non-igniting 'tritium hydrogen deuterium (THD)' capsules to study and optimize the hydrodynamic assembly of the fuel without burn. These capsules are designed to simultaneously reduce DT neutron yield and to maintain hydrodynamic similarity with the DT ignition capsule. We will discuss nominal THD performance and the associated experimental observables. We will show the results of large ensembles of numerical simulations of THD and DT implosions and their simulated diagnostic outputs. These simulations cover a broad range of both nominal and off-nominal implosions. We will focus on the development of an experimental implosion performance metric called the experimental ignition threshold factor (ITFX). We will discuss the relationship between ITFX and other integrated performance metrics, including the ignition threshold factor (ITF), the generalized Lawson criterion (GLC), and the hot spot pressure (HSP). We will then consider the experimental results of the recent NIC THD campaign. We will show that we can observe the key quantities for producing a measured ITFX and for inferring the other performance metrics. We will discuss trends in the experimental data, improvement in ITFX, and briefly the upcoming tuning campaign aimed at taking the next steps in performance improvement on the path to ignition on NIF.

  20. Advanced Life Support Systems

    NASA Technical Reports Server (NTRS)

    Barta, Daniel J.

    2004-01-01

    This presentation is planned to be a 10-15 minute "catalytic" focused presentation to be scheduled during one of the working sessions at the TIM. This presentation will focus on Advanced Life Support technologies key to future human Space Exploration as outlined in the Vision, and will include basic requirements, assessment of the state-of-the-art and gaps, and include specific technology metrics. The presentation will be technical in character, lean heavily on data in published ALS documents (such as the Baseline Values and Assumptions Document) but not provide specific technical details or build to information on any technology mentioned (thus the presentation will be benign from an export control and a new technology perspective). The topics presented will be focused on the following elements of Advanced Life Support: air revitalization, water recovery, waste management, thermal control, habitation systems, food systems and bioregenerative life support.

  1. MO-FG-207-03: Maximizing the Utility of Integrated PET/MRI in Clinical Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behr, S.

    2015-06-15

    The use of integrated PET/MRI systems in clinical applications can best benefit from understanding their technological advances and limitations. The currently available clinical PET/MRI systems have their own characteristics. Thorough analyses of existing technical data and evaluation of necessary performance metrics for quality assurance could be conducted to optimize application-specific PET/MRI protocols. This Symposium will focus on technical advances and limitations of clinical PET/MRI systems, and how this exciting imaging modality can be utilized in applications that can benefit from both PET and MRI. Learning Objectives: To understand the technological advances of clinical PET/MRI systems; to correctly identify clinical applications that can benefit from PET/MRI; to understand ongoing work to further improve the current PET/MRI technology. Floris Jansen is a GE Healthcare employee.

  2. MO-FG-207-00: Technological Advances in PET/MR Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2015-06-15

    The use of integrated PET/MRI systems in clinical applications can best benefit from understanding their technological advances and limitations. The currently available clinical PET/MRI systems have their own characteristics. Thorough analyses of existing technical data and evaluation of necessary performance metrics for quality assurance could be conducted to optimize application-specific PET/MRI protocols. This Symposium will focus on technical advances and limitations of clinical PET/MRI systems, and how this exciting imaging modality can be utilized in applications that can benefit from both PET and MRI. Learning Objectives: To understand the technological advances of clinical PET/MRI systems; to correctly identify clinical applications that can benefit from PET/MRI; to understand ongoing work to further improve the current PET/MRI technology. Floris Jansen is a GE Healthcare employee.

  3. Deriving video content type from HEVC bitstream semantics

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio R.

    2014-05-01

    As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality of service (QoS) driven delivery models to customer-centred quality of experience (QoE) delivery models. QoS models consider only metrics derived from the network; QoE models also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been proposed, both individually and in combination, to derive methods of classifying video content either on a continuous scale or as a set of discrete classes. QoE models can be divided into three broad categories: full reference, reduced reference and no-reference models. Due to the need to have the original video available at the client for comparison, full reference metrics are of limited practical value in adaptive real-time video applications. Reduced reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine partitioning of coding units and temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence by using the weighted averages of the depth at which the coding unit quadtree is split and the prediction mode decision made by the encoder to estimate spatial and temporal characteristics respectively.
Since the video content type of a sequence is determined by using high level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding and can be used in a timely manner to aid decision making in QoE oriented adaptive real time streaming.
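
    The weighted-average heuristic described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the thresholds, class labels, and data layout are invented, and a real classifier would obtain coding-unit depths and prediction-mode flags by parsing the HEVC bitstream.

```python
# Hypothetical sketch of a no-reference, bitstream-level content classifier.
# Deeper coding-unit quadtree splits proxy spatial detail; a higher
# area-weighted fraction of motion-coded prediction decisions proxies
# temporal activity. All thresholds and labels below are illustrative.

def weighted_mean(values_with_areas):
    """Average a per-block value, weighted by the pixel area of each block."""
    total = sum(area for _, area in values_with_areas)
    return sum(v * area for v, area in values_with_areas) / total

def classify(cu_depths, motion_flags, depth_thresh=1.5, motion_thresh=0.5):
    """Map (area-weighted CU depth, motion fraction) to coarse content classes."""
    s = weighted_mean(cu_depths)     # spatial proxy: mean quadtree split depth
    t = weighted_mean(motion_flags)  # temporal proxy: motion-coded fraction
    return ("high-detail" if s > depth_thresh else "low-detail",
            "high-motion" if t > motion_thresh else "low-motion")

# One frame's worth of (value, block area in pixels) pairs:
cu_depths = [(0, 64 * 64), (2, 32 * 32), (3, 16 * 16)]
motion_flags = [(1, 64 * 64), (1, 32 * 32), (0, 16 * 16)]
print(classify(cu_depths, motion_flags))  # → ('low-detail', 'high-motion')
```

    Because only high-level syntax elements are needed, a classifier of this shape can run on the parsed bitstream without full decoding, which is the timeliness advantage the abstract claims.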

  4. Data-driven management using quantitative metric and automatic auditing program (QMAP) improves consistency of radiation oncology processes.

    PubMed

    Yu, Naichang; Xia, Ping; Mastroianni, Anthony; Kolar, Matthew D; Chao, Samuel T; Greskovich, John F; Suh, John H

    Process consistency in planning and delivery of radiation therapy is essential to maintain patient safety and treatment quality and efficiency. Ensuring the timely completion of each critical clinical task is one aspect of process consistency. The purpose of this work is to report our experience in implementing a quantitative metric and automatic auditing program (QMAP) with a goal of improving the timely completion of critical clinical tasks. Based on our clinical electronic medical records system, we developed a software program to automatically capture the completion timestamp of each critical clinical task while providing frequent alerts of potential delinquency. These alerts were directed to designated triage teams within a time window that would offer an opportunity to mitigate the potential for late completion. Since July 2011, 18 metrics were introduced in our clinical workflow. We compared the delinquency rates for 4 selected metrics before QMAP implementation with the delinquency rates in 2016. A one-tailed Student t test was used for statistical analysis. With an average of 150 daily patients on treatment at our main campus, the late treatment plan completion rate and late weekly physics check were reduced from 18.2% and 8.9% in 2011 to 4.2% and 0.1% in 2016, respectively (P < .01). The late weekly on-treatment physician visit rate was reduced from 7.2% in 2012 to <1.6% in 2016. The yearly late cone beam computed tomography review rate was reduced from 1.6% in 2011 to <0.1% in 2016. QMAP is effective in reducing late completions of critical tasks, which can positively impact treatment quality and patient safety by reducing the potential for errors resulting from distractions, interruptions, and rush in completion of critical tasks. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
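
    The alert-and-triage pattern described above can be sketched as follows; the task names, the 24-hour alert window, and the data layout are illustrative assumptions, not details taken from the QMAP paper.

```python
from datetime import datetime, timedelta

# Minimal sketch of a QMAP-style delinquency alert (hypothetical details):
# each critical task carries a due timestamp; incomplete tasks that are
# overdue, or due within the alert window, are flagged to a triage team
# while there is still time to avoid a late completion.

def pending_alerts(tasks, now, window=timedelta(hours=24)):
    """Return names of incomplete tasks that are overdue or due within `window`."""
    return [name for name, due, completed in tasks
            if completed is None and due - now <= window]

now = datetime(2016, 5, 1, 9, 0)
tasks = [  # (task name, due timestamp, completion timestamp or None)
    ("treatment plan completion", datetime(2016, 5, 1, 17, 0), None),
    ("weekly physics check", datetime(2016, 5, 4, 17, 0), None),
    ("CBCT review", datetime(2016, 4, 30, 17, 0), datetime(2016, 4, 30, 12, 0)),
]
print(pending_alerts(tasks, now))  # → ['treatment plan completion']
```

    The plan is due within 24 hours and not yet complete, so it is flagged; the physics check is not yet inside the window, and the completed CBCT review is ignored.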

  5. MINE WASTE TECHNOLOGY PROGRAM:HISTORICAL PERSPECTIVES. CURRENT HIGHLIGHTS, FUTURE OPPORTUNITIES

    EPA Science Inventory

    For the past 13 years, the Mine Waste Technology Program has been technically driven by the National Risk Management Research Lab. A portion of the MWTP funding has been used to perform field demonstrations of innovative technologies with the potential to address mine waste issue...

  6. Globalizing Flexible Work in Universities: Socio-Technical Dilemmas in Internationalizing Education

    ERIC Educational Resources Information Center

    Singh, Michael; Han, Jinghe

    2005-01-01

    Contemporary transitions in political and economic globalization are being used to press universities into becoming "transnational businesses," seemingly driven by a primary concern for marketing educational commodities. The neo-liberal politics driving these currents in universities are increasing the multiple online and offline…

  7. Converting quadratic entropy to diversity: Both animals and alleles are diverse, but some are more diverse than others

    PubMed Central

    2017-01-01

    The use of diversity metrics has a long history in population ecology, while population genetic work has been dominated by variance-derived metrics instead, a technical gap that has slowed cross-communication between the fields. Interestingly, Rao’s Quadratic Entropy (RQE), comparing elements for ‘degrees of divergence’, was originally developed for population ecology, but has recently been deployed for evolutionary studies. We here translate RQE into a continuous diversity analogue, and then construct a multiply nested diversity partition for alleles, individuals, populations, and species, each component of which exhibits the behavior of proper diversity metrics, and then translate these components into [0,1]-scaled form. We also deploy non-parametric statistical tests of the among-stratum components and novel tests of the homogeneity of within-stratum diversity components at any hierarchical level. We then illustrate this new analysis with eight nSSR loci and a pair of close Australian marsupial (Antechinus) congeners, using both ‘different is different’ and ‘degree of difference’ distance metrics. The total diversity in the collection is larger than that within either species, but most of the within-species diversity is resident within single populations. The combined A. agilis collection exhibits more diversity than does the combined A. stuartii collection, possibly attributable to localized differences in either local ecological disturbance regimes or differential levels of population isolation. Beyond exhibiting different allelic compositions, the two congeners are becoming more divergent for the arrays of allele sizes they possess. PMID:29088229
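
    As a generic illustration of the quantity being partitioned (not the authors' code), Rao's Quadratic Entropy for element frequencies p and pairwise distances d is Q = sum_i sum_j p_i p_j d_ij; with the 'different is different' metric it collapses to the familiar Gini-Simpson index.

```python
# Illustrative sketch: Rao's Quadratic Entropy Q = sum_ij p_i p_j d_ij for
# allele frequencies p and a pairwise distance matrix d. With the
# 'different is different' metric (d_ij = 0 if i == j else 1), Q reduces
# to the Gini-Simpson index 1 - sum_i p_i**2. Frequencies are invented.

def rao_q(p, d):
    """Expected distance between two elements drawn with frequencies p."""
    n = len(p)
    return sum(p[i] * p[j] * d[i][j] for i in range(n) for j in range(n))

p = [0.5, 0.3, 0.2]                      # allele frequencies at one locus
d01 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # 'different is different' distances
q = rao_q(p, d01)
gini_simpson = 1 - sum(x * x for x in p)
print(round(q, 6), round(gini_simpson, 6))  # → 0.62 0.62
```

    A 'degree of difference' metric simply substitutes graded distances (e.g. allele-size differences) for the 0/1 entries of d, which is the distinction the abstract draws between its two analyses.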

  8. Technical Note: Error metrics for estimating the accuracy of needle/instrument placement during transperineal magnetic resonance/ultrasound-guided prostate interventions.

    PubMed

    Bonmati, Ester; Hu, Yipeng; Villarini, Barbara; Rodell, Rachael; Martin, Paul; Han, Lianghao; Donaldson, Ian; Ahmed, Hashim U; Moore, Caroline M; Emberton, Mark; Barratt, Dean C

    2018-04-01

    Image-guided systems that fuse magnetic resonance imaging (MRI) with three-dimensional (3D) ultrasound (US) images for performing targeted prostate needle biopsy and minimally invasive treatments for prostate cancer are of increasing clinical interest. To date, a wide range of different accuracy estimation procedures and error metrics have been reported, which makes comparing the performance of different systems difficult. A set of nine measures are presented to assess the accuracy of MRI-US image registration, needle positioning, needle guidance, and overall system error, with the aim of providing a methodology for estimating the accuracy of instrument placement using an MR/US-guided transperineal approach. Using the SmartTarget fusion system, an MRI-US image alignment error was determined to be 2.0 ± 1.0 mm (mean ± SD), and an overall system instrument targeting error of 3.0 ± 1.2 mm. Three needle deployments for each target phantom lesion were found to result in a 100% lesion hit rate and a median predicted cancer core length of 5.2 mm. The application of a comprehensive, unbiased validation assessment for MR/US guided systems can provide useful information on system performance for quality assurance and system comparison. Furthermore, such an analysis can be helpful in identifying relationships between these errors, providing insight into the technical behavior of these systems. © 2018 American Association of Physicists in Medicine.

  9. Scientific and technical factors affecting the setting of Salmonella criteria for raw poultry: a global perspective.

    PubMed

    Mead, Geoffrey; Lammerding, Anna M; Cox, Nelson; Doyle, Michael P; Humbert, Florence; Kulikovskiy, Alexander; Panin, Alexander; do Nascimento, Vladimir Pinheiro; Wierup, Martin

    2010-08-01

    Concerns about foodborne salmonellosis have led many countries to introduce microbiological criteria for certain food products. If such criteria are not well-grounded in science, they could be an unjustified obstacle to trade. Raw poultry products are an important part of the global food market. Import and export ambiguities and regulatory confusion resulting from different Salmonella requirements were the impetus for convening an international group of scientific experts from 16 countries to discuss the scientific and technical issues that affect the setting of a microbiological criterion for Salmonella contamination of raw chicken. A particular concern for the group was the use of criteria implying a zero tolerance for Salmonella and suggesting complete absence of the pathogen. The notion can be interpreted differently by various stakeholders and was considered inappropriate because there is neither an effective means of eliminating Salmonella from raw poultry nor any practical method for verifying its absence. Therefore, it may be more useful at present to set food safety metrics that involve reductions in hazard levels. Such terms as "zero tolerance" or "absence of a microbe" in relation to raw poultry should be avoided unless defined and explained by international agreement. Risk assessment provides a more meaningful approach than a zero tolerance philosophy, and new metrics, such as performance objectives that are linked to human health outcomes, should be utilized throughout the food chain to help define risk and identify ways to reduce adverse effects on public health.

  10. Core Noise Reduction

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.

    2011-01-01

    This presentation is a technical summary of and outlook for NASA-internal and NASA-sponsored external research on core (combustor and turbine) noise funded by the Fundamental Aeronautics Program Subsonic Fixed Wing (SFW) Project. Sections of the presentation cover: the SFW system-level noise metrics for the 2015, 2020, and 2025 timeframes; turbofan design trends and their aeroacoustic implications; the emerging importance of core noise and its relevance to the SFW Reduce-Perceived-Noise Technical Challenge; and the current research activities in the core noise area. Recent work on the turbine-transmission loss of combustor noise is briefly described, two new NRA efforts in the core-noise area are outlined, and an effort to develop CMC-based acoustic liners for broadband noise reduction suitable for turbofan-core application is delineated. The NASA Fundamental Aeronautics Program has the principal objective of overcoming today's national challenges in air transportation. The reduction of aircraft noise is critical to enabling the anticipated large increase in future air traffic. The Subsonic Fixed Wing Project's Reduce-Perceived-Noise Technical Challenge aims to develop concepts and technologies to dramatically reduce the perceived aircraft noise outside of airport boundaries.

  11. NASA/WVU Software Research Laboratory, 1995

    NASA Technical Reports Server (NTRS)

    Sabolish, George J.; Callahan, John R.

    1995-01-01

    In our second year, the NASA/WVU Software Research Lab has made significant strides toward analysis and solution of major software problems related to V&V activities. We have established working relationships with many ongoing efforts within NASA and continue to provide valuable input into policy and decision-making processes. Through our publications, technical reports, lecture series, newsletters, and resources on the World-Wide-Web, we provide information to many NASA and external parties daily. This report is a summary and overview of some of our activities for the past year. This report is divided into 6 chapters: Introduction, People, Support Activities, Process, Metrics, and Testing. The Introduction chapter (this chapter) gives an overview of our project beginnings and targets. The People chapter focuses on new people who have joined the Lab this year. The Support chapter briefly lists activities like our WWW pages, Technical Report Series, Technical Lecture Series, and Research Quarterly newsletter. Finally, the remaining four chapters discuss the major research areas in which we have made significant progress toward producing meaningful task reports. These chapters can be regarded as portions of drafts of our task reports.

  12. The NASA Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Gottlich, Gretchen L.; Bianco, David J.; Paulson, Sharon S.; Binkley, Robert L.; Kellogg, Yvonne D.; Beaumont, Chris J.; Schmunk, Robert B.; Kurtz, Michael J.; Accomazzi, Alberto

    1995-01-01

    The National Aeronautics and Space Act of 1958 established NASA and charged it to "provide for the widest practicable and appropriate dissemination of information concerning its activities and the results thereof." The search for innovative methods to distribute NASA's information led a grass-roots team to create the NASA Technical Report Server (NTRS), which uses the World Wide Web and other popular Internet-based information systems as search engines. The NTRS is an inter-center effort which provides uniform access to various distributed publication servers residing on the Internet. Users have immediate desktop access to technical publications from NASA centers and institutes. The NTRS comprises several units, some constructed especially for inclusion in NTRS, and others that are existing NASA publication services that NTRS reuses. This paper presents the NTRS architecture, usage metrics, and the lessons learned while implementing and maintaining the service. The NTRS is largely constructed with freely available software running on existing hardware. NTRS builds upon existing hardware and software, and the resulting additional exposure for the body of literature contained ensures that NASA's institutional knowledge base will continue to receive the widest practicable and appropriate dissemination.

  13. A research review on clinical needs, technical requirements, and normativity in the design of surgical robots.

    PubMed

    Díaz, Carlos Eduardo; Fernández, Roemi; Armada, Manuel; García, Felipe

    2017-12-01

    Nowadays robots play an important role in society, mainly due to the significant benefits they provide when utilized for assisting human beings in the execution of dangerous or repetitive tasks. Medicine is one of the fields in which robots are gaining greater use and development, especially those employed in minimally invasive surgery (MIS). However, due to the particular conditions of the human body where robots have to act, the design of these systems is complex, not only from a technical point of view but also because clinical needs and normative aspects must be taken into account to achieve better performance and safer systems for patients and surgeons. Thus, this paper explores the clinical needs and technical requirements that will trace the roadmap for the next scientific and technological advances in robotic surgery, the metrics that should be defined for safe technology development, and the standards being elaborated to boost the industry and facilitate systems integration. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Technical College Graduate Perceptions of College and Career Readiness

    ERIC Educational Resources Information Center

    Hanson, Dale M.

    2013-01-01

    The United States workplace requires increased levels of postsecondary education to support workforce development for an economy driven by technology, automation and global competition. By 2018, 63 % of new jobs created will require postsecondary education (Carnevale, Smith, & Strohl, 2010). Currently, one in four graduates earns a bachelor's…

  15. The Progression of Podcasting/Vodcasting in a Technical Physics Class

    ERIC Educational Resources Information Center

    Glanville, Y. J.

    2010-01-01

    Technology such as Microsoft PowerPoint presentations, clickers, podcasting, and learning management suites is becoming prevalent in classrooms. Instructors are using these media in both large lecture hall settings and small classrooms with just a handful of students. Traditionally, each of these media is instructor driven. For instance,…

  16. Preliminary Training Proposal for Cessna Aircraft of Independence.

    ERIC Educational Resources Information Center

    Independence Community Coll., KS.

    This proposal for a program designed to train workers to manufacture single-engine, piston-driven aircraft for Cessna Corporation was developed by Independence Community College in conjunction with Pittsburgh State University (Kansas) and the Southeast Kansas Area Vocational-Technical School. The proposal provides for on-site training in a…

  17. Building a Sustainable Project Management Capacity in Cyprus

    ERIC Educational Resources Information Center

    Kelly, Steven J.; Esque, Timm J.; Novak, M. Mari; Cermakova, Anna

    2012-01-01

    The performance-driven project management program examined in this article was funded to support a variety of technical assistance efforts designed to strengthen the performance of small and medium enterprises in the Turkish Cypriot community in Cyprus. The customized program combined progressive workshops with hands-on and distance coaching by…

  18. Calibration of Resistance Factors Needed in the LRFD Design of Driven Piles : LTRC technical summary report 449.

    DOT National Transportation Integrated Search

    2009-05-01

    The allowable stress design (ASD) method had been used for many years in the design of bridges, which involves applying a factor of safety (FS) to account for uncertainties in applied loads and soil resistance. The magnitude of FS depends on the ...

  19. Developing Occupational Programs: A Case Study of Four Arkansas Community Colleges

    ERIC Educational Resources Information Center

    Doyle, Duane Edward

    2011-01-01

    This study examines how differences in the environmental conditions and organizational factors facing each community college contribute to the development of occupational and technical education programs. This study was driven by one primary research question: What environmental conditions and organizational factors influence the nature of the…

  20. Quo Vadimus? The 21st Century and Multimedia.

    ERIC Educational Resources Information Center

    Kuhn, Allan D.

    This paper relates the concept of computer-driven multimedia to the National Aeronautics and Space Administration (NASA) Scientific and Technical Information Program (STIP). Multimedia is defined here as computer integration and output of text, animation, audio, video, and graphics. Multimedia is the stage of computer-based information that allows…

  1. The New Morbidity and the Prevention of Mental Retardation.

    ERIC Educational Resources Information Center

    Baumeister, Alfred A.

    1988-01-01

    Efforts to prevent mental retardation have been encumbered by lack of scientific and technical knowledge, vague understanding of incidence and prevalence, and scarcity of resources to implement effective public policies. Scientific and social progress toward prevention has pursued a wavelike, erratic course, driven primarily by prevailing social,…

  2. Cost and Precision of Brownian Clocks

    NASA Astrophysics Data System (ADS)

    Barato, Andre C.; Seifert, Udo

    2016-10-01

    Brownian clocks are biomolecular networks that can count time. Paradigmatic examples are proteins that go through a cycle, thus regulating some oscillatory behavior in a living system. Typically, such a cycle requires free energy often provided by ATP hydrolysis. We investigate the relation between the precision of such a clock and its thermodynamic costs. For clocks driven by a constant thermodynamic force, a given precision requires a minimal cost that diverges as the uncertainty of the clock vanishes. In marked contrast, we show that a clock driven by a periodic variation of an external protocol can achieve arbitrary precision at arbitrarily low cost. This result constitutes a fundamental difference between processes driven by a fixed thermodynamic force and those driven periodically. As a main technical tool, we map a periodically driven system with a deterministic protocol to one subject to an external protocol that changes in stochastic time intervals, which simplifies calculations significantly. In the nonequilibrium steady state of the resulting bipartite Markov process, the uncertainty of the clock can be deduced from the calculable dispersion of a corresponding current.
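
    The constant-force trade-off the abstract refers to is the thermodynamic uncertainty relation from the authors' earlier work; in notation chosen here (not taken from the abstract), a standard steady-state statement for an integrated current J over a fixed observation time reads:

```latex
\frac{\operatorname{Var}(J)}{\langle J \rangle^{2}} \;\ge\; \frac{2 k_{B}}{\Delta S_{\mathrm{tot}}}
```

    so driving the clock's relative uncertainty to zero forces the total entropy production (the thermodynamic cost) to diverge, which is the divergence described above for fixed-force driving; the periodically driven protocol is not subject to this steady-state bound.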

  3. Light-emitting diode technology status and directions: Opportunities for horticultural lighting

    DOE PAGES

    Tsao, Jeffrey Y.; Pattison, P. Morgan; Krames, Michael R.

    2016-01-01

    Here, light-emitting diode (LED) technology has advanced rapidly over the last decade, primarily driven by display and general illumination applications ("solid-state lighting (SSL) for humans"). These advancements have made LED lighting technically and economically advantageous not only for these applications, but also, as an indirect benefit, for adjacent applications such as horticultural lighting ("SSL for plants"). Moreover, LED technology has much room for continued improvement. In the near-term, these improvements will continue to be driven by SSL for humans (with indirect benefit to SSL for plants), the most important of which can be anticipated.

  4. Impact of holistic review on student interview pool diversity.

    PubMed

    Grabowski, Christina J

    2017-12-29

    Diversity in the physician workforce lags behind the rapidly changing US population. Since the gateway to becoming a physician is medical school, diversity must be addressed in the admissions process. The Association of American Medical Colleges has implemented a Holistic Review Initiative aimed at assisting medical schools with broadening admission criteria to include relevant, mission-driven attributes and experiences in addition to academic preparation to identify applicants poised to meet the needs of a diverse patient population. More evidence is needed to determine whether holistic review results in a more diverse selection process. One of the keys to holistic review is to apply holistic principles in all stages of the selection process to ensure qualified applicants are not overlooked. This study examines whether the use of holistic review during application screening at a new medical school increased the diversity of applicants selected for interview. Using retrospective data from the first five application cycles at the Oakland University William Beaumont School of Medicine (OUWB), the author compared demographic and experiential differences between the applicants selected using holistic review, including experiences, attributes and academic metrics, to a test sample selected solely using academic metrics. The dataset consisted of the total group of applicants selected for interview in 2011 through 2015 using holistic review (n = 2773) and the same number of applicants who would have been selected for an interview using an academic-only selection model (n = 2773), which included 1204 applicants who were selected using both methods (final n = 4342). The author used a combination of cross-tabulation and analysis of variance to identify differences between applicants selected using holistic review and applicants in the test sample selected using only academics. 
The holistic review process yielded a significantly higher than expected percent of female (adj. resid. = 13.2, p < .01), traditionally underrepresented in medicine (adj. resid. = 15.8, p < .01), first generation (adj. resid. = 5.8, p < .01), and self-identified disadvantaged (adj. resid. = 11.5, p < .01) applicants in the interview pool than selected using academic metrics alone. In addition, holistically selected applicants averaged significantly more hours than academically selected students in the areas of pre-medical school paid employment (F = 10.99, mean difference = 657.99, p < .01) and community service (F = 15.36, mean difference = 475.58, p < .01). Using mission-driven, holistic admissions criteria composed of applicant attributes and experiences in addition to academic metrics resulted in a more diverse interview pool than using academic metrics alone. These findings add support for the use of holistic review in the application screening process as a means for increasing diversity in medical school interview pools.

  5. Per-pixel bias-variance decomposition of continuous errors in data-driven geospatial modeling: A case study in environmental remote sensing

    NASA Astrophysics Data System (ADS)

    Gao, Jing; Burt, James E.

    2017-12-01

    This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method originating in machine learning that has not been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging), and inform efficient training sample allocation: training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
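
    The identity underlying the per-pixel decomposition can be shown with a generic sketch (invented ensemble values, not the study's data): with no observation noise, the mean squared error at a pixel splits exactly into squared bias plus ensemble variance.

```python
# Textbook bias-variance identity at a single pixel, for an ensemble of
# model runs estimating a continuous target (e.g. % imperviousness):
#   MSE = bias**2 + variance   (noise-free case)
# The ensemble values and truth below are invented for illustration.

def bvd_pixel(preds, truth):
    """Return (bias, variance, mse) of ensemble predictions at one pixel."""
    n = len(preds)
    mean = sum(preds) / n
    bias = mean - truth
    variance = sum((p - mean) ** 2 for p in preds) / n  # population variance
    mse = sum((p - truth) ** 2 for p in preds) / n
    return bias, variance, mse

preds = [42.0, 45.0, 39.0, 44.0]  # four model runs' estimates at one pixel
truth = 40.0
bias, var, mse = bvd_pixel(preds, truth)
print(bias, var, mse)  # → 2.5 5.25 11.5, and 2.5**2 + 5.25 == 11.5
```

    Mapping bias and variance separately over all pixels is what lets the bias map expose spatial non-stationarity while the variance map flags where ensembling or extra training samples would help.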

  6. Exploring sales data during a healthy corner store intervention in Toronto: the Food Retail Environments Shaping Health (FRESH) project.

    PubMed

    Minaker, Leia M; Lynch, Meghan; Cook, Brian E; Mah, Catherine L

    2017-10-01

    Population health interventions in the retail food environment, such as corner store interventions, aim to influence the kind of cues consumers receive so that they are more often directed toward healthier options. Research that addresses financial aspects of retail interventions, particularly using outcome measures such as store sales that are central to retail decision making, is limited. This study explored store sales over time and across product categories during a healthy corner store intervention in a low-income neighbourhood in Toronto, Ontario. Sales data (from August 2014 to April 2015) were aggregated by product category and by day. We used Microsoft Excel pivot tables to summarize and visually present sales data. We conducted t-tests to examine differences in product category sales by "peak" versus "nonpeak" sales days. Overall store sales peaked on the days at the end of each month, aligned with the issuing of social assistance payments. Revenue spikes on peak sales days were driven predominantly by transit pass sales. On peak sales days, mean sales of nonnutritious snacks and cigarettes were marginally higher than on other days of the month. Finally, creative strategies to increase sales of fresh vegetables and fruits seemed to substantially increase revenue from these product categories. Store sales data is an important store-level metric of food environment intervention success. Furthermore, data-driven decision making by retailers can be important for tailoring interventions. Future interventions and research should consider partnerships and additional success metrics for retail food environment interventions in diverse Canadian contexts.
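
    The peak/non-peak comparison can be sketched with plain aggregation; every number, category, and day below is invented for illustration and is not the study's data.

```python
# Hypothetical sketch of the study's analysis shape: aggregate daily sales
# by product category, then compare category means on end-of-month "peak"
# days against all other days. (The study used Excel pivot tables and
# t-tests; this shows only the grouping and mean comparison.)

daily_sales = [  # (day of month, product category, sales in dollars)
    (28, "transit passes", 310.0), (29, "transit passes", 295.0),
    (12, "transit passes", 40.0), (15, "transit passes", 55.0),
    (28, "fresh produce", 60.0), (14, "fresh produce", 48.0),
]

def mean_by_group(rows, category, peak_days):
    """Mean sales for one category on peak vs. non-peak days."""
    peak = [s for d, c, s in rows if c == category and d in peak_days]
    off = [s for d, c, s in rows if c == category and d not in peak_days]
    return sum(peak) / len(peak), sum(off) / len(off)

peak_days = {28, 29, 30, 31}  # end-of-month days, per the payment cycle
peak_mean, off_mean = mean_by_group(daily_sales, "transit passes", peak_days)
print(peak_mean, off_mean)  # → 302.5 47.5: transit passes drive the spike
```

    With daily rows in this shape, a t-test over the two groups (as the study ran) is a one-line follow-up in any statistics package.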

  7. Exploring sales data during a healthy corner store intervention in Toronto: the Food Retail Environments Shaping Health (FRESH) project

    PubMed Central

    Leia M., Minaker; Meghan, Lynch; Brian E., Cook; Catherine L., Mah

    2017-01-01

    Abstract Introduction: Population health interventions in the retail food environment, such as corner store interventions, aim to influence the kind of cues consumers receive so that they are more often directed toward healthier options. Research that addresses financial aspects of retail interventions, particularly using outcome measures such as store sales that are central to retail decision making, is limited. This study explored store sales over time and across product categories during a healthy corner store intervention in a low-income neighbourhood in Toronto, Ontario. Methods: Sales data (from August 2014 to April 2015) were aggregated by product category and by day. We used Microsoft Excel pivot tables to summarize and visually present sales data. We conducted t-tests to examine differences in product category sales by “peak” versus “nonpeak” sales days. Results: Overall store sales peaked on the days at the end of each month, aligned with the issuing of social assistance payments. Revenue spikes on peak sales days were driven predominantly by transit pass sales. On peak sales days, mean sales of nonnutritious snacks and cigarettes were marginally higher than on other days of the month. Finally, creative strategies to increase sales of fresh vegetables and fruits seemed to substantially increase revenue from these product categories. Conclusion: Store sales data are an important store-level metric of food environment intervention success. Furthermore, data-driven decision making by retailers can be important for tailoring interventions. Future interventions and research should consider partnerships and additional success metrics for retail food environment interventions in diverse Canadian contexts. PMID:29043761

  8. Fish habitat degradation in U.S. reservoirs

    USGS Publications Warehouse

    Miranda, Leandro E.; Spickard, M.; Dunn, T.; Webb, K.M.; Aycock, J.N.; Hunt, K.

    2010-01-01

    As the median age of the thousands of large reservoirs (> 200 ha) in the United States tops 50, many are showing various signs of fish habitat degradation. Our goal was to identify major factors degrading fish habitat in reservoirs across the country, and to explore regional degradation patterns. An online survey including 14 metrics was scored on a 0 (no degradation) to 5 (high degradation) point scale by 221 fisheries scientists (92% response rate) to describe degradation in 482 reservoirs randomly distributed throughout the continental United States. The highest scored sources of degradation were lack of aquatic macrophytes (41% of the reservoirs scored as 4–5), lack or loss of woody debris (35% scored 4–5), mistimed water level fluctuations (34% scored 4–5), and sedimentation (31% scored 4–5). Factor analysis identified five primary degradation factors that accounted for most of the variability in the 14 degradation metrics. The factors reflected siltation, structural habitat, eutrophication, water regime, and aquatic plants. Three degradation factors were driven principally by in-reservoir processes, whereas the other two were driven by inputs from the watershed. A comparison across U.S. regions indicated significant geographical differences in degradation relative to the factors emphasized by each region. Reservoirs sometimes have been dismissed as unnatural and disruptive, but they are a product of public policy, a critical feature of landscapes, and they cannot be overlooked if managers are to effectively conserve river systems. Protection and restoration of reservoir habitats may be enhanced with a broader perspective that includes watershed management, in addition to in-reservoir activities.

  9. Fish habitat degradation in U.S. reservoirs

    USGS Publications Warehouse

    Miranda, L.E.; Spickard, M.; Dunn, T.; Webb, K.M.; Aycock, J.N.; Hunt, K.

    2010-01-01

    As the median age of the thousands of large reservoirs (> 200 ha) in the United States tops 50, many are showing various signs of fish habitat degradation. Our goal was to identify major factors degrading fish habitat in reservoirs across the country, and to explore regional degradation patterns. An online survey including 14 metrics was scored on a 0 (no degradation) to 5 (high degradation) point scale by 221 fisheries scientists (92% response rate) to describe degradation in 482 reservoirs randomly distributed throughout the continental United States. The highest scored sources of degradation were lack of aquatic macrophytes (41% of the reservoirs scored as 4-5), lack or loss of woody debris (35% scored 4-5), mistimed water level fluctuations (34% scored 4-5), and sedimentation (31% scored 4-5). Factor analysis identified five primary degradation factors that accounted for most of the variability in the 14 degradation metrics. The factors reflected siltation, structural habitat, eutrophication, water regime, and aquatic plants. Three degradation factors were driven principally by in-reservoir processes, whereas the other two were driven by inputs from the watershed. A comparison across U.S. regions indicated significant geographical differences in degradation relative to the factors emphasized by each region. Reservoirs sometimes have been dismissed as unnatural and disruptive, but they are a product of public policy, a critical feature of landscapes, and they cannot be overlooked if managers are to effectively conserve river systems. Protection and restoration of reservoir habitats may be enhanced with a broader perspective that includes watershed management, in addition to in-reservoir activities.

  10. Combining Knowledge and Data Driven Insights for Identifying Risk Factors using Electronic Health Records

    PubMed Central

    Sun, Jimeng; Hu, Jianying; Luo, Dijun; Markatou, Marianthi; Wang, Fei; Edabollahi, Shahram; Steinhubl, Steven E.; Daar, Zahra; Stewart, Walter F.

    2012-01-01

    Background: The ability to identify the risk factors related to an adverse condition, e.g., heart failure (HF) diagnosis, is very important for improving care quality and reducing cost. Existing approaches for risk factor identification are either knowledge driven (from guidelines or the literature) or data driven (from observational data). No existing method provides a model to effectively combine expert knowledge with data driven insight for risk factor identification. Methods: We present a systematic approach to enhance known knowledge-based risk factors with additional potential risk factors derived from data. The core of our approach is a sparse regression model with regularization terms that correspond to both knowledge and data driven risk factors. Results: The approach is validated using a large dataset containing 4,644 heart failure cases and 45,981 controls. The outpatient electronic health records (EHRs) for these patients include diagnosis, medication, and lab results from 2003–2010. We demonstrate that the proposed method can identify complementary risk factors that are not in the existing known factors and can better predict the onset of HF. We quantitatively compare different sets of risk factors in the context of predicting onset of HF using the performance metric, the Area Under the ROC Curve (AUC). The combined risk factors between knowledge and data significantly outperform knowledge-based risk factors alone. Furthermore, those additional risk factors are confirmed to be clinically meaningful by a cardiologist. Conclusion: We present a systematic framework for combining knowledge and data driven insights for risk factor identification. We demonstrate the power of this framework in the context of predicting onset of HF, where our approach can successfully identify intuitive and predictive risk factors beyond a set of known HF risk factors. PMID:23304365
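
    One way to realize "regularization terms that correspond to both knowledge and data driven risk factors" is a weighted lasso: known (knowledge-based) factors get little or no L1 penalty, while candidate data-driven factors are shrunk. The ISTA solver below is a hedged sketch of that idea, not the paper's actual model or penalty structure:

```python
import numpy as np

def weighted_lasso_ista(X, y, penalty_weights, lam=0.1, n_iter=500):
    """ISTA for L1-regularized least squares with a per-feature penalty weight.
    Setting penalty_weights[j] = 0 leaves feature j (a known risk factor)
    unshrunk; positive weights shrink candidate data-driven factors."""
    n, p = X.shape
    L = np.linalg.eigvalsh(X.T @ X / n).max()      # Lipschitz constant of the gradient
    step = 1.0 / L
    w = np.zeros(p)
    thresh = step * lam * np.asarray(penalty_weights, dtype=float)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n               # gradient of 0.5/n * ||Xw - y||^2
        z = w - step * grad
        w = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)  # soft threshold
    return w
```

    Features surviving with nonzero coefficients would then be the candidate "complementary" risk factors to review clinically.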

  11. Technical Note: A Feasibility Study of Using the Flat Panel Detector on Linac for the kV X-ray Generator Test.

    PubMed

    Cai, Bin; Dolly, Steven; Kamal, Gregory; Yaddanapudi, Sridhar; Sun, Baozhou; Goddu, S Murty; Mutic, Sasa; Li, Hua

    2018-04-28

    To investigate the feasibility of using the kV flat panel detector on a linac for consistency evaluations of kV X-ray generator performance, an in-house designed aluminum (Al) array phantom with six 9×9 cm² square regions of varying thickness was proposed and used in this study. Through XML script-driven image acquisition, kV images with various acquisition settings were obtained using the kV flat panel detector. Utilizing pre-established baseline curves, the consistency of X-ray tube output characteristics, including tube voltage accuracy, exposure accuracy and exposure linearity, was assessed through image quality assessment metrics including ROI mean intensity, ROI standard deviation (SD) and noise power spectra (NPS). The robustness of this method was tested on two linacs over a three-month period. With the proposed method, tube voltage accuracy can be verified through a consistency check with a 2% tolerance and 2 kVp intervals for forty different kVp settings. The exposure accuracy can be tested with a 4% consistency tolerance for three mAs settings over forty kVp settings. The exposure linearity tested with three mAs settings achieved a coefficient of variation (CV) of 0.1. We proposed a novel approach that uses the kV flat panel detector available on the linac for the X-ray generator test. This approach eliminates the inefficiencies and variability associated with using third-party QA detectors while enabling an automated process. This article is protected by copyright. All rights reserved.
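
    The exposure-linearity check reduces to a coefficient of variation on the mAs-normalized detector signal: for a linear generator, intensity per mAs is constant, so the CV stays small. A minimal sketch (the normalization and CV definition are standard; the exact intensity metric used in the study may differ):

```python
from statistics import mean, pstdev

def exposure_linearity_cv(mas_settings, mean_intensities):
    """Coefficient of variation of ROI mean intensity per mAs across exposure
    settings; near-zero CV indicates a linear X-ray generator response."""
    normalized = [i / m for i, m in zip(mean_intensities, mas_settings)]
    return pstdev(normalized) / mean(normalized)
```

    A tolerance such as the abstract's CV of 0.1 would then be asserted against this value in an automated QA script.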

  12. FacetGist: Collective Extraction of Document Facets in Large Technical Corpora.

    PubMed

    Siddiqui, Tarique; Ren, Xiang; Parameswaran, Aditya; Han, Jiawei

    2016-10-01

    Given the large volume of technical documents available, it is crucial to automatically organize and categorize these documents to be able to understand and extract value from them. Towards this end, we introduce a new research problem called Facet Extraction. Given a collection of technical documents, the goal of Facet Extraction is to automatically label each document with a set of concepts for the key facets ( e.g. , application, technique, evaluation metrics, and dataset) that people may be interested in. Facet Extraction has numerous applications, including document summarization, literature search, patent search and business intelligence. The major challenge in performing Facet Extraction arises from multiple sources: concept extraction, concept to facet matching, and facet disambiguation. To tackle these challenges, we develop FacetGist, a framework for facet extraction. Facet Extraction involves constructing a graph-based heterogeneous network to capture information available across multiple local sentence-level features, as well as global context features. We then formulate a joint optimization problem, and propose an efficient algorithm for graph-based label propagation to estimate the facet of each concept mention. Experimental results on technical corpora from two domains demonstrate that Facet Extraction can lead to an improvement of over 25% in both precision and recall over competing schemes.

  13. FacetGist: Collective Extraction of Document Facets in Large Technical Corpora

    PubMed Central

    Siddiqui, Tarique; Ren, Xiang; Parameswaran, Aditya; Han, Jiawei

    2017-01-01

    Given the large volume of technical documents available, it is crucial to automatically organize and categorize these documents to be able to understand and extract value from them. Towards this end, we introduce a new research problem called Facet Extraction. Given a collection of technical documents, the goal of Facet Extraction is to automatically label each document with a set of concepts for the key facets (e.g., application, technique, evaluation metrics, and dataset) that people may be interested in. Facet Extraction has numerous applications, including document summarization, literature search, patent search and business intelligence. The major challenge in performing Facet Extraction arises from multiple sources: concept extraction, concept to facet matching, and facet disambiguation. To tackle these challenges, we develop FacetGist, a framework for facet extraction. Facet Extraction involves constructing a graph-based heterogeneous network to capture information available across multiple local sentence-level features, as well as global context features. We then formulate a joint optimization problem, and propose an efficient algorithm for graph-based label propagation to estimate the facet of each concept mention. Experimental results on technical corpora from two domains demonstrate that Facet Extraction can lead to an improvement of over 25% in both precision and recall over competing schemes. PMID:28210517

  14. Land surface phenology

    USGS Publications Warehouse

    Hanes, Jonathan M.; Liang, Liang; Morisette, Jeffrey T.

    2013-01-01

    Certain vegetation types (e.g., deciduous shrubs, deciduous trees, grasslands) have distinct life cycles marked by the growth and senescence of leaves and periods of enhanced photosynthetic activity. Where these types exist, recurring changes in foliage alter the reflectance of electromagnetic radiation from the land surface, which can be measured using remote sensors. The timing of these recurring changes in reflectance is called land surface phenology (LSP). During recent decades, a variety of methods have been used to derive LSP metrics from time series of reflectance measurements acquired by satellite-borne sensors. In contrast to conventional phenology observations, LSP metrics represent the timing of reflectance changes that are driven by the aggregate activity of vegetation within the areal unit measured by the satellite sensor and do not directly provide information about the phenology of individual plants, species, or their phenophases. Despite the generalized nature of satellite sensor-derived measurements, they have proven useful for studying changes in LSP associated with various phenomena. This chapter provides a detailed overview of the use of satellite remote sensing to monitor LSP. First, the theoretical basis for the application of satellite remote sensing to the study of vegetation phenology is presented. After establishing a theoretical foundation for LSP, methods of deriving and validating LSP metrics are discussed. This chapter concludes with a discussion of major research findings and current and future research directions.
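
    A common way to derive an LSP metric from a reflectance time series is a threshold on the seasonal amplitude of a vegetation index: green-up (start of season) is the date the index first crosses, say, half its seasonal range. The sketch below illustrates that family of methods; the chapter surveys several alternatives (fitted logistic curves, derivatives, etc.), so this is one representative approach, not the method of record:

```python
def start_of_season(doys, ndvi, frac=0.5):
    """Day of year at which NDVI first crosses a fraction of its seasonal
    amplitude, with linear interpolation between observations.
    Returns None if the threshold is never crossed upward."""
    lo, hi = min(ndvi), max(ndvi)
    threshold = lo + frac * (hi - lo)
    for (d0, v0), (d1, v1) in zip(zip(doys, ndvi), zip(doys[1:], ndvi[1:])):
        if v0 < threshold <= v1:
            # interpolate the crossing date between the two observations
            return d0 + (threshold - v0) * (d1 - d0) / (v1 - v0)
    return None
```
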

  15. Consideration of reference points for the management of renewable resources under an adaptive management paradigm

    USGS Publications Warehouse

    Irwin, Brian J.; Conroy, Michael J.

    2013-01-01

    The success of natural resource management depends on monitoring, assessment and enforcement. In support of these efforts, reference points (RPs) are often viewed as critical values of management-relevant indicators. This paper considers RPs from the standpoint of objective-driven decision making in dynamic resource systems, guided by principles of structured decision making (SDM) and adaptive resource management (AM). During the development of natural resource policy, RPs have been variously treated as either ‘targets’ or ‘triggers’. Under a SDM/AM paradigm, target RPs correspond approximately to value-based objectives, which may in turn be either of fundamental interest to stakeholders or intermediaries to other central objectives. By contrast, trigger RPs correspond to decision rules that are presumed to lead to desirable outcomes (such as the programme targets). Casting RPs as triggers or targets within a SDM framework is helpful towards clarifying why (or whether) a particular metric is appropriate. Further, the benefits of a SDM/AM process include elucidation of underlying untested assumptions that may reveal alternative metrics for use as RPs. Likewise, a structured decision-analytic framework may also reveal that failure to achieve management goals is not because the metrics are wrong, but because the decision-making process in which they are embedded is insufficiently robust to uncertainty, is not efficiently directed at producing a resource objective, or is incapable of adaptation to new knowledge.

  16. Quantifying urban growth patterns in Hanoi using landscape expansion modes and time series spatial metrics

    PubMed Central

    Lepczyk, Christopher A.; Miura, Tomoaki; Fox, Jefferson M.

    2018-01-01

    Urbanization has been driven by various social, economic, and political factors around the world for centuries. Because urbanization continues unabated in many places, it is crucial to understand patterns of urbanization and their potential ecological and environmental impacts. Given this need, the objectives of our study were to quantify urban growth rates, growth modes, and resultant changes in the landscape pattern of urbanization in Hanoi, Vietnam from 1993 to 2010 and to evaluate the extent to which the process of urban growth in Hanoi conformed to the diffusion-coalescence theory. We analyzed the spatiotemporal patterns and dynamics of the built-up land in Hanoi using landscape expansion modes, spatial metrics, and a gradient approach. Urbanization was most pronounced in the periods of 2001–2006 and 2006–2010 at a distance of 10 to 35 km around the urban center. Over the 17 year period urban expansion in Hanoi was dominated by infilling and edge expansion growth modes. Our findings support the diffusion-coalescence theory of urbanization. The shift of the urban growth areas over time and the dynamic nature of the spatial metrics revealed important information about our understanding of the urban growth process and cycle. Furthermore, our findings can be used to evaluate urban planning policies and aid in urbanization issues in rapidly urbanizing countries. PMID:29734346

  17. Quantitative analysis of forest fragmentation in the atlantic forest reveals more threatened bird species than the current red list.

    PubMed

    Schnell, Jessica K; Harris, Grant M; Pimm, Stuart L; Russell, Gareth J

    2013-01-01

    Habitat loss and attendant fragmentation threaten the existence of many species. Conserving these species requires a straightforward and objective method that quantifies how these factors affect their survival. Therefore, we compared a variety of metrics that assess habitat fragmentation in bird ranges, using the geographical ranges of 127 forest endemic passerine birds inhabiting the Atlantic Forest of Brazil. A common, non-biological metric - cumulative area of size-ranked fragments within a species range - was misleading, as the least threatened species had the most habitat fragmentation. Instead, we recommend a modified version of metapopulation capacity. The metric links detailed spatial information on fragment sizes and spatial configuration to the birds' abilities to occupy and disperse across large areas (100,000+ km²). In the Atlantic Forest, metapopulation capacities were largely bimodal, in that most species' ranges had either low capacity (high risk of extinction) or high capacity (very small risk of extinction). This pattern persisted within taxonomically and ecologically homogenous groups, indicating that it is driven by fragmentation patterns and not differences in species ecology. Worryingly, we found IUCN considers some 28 of 58 species in the low metapopulation capacity cluster to not be threatened. We propose that assessing the effect of fragmentation will separate species more clearly into distinct risk categories than does a simple assessment of remaining habitat.
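
    Metapopulation capacity is the leading eigenvalue of a landscape matrix built from fragment areas and pairwise distances. The sketch below implements the classic Hanski-Ovaskainen form, M[i][j] = A_i A_j exp(-d_ij / alpha) with zero diagonal, via power iteration; the paper uses a modified version of this metric, so treat the exact kernel as an assumption:

```python
import math

def metapopulation_capacity(areas, dists, alpha):
    """Leading eigenvalue of the landscape matrix
    M[i][j] = A_i * A_j * exp(-d_ij / alpha)  (i != j, zero diagonal),
    computed by power iteration. alpha sets the dispersal distance scale."""
    n = len(areas)
    M = [[0.0 if i == j else areas[i] * areas[j] * math.exp(-dists[i][j] / alpha)
          for j in range(n)] for i in range(n)]
    v = [1.0] * n
    lam = 0.0
    for _ in range(200):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)     # max-norm estimate of the eigenvalue
        v = [x / lam for x in w]
    return lam
```

    Larger fragments that are close together raise the eigenvalue, which is why the metric separates ranges into the low-capacity and high-capacity clusters the abstract describes.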

  18. Prognostics of Power Mosfets Under Thermal Stress Accelerated Aging Using Data-Driven and Model-Based Methodologies

    NASA Technical Reports Server (NTRS)

    Celaya, Jose; Saxena, Abhinav; Saha, Sankalita; Goebel, Kai F.

    2011-01-01

    An approach for predicting remaining useful life of power MOSFET (metal-oxide-semiconductor field-effect transistor) devices has been developed. Power MOSFETs are semiconductor switching devices that are instrumental in electronics equipment such as those used in operation and control of modern aircraft and spacecraft. The MOSFETs examined here were aged under thermal overstress in a controlled experiment and continuous performance degradation data were collected from the accelerated aging experiment. Die-attach degradation was determined to be the primary failure mode. The collected run-to-failure data were analyzed and it was revealed that ON-state resistance increased as die-attach degraded under high thermal stresses. Results from finite element simulation analysis support the observations from the experimental data. Data-driven and model-based prognostics algorithms were investigated where ON-state resistance was used as the primary precursor of failure feature. A Gaussian process regression algorithm was explored as an example for a data-driven technique and an extended Kalman filter and a particle filter were used as examples for model-based techniques. Both methods were able to provide valid results. Prognostic performance metrics were employed to evaluate and compare the algorithms.
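
    The essence of model-based RUL prediction from a degradation precursor can be illustrated with a deliberately simple stand-in for the filters above: assume exponential growth of ON-state resistance, fit ln R(t) by least squares, and extrapolate to a failure threshold. This is a hedged sketch of the idea, far simpler than the extended Kalman or particle filters the paper actually evaluates:

```python
import math

def predict_rul(times, resistances, failure_threshold):
    """Fit R(t) = R0 * exp(k t) via linear regression on ln R, then
    extrapolate to the failure threshold; RUL is measured from the last sample."""
    n = len(times)
    ys = [math.log(r) for r in resistances]
    t_mean = sum(times) / n
    y_mean = sum(ys) / n
    k = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ys)) \
        / sum((t - t_mean) ** 2 for t in times)      # growth-rate estimate
    b = y_mean - k * t_mean                          # b = ln R0
    t_fail = (math.log(failure_threshold) - b) / k   # threshold-crossing time
    return t_fail - times[-1]
```

    A filter-based version would instead update (R0, k) recursively as each new measurement arrives, yielding an RUL distribution rather than a point estimate.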

  19. Seasonal patterns in stream periphyton fatty acids and community benthic algal composition in six high quality headwater streams

    USGS Publications Warehouse

    Honeyfield, Dale C.; Maloney, Kelly O.

    2015-01-01

    Fatty acids are integral components of periphyton and differ among algal taxa. We examined seasonal patterns in periphyton fatty acids in six minimally disturbed headwater streams in Pennsylvania’s Appalachian Mountains, USA. Environmental data and periphyton were collected across four seasons for fatty acid and algal taxa content. Non-metric multidimensional scaling ordination suggested significant seasonal differences in fatty acids; an ordination on algal composition revealed similar seasonal patterns, but with slightly weaker separation of summer and fall. Summer and fall fatty acid profiles were driven by temperature, overstory cover, and conductivity and winter profiles by measures of stream size. Ordination on algal composition suggested that summer and fall communities were driven by overstory and temperature, whereas winter communities were driven by velocity. The physiologically important fatty acid 18:3ω6 was highest in summer and fall. Winter samples had the highest 20:3ω3. Six saturated fatty acids differed among the seasons. Periphyton fatty acid profiles appeared to reflect benthic algal species composition. This suggests that periphyton fatty acid composition can be useful in characterizing basal food resources and stream water quality.

  20. Applying Thiessen Polygon Catchment Areas and Gridded Population Weights to Estimate Conflict-Driven Population Changes in South Sudan

    NASA Astrophysics Data System (ADS)

    Jordan, L.

    2017-10-01

    Recent violence in South Sudan produced significant levels of conflict-driven migration undermining the accuracy and utility of both national and local level population forecasts commonly used in demographic estimates, public health metrics and food security proxies. This article explores the use of Thiessen Polygons and population grids (Gridded Population of the World, WorldPop and LandScan) as weights for estimating the catchment areas for settlement locations that serve large populations of internally displaced persons (IDP), in order to estimate the county-level in- and out-migration attributable to conflict-driven displacement between 2014-2015. Acknowledging IDP totals improves internal population estimates presented by global population databases. Unlike other forecasts, which produce spatially uniform increases in population, accounting for displaced population reveals that 15 percent of counties (n = 12) increased in population over 20 percent, and 30 percent of counties (n = 24) experienced zero or declining population growth, due to internal displacement and refugee out-migration. Adopting Thiessen Polygon catchment zones for internal migration estimation can be applied to other areas with United Nations IDP settlement data, such as Yemen, Somalia, and Nigeria.
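
    Thiessen (Voronoi) catchments are equivalent to nearest-settlement assignment: every populated grid cell lies in the polygon of the settlement it is closest to. The sketch below shows that weighting step on toy coordinates; real implementations would use geodesic distances and a gridded population raster such as WorldPop or LandScan, and the names here are illustrative:

```python
def assign_catchments(grid_cells, settlements):
    """Attribute each populated grid cell to its nearest settlement and sum
    cell populations per settlement (the Thiessen-polygon catchment total).

    grid_cells:  [((x, y), population), ...]
    settlements: [(name, (x, y)), ...]
    """
    totals = {name: 0 for name, _ in settlements}
    for (x, y), pop in grid_cells:
        nearest = min(settlements,
                      key=lambda s: (s[1][0] - x) ** 2 + (s[1][1] - y) ** 2)
        totals[nearest[0]] += pop
    return totals
```

    Comparing catchment totals before and after displacement events is what yields the county-level in- and out-migration estimates described above.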

  1. Comparisons of Derived Metrics from Computed Tomography (CT) Scanned Images of Fluvial Sediment from Gravel-Bed Flume Experiments

    NASA Astrophysics Data System (ADS)

    Voepel, Hal; Ahmed, Sharif; Hodge, Rebecca; Leyland, Julian; Sear, David

    2016-04-01

    Uncertainty in bedload estimates for gravel bed rivers is largely driven by our inability to characterize arrangement, orientation and resultant forces of fluvial sediment in river beds. Water working of grains leads to structural differences between areas of the bed through particle sorting, packing, imbrication, mortaring and degree of bed armoring. In this study, non-destructive, micro-focus X-ray computed tomography (CT) imaging in 3D is used to visualize, quantify and assess the internal geometry of sections of a flume bed that have been extracted keeping their fabric intact. Flume experiments were conducted at 1:1 scaling of our prototype river. From the volume, center of mass, points of contact, and protrusion of individual grains derived from 3D scan data we estimate 3D static force properties at the grain-scale such as pivoting angles, buoyancy and gravity forces, and local grain exposure. Here metrics are derived for images from two flume experiments: one with a bed of coarse grains (> 4 mm) and the other where sand and clay were incorporated into the coarse flume bed. In addition to deriving force networks, comparison of metrics such as critical shear stress, pivot angles, grain distributions, principal axis orientation, and pore space over depth are made. This is the first time bed stability has been studied in 3D using CT scanned images of sediment from the bed surface to depths well into the subsurface. The derived metrics, inter-granular relationships and characterization of bed structures will lead to improved bedload estimates with reduced uncertainty, as well as improved understanding of relationships between sediment structure, grain size distribution and channel topography.
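
    The pivoting angle mentioned above can be computed directly from the CT-derived grain geometry. One common 2D definition, used here as an illustrative assumption since the authors work in full 3D, is the angle between the vertical and the line from the downstream contact point to the grain's center of mass:

```python
import math

def pivot_angle_deg(grain_center, contact_point):
    """Pivoting angle of a grain about its downstream contact (2D section):
    the angle between vertical and the contact-to-center line. Larger angles
    mean the grain must rotate further over its contact to be entrained."""
    dx = grain_center[0] - contact_point[0]
    dz = grain_center[1] - contact_point[1]
    return math.degrees(math.atan2(abs(dx), dz))
```
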

  2. Glacier Research Digital Science Communication Evolution 1996-2014

    NASA Astrophysics Data System (ADS)

    Pelto, M. S.

    2014-12-01

    This talk will focus on the changes in communicating science in the last 20 years from the perspective of the same research project. Essentially the rapid innovation in online communication requires the scientist learning and utilizing a new platform of communication each year. To maintain relevant visibility and ongoing research activities requires finding synergy between the two. I will discuss how digital communication has inspired my research efforts. This talk will also examine overall visitation and media impact metrics over this period. From developing a highly visible glacier research web page in 1996, to writing more than 400 blog posts since 2008, and in 2014 utilizing a videographer and illustration artist in the field, this is the story of one scientist's digital communication-media evolution. The three main observations are that: 1) Overall visitation has not expanded as rapidly in the last decade. 2) Contact and cooperation with colleagues has expanded quite rapidly since 2008. 3) Media impact peaked in 2005, but is nearing that peak again. The key factors in visibility and media impact for a "small market" research institution/project have been providing timely and detailed content to collaborative sites, such as RealClimate, BAMS State of the Climate, Climate Denial Crock of the Week, and Skeptical Science, that can then be repurposed by the media. A review of the visitor metrics for the digital glacier sites I have maintained from 1996-2014 indicates that the visibility of each platform has a similar growth curve, transitioning to a plateau, but overall visitation does not increase in kind with the increase in number of platforms. Media metrics are more event driven and do not follow the visitor metric pattern.

  3. What do we know and when do we know it?

    PubMed Central

    2008-01-01

    Two essential aspects of virtual screening are considered: experimental design and performance metrics. In the design of any retrospective virtual screen, choices have to be made as to the purpose of the exercise. Is the goal to compare methods? Is the interest in a particular type of target or all targets? Are we simulating a ‘real-world’ setting, or teasing out distinguishing features of a method? What are the confidence limits for the results? What should be reported in a publication? In particular, what criteria should be used to decide between different performance metrics? Comparing the field of molecular modeling to other endeavors, such as medical statistics, criminology, or computer hardware evaluation indicates some clear directions. Taken together these suggest the modeling field has a long way to go to provide effective assessment of its approaches, either to itself or to a broader audience, but that there are no technical reasons why progress cannot be made. PMID:18253702
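
    Among the performance metrics debated for retrospective virtual screens, the enrichment factor is one of the simplest: the active rate in the top fraction of the ranked list divided by the active rate overall. A minimal sketch (the abstract argues about which metrics to prefer; this just shows how one of the common candidates is computed):

```python
def enrichment_factor(scores, labels, fraction=0.01):
    """Enrichment factor at a given fraction of the score-ranked list.
    labels are 1 for actives, 0 for decoys; higher scores rank first."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    n_top = max(1, int(round(fraction * len(ranked))))
    top_rate = sum(l for _, l in ranked[:n_top]) / n_top
    overall = sum(labels) / len(labels)
    return top_rate / overall
```

    An EF of 1.0 means the method ranks no better than random at that cutoff; the metric's dependence on the active/decoy ratio is one of the criticisms such papers raise.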

  4. Statistical Issues in the Comparison of Quantitative Imaging Biomarker Algorithms using Pulmonary Nodule Volume as an Example

    PubMed Central

    2014-01-01

    Quantitative imaging biomarkers (QIBs) are being used increasingly in medicine to diagnose and monitor patients’ disease. The computer algorithms that measure QIBs have different technical performance characteristics. In this paper we illustrate the appropriate statistical methods for assessing and comparing the bias, precision, and agreement of computer algorithms. We use data from three studies of pulmonary nodules. The first study is a small phantom study used to illustrate metrics for assessing repeatability. The second study is a large phantom study allowing assessment of four algorithms’ bias and reproducibility for measuring tumor volume and the change in tumor volume. The third study is a small clinical study of patients whose tumors were measured on two occasions. This study allows a direct assessment of six algorithms’ performance for measuring tumor change. With these three examples we compare and contrast study designs and performance metrics, and we illustrate the advantages and limitations of various common statistical methods for QIB studies. PMID:24919828
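
    One of the standard repeatability statistics for such QIB studies is the repeatability coefficient, RC = 1.96 * sqrt(2) * wSD, where the within-subject SD is estimated from test-retest pairs; about 95% of repeat-measurement differences are expected to fall within RC. A minimal sketch of that calculation (the paper covers a broader set of bias, precision, and agreement methods):

```python
from statistics import mean

def repeatability_coefficient(first, second):
    """Repeatability coefficient from paired test-retest measurements.
    wSD^2 is estimated as mean(d^2) / 2 over the paired differences d."""
    d2 = [(a - b) ** 2 for a, b in zip(first, second)]
    wsd = (mean(d2) / 2) ** 0.5
    return 1.96 * (2 ** 0.5) * wsd
```
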

  5. Establishment of the International Power Institute. Final technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Julius E. Coles

    The International Power Institute, in collaboration with American industries, seeks to address technical, political, economic and cultural issues of developing countries in the interest of facilitating profitable transactions in power related infrastructure projects. IPI works with universities, governments and commercial organizations to render project-specific recommendations for private-sector investment considerations. IPI also established the following goals: Facilitate electric power infrastructure transactions between developing countries and the US power industry; Collaborate with developing countries to identify development strategies to achieve energy stability; and Encourage market driven solutions and work collaboratively with other international trade energy, technology and banking organizations.

  6. Establishing ecological and social continuities: new challenges to optimize urban watershed management

    NASA Astrophysics Data System (ADS)

    Mitroi, V.; de Coninck, A.; Vinçon-Leite, B.; Deroubaix, J.-F.

    2014-09-01

    The (re)construction of ecological continuity is stated as one of the main objectives of the European Water Framework Directive for watershed management. Analysing the social, political, technical and scientific processes characterising the implementation of different ecological continuity projects in two adjacent peri-urban territories in Ile-de-France, we observed science-driven approaches that disregarded the social contexts. We show that, in urbanized areas, ecological continuity requires not only substantial technical and ecological expertise, but also social and political participation in the definition of a common vision and action plan. Because this is a challenge for both technical water-management institutions and "classical" ecological policies, we propose some social-science contributions to deal with ecological unpredictability and to reconsider stakeholder resistance to this kind of project.

  7. The coevolution of innovation and technical intelligence in primates

    PubMed Central

    Street, Sally E.; Whalen, Andrew; Laland, Kevin N.

    2016-01-01

    In birds and primates, the frequency of behavioural innovation has been shown to covary with absolute and relative brain size, leading to the suggestion that large brains allow animals to innovate, and/or that selection for innovativeness, together with social learning, may have driven brain enlargement. We examined the relationship between primate brain size and both technical (i.e. tool using) and non-technical innovation, deploying a combination of phylogenetically informed regression and exploratory causal graph analyses. Regression analyses revealed that absolute and relative brain size correlated positively with technical innovation, and exhibited consistently weaker, but still positive, relationships with non-technical innovation. These findings mirror similar results in birds. Our exploratory causal graph analyses suggested that technical innovation shares strong direct relationships with brain size, body size, social learning rate and social group size, whereas non-technical innovation did not exhibit a direct relationship with brain size. Nonetheless, non-technical innovation was linked to brain size indirectly via diet and life-history variables. Our findings support ‘technical intelligence’ hypotheses in linking technical innovation to encephalization in the restricted set of primate lineages where technical innovation has been reported. Our findings also provide support for a broad co-evolving complex of brain, behaviour, life-history, social and dietary variables, providing secondary support for social and ecological intelligence hypotheses. The ability to gain access to difficult-to-extract, but potentially nutrient-rich, resources through tool use may have conferred on some primates adaptive advantages, leading to selection for brain circuitry that underlies technical proficiency. PMID:26926276

  8. The coevolution of innovation and technical intelligence in primates.

    PubMed

    Navarrete, Ana F; Reader, Simon M; Street, Sally E; Whalen, Andrew; Laland, Kevin N

    2016-03-19

    In birds and primates, the frequency of behavioural innovation has been shown to covary with absolute and relative brain size, leading to the suggestion that large brains allow animals to innovate, and/or that selection for innovativeness, together with social learning, may have driven brain enlargement. We examined the relationship between primate brain size and both technical (i.e. tool using) and non-technical innovation, deploying a combination of phylogenetically informed regression and exploratory causal graph analyses. Regression analyses revealed that absolute and relative brain size correlated positively with technical innovation, and exhibited consistently weaker, but still positive, relationships with non-technical innovation. These findings mirror similar results in birds. Our exploratory causal graph analyses suggested that technical innovation shares strong direct relationships with brain size, body size, social learning rate and social group size, whereas non-technical innovation did not exhibit a direct relationship with brain size. Nonetheless, non-technical innovation was linked to brain size indirectly via diet and life-history variables. Our findings support 'technical intelligence' hypotheses in linking technical innovation to encephalization in the restricted set of primate lineages where technical innovation has been reported. Our findings also provide support for a broad co-evolving complex of brain, behaviour, life-history, social and dietary variables, providing secondary support for social and ecological intelligence hypotheses. The ability to gain access to difficult-to-extract, but potentially nutrient-rich, resources through tool use may have conferred on some primates adaptive advantages, leading to selection for brain circuitry that underlies technical proficiency. © 2016 The Author(s).

  9. Design of pressure-driven microfluidic networks using electric circuit analogy.

    PubMed

    Oh, Kwang W; Lee, Kangsun; Ahn, Byungwook; Furlani, Edward P

    2012-02-07

    This article reviews the application of electric circuit methods for the analysis of pressure-driven microfluidic networks with an emphasis on concentration- and flow-dependent systems. The application of circuit methods to microfluidics is based on the analogous behaviour of hydraulic and electric circuits with correlations of pressure to voltage, volumetric flow rate to current, and hydraulic to electric resistance. Circuit analysis enables rapid predictions of pressure-driven laminar flow in microchannels and is very useful for designing complex microfluidic networks in advance of fabrication. This article provides a comprehensive overview of the physics of pressure-driven laminar flow, the formal analogy between electric and hydraulic circuits, applications of circuit theory to microfluidic network-based devices, recent development and applications of concentration- and flow-dependent microfluidic networks, and promising future applications. The lab-on-a-chip (LOC) and microfluidics community will gain insightful ideas and practical design strategies for developing unique microfluidic network-based devices to address a broad range of biological, chemical, pharmaceutical, and other scientific and technical challenges.
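The hydraulic-electric analogy the review describes maps directly into code. The sketch below is illustrative, not from the article: it assumes circular channels, the Hagen-Poiseuille resistance formula, and arbitrary example dimensions, and combines channel resistances exactly as a circuit solver combines resistors.

```python
import math

def hydraulic_resistance_circular(mu, length, radius):
    """Hagen-Poiseuille resistance of a circular channel [Pa*s/m^3]."""
    return 8.0 * mu * length / (math.pi * radius**4)

def series(*rs):
    return sum(rs)                          # resistances add in series

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)   # reciprocals add in parallel

mu = 1e-3                                   # water viscosity, Pa*s
r1 = hydraulic_resistance_circular(mu, 0.01, 50e-6)  # 1 cm inlet channel
r2 = hydraulic_resistance_circular(mu, 0.02, 50e-6)  # 2 cm branch channel
network = series(r1, parallel(r2, r2))      # inlet feeding two equal branches
dP = 1000.0                                 # 1 kPa applied pressure
Q = dP / network                            # total volumetric flow rate, m^3/s
```

With pressure playing the role of voltage and flow rate the role of current, the total flow follows from the hydraulic analogue of Ohm's law, Q = ΔP/R, which is what makes rapid pre-fabrication design predictions possible.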

  10. A Case Study of Measuring Process Risk for Early Insights into Software Safety

    NASA Technical Reports Server (NTRS)

    Layman, Lucas; Basili, Victor; Zelkowitz, Marvin V.; Fisher, Karen L.

    2011-01-01

    In this case study, we examine software safety risk in three flight hardware systems in NASA's Constellation spaceflight program. We applied our Technical and Process Risk Measurement (TPRM) methodology to the Constellation hazard analysis process to quantify the technical and process risks involving software safety in the early design phase of these projects. We analyzed 154 hazard reports and collected metrics to measure the prevalence of software in hazards and the specificity of descriptions of software causes of hazardous conditions. We found that, for 49-70% of the 154 hazardous conditions, software was a potential cause or was involved in preventing the hazardous condition. We also found that 12-17% of the 2013 hazard causes involved software, and that 23-29% of all causes had a software control. The application of the TPRM methodology identified process risks in the application of the hazard analysis process itself that may lead to software safety risk.

  11. A historical analysis of the co-evolution of gasoline octane number and spark-ignition engines

    DOE PAGES

    Splitter, Derek A.; Pawlowski, Alex E.; Wagner, Robert M.

    2016-01-06

    In this work, the authors reviewed engine, vehicle, and fuel data since 1925 to examine the historical and recent coupling of compression ratio and fuel antiknock properties (i.e., octane number) in the U.S. light-duty vehicle market. The analysis identified historical timeframes and trends, and illustrated how three factors (consumer preferences, technical capabilities, and regulatory legislation) affect personal mobility. Data showed that throughout history these three factors have had a complex and time-sensitive interplay. Long-term trends were identified in which interaction and evolution among all three factors were observed. Transportation efficiency per unit power (gal/ton-mi/hp) was found to be a good metric for integrating technical, societal, and regulatory effects into the evolutionary pathway of personal mobility. From this framework, discussions of future evolutionary changes to personal mobility are also presented.

  12. Concept Overview & Preliminary Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruth, Mark

    2017-07-12

    'H2@Scale' is an opportunity for wide-scale use of hydrogen as an intermediate that carries energy from various production options to multiple uses. It is based on identifying and developing opportunities for low-cost hydrogen production and investigating opportunities for using that hydrogen across the electricity, industrial, and transportation sectors. One of the key production opportunities is use of low-cost electricity that may be generated under high penetrations of variable renewable generators such as wind and solar photovoltaics. The technical potential demand for hydrogen across the sectors is 60 million metric tons per year. The U.S. has sufficient domestic renewable resources so that each could meet that demand and could readily meet the demand using a portfolio of generation options. This presentation provides an overview of the concept and the technical potential demand and resources. It also motivates analysis and research on H2@Scale.

  13. MO-FG-207-01: Technological Advances and Challenges: Experience with the First Integrated Whole-Body PET/MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laforest, R.

    2015-06-15

    The use of integrated PET/MRI systems in clinical applications can best benefit from understanding their technological advances and limitations. The currently available clinical PET/MRI systems have their own characteristics. Thorough analyses of existing technical data and evaluation of necessary performance metrics for quality assurance could be conducted to optimize application-specific PET/MRI protocols. This Symposium will focus on technical advances and limitations of clinical PET/MRI systems, and how this exciting imaging modality can be utilized in applications that can benefit from both PET and MRI. Learning Objectives: (1) understand the technological advances of clinical PET/MRI systems; (2) correctly identify clinical applications that can benefit from PET/MRI; (3) understand ongoing work to further improve the current PET/MRI technology. Floris Jansen is a GE Healthcare employee.

  14. Objective Assessment of Bimanual Laparoscopic Surgical Skills via Functional Near Infrared Spectroscopy (fNIRS)

    NASA Astrophysics Data System (ADS)

    Nemani, Arun

    Surgical simulators are effective methods for training and assessing surgical technical skills, particularly those that are bimanual. These simulators are now ubiquitous in surgical training and assessment programs for residents. Simulators are used in programs such as the Fundamentals of Laparoscopic Surgery (FLS) and Fundamentals of Endoscopic Surgery (FES), which are prerequisites for Board certification in general surgery. Although these surgical simulators have been validated for clinical use, they have significant limitations, such as subjectivity in assessment metrics, poor correlation of transfer from simulation to clinically relevant environments, poor correlation of task performance scores to learning motor skill levels, and ultimately inconsistent reliability of these assessment methods as an indicator of positive patient outcomes. These limitations present an opportunity for more objective and analytical approaches to assess surgical motor skills. To address these surgical skill assessment limitations, we present functional near-infrared spectroscopy (fNIRS), a non-invasive brain imaging method, to objectively differentiate and classify subjects with varying degrees of laparoscopic surgical motor skill levels based on measurements of functional activation changes. In this work, we show that fNIRS-based metrics can objectively differentiate and classify surgical motor skill levels with significantly more accuracy than established metrics. Using classification approaches such as multivariate linear discriminant analysis, we show evidence that fNIRS metrics reduce the misclassification error, defined as the probability that a trained subject is misclassified as an untrained subject and vice versa, from 53-61% to 4.2-4.4% compared to conventional metrics for surgical skill assessment.
This evidence also translates to surgical skill transfer metrics, where such metrics assess surgical motor skill transfer from simulation to clinically relevant environments. Results indicate that fNIRS-based metrics can successfully differentiate and classify surgical motor skill transfer levels by reducing the misclassification errors from 20-41% to 2.2-9.1%, when compared to conventional surgical skill transfer assessment metrics. Furthermore, this work also shows evidence of high functional connectivity between the prefrontal cortex and primary motor cortex regions correlated to increases in surgical motor skill levels, addressing the gap in current literature in underlying neurophysiological responses to surgical motor skill learning. This work is the first to show conclusive evidence that fNIRS-based metrics can significantly improve subject classification for surgical motor skill assessment compared to metrics currently used in Board certification in general surgery. Our approach brings robustness, objectivity, and accuracy in not only assessing surgical motor skill levels but also validating the effectiveness of future surgical trainers in assessing and translating surgical motor skills to more clinically relevant environments. This non-invasive imaging approach for objective quantification of complex bimanual surgical motor skills will bring about a paradigm change in surgical certification and assessment that may lead to significantly reduced negative patient outcomes. Ultimately, this approach can be generally applied for bimanual motor skill assessment and can be applied to other fields, such as brain computer interfaces (BCI), robotics, stroke and rehabilitation therapy.

  15. Site systems engineering fiscal year 1999 multi-year work plan (MYWP) update for WBS 1.8.2.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    GRYGIEL, M.L.

    1998-10-08

    Manage the Site Systems Engineering process to provide a traceable, integrated, requirements-driven, and technically defensible baseline. Through the Site Integration Group (SIG), Systems Engineering ensures integration of technical activities across all site projects. Systems Engineering's primary interfaces are with the RL Project Managers, the Project Direction Office and with the Project Major Subcontractors, as well as with the Site Planning organization. Systems Implementation: (1) Develops, maintains, and controls the site integrated technical baseline, ensures the Systems Engineering interfaces between projects are documented, and maintains the Site Environmental Management Specification. (2) Develops and uses dynamic simulation models for verification of the baseline and analysis of alternatives. (3) Performs and documents functional and requirements analyses. (4) Works with projects, technology management, and the SIG to identify and resolve technical issues. (5) Supports technical baseline information for the planning and budgeting of the Accelerated Cleanup Plan, Multi-Year Work Plans, and Project Baseline Summaries, as well as performance measure reporting. (6) Works with projects to ensure the quality of data in the technical baseline. (7) Develops, maintains and implements the site configuration management system.

  16. Productivity and technical efficiency of suckler beef production systems: trends for the period 1990 to 2012.

    PubMed

    Veysset, P; Lherm, M; Roulenc, M; Troquier, C; Bébin, D

    2015-12-01

    Over the past 23 years (1990 to 2012), French beef cattle farms have expanded in size and increased labour productivity by over 60%, chiefly, though not exclusively, through capital intensification (labour-capital substitution) and simplifying herd feeding practices (more concentrates used). The technical efficiency of beef sector production systems, as measured by the ratio of the volume value (in constant euros) of farm output excluding aids to volume of intermediate consumption, has fallen by nearly 20% while income per worker has held stable thanks to subsidies and the labour productivity gains made. This aggregate technical efficiency of beef cattle systems is positively correlated to feed self-sufficiency, which is in turn negatively correlated to farm and herd size. While volume of farm output per hectare of agricultural area has not changed, forage feed self-sufficiency decreased by 6 percentage points. The continual increase in farm size and labour productivity has come at a cost of lower production-system efficiency - a loss of technical efficiency that 20 years of genetic, technical, technological and knowledge-driven progress has barely managed to offset.

  17. Gravitational instantons, self-duality, and geometric flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bourliot, F.; Estes, J.; Petropoulos, P. M.

    2010-05-15

    We discuss four-dimensional 'spatially homogeneous' gravitational instantons. These are self-dual solutions of the Euclidean vacuum Einstein equations. They are endowed with a product structure R × M_3, leading to a foliation into three-dimensional subspaces evolving in Euclidean time. For a large class of homogeneous subspaces, the dynamics coincides with a geometric flow on the three-dimensional slice, driven by the Ricci tensor plus an so(3) gauge connection. The flowing metric is related to the vielbein of the subspace, while the gauge field is inherited from the anti-self-dual component of the four-dimensional Levi-Civita connection.
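Schematically (a sketch, not the paper's exact equations; the precise form of the gauge coupling is defined in the paper), such a flow generalizes Hamilton's Ricci flow for the slice metric:

```latex
\frac{\partial g_{ij}}{\partial t} = -2\,R_{ij}[g] + \bigl(\mathfrak{so}(3)\ \text{gauge-connection terms}\bigr)
```

The first term is the standard Ricci-flow driving term; the additional terms encode the so(3) connection inherited from the four-dimensional self-duality condition.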

  18. Innovations in Higher Education. Singapore at the Competitive Edge. World Bank Technical Paper No. 222.

    ERIC Educational Resources Information Center

    Selvaratnam, Viswanathan

    This report provides an overview of Singapore's strategy for human resource development and the country's concerted effort to successfully orchestrate many separate initiatives between the 1960s and 1980s into an effective, market-driven, three-tiered higher education system emphasizing technology and modernization. The study highlights the…

  19. Teaching Standard Deviation by Building from Student Invention

    ERIC Educational Resources Information Center

    Day, James; Nakahara, Hiroko; Bonn, Doug

    2010-01-01

    First-year physics laboratories are often driven by a mix of goals that includes the illustration or discovery of basic physics principles and a myriad of technical skills involving specific equipment, data analysis, and report writing. The sheer number of such goals seems guaranteed to produce cognitive overload, even when highly detailed…

  20. Heavy Lift Helicopter - Prototype Technical Summary

    DTIC Science & Technology

    1980-04-01

    in an integrated design. The following paragraphs discuss the swash-plate actuator servo loops and provide details...instrumentation in the prototype aircraft. Development testing of the flight control module in conjunction with the transmission-driven pump and the reservoir was...PFCS employed cockpit controllers and force-feel actuation developed in the ATC

  1. A Teaching Artist in Rural Schools: Sowing Seeds for Creative Expression

    ERIC Educational Resources Information Center

    Kuper, Kate

    2006-01-01

    For nearly 25 years, the author has driven the highways and back roads of Illinois, teaching dance to school children, leading family programs, performing lecture/demonstrations, and choreographing technically simple, conceptually complex pieces with and for young dancers. The author's home base is Champaign/Urbana (C/U), twin cities with a…

  2. Migrating Legacy Systems in the Global Merger & Acquisition Environment

    ERIC Educational Resources Information Center

    Katerattanakul, Pairin; Kam, Hwee-Joo; Lee, James J.; Hong, Soongoo

    2009-01-01

    The MetaFrame system migration project at WorldPharma, while driven by merger and acquisition, had faced complexities caused by both technical challenges and organizational issues in the climate of uncertainties. However, WorldPharma still insisted on instigating this post-merger system migration project. This project served to (1) consolidate the…

  3. A Socio-Technical Analysis of Knowledgeable Practice in Radiation Therapy

    ERIC Educational Resources Information Center

    Lozano, Reynaldo Garza

    2012-01-01

    The role of the modern radiation therapist is directed and driven by the organizational system. Changes affecting their role are implemented as a response to changes in the industry. Operations of the modern cancer center, with new and changing treatment technologies bring questions regarding the learning process of radiation therapists at a time…

  4. Starting from Scratch: Greening Your Game Day--The Collegiate Football Sustainable Materials Management Toolkit. Version 1.0

    ERIC Educational Resources Information Center

    Association for the Advancement of Sustainability in Higher Education, 2011

    2011-01-01

    The "Collegiate Football Sustainable Materials Management Toolkit" was researched by student interns in the Virginia Tech Office of Energy & Sustainability, developed in collaboration with the US EPA (US Environmental Protection Agency) and a national panel of technical experts from universities across the nation, and driven forward…

  5. Optimizing Technical Education Pathways: Does Dual-Credit Course Completion Predict Students' College and Labor Market Success?

    ERIC Educational Resources Information Center

    Phelps, L. Allen; Chan, Hsun-Yu

    2016-01-01

    Post-recession Federal policy initiatives, such as secondary/postsecondary career pathways and gainful employment higher education accountability standards, prioritize the alignment of education practices with market-driven outcomes. Using longitudinal student record data merged from college and state K-12 data systems with the Unemployment…

  6. Utilising Six Sigma for Improving Pass Percentage of Students: A Technical Institute Case Study

    ERIC Educational Resources Information Center

    Kaushik, Prabhakar; Khanduja, Dinesh

    2010-01-01

    Service sector accounts for a substantial share in Indian economy and among the service industries, education sector is emerging as a major commercial activity in the nation. Globalization, growing competition among institutions, emergence of new technologies, changing socio-economic profiles of nations and knowledge driven economies have created…

  7. Plasma Instabilities and Transport in the MPD Thruster

    DTIC Science & Technology

    1993-06-01

    driven plasma acceleration versus current-driven energy dissipation. Part III: anomalous transport. In 28th Joint Propulsion Conference, Nashville... transport. In the March/April Bimonthly Progress Report of the Electric Propulsion and Plasma Dynamics Laboratory. Technical Report MAE 1776.36, EPPDyL, Princeton University, 1992.

  8. Reference-free ground truth metric for metal artifact evaluation in CT images.

    PubMed

    Kratz, Bärbel; Ens, Svitlana; Müller, Jan; Buzug, Thorsten M

    2011-07-01

    In computed tomography (CT), metal objects in the region of interest introduce data inconsistencies during acquisition. Reconstructing these data results in an image with star-shaped artifacts induced by the metal inconsistencies. To enhance image quality, the influence of the metal objects can be reduced by different metal artifact reduction (MAR) strategies. For an adequate evaluation of new MAR approaches, a ground truth reference data set is needed. In technical evaluations, where phantoms can be measured with and without metal inserts, ground truth data can easily be obtained by a second reference data acquisition. Obviously, this is not possible for clinical data. Here, an alternative evaluation method is presented without the need for an additionally acquired reference data set. The proposed metric is based on an inherent ground truth for evaluating metal artifacts and comparing MAR methods, with no reference information from a second acquisition needed. The method is based on the forward projection of a reconstructed image, which is compared to the actually measured projection data. The new evaluation technique is performed on phantom and on clinical CT data with and without MAR. The metric results are then compared with methods using a reference data set as well as an expert-based classification. It is shown that the new approach is an adequate quantification technique for artifact strength in reconstructed metal or MAR CT images. The presented method works solely on the original projection data itself, which yields some advantages compared to distance measures in image domain using two data sets. Besides this, no parameters have to be manually chosen. The new metric is a useful evaluation alternative when no reference data are available.
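The core idea — re-project the reconstructed image and score its consistency against the originally measured data — can be sketched in a few lines. This is a toy illustration, not the authors' implementation: it forward-projects only at 0° and 90°, whereas a real CT metric would use a full Radon transform over all acquisition angles.

```python
def forward_project(image):
    """Toy parallel-beam forward projection of a 2-D image (list of rows)
    at 0 and 90 degrees only: column sums followed by row sums."""
    cols = [sum(row[j] for row in image) for j in range(len(image[0]))]
    rows = [sum(row) for row in image]
    return cols + rows

def consistency_metric(reconstruction, measured):
    """Root-mean-square deviation between re-projected and measured
    projection data, normalized by the measured data's RMS.
    Lower values = reconstruction more consistent with the raw data."""
    reproj = forward_project(reconstruction)
    num = sum((a - b) ** 2 for a, b in zip(reproj, measured))
    den = sum(b ** 2 for b in measured)
    return (num / den) ** 0.5
```

A perfect reconstruction re-projects exactly onto the measured data and scores zero; metal artifacts (or an aggressive MAR method that alters anatomy) increase the discrepancy, with no second reference scan required.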

  9. Weather-Corrected Performance Ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dierauf, T.; Growitz, A.; Kurtz, S.

    Photovoltaic (PV) system performance depends on both the quality of the system and the weather. One simple way to communicate the system performance is to use the performance ratio (PR): the ratio of the electricity generated to the electricity that would have been generated if the plant consistently converted sunlight to electricity at the level expected from the DC nameplate rating. The annual system yield for flat-plate PV systems is estimated by the product of the annual insolation in the plane of the array, the nameplate rating of the system, and the PR, which provides an attractive way to estimate expected annual system yield. Unfortunately, the PR is, again, a function of both the PV system efficiency and the weather. If the PR is measured during the winter or during the summer, substantially different values may be obtained, making this metric insufficient to use as the basis for a performance guarantee when precise confidence intervals are required. This technical report defines a way to modify the PR calculation to neutralize biases that may be introduced by variations in the weather, while still reporting a PR that reflects the annual PR at that site given the project design and the project weather file. This resulting weather-corrected PR gives more consistent results throughout the year, enabling its use as a metric for performance guarantees while still retaining the familiarity this metric brings to the industry and the value of its use in predicting actual annual system yield. A testing protocol is also presented to illustrate the use of this new metric with the intent of providing a reference starting point for contractual content.

  10. Remote sensing for restoration ecology: Application for restoring degraded, damaged, transformed, or destroyed ecosystems.

    PubMed

    Reif, Molly K; Theel, Heather J

    2017-07-01

    Restoration monitoring is generally perceived as costly and time consuming, given the assumptions of successfully restoring ecological functions and services of a particular ecosystem or habitat. Opportunities exist for remote sensing to bolster the restoration science associated with a wide variety of injured resources, including resources affected by fire, hydropower operations, chemical releases, and oil spills, among others. In the last decade, the role of remote sensing to support restoration monitoring has increased, in part due to the advent of high-resolution satellite sensors as well as other sensor technology, such as lidar. Restoration practitioners in federal agencies require monitoring standards to assess restoration performance of injured resources. This review attempts to address a technical need and provides an introductory overview of spatial data and restoration metric considerations, as well as an in-depth review of optical (e.g., spaceborne, airborne, unmanned aerial vehicles) and active (e.g., radar, lidar) sensors and examples of restoration metrics that can be measured with remotely sensed data (e.g., land cover, species or habitat type, change detection, quality, degradation, diversity, and pressures or threats). To that end, the present article helps restoration practitioners assemble information not only about essential restoration metrics but also about the evolving technological approaches that can be used to best assess them. Given the need for monitoring standards to assess restoration success of injured resources, a universal monitoring framework should include a range of remote sensing options with which to measure common restoration metrics. Integr Environ Assess Manag 2017;13:614-630. Published 2016. This article is a US Government work and is in the public domain in the USA.

  11. Spatiotemporal trends in ground-level ozone concentrations and metrics in France over the time period 1999-2012.

    PubMed

    Sicard, Pierre; Serra, Romain; Rossello, Philippe

    2016-08-01

    The hourly ozone (O3) data from 332 background monitoring stations, spread across France, were analyzed over the period 1999-2012 and short-term trends were calculated. In the current climate change context, the calculation of human health- and vegetation-relevant metrics, and of associated trends, provides a consistent method to establish proper and effective policies to reduce the adverse O3 effects. The generation of optimal O3 maps, for risk and exposure assessment, is challenging. To overcome this issue, starting from a set of stations, a hybrid regression-interpolation approach was proposed. Annual surface O3 metrics, O3 human health metrics (number of exceedances of daily maximum 8-h values greater than 60 ppb and SOMO35) and O3 vegetation impact metrics (AOT40 for vegetation and forests) were investigated at individual sites. Citizens are more exposed to high O3 levels in rural areas than people living in the cities. The annual mean concentrations decreased by 0.12 ppb year⁻¹ at rural stations, and the significant reduction at 67% of stations, particularly during the warm season, in the number of episodic high O3 concentrations (e.g. 98th percentile, -0.19 ppb year⁻¹) can be associated with the substantial reductions in NOx and VOCs emissions in the EU-28 countries since the early 1990s. Inversely, the O3 background level is rising at 76% of urban sites (+0.14 ppb year⁻¹), particularly during the cold period. This rise can be attributed to increases in imported O3 by long-range transport and to a low O3 titration by NO due to the reduction in local NOx emissions. The decrease in health-related and vegetation-relevant O3 metrics, at almost all stations, is driven by decreases in regional photochemical O3 formation and in peak O3 concentrations. The short-term trends highlight that the threat to population and vegetation declined between 1999 and 2012 in France, demonstrating the success of European control strategies over the last 20 years.
However, for all exposure metrics, the issue of non-attainment of the O3 target value persists relative to the objectives of the air quality directives. The region at highest O3 risk is south-eastern France. This study contains new information on the i) spatial distribution of surface O3 concentrations, ii) exceedances and iii) trends, to help define more suitable standards for human health and environmental protection in France. Copyright © 2016 Elsevier Inc. All rights reserved.
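
    Station-level trend figures like the ppb-per-year values above are often obtained with the nonparametric Theil-Sen estimator; the abstract does not state the exact trend method used, so the following is only an illustrative sketch:

```python
from statistics import median

def sen_slope(years, conc):
    """Theil-Sen trend estimate: the median of all pairwise slopes.
    A robust choice widely used for air-quality time series; whether
    the study used this exact estimator is an assumption."""
    slopes = [(conc[j] - conc[i]) / (years[j] - years[i])
              for i in range(len(years))
              for j in range(i + 1, len(years))]
    return median(slopes)

# Illustrative series: a rural-station-like record declining 0.12 ppb/year
years = list(range(1999, 2013))
o3 = [40.0 - 0.12 * (y - 1999) for y in years]
trend = sen_slope(years, o3)  # about -0.12 ppb/year
```

    The median of pairwise slopes makes the estimate insensitive to a few anomalous years, which matters for episodic pollutants like ozone.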

  12. Towards conformal loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Wang, Charles H.-T.

    2006-03-01

    A discussion is given of recent developments in canonical gravity that assimilates the conformal analysis of gravitational degrees of freedom. The work is motivated by the problem of time in quantum gravity and is carried out at the metric and the triad levels. At the metric level, it is shown that by extending the Arnowitt-Deser-Misner (ADM) phase space of general relativity (GR), a conformal form of geometrodynamics can be constructed. In addition to the Hamiltonian and Diffeomorphism constraints, an extra first class constraint is introduced to generate conformal transformations. This phase space consists of York's mean extrinsic curvature time, conformal three-metric and their momenta. At the triad level, the phase space of GR is further enlarged by incorporating spin-gauge as well as conformal symmetries. This leads to a canonical formulation of GR using a new set of real spin connection variables. The resulting gravitational constraints are first class, consisting of the Hamiltonian constraint and the canonical generators for spin-gauge and conformorphism transformations. The formulation has a remarkable feature of being parameter-free. Indeed, it is shown that a conformal parameter of the Barbero-Immirzi type can be absorbed by the conformal symmetry of the extended phase space. This gives rise to an alternative approach to loop quantum gravity that addresses both the conceptual problem of time and the technical problem of functional calculus in quantum gravity.

  13. Status of NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Honeycutt, John; Cook, Jerry; Lyles, Garry

    2016-01-01

    NASA's Space Launch System (SLS) continued to make significant progress in 2015, completing hardware and testing that brings NASA closer to a new era of deep space exploration. The most significant program milestone of the year was completion of Critical Design Review (CDR). A team of independent reviewers concluded that the vehicle design is technically and programmatically ready to move to Design Certification Review (DCR) and launch readiness in 2018. Just four years after program start, every major element has amassed development and flight hardware and completed key tests that will set the stage for a growing schedule of manufacturing and testing in 2016. Key to SLS' rapid progress has been the use of existing technologies adapted to the new launch vehicle. The space shuttle-heritage RS-25 engine is undergoing adaptation tests to prove it can meet SLS requirements and environments with minimal change. The four-segment shuttle-era booster has been modified and updated with an additional propellant segment, new insulation, and new avionics. The Interim Cryogenic Propulsion Stage is a modified version of an existing upper stage. The first Block I SLS configuration will launch a minimum of 70 metric tons of payload to low Earth orbit (LEO). The vehicle architecture has a clear evolutionary path to more than 100 metric tons and, ultimately, to 130 metric tons. Among the program's major accomplishments in 2015 were the first booster qualification hotfire test, a series of seven RS-25 adaptation hotfire tests, manufacturing of most of the major components for both core stage test articles and the first flight tank, and delivery of the Pegasus core stage barge and the upper stage simulator. Renovations to the B-2 test stand for stage green run testing were completed at NASA Stennis Space Center. 
This year will see the second booster qualification motor hotfire test, flight and additional development RS-25 engine tests, and completion of core stage test articles, test stands, and several flight article sections. This paper will discuss these and other technical and programmatic successes and challenges over the past year and provide a preview of work ahead before the first flight of this new capability.

  14. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
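
    The parametric likelihood approximation described above can be illustrated with a toy model; the simulator, summary statistic, and proposal settings below are stand-ins, not the authors' FORMIND implementation:

```python
import math
import random

random.seed(1)  # fixed seed for reproducibility

def simulate(theta, n=50):
    # Toy stand-in for the stochastic forest model: n noisy
    # observations around the parameter theta.
    return [random.gauss(theta, 1.0) for _ in range(n)]

def summary(x):
    return sum(x) / len(x)  # one summary statistic: the mean

def approx_loglik(theta, s_obs, n_sims=100):
    """Parametric (Gaussian) likelihood approximation: fit a normal
    distribution to the summary statistic of repeated simulations at
    theta, then evaluate the observed summary under it."""
    sims = [summary(simulate(theta)) for _ in range(n_sims)]
    mu = sum(sims) / n_sims
    var = sum((s - mu) ** 2 for s in sims) / (n_sims - 1)
    return -0.5 * (math.log(2 * math.pi * var) + (s_obs - mu) ** 2 / var)

# Metropolis-Hastings over the approximate likelihood (flat prior),
# recovering a known parameter (1.5) from "virtual" data.
s_obs = summary(simulate(1.5))
theta = 0.0
ll = approx_loglik(theta, s_obs)
chain = []
for _ in range(300):
    prop = theta + random.gauss(0.0, 0.3)
    ll_prop = approx_loglik(prop, s_obs)
    if math.log(random.random()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    chain.append(theta)
posterior_mean = sum(chain[150:]) / len(chain[150:])  # near 1.5
```

    Because the likelihood is re-estimated from fresh simulations at each proposal, this is a pseudo-marginal-style sampler; the paper's appeal is that the same recipe scales to far richer simulators than this toy.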

  15. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is defining a metric that quantifies how well model predictions fit the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability of observations deviating from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.

  16. Non-Functional Property Driven Service Governance: Performance Implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yan; Zhu, Liming; Bass, Len

    2007-09-17

    Service governance is a set of business processes, policies, and technical solutions that support enterprises in implementing and managing their SOA. Service governance decisions, especially those concerning service boundaries at the enterprise level, influence the deployment topology of business services across or within business organizations. Deployment topologies are realized by integration technologies such as the Enterprise Service Bus (ESB). Service governance and technical solutions interact in subtle ways, including through the communication patterns and protocols between services and ESBs, as well as through ESB deployment and configuration. These factors strongly influence the Non-Functional Properties (NFP) of an SOA solution. A systematic approach is essential for understanding the alternative technical solutions for a specific service governance decision. This paper proposes a modeling approach to evaluate the performance-related NFP impacts of mapping service governance to technical solutions using an ESB. The approach is illustrated by the quantitative performance analysis of a real …

  17. Data-driven planning of distributed energy resources amidst socio-technical complexities

    NASA Astrophysics Data System (ADS)

    Jain, Rishee K.; Qin, Junjie; Rajagopal, Ram

    2017-08-01

    New distributed energy resources (DER) are rapidly replacing centralized power generation due to their environmental, economic and resiliency benefits. Previous analyses of DER systems have been limited in their ability to account for socio-technical complexities, such as intermittent supply, heterogeneous demand and balance-of-system cost dynamics. Here we develop ReMatch, an interdisciplinary modelling framework, spanning engineering, consumer behaviour and data science, and apply it to 10,000 consumers in California, USA. Our results show that deploying DER would yield nearly a 50% reduction in the levelized cost of electricity (LCOE) over the status quo even after accounting for socio-technical complexities. We abstract a detailed matching of consumers to DER infrastructure from our results and discuss how this matching can facilitate the development of smart and targeted renewable energy policies, programmes and incentives. Our findings point to the large-scale economic and technical feasibility of DER and underscore the pertinent role DER can play in achieving sustainable energy goals.
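
    The levelized cost of electricity cited above has a standard textbook definition, discounted lifetime cost over discounted lifetime energy; ReMatch's internal cost model is not shown in the abstract, and the parameter values below are hypothetical:

```python
def lcoe(capex, opex_per_year, energy_per_year_kwh, discount_rate, years):
    """Levelized cost of electricity: discounted lifetime cost divided
    by discounted lifetime energy. All inputs here are placeholders,
    not figures from the study."""
    factors = [(1 + discount_rate) ** -t for t in range(1, years + 1)]
    cost = capex + sum(opex_per_year * f for f in factors)
    energy = sum(energy_per_year_kwh * f for f in factors)
    return cost / energy  # currency units per kWh
```

    A useful sanity check: with zero capital cost, the LCOE collapses to operating cost per unit energy, independent of the discount rate.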

  18. Caring as emancipatory nursing praxis: the theory of relational caring complexity.

    PubMed

    Ray, Marilyn A; Turkel, Marian C

    2014-01-01

    In the culture of health care, nurses are challenged to understand their values and beliefs as humanistic within complex technical and economically driven bureaucratic systems. This article outlines the language of social justice and human rights and the advance of a Theory of Relational Caring Complexity, which offers insights into caring as emancipatory nursing praxis. Recommendations provide knowledge of the struggle to balance economics, technology, and caring. As nurses practice from a value-driven, philosophical, and ethical social justice framework, they will find "their voice" and realize the full potential that the power of caring has on patient and organizational outcomes.

  19. Wind power demonstration and siting problems. [for recharging electrically driven automobiles

    NASA Technical Reports Server (NTRS)

    Bergey, K. H.

    1973-01-01

    Technical and economic feasibility studies of a small windmill that provides overnight charging for an electrically driven car are reported. The auxiliary generator provides power for heating and cooling the vehicle, which runs 25 miles on battery power alone and 50 miles with the onboard charger operating. The windmill's blades have a diameter of 12 feet and are coupled to a conventional automobile alternator so that they can completely recharge the car's batteries in 8 hours. Optimization of a windmill/storage system requires detailed wind velocity information, which permits rational siting of wind power stations.

  20. Is Time the Best Metric to Measure Carbon-Related Climate Change Potential and Tune the Economy Toward Reduced Fossil Carbon Extraction?

    NASA Astrophysics Data System (ADS)

    DeGroff, F. A.

    2016-12-01

    Anthropogenic changes to non-anthropogenic carbon fluxes are a primary driver of climate change. There is currently no comprehensive metric to measure and value anthropogenic changes in carbon flux between all states of carbon. Focusing on atmospheric carbon emissions as a measure of anthropogenic impact on the environment ignores the fungible character of carbon, which is crucial in both the biosphere and the worldwide economy. Focusing on a single form of inorganic carbon as a proxy metric for the plethora of anthropogenic activities and carbon compounds will prove inadequate, convoluted, and unmanageable. A broader, more basic metric is needed to capture the entirety of carbon activity, particularly in an economic, profit-driven environment. We propose a new metric to measure changes in the temporal distance of carbon, in any form or state, from one state to another. Such a metric would be especially useful for measuring the temporal distance of carbon from sinks such as the atmosphere or oceans. The effect of changes in carbon flux resulting from any human activity can be measured by the difference between the anthropogenic and non-anthropogenic temporal distances. The change in temporal distance is a measure of climate change potential, much as voltage is a measure of electrical potential. The integral of the climate change potential is proportional to the anthropogenic climate change. We also propose a logarithmic vector scale for carbon quality, cq, as a measure of anthropogenic changes in carbon flux. The distance between the starting and ending temporal distances of the cq vector represents the change in cq. A base-10 logarithmic scale would allow exponents to be added and subtracted when calculating changes in cq. As anthropogenic activity changes the temporal distance of carbon, the change in cq is measured as cq = ß · log10(mean carbon temporal distance), where ß represents the carbon-price coefficient for a particular country. 
For any country, cq measures the climate change potential of any domestic anthropogenic activity that changes the temporal distance of any carbon. The greater a country's carbon fees, the larger its ß coefficient, and the greater the import fees needed to achieve carbon parity on imports from countries with lower carbon fees.
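
    Because cq is defined on a base-10 logarithmic scale, a change in cq reduces to a difference of exponents. A minimal sketch, with a hypothetical ß value and hypothetical reservoir times:

```python
from math import log10

def cq_change(beta, t_before, t_after):
    """Change in carbon quality, cq = beta * log10(mean carbon temporal
    distance), when an activity moves carbon from mean temporal distance
    t_before to t_after (years). On the log scale the change is simply
    a difference of exponents."""
    return beta * (log10(t_after) - log10(t_before))

# Hypothetical example: extraction moves fossil carbon from a
# ~10^6-year reservoir to a ~10-year distance from the atmosphere.
delta = cq_change(beta=1.0, t_before=1e6, t_after=10.0)  # -5.0
```

    The negative delta flags the activity as bringing carbon closer to the atmospheric sink; a country-specific ß would scale this into a fee.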

  1. Virtual Reality Simulation for the Operating Room

    PubMed Central

    Gallagher, Anthony G.; Ritter, E Matt; Champion, Howard; Higgins, Gerald; Fried, Marvin P.; Moses, Gerald; Smith, C Daniel; Satava, Richard M.

    2005-01-01

    Summary Background Data: To inform surgeons about the practical issues to be considered for successful integration of virtual reality simulation into a surgical training program. The learning and practice of minimally invasive surgery (MIS) makes unique demands on surgical training programs. A decade ago Satava proposed virtual reality (VR) surgical simulation as a solution for this problem. Only recently have robust scientific studies supported that vision. Methods: A review of the surgical education, human-factors, and psychology literature to identify important factors that will impinge on the successful integration of VR training into a surgical training program. Results: VR is more likely to be successful if it is systematically integrated into a well-thought-out education and training program that objectively assesses technical skills improvement proximate to the learning experience. Validated performance metrics should be relevant to the surgical task being trained, but in general will require trainees to reach an objectively determined proficiency criterion, based on tightly defined metrics, and to perform at this level consistently. VR training is more likely to be successful if the training schedule takes place on an interval basis rather than massed into a short period of extensive practice. High-fidelity VR simulations will confer the greatest skills transfer to the in vivo surgical situation, but less expensive VR trainers will also lead to considerably improved skills generalization. Conclusions: VR for improved performance of MIS is now a reality. However, VR is only a training tool that must be thoughtfully introduced into a surgical training curriculum for it to successfully improve surgical technical skills. PMID:15650649

  2. Training safer orthopedic surgeons. Construct validation of a virtual-reality simulator for hip fracture surgery.

    PubMed

    Akhtar, Kashif; Sugand, Kapil; Sperrin, Matthew; Cobb, Justin; Standfield, Nigel; Gupte, Chinmay

    2015-01-01

    Virtual-reality (VR) simulation in orthopedic training is still in its infancy, and much of the work has focused on arthroscopy. We evaluated the construct validity of a new VR trauma simulator for performing dynamic hip screw (DHS) fixation of a trochanteric femoral fracture. 30 volunteers were divided into 3 groups according to the number of postgraduate (PG) years and the amount of clinical experience: novice (1-4 PG years; fewer than 10 DHS procedures); intermediate (5-12 PG years; 10-100 procedures); expert (> 12 PG years; > 100 procedures). Each participant performed a DHS procedure, and objective performance metrics were recorded. These data were analyzed with each performance metric taken as the dependent variable in 3 regression models. There were statistically significant differences in performance between groups for (1) number of attempts at guide-wire insertion, (2) total fluoroscopy time, (3) tip-apex distance, (4) probability of screw cutout, and (5) overall simulator score. The intermediate group performed the procedure most quickly, with the lowest fluoroscopy time, the lowest tip-apex distance, the lowest probability of cutout, and the highest simulator score, which correlated with their frequency of exposure to running the trauma lists for hip fracture surgery. This study demonstrates the construct validity of a haptic VR trauma simulator: the surgeons who perform the procedure most frequently performed best on the simulator. VR simulation may be a means of addressing restrictions on working hours and allows trainees to practice technical tasks without putting patients at risk. The VR DHS simulator evaluated in this study may provide valid assessment of technical skill.

  3. Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie

    2009-01-01

    Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.

  4. Growth patterns for shape-shifting elastic bilayers.

    PubMed

    van Rees, Wim M; Vouga, Etienne; Mahadevan, L

    2017-10-31

    Inspired by the differential-growth-driven morphogenesis of leaves, flowers, and other tissues, there is increasing interest in artificial analogs of these shape-shifting thin sheets made of active materials that respond to environmental stimuli such as heat, light, and humidity. But how can we determine the growth patterns to achieve a given shape from another shape? We solve this geometric inverse problem of determining the growth factors and directions (the metric tensors) for a given isotropic elastic bilayer to grow into a target shape by posing and solving an elastic energy minimization problem. A mathematical equivalence between bilayers and curved monolayers simplifies the inverse problem considerably by providing algebraic expressions for the growth metric tensors in terms of those of the final shape. This approach also allows us to prove that we can grow any target surface from any reference surface using orthotropically growing bilayers. We demonstrate this by numerically simulating the growth of a flat sheet into a face, a cylindrical sheet into a flower, and a flat sheet into a complex canyon-like structure.

  5. Growth patterns for shape-shifting elastic bilayers

    PubMed Central

    van Rees, Wim M.; Vouga, Etienne; Mahadevan, L.

    2017-01-01

    Inspired by the differential-growth-driven morphogenesis of leaves, flowers, and other tissues, there is increasing interest in artificial analogs of these shape-shifting thin sheets made of active materials that respond to environmental stimuli such as heat, light, and humidity. But how can we determine the growth patterns to achieve a given shape from another shape? We solve this geometric inverse problem of determining the growth factors and directions (the metric tensors) for a given isotropic elastic bilayer to grow into a target shape by posing and solving an elastic energy minimization problem. A mathematical equivalence between bilayers and curved monolayers simplifies the inverse problem considerably by providing algebraic expressions for the growth metric tensors in terms of those of the final shape. This approach also allows us to prove that we can grow any target surface from any reference surface using orthotropically growing bilayers. We demonstrate this by numerically simulating the growth of a flat sheet into a face, a cylindrical sheet into a flower, and a flat sheet into a complex canyon-like structure. PMID:29078336

  6. [Insert Your Science Here] Week: Creating science-driven public awareness campaigns

    NASA Astrophysics Data System (ADS)

    Mattson, Barbara; Mitchell, Sara; McElvery, Raleigh; Reddy, Francis; Wiessinger, Scott; Skelly, Clare; Saravia, Claire; Straughn, Amber N.; Washington, Dewayne

    2018-01-01

    NASA Goddard’s in-house Astrophysics Communications Team is responsible for facilitating the production of traditional and social media products to provide understanding and inspiration about NASA’s astrophysics missions and discoveries. Our team is largely driven by the scientific news cycle of launches, mission milestones, anniversaries, and discoveries, which can leave a number of topics behind, waiting for a discovery to be highlighted. These overlooked topics include compelling stories about ongoing research, underlying science, and science not tied to a specific mission. In looking for a way to boost coverage of these unsung topics, we struck upon an idea of creating “theme weeks” which bring together the broader scientific community around a topic, object, or scientific concept. This poster will present the first two of our Goddard-led theme weeks: Pulsar Week and Dark Energy Week. We will describe the efforts involved, our metrics, and the benefits and challenges we encountered. We will also suggest a template for doing this for your own science based on our successes.

  7. Data-Driven Simulation-Enhanced Optimization of People-Based Print Production Service

    NASA Astrophysics Data System (ADS)

    Rai, Sudhendu

    This paper describes a systematic, six-step, data-driven, simulation-based methodology for optimizing people-based service systems that exhibit high variety and variability on a large distributed scale. The methodology is exemplified through its application in the printing services industry, where Xerox Corporation has successfully deployed it across small, mid-sized, and large print shops, generating over 250 million in profits across the customer value chain. Each step of the methodology is described in detail: co-development and testing of innovative concepts in partnership with customers; development of software and hardware tools to implement those concepts; establishment of work processes and practices for customer engagement and service implementation; creation of training and infrastructure for large-scale deployment; integration of the innovative offering within the framework of existing corporate offerings; and, lastly, monitoring of financial and operational metrics to estimate return on investment and continually renew the offering.

  8. Novel Hybrid Scheduling Technique for Sensor Nodes with Mixed Criticality Tasks

    PubMed Central

    Micea, Mihai-Victor; Stangaciu, Cristina-Sorina; Stangaciu, Valentin; Curiac, Daniel-Ioan

    2017-01-01

    Sensor networks are increasingly becoming a key technology for complex control applications. Their potential use in safety- and time-critical domains has raised the need for task-scheduling mechanisms specially adapted to the specific requirements of sensor nodes, often materialized as predictable, jitter-less execution of tasks with different criticality levels. This paper offers an efficient scheduling solution, named Hybrid Hard Real-Time Scheduling (H2RTS), which combines a static, clock-driven method with a dynamic, event-driven scheduling technique in order to provide high execution predictability while keeping a high node Central Processing Unit (CPU) utilization factor. From the detailed, integrated schedulability analysis of H2RTS, a set of sufficiency tests is introduced and demonstrated based on the processor demand and linear upper bound metrics. The performance and correct behavior of the proposed hybrid scheduling technique have been extensively evaluated and validated both in a simulator and on a sensor mote equipped with an ARM7 microcontroller. PMID:28672856
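
    The paper's own sufficiency tests are not reproduced in the abstract; as background, the classical processor-demand criterion that such analyses build on can be sketched for EDF task sets with implicit deadlines:

```python
from functools import reduce
from math import gcd

def edf_schedulable(tasks):
    """Processor-demand sufficiency test for EDF with implicit deadlines.
    tasks: list of (C, T) pairs (worst-case execution time, period).
    The demand bound sum(floor(t/T) * C) must not exceed t at every
    absolute deadline up to the hyperperiod."""
    hyper = reduce(lambda a, b: a * b // gcd(a, b), [T for _, T in tasks])
    deadlines = sorted({k * T for _, T in tasks
                        for k in range(1, hyper // T + 1)})
    return all(sum((t // T) * C for C, T in tasks) <= t for t in deadlines)

# e.g. utilization 1/4 + 2/6 + 3/12 ≈ 0.83 → schedulable under EDF
ok = edf_schedulable([(1, 4), (2, 6), (3, 12)])
```

    H2RTS layers a static, clock-driven table on top of dynamic scheduling, so its published tests are more involved than this baseline; the sketch only shows the processor-demand idea itself.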

  9. PSA: A program to streamline orbit determination for launch support operations

    NASA Technical Reports Server (NTRS)

    Legerton, V. N.; Mottinger, N. A.

    1988-01-01

    An interactive, menu-driven computer program was written to streamline the orbit determination process during the critical launch support phase of a mission. Residing on a virtual-memory minicomputer, the program retains in core the quantities needed to obtain a least-squares estimate of the spacecraft trajectory, with interactive displays to assist rapid radio metric data evaluation. Menu-driven displays allow real-time development of filter and data strategies. Graphical and tabular displays can be sent to a laser printer for analysis without exiting the program. Products generated by this program feed back into the main orbit determination program to further refine the trajectory estimate. The final estimate provides a spacecraft ephemeris that is transmitted to the mission control center and used by the Deep Space Network for antenna pointing and frequency predict generation. The development and implementation process for this program differed from that used for most other navigation software in that users could check important operating features during development and have changes made as needed.

  10. METRICS DEVELOPMENT FOR THE QUALIS OF SOFTWARE TECHNICAL PRODUCTION.

    PubMed

    Scarpi, Marinho Jorge

    2015-01-01

    To recommend metrics for qualifying software production and to propose guidelines on this issue for the CAPES quadrennial evaluation of the Post-Graduation Programs of Medicine III. Quality features of the development process, product attributes, and software use, as determined by the Brazilian Association of Technical Standards (ABNT), the International Organization for Standardization (ISO), and the International Electrotechnical Commission (IEC), were identified as important from the perspective of users related to the CAPES Medicine III Area, supporting a proposed metric for the quadrennial evaluation of Medicine III. The user's perception of in-use software quality results from the effectiveness, productivity, security, and satisfaction the software provides, which originate in its characteristics of functionality, reliability, usability, efficiency, maintainability, and portability (quality-in-use metrics). This perception depends on the specific use scenario. The software metrics should be included in the intellectual production of the program, with system-behavior measurements obtained from user performance evaluations scored as the sum of favorable-response points for the six quality-in-use metrics (27 sub-items, 0 to 2 points each) plus the quality-perception proof (four items, 0 to 10 points each). Scores will be rated very good (VG), 85 to 94 points; good (G), 75 to 84 points; regular (R), 65 to 74 points; weak (W), 55 to 64 points; poor (P), below 55 points.
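
    The point scheme above maps directly onto a small scoring routine (illustrative only; the item counts, weights, and rating bands follow the text):

```python
def qualis_software_score(metric_items, perception_items):
    """Total the proposed software metric: 27 quality-in-use sub-items
    scored 0-2 each (max 54) plus 4 quality-perception items scored
    0-10 each (max 40), then map the total to the rating bands."""
    assert len(metric_items) == 27 and all(0 <= s <= 2 for s in metric_items)
    assert len(perception_items) == 4 and all(0 <= s <= 10 for s in perception_items)
    total = sum(metric_items) + sum(perception_items)
    if total >= 85:
        grade = "very good"   # VG: 85-94
    elif total >= 75:
        grade = "good"        # G: 75-84
    elif total >= 65:
        grade = "regular"     # R: 65-74
    elif total >= 55:
        grade = "weak"        # W: 55-64
    else:
        grade = "poor"        # P: < 55
    return total, grade
```

    Note that the maximum attainable score is 27 × 2 + 4 × 10 = 94, which is why the "very good" band tops out at 94 points.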

  11. Economic Metrics for Commercial Reusable Space Transportation Systems

    NASA Technical Reports Server (NTRS)

    Shaw, Eric J.; Hamaker, Joseph (Technical Monitor)

    2000-01-01

    The success of any effort depends upon the effective initial definition of its purpose, in terms of the needs to be satisfied and the goals to be fulfilled. If the desired product is "A System" that is well-characterized, these high-level need and goal statements can be transformed into system requirements by traditional systems engineering techniques. The satisfaction of well-designed requirements can be tracked by fairly straightforward cost, schedule, and technical performance metrics. Unfortunately, some types of efforts, including those that NASA terms "Programs," tend to resist application of traditional systems engineering practices. In the NASA hierarchy of efforts, a "Program" is often an ongoing effort with broad, high-level goals and objectives. A NASA "project" is a finite effort, in terms of budget and schedule, that usually produces or involves one System. Programs usually contain more than one project and thus more than one System. Special care must be taken in the formulation of NASA Programs and their projects, to ensure that lower-level project requirements are traceable to top-level Program goals, feasible with the given cost and schedule constraints, and measurable against top-level goals. NASA Programs and projects are tasked to identify the advancement of technology as an explicit goal, which introduces more complicating factors. The justification for funding of technology development may be based on the technology's applicability to more than one System, Systems outside that Program or even external to NASA. Application of systems engineering to broad-based technology development, leading to effective measurement of the benefits, can be valid, but it requires that potential beneficiary Systems be organized into a hierarchical structure, creating a "system of Systems." 
In addition, these Systems evolve with the successful application of the technology, which creates the necessity for evolution of the benefit metrics to reflect the changing baseline. Still, economic metrics for technology development in these Programs and projects remain fairly straightforward, being based on reductions in acquisition and operating costs of the Systems. One of the most challenging requirements that NASA levies on its Programs is to plan for the commercialization of the developed technology. Some NASA Programs are created for the express purpose of developing technology for a particular industrial sector, such as aviation or space transportation, in financial partnership with that sector. With industrial investment, another set of goals, constraints and expectations is levied on the technology program. Economic benefit metrics then expand beyond cost and cost savings to include the marketability, profit, and investment return requirements of the private sector. Commercial investment criteria include low risk, potential for high return, and strategic alignment with existing product lines. These corporate criteria derive from top-level strategic plans and investment goals, which rank high among the most proprietary types of information in any business. As a result, top-level economic goals and objectives that industry partners bring to cooperative programs cannot usually be brought into technical processes, such as systems engineering, that are worked collaboratively between Industry and Government. In spite of these handicaps, the top-level economic goals and objectives of a joint technology program can be crafted in such a way that they accurately reflect the fiscal benefits from both Industry and Government perspectives. Valid economic metrics can then be designed that can track progress toward these goals and objectives, while maintaining the confidentiality necessary for the competitive process.

  12. Predictive value of background experiences and visual spatial ability testing on laparoscopic baseline performance among residents entering postgraduate surgical training.

    PubMed

    Louridas, Marisa; Quinn, Lauren E; Grantcharov, Teodor P

    2016-03-01

    Emerging evidence suggests that despite dedicated practice, not all surgical trainees have the ability to reach technical competency in minimally invasive techniques. While selecting residents that have the ability to reach technical competence is important, evidence to guide the incorporation of technical ability into selection processes is limited. Therefore, the purpose of the present study was to evaluate whether background experiences and 2D-3D visual spatial test results are predictive of baseline laparoscopic skill for the novice surgical trainee. First-year residents were studied. Demographic data and background surgical and non-surgical experiences were obtained using a questionnaire. Visual spatial ability was evaluated using the PicSOr, cube comparison (CC) and card rotation (CR) tests. Technical skill was assessed using the camera navigation (LCN) task and laparoscopic circle cut (LCC) task. Resident performance on these technical tasks was compared and correlated with the questionnaire and visual spatial findings. Previous experience in observing laparoscopic procedures was associated with significantly better LCN performance, and experience in navigating the laparoscopic camera was associated with significantly better LCC task results. Residents who scored higher on the CC test demonstrated more accurate LCN path length (r_s(PL) = -0.36, p = 0.03) and angle path (r_s(AP) = -0.426, p = 0.01) scores when completing the LCN task. No other significant correlations were found between the visual spatial tests (PicSOr, CC or CR) and LCC performance. While identifying selection tests for incoming surgical trainees that predict technical skill performance is appealing, the surrogate markers evaluated correlate with specific metrics of surgical performance related to a single task but do not appear to reliably predict technical performance of different laparoscopic tasks. 
Predicting the acquisition of technical skills will require the development of a series of evidence-based tests that measure a number of innate abilities as well as their inherent interactions.

  13. An architecture for a continuous, user-driven, and data-driven application of clinical guidelines and its evaluation.

    PubMed

    Shalom, Erez; Shahar, Yuval; Lunenfeld, Eitan

    2016-02-01

    Design, implement, and evaluate a new architecture for realistic continuous guideline (GL)-based decision support, based on a series of requirements that we have identified, such as support for continuous care, for multiple task types, and for data-driven and user-driven modes. We designed and implemented a new continuous GL-based support architecture, PICARD, which accesses a temporal reasoning engine, and provides several different types of application interfaces. We present the new architecture in detail in the current paper. To evaluate the architecture, we first performed a technical evaluation of the PICARD architecture, using 19 simulated scenarios in the preeclampsia/toxemia domain. We then performed a functional evaluation with the help of two domain experts, by generating patient records that simulate 60 decision points from six clinical guideline-based scenarios, lasting from two days to four weeks. Finally, 36 clinicians made manual decisions in half of the scenarios, and had access to the automated GL-based support in the other half. The measures used in all three experiments were correctness and completeness of the decisions relative to the GL. Mean correctness and completeness in the technical evaluation were 1±0.0 and 0.96±0.03 respectively. The functional evaluation produced only several minor comments from the two experts, mostly regarding the output's style; otherwise the system's recommendations were validated. In the clinically oriented evaluation, the 36 clinicians applied manually approximately 41% of the GL's recommended actions. Completeness increased to approximately 93% when using PICARD. Manual correctness was approximately 94.5%, and remained similar when using PICARD; but while 68% of the manual decisions included correct but redundant actions, only 3% of the actions included in decisions made when using PICARD were redundant. 
The PICARD architecture is technically feasible and is functionally valid, and addresses the realistic continuous GL-based application requirements that we have defined; in particular, the requirement for care over significant time frames. The use of the PICARD architecture in the domain we examined resulted in enhanced completeness and in reduction of redundancies, and is potentially beneficial for general GL-based management of chronic patients. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Genetic Programming for Automatic Hydrological Modelling

    NASA Astrophysics Data System (ADS)

    Chadalawada, Jayashree; Babovic, Vladan

    2017-04-01

    One of the recent challenges for the hydrologic research community is the need for the development of coupled systems that involve the integration of hydrologic, atmospheric and socio-economic relationships. This poses a requirement for novel modelling frameworks that can accurately represent complex systems, given the limited understanding of underlying processes, increasing volume of data and high levels of uncertainty. Each of the existing hydrological models varies in terms of conceptualization and process representation and is best suited to capture the environmental dynamics of a particular hydrological system. Data-driven approaches can be used in the integration of alternative process hypotheses in order to achieve a unified theory at catchment scale. The key steps in the implementation of an integrated modelling framework that is informed by prior understanding and data include: choice of the technique for the induction of knowledge from data; identification of alternative structural hypotheses; definition of rules and constraints for meaningful, intelligent combination of model component hypotheses; and definition of evaluation metrics. This study aims at defining a Genetic Programming based modelling framework that tests different conceptual model constructs against a wide range of objective functions and evolves accurate and parsimonious models that capture dominant hydrological processes at catchment scale. In this paper, GP initializes the evolutionary process using the modelling decisions inspired from the Superflex framework [Fenicia et al., 2011] and automatically combines them into model structures that are scrutinized against observed data using statistical, hydrological and flow duration curve based performance metrics. The collaboration between data-driven and physical, conceptual modelling paradigms improves the ability to model and manage hydrologic systems. Fenicia, F., D. Kavetski, and H. H. Savenije (2011), Elements of a flexible approach for conceptual hydrological modeling: 1. Motivation and theoretical development, Water Resources Research, 47(11).

  15. AnaBench: a Web/CORBA-based workbench for biomolecular sequence analysis

    PubMed Central

    Badidi, Elarbi; De Sousa, Cristina; Lang, B Franz; Burger, Gertraud

    2003-01-01

    Background Sequence data analyses such as gene identification, structure modeling or phylogenetic tree inference involve a variety of bioinformatics software tools. Due to the heterogeneity of bioinformatics tools in usage and data requirements, scientists spend much effort on technical issues including data format, storage and management of input and output, and memorization of numerous parameters and multi-step analysis procedures. Results In this paper, we present the design and implementation of AnaBench, an interactive, Web-based bioinformatics Analysis workBench allowing streamlined data analysis. Our philosophy was to minimize the technical effort not only for the scientist who uses this environment to analyze data, but also for the administrator who manages and maintains the workbench. With new bioinformatics tools published daily, AnaBench permits easy incorporation of additional tools. This flexibility is achieved by employing a three-tier distributed architecture and recent technologies including CORBA middleware, Java, JDBC, and JSP. A CORBA server permits transparent access to a workbench management database, which stores information about the users, their data, as well as the description of all bioinformatics applications that can be launched from the workbench. Conclusion AnaBench is an efficient and intuitive interactive bioinformatics environment, which offers scientists application-driven, data-driven and protocol-driven analysis approaches. The prototype of AnaBench, managed by a team at the Université de Montréal, is accessible on-line at: . Please contact the authors for details about setting up a local-network AnaBench site elsewhere. PMID:14678565

  16. The Widest Practicable Dissemination: The NASA Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Gottlich, Gretchen L.; Bianco, David J.; Binkley, Robert L.; Kellogg, Yvonne D.; Paulson, Sharon S.; Beaumont, Chris J.; Schmunk, Robert B.; Kurtz, Michael J.; Accomazzi, Alberto

    1995-01-01

    The National Aeronautics and Space Act of 1958 established NASA and charged it to "provide for the widest practicable and appropriate dissemination of information concerning [...] its activities and the results thereof." The search for innovative methods to distribute NASA's information led a grass-roots team to create the NASA Technical Report Server (NTRS), which uses the World Wide Web and other popular Internet-based information systems as search engines. The NTRS is an inter-center effort which provides uniform access to various distributed publication servers residing on the Internet. Users have immediate desktop access to technical publications from NASA centers and institutes. The NTRS comprises several units, some constructed especially for inclusion in NTRS, and others that are existing NASA publication services that NTRS reuses. This paper presents the NTRS architecture, usage metrics, and the lessons learned while implementing and maintaining the services over the initial 6-month period. The NTRS is largely constructed with freely available software running on existing hardware. NTRS builds upon existing hardware and software, and the resulting additional exposure for the body of literature contained will allow NASA to ensure that its institutional knowledge base will continue to receive the widest practicable and appropriate dissemination.

  18. Apparently low reproducibility of true differential expression discoveries in microarray studies.

    PubMed

    Zhang, Min; Yao, Chen; Guo, Zheng; Zou, Jinfeng; Zhang, Lin; Xiao, Hui; Wang, Dong; Yang, Da; Gong, Xue; Zhu, Jing; Li, Yanhui; Li, Xia

    2008-09-15

    Differentially expressed gene (DEG) lists detected from different microarray studies for the same disease are often highly inconsistent. Even in technical replicate tests using identical samples, DEG detection still shows very low reproducibility. It is often believed that current small microarray studies will largely introduce false discoveries. Based on a statistical model, we show that even in technical replicate tests using identical samples, it is highly likely that the selected DEG lists will be very inconsistent in the presence of small measurement variations. Therefore, the apparently low reproducibility of DEG detection from current technical replicate tests does not indicate low quality of microarray technology. We also demonstrate that heterogeneous biological variations existing in real cancer data will further reduce the overall reproducibility of DEG detection. Nevertheless, in small subsamples from both simulated and real data, the actual false discovery rate (FDR) for each DEG list tends to be low, suggesting that each separately determined list may comprise mostly true DEGs. Rather than simply counting the overlaps of the discovery lists from different studies for a complex disease, novel metrics are needed for evaluating the reproducibility of discoveries characterized with correlated molecular changes. Supplementary information: Supplementary data are available at Bioinformatics online.
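    The naive overlap count that the abstract argues against can be made concrete with a short sketch. The function name and example gene identifiers are illustrative; this is the simple list-overlap measure whose limitations motivate the paper, not one of the correlation-aware metrics it calls for.

    ```python
    def pog(list_a, list_b):
        """Percentage of overlapping genes: the fraction of genes in list_a
        that also appear in list_b. A naive reproducibility measure: two DEG
        lists can each consist largely of true positives (low actual FDR)
        yet still show a low overlap score."""
        if not list_a:
            return 0.0
        return len(set(list_a) & set(list_b)) / len(list_a)
    ```

    For example, two four-gene lists sharing two genes score 0.5 regardless of whether the non-shared genes are true discoveries.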

  19. An Approach to Formalizing Ontology Driven Semantic Integration: Concepts, Dimensions and Framework

    ERIC Educational Resources Information Center

    Gao, Wenlong

    2012-01-01

    The ontology approach has been accepted as a very promising approach to semantic integration today. However, because of the diversity of focuses and its various connections to other research domains, the core concepts, theoretical and technical approaches, and research areas of this domain still remain unclear. Such ambiguity makes it difficult to…

  20. At the Altar of Educational Efficiency: Performativity and the Role of the Teacher

    ERIC Educational Resources Information Center

    Hennessy, Jennifer; McNamara, Patricia Mannix

    2013-01-01

    This paper critiques the impact of neo-liberalism on postprimary education, and in particular on the teaching of English. The paper explores the implications of performativity and exam-driven schooling on the teaching and learning of poetry. The authors argue that meeting the demands of an education system dominated by technicism and…

  1. DoD CIO Annual Information Assurance Report

    DTIC Science & Technology

    2000-04-01

    cyber-warfare group, or a cyber-terrorist driven by ideology, religion, or money. The new warfighter is the cyber-warrior with technical and non-traditional skills. Complicating this new dimension is the need for the Department of Defense (DoD) to change its defensive strategy, because of cost and complexity issues, from the risk-avoidance approach to the risk management

  2. Watershed Central: Harnessing a social media tool to organize local technical knowledge and find the right watershed resources for your watershed

    EPA Science Inventory

    Watershed Central was developed to be a bridge between sharing and searching for information relating to watershed issues. This is dependent upon active user support through additions and updates to the Watershed Central Wiki. Since the wiki is user driven, the content and applic...

  3. Studies Of Vibrations In Gearboxes

    NASA Technical Reports Server (NTRS)

    Choy, Fred K.; Ruan, Yeefeng F.; Tu, Yu K.; Zakrajsek, James J.; Oswald, Fred B.; Coy, John J.; Townsend, Dennis P.

    1994-01-01

    Three NASA technical memorandums summarize studies of vibrations in gearboxes, directed toward understanding and reducing gearbox noise caused by coupling of vibrations from meshing gears, through gear shafts and their bearings, to surfaces of gearbox housings. Practical systems in which understanding and reducing gearbox noise would be beneficial include helicopter, car, and truck transmissions; stationary geared systems; and gear-driven actuator systems.

  4. Racism? Administrative and Community Perspectives in Data-Driven Decision Making: Systemic Perspectives versus Technical-Rational Perspectives

    ERIC Educational Resources Information Center

    Khalifa, Muhammad A.; Jennings, Michael E.; Briscoe, Felecia; Oleszweski, Ashley M.; Abdi, Nimo

    2014-01-01

    This case study describes tensions that became apparent between community members and school administrators after a proposal to close a historically African American public high school in a large urban Southwestern city. When members of the city's longstanding African American community responded with outrage, the school district's senior…

  5. CTE Policy Past, Present, and Future: Driving Forces behind the Evolution of Federal Priorities

    ERIC Educational Resources Information Center

    Imperatore, Catherine; Hyslop, Alisha

    2017-01-01

    Federal legislation has driven and been receptive to the vision of a rigorous, relevant career and technical education (CTE) system integrated with academics and aligned across middle school, secondary school, and postsecondary education. This article uses a social policy analysis approach to trace the history of federal CTE policy throughout the…

  6. Resisting the Lure of Technology-Driven Design: Pedagogical Approaches to Visual Communication

    ERIC Educational Resources Information Center

    Northcut, Kathryn M.; Brumberger, Eva R.

    2010-01-01

    Technical communicators are expected to work extensively with visual texts in workplaces. Fortunately, most academic curricula include courses in which the skills necessary for such tasks are introduced and sometimes developed in depth. We identify a tension between a focus on technological skill vs. a focus on principles and theory, arguing that…

  7. Application of Universal Design for Learning in Corporate Technical Training Design: A Quantitative Study

    ERIC Educational Resources Information Center

    Irbe, Aina G.

    2016-01-01

    With the rise of a globalized economy and an overall increase in online learning, corporate organizations have increased training through the online environment at a rapid pace. Providing effective training the employee can immediately apply to the job has driven a need to improve online training programs. Numerous studies have identified that the…

  8. Cognitive skills training in digital era: A paradigm shift in surgical education using the TaTME model.

    PubMed

    Knol, Joep; Keller, Deborah S

    2018-04-30

    Surgical competence is a complex, multifactorial process, requiring ample time and training. Optimal training is based on acquiring knowledge and psychomotor and cognitive skills. Practicing surgical skills is one of the most crucial tasks for both the novice surgeon learning new procedures and surgeons already in practice learning new techniques. Focus is placed on teaching traditional technical skills, but the importance of cognitive skills should not be underestimated. Cognitive skills allow recognizing environmental cues to improve technical performance, including situational awareness, mental readiness, risk assessment, anticipating problems, decision-making, adaptation, and flexibility, and may also accelerate the trainee's understanding of a procedure, formalize the steps being practiced, and reduce the overall training time to become technically proficient. The introduction and implementation of the transanal total mesorectal excision (TaTME) into practice may be the best demonstration of this new model of teaching and training, including pre-training, course attendance, and post-course guidance on technical and cognitive skills. To date, the TaTME framework has been the ideal model for structured training to ensure safe implementation. Further development of metrics to grade successful learning and assessment of long-term outcomes with the new pathway will confirm the success of this training model. Copyright © 2018 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. All rights reserved.

  9. SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics

    PubMed Central

    Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis

    2015-01-01

    Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most “useful” or “interesting”. The two major obstacles in recommending interesting visualizations are (a) scale: evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility: identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics. PMID:26779379
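    SeeDB's deviation-based utility can be sketched as follows. This is a hedged reading of the abstract, not SeeDB's actual implementation: the aggregate view of the target subset and of the reference dataset are normalized into probability distributions and scored by their distance (L1 distance here; SeeDB's distance function may differ), with higher-scoring visualizations recommended first.

    ```python
    def deviation_utility(target_agg, reference_agg):
        """Score a candidate visualization by how much the target subset's
        aggregate distribution deviates from the reference's.

        target_agg, reference_agg: dicts mapping a group-by key to an
        aggregate value (e.g. counts). Both are normalized to sum to 1,
        then compared with L1 distance; range is [0, 2]."""
        keys = set(target_agg) | set(reference_agg)
        t_total = sum(target_agg.values()) or 1.0
        r_total = sum(reference_agg.values()) or 1.0
        return sum(
            abs(target_agg.get(k, 0) / t_total - reference_agg.get(k, 0) / r_total)
            for k in keys
        )
    ```

    Identical distributions score 0 (uninteresting); fully disjoint ones score 2 (maximally deviating).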

  11. Fronto-Parietal Subnetworks Flexibility Compensates For Cognitive Decline Due To Mental Fatigue.

    PubMed

    Taya, Fumihiko; Dimitriadis, Stavros I; Dragomir, Andrei; Lim, Julian; Sun, Yu; Wong, Kian Foong; Thakor, Nitish V; Bezerianos, Anastasios

    2018-04-24

    Community structure analysis revealed that fronto-parietal subnetworks compensate for cognitive decline due to mental fatigue. Here, we investigate changes in the topology of subnetworks of resting-state fMRI networks due to mental fatigue induced by prolonged performance of a cognitively demanding task, and their associations with cognitive decline. As it is well established that brain networks have modular organization, community structure analyses can provide valuable information about mesoscale network organization and serve as a bridge between standard fMRI approaches and brain connectomics that quantify the topology of whole brain networks. We developed inter- and intramodule network metrics to quantify topological characteristics of subnetworks, based on our hypothesis that mental fatigue would impact the functional relationships of subnetworks. Functional networks were constructed with wavelet correlation and a data-driven thresholding scheme based on orthogonal minimum spanning trees, which allowed detection of communities with weak connections. A change from pre- to post-task runs was found for the intermodule density between the frontal and the temporal subnetworks. Seven inter- or intramodule network metrics, mostly at the frontal or the parietal subnetworks, showed significant predictive power for individual cognitive decline, while the network metrics for the whole network were less effective in the predictions. Our results suggest that the control-type fronto-parietal networks have a flexible topological architecture to compensate for declining cognitive ability due to mental fatigue. This community structure analysis provides valuable insight into connectivity dynamics under different cognitive states including mental fatigue. © 2018 Wiley Periodicals, Inc.
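    The intermodule density reported between the frontal and temporal subnetworks can be sketched for a binary graph. This uses the standard definition (realized edges between two disjoint modules over all possible ones); the paper's weighted, wavelet-correlation-based variant would substitute edge weights for edge counts.

    ```python
    def intermodule_density(edges, module_a, module_b):
        """Fraction of possible edges between two disjoint node sets that exist.

        edges: iterable of undirected (u, v) pairs.
        module_a, module_b: disjoint collections of node labels."""
        module_a, module_b = set(module_a), set(module_b)
        assert not (module_a & module_b), "modules must be disjoint"
        possible = len(module_a) * len(module_b)
        if possible == 0:
            return 0.0
        cross = sum(
            1
            for u, v in edges
            if (u in module_a and v in module_b) or (u in module_b and v in module_a)
        )
        return cross / possible
    ```

    A pre- versus post-task change in this quantity for a fixed pair of modules is the kind of effect the study reports.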

  12. Robotics-based synthesis of human motion.

    PubMed

    Khatib, O; Demircan, E; De Sapio, V; Sentis, L; Besier, T; Delp, S

    2009-01-01

    The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization has been introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.

  13. Linear associations between clinically assessed upper motor neuron disease and diffusion tensor imaging metrics in amyotrophic lateral sclerosis.

    PubMed

    Woo, John H; Wang, Sumei; Melhem, Elias R; Gee, James C; Cucchiara, Andrew; McCluskey, Leo; Elman, Lauren

    2014-01-01

    To assess the relationship between clinically assessed Upper Motor Neuron (UMN) disease in Amyotrophic Lateral Sclerosis (ALS) and local diffusion alterations measured in the brain corticospinal tract (CST) by a tractography-driven, template-space region-of-interest (ROI) analysis of Diffusion Tensor Imaging (DTI). This cross-sectional study included 34 patients with ALS, on whom DTI was performed. Clinical measures were obtained separately, including the Penn UMN Score, a summary metric based upon standard clinical methods. After normalizing all DTI data to a population-specific template, tractography was performed to determine an ROI outlining the CST, in which average Mean Diffusivity (MD) and Fractional Anisotropy (FA) were estimated. Linear regression analyses were used to investigate associations of the DTI metrics (MD, FA) with clinical measures (Penn UMN Score, ALSFRS-R, duration of disease), with age, sex, handedness, and El Escorial category as covariates. For MD, the regression model was significant (p = 0.02), and the only significant predictors were the Penn UMN Score (p = 0.005) and age (p = 0.03). The FA regression model was also significant (p = 0.02); the only significant predictor was the Penn UMN Score (p = 0.003). Measured by the template-space ROI method, both MD and FA were linearly associated with the Penn UMN Score, supporting the hypothesis that DTI alterations reflect UMN pathology as assessed by clinical examination.
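    The analysis described is ordinary multiple linear regression of a DTI metric on clinical predictors and covariates; a minimal sketch on synthetic data (the coefficients, ranges, and noise level below are invented for illustration and are not the study's values) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 34                                    # cohort size reported in the study
umn = rng.uniform(0, 32, n)               # hypothetical Penn UMN Scores
age = rng.uniform(40, 75, n)              # hypothetical ages
# Synthetic MD values (arbitrary units): linear in UMN score and age, plus noise.
md = 0.70 + 0.004 * umn + 0.001 * age + rng.normal(0.0, 0.01, n)

X = np.column_stack([np.ones(n), umn, age])     # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, md, rcond=None)   # ordinary least squares fit
print(np.round(beta, 4))                        # [intercept, UMN slope, age slope]
```

    The fitted slopes recover the generating coefficients to within sampling noise; the study additionally includes categorical covariates (sex, handedness, El Escorial category), which would enter the design matrix as dummy-coded columns.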

  14. EarthCube Activities: Community Engagement Advancing Geoscience Research

    NASA Astrophysics Data System (ADS)

    Kinkade, D.

    2015-12-01

    Our ability to advance scientific research in order to better understand complex Earth systems, address emerging geoscience problems, and meet societal challenges is increasingly dependent upon the concepts of Open Science and Open Data. Although these terms are relatively new to the world of research, in this context they may be described as transparency in the scientific process. This includes the discoverability, public accessibility, and reusability of scientific data, as well as the accessibility and transparency of scientific communication (www.openscience.org). Scientists and the US government alike are realizing the critical need for easy discovery of and access to multidisciplinary data to advance research in the geosciences. The NSF-supported EarthCube project was created to meet this need. EarthCube is developing a community-driven common cyberinfrastructure for accessing, integrating, analyzing, sharing, and visualizing all forms of data and related resources through advanced technological and computational capabilities. Engaging the geoscience community in EarthCube's development is crucial to its success, and EarthCube is providing several opportunities for geoscience involvement. This presentation will provide an overview of the activities EarthCube is employing to engage the community in the development process, from governance development and strategic planning to technical needs gathering. Particular focus will be given to the collection of science-driven use cases as a means of capturing scientific and technical requirements. Such activities inform the development of key technical and computational components that collectively will form a cyberinfrastructure to meet the research needs of the geoscience community.

  15. Six sigma tools for a patient safety-oriented, quality-checklist driven radiation medicine department.

    PubMed

    Kapur, Ajay; Potters, Louis

    2012-01-01

    The purpose of this work was to develop and implement six sigma practices toward the enhancement of patient safety in an electronic, quality checklist-driven, multicenter, paperless radiation medicine department. A quality checklist process map (QPM), stratified into stages from consultation through treatment completion, was incorporated into an oncology information systems platform. A cross-functional quality management team conducted quality-function-deployment and define-measure-analyze-improve-control (DMAIC) six sigma exercises with a focus on patient safety. QPM procedures were Pareto-sorted in order of decreasing patient safety risk with failure mode and effects analysis (FMEA). Quantitative metrics were established for a grouped set of highest-risk procedures, including procedural delays, the associated standard deviations, and six sigma Z scores. Baseline performance of the QPM was established over the previous year of usage. Data-driven analysis led to simplification, standardization, and refinement of the QPM, with goals for standard deviation and slip-day reduction and Z-score enhancement. A no-fly policy (NFP) for patient safety was introduced at the improve-control DMAIC phase, with a process map interlock imposed on treatment initiation in the event of FMEA-identified high-risk tasks being delayed or not completed. The NFP was introduced in a pilot phase with specific stopping rules, using the same metrics for performance assessments. A custom root-cause analysis database was deployed to monitor patient safety events. Relative to the baseline period, average slip days and standard deviations for the risk-enhanced QPM procedures improved by over threefold factors in the NFP period. The Z scores improved by approximately 20%. A trend toward proactive delays instead of reactive hard stops was observed, with no adverse effects of the NFP. The number of computed potential no-fly delays per month dropped from 60 to 20 over a total of 520 cases. The fraction of computed potential no-fly cases that were delayed in NFP compliance rose from 28% to 45%. Proactive delays rose to 80% of all delayed cases. For potential no-fly cases, event reporting rose from 18% to 50%, while for actually delayed cases, event reporting rose from 65% to 100%. With complex technologies, resource-constrained staff, and pressure to hasten treatment initiation, the use of six sigma-driven process interlocks may mitigate potential patient safety risks, as demonstrated in this study. Copyright © 2012 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
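    The abstract does not define its Z score; a common short-term form measures how many sample standard deviations separate the mean delay from an upper specification limit, so both a lower mean and a tighter spread raise the score. A sketch with hypothetical slip-day data (the samples and the 7-day spec limit below are invented):

```python
import numpy as np

def sigma_z(delays, upper_spec):
    """Short-term six sigma Z score for slip days: how many sample standard
    deviations separate the mean delay from the upper specification limit."""
    d = np.asarray(delays, dtype=float)
    return (upper_spec - d.mean()) / d.std(ddof=1)

# Hypothetical slip-day samples for one QPM procedure, 7-day spec limit.
baseline = [5.0, 6.5, 4.0, 7.5, 6.0, 5.5, 8.0, 4.5]   # before refinement
improved = [2.0, 1.5, 2.5, 1.0, 2.0, 3.0, 1.5, 2.5]   # after refinement
print(round(sigma_z(baseline, 7.0), 2), round(sigma_z(improved, 7.0), 2))
```

    Reducing both the mean slip days and their standard deviation, as reported in the study, drives this score upward.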

  16. Automatic Keyword Extraction from Individual Documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rose, Stuart J.; Engel, David W.; Cramer, Nicholas O.

    2010-05-03

    This paper introduces a novel and domain-independent method for automatically extracting keywords, as sequences of one or more words, from individual documents. We describe the method’s configuration parameters and algorithm, and present an evaluation on a benchmark corpus of technical abstracts. We also present a method for generating lists of stop words for specific corpora and domains, and evaluate its ability to improve keyword extraction on the benchmark corpus. Finally, we apply our method of automatic keyword extraction to a corpus of news articles and define metrics for characterizing the exclusivity, essentiality, and generality of extracted keywords within a corpus.
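    The core of the method (RAKE-style candidate scoring) can be sketched briefly: split the text at stop words and punctuation into candidate phrases, score each word by degree/frequency over those phrases, and score a phrase as the sum of its word scores. The small stop-word list below is illustrative only; the paper generates corpus-specific lists:

```python
import re
from collections import defaultdict

# Illustrative stop-word list; the paper evaluates generated, corpus-specific lists.
STOPWORDS = {"a", "an", "and", "are", "as", "at", "be", "by", "for", "from",
             "in", "is", "it", "of", "on", "or", "over", "that", "the", "to",
             "with"}

def candidate_phrases(text):
    """Split at punctuation and stop words into candidate keyword phrases."""
    phrases = []
    for fragment in re.split(r"[^\w\s]+", text.lower()):
        current = []
        for word in fragment.split():
            if word not in STOPWORDS:
                current.append(word)
            elif current:
                phrases.append(tuple(current))
                current = []
        if current:
            phrases.append(tuple(current))
    return phrases

def rake(text):
    """Score each word by degree/frequency; a phrase scores the sum of
    its member-word scores."""
    phrases = candidate_phrases(text)
    freq, degree = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for word in phrase:
            freq[word] += 1
            degree[word] += len(phrase)   # degree counts within-phrase co-occurrence
    scores = {p: sum(degree[w] / freq[w] for w in p) for p in phrases}
    return sorted(scores.items(), key=lambda kv: -kv[1])

text = ("Compatibility of systems of linear constraints "
        "over the set of natural numbers")
for phrase, score in rake(text)[:2]:
    print(" ".join(phrase), score)
```

    Multi-word phrases such as "linear constraints" outrank single frequent words, which is the property that makes the degree/frequency ratio useful for technical keyword extraction.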

  17. Multi-Threaded DNA Tag/Anti-Tag Library Generator for Multi-Core Platforms

    DTIC Science & Technology

    2009-05-01

    AFRL-RI-RS-TR-2009-131, Final Technical Report, May 2009 (dates covered: Jun 08 – Feb 09). Excerpt: "... (base pair) Watson-Crick strand pairs that bind perfectly within pairs, but poorly across pairs. A variety of DNA strand hybridization metrics..."

  18. Automated interferometric synthetic aperture microscopy and computational adaptive optics for improved optical coherence tomography.

    PubMed

    Xu, Yang; Liu, Yuan-Zhi; Boppart, Stephen A; Carney, P Scott

    2016-03-10

    In this paper, we introduce an algorithm framework for the automation of interferometric synthetic aperture microscopy (ISAM). Under this framework, common processing steps such as dispersion correction, Fourier-domain resampling, and computational adaptive optics aberration correction are carried out as metrics-assisted parameter search problems. We further present the results of this algorithm applied to phantom and biological tissue samples and compare them with manually adjusted results. With the automated algorithm, near-optimal ISAM reconstruction can be achieved without manual adjustment. At the same time, the technical barrier to ISAM imaging for nonexperts is significantly lowered.
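    A metrics-assisted parameter search of the kind described can be sketched generically: choose an image-quality metric and search a correction parameter for the value that maximizes it. The toy 1-D example below (a quadratic spectral phase as a stand-in for dispersion, and a normalized intensity-squared sharpness metric) illustrates the idea only and is not the paper's algorithm:

```python
import numpy as np

def sharpness(img):
    """Normalized intensity-squared metric: larger when energy is
    concentrated in fewer pixels (one plausible search metric)."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    return (p ** 2).sum()

def reconstruct(spectrum, alpha, freq):
    """Undo a quadratic spectral phase (toy stand-in for dispersion correction)."""
    return np.fft.ifft(spectrum * np.exp(1j * alpha * freq ** 2))

# Synthetic 1-D "A-line": two sharp reflectors blurred by a known quadratic phase.
n = 256
freq = np.fft.fftfreq(n)
signal = np.zeros(n)
signal[64], signal[180] = 1.0, 0.7
true_alpha = 40.0
blurred = np.fft.fft(signal) * np.exp(-1j * true_alpha * freq ** 2)

# Metrics-assisted search: keep the parameter value that maximizes the metric.
alphas = np.linspace(0.0, 80.0, 161)      # grid includes the true value 40.0
best = max(alphas, key=lambda a: sharpness(reconstruct(blurred, a, freq)))
print(best)                               # the search recovers the blurring parameter
```

    In practice a coarse grid would be refined by a local optimizer, and the same pattern repeats per processing step (dispersion, resampling, aberration correction) with a suitable metric.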

  19. One-dimensional sections of exotic spacetimes with superconducting circuits

    NASA Astrophysics Data System (ADS)

    Sabín, Carlos

    2018-05-01

    We introduce analogue quantum simulations of 1 + 1 dimensional sections of exotic 3 + 1 dimensional spacetimes, such as the Alcubierre warp-drive spacetime, the Gödel rotating universe, and the Kerr highly rotating black hole metric. Suitable magnetic flux profiles along a SQUID array embedded in a superconducting transmission line generate an effective spatiotemporal dependence in the speed of light, which mimics the corresponding light propagation in a dimensionally reduced exotic spacetime. In each case, we discuss the technical constraints and the links with possible chronology protection mechanisms, and we find the optimal region of parameters for the experimental implementation.
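    The tunable speed of light in such a SQUID-array transmission line follows from the flux-dependent Josephson inductance; schematically (the expressions below are the standard textbook form for a symmetric SQUID, not reproduced from the paper):

```latex
c_{\mathrm{eff}}(x,t) = \frac{1}{\sqrt{L\!\left(\Phi_{\mathrm{ext}}(x,t)\right)\, C}},
\qquad
L_J(\Phi_{\mathrm{ext}}) = \frac{\Phi_0}{4\pi I_c \left|\cos\!\left(\pi \Phi_{\mathrm{ext}}/\Phi_0\right)\right|},
```

    so modulating the external flux profile along the array modulates the effective propagation speed in space and time, which is the handle used to mimic light propagation in the reduced metric.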

  20. A Rain Taxonomy for Degraded Visual Environment Mitigation

    NASA Technical Reports Server (NTRS)

    Gatlin, P. N.; Petersen, W. A.

    2018-01-01

    This Technical Memorandum (TM) provides a description of a rainfall taxonomy that defines the detailed characteristics of naturally occurring rainfall. The taxonomy is based on raindrop size measurements collected around the globe and encompasses several different climate types. Included in this TM are a description of these rainfall observations, an explanation of the methods used to process those data, and the resultant metrics comprising the rain taxonomy database. Each of the categories in the rain taxonomy is characterized by a unique set of raindrop sizes that can be used in simulations of electromagnetic wave propagation through a rain medium.
