Sample records for security metrics model

  1. Information risk and security modeling

    NASA Astrophysics Data System (ADS)

    Zivic, Predrag

    2005-03-01

    This research paper presentation will feature current frameworks for addressing risk and security modeling and metrics. The paper will analyze technical-level risk and security metrics of Common Criteria/ISO15408, Centre for Internet Security guidelines and NSA configuration guidelines, and the metrics used at this level. The IT operational standards view of security metrics, such as GMITS/ISO13335 and ITIL/ITMS, and architectural guidelines such as ISO7498-2 will be explained. Business-process-level standards such as ISO17799, COSO and CobiT will be presented with their control approach to security metrics. At the top level, maturity standards such as SSE-CMM/ISO21827, the NSA Infosec Assessment and CobiT will be explored and reviewed. For each defined level of security metrics the presentation will explore the appropriate usage of these standards and discuss the standards' approaches to conducting risk and security metrics. The research findings demonstrate the need for a common baseline for both risk and security metrics. The paper will show the relation between an attribute-based common baseline and corporate assets and controls for risk and security metrics, and that such an approach spans all of the mentioned standards. The proposed approach's 3D visual presentation and the development of the Information Security Model will be analyzed and postulated. The presentation will clearly demonstrate the benefits of the proposed attribute-based approach and the defined risk and security space for modeling and measuring.

  2. On the Use of Software Metrics as a Predictor of Software Security Problems

    DTIC Science & Technology

    2013-01-01

    models to determine if additional metrics are required to increase the accuracy of the model: non-security SCSA warnings, code churn and size, the ... vulnerabilities reported by testing and those found in the field. Summary of Most Important Results: We evaluated our model on three commercial telecommunications

  3. Measuring Information Security: Guidelines to Build Metrics

    NASA Astrophysics Data System (ADS)

    von Faber, Eberhard

    Measuring information security is a genuine interest of security managers. With metrics they can develop their security organization's visibility and standing within the enterprise or public authority as a whole. Organizations using information technology need to use security metrics. Despite the clear demands and advantages, security metrics are often poorly developed, or ineffective parameters are collected and analysed. This paper describes best practices for the development of security metrics. First, attention is drawn to motivation, showing both requirements and benefits. The main body of this paper lists things which need to be observed (characteristics of metrics), things which can be measured (how measurements can be conducted) and steps for the development and implementation of metrics (procedures and planning). Analysis and communication are also key when using security metrics. Examples are given in order to develop a better understanding. The author aims to resume, continue and develop the discussion about a topic which is, or increasingly will be, a critical success factor for any security manager in a larger organization.

  4. Security Metrics: A Solution in Search of a Problem

    ERIC Educational Resources Information Center

    Rosenblatt, Joel

    2008-01-01

    Computer security is one of the most complicated and challenging fields in technology today. A security metrics program provides a major benefit: looking at the metrics on a regular basis offers early clues to changes in attack patterns or environmental factors that may require changes in security strategy. The term "security metrics"…

  5. Developing a Security Metrics Scorecard for Healthcare Organizations.

    PubMed

    Elrefaey, Heba; Borycki, Elizabeth; Kushniruk, Andrea

    2015-01-01

    In healthcare, information security is a key aspect of protecting a patient's privacy and ensuring systems availability to support patient care. Security managers need to measure the performance of security systems and this can be achieved by using evidence-based metrics. In this paper, we describe the development of an evidence-based security metrics scorecard specific to healthcare organizations. Study participants were asked to comment on the usability and usefulness of a prototype of a security metrics scorecard that was developed based on current research in the area of general security metrics. Study findings revealed that scorecards need to be customized for the healthcare setting in order for the security information to be useful and usable in healthcare organizations. The study findings resulted in the development of a security metrics scorecard that matches the healthcare security experts' information requirements.

  6. Threat driven modeling framework using petri nets for e-learning system.

    PubMed

    Khamparia, Aditya; Pandey, Babita

    2016-01-01

    Vulnerabilities at various levels are the main cause of security risks in e-learning systems. This paper presents a modified threat-driven modeling framework to identify, after risk assessment, the threats that require mitigation and how to mitigate them. Aspect-oriented stochastic Petri nets are used to model those threat mitigations. The paper includes security metrics based on vulnerabilities present in the e-learning system. The Common Vulnerability Scoring System, designed to provide a normalized method for rating vulnerabilities, is used as the basis for the metric definitions and calculations. A case study is also presented that shows the need for, and the feasibility of, using aspect-oriented stochastic Petri net models for threat modeling, which improves the reliability, consistency and robustness of the e-learning system.
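
    The abstract does not reproduce the CVSS equations themselves, so the following is a minimal illustrative sketch of how a CVSS v2 base score can be computed as the normalized vulnerability rating underlying such metrics; the vector values chosen for the example e-learning vulnerability are hypothetical.

```python
# Sketch of the CVSS v2 base equation; the example vulnerability vector is hypothetical.
IMPACT_WEIGHTS = {"none": 0.0, "partial": 0.275, "complete": 0.660}
ACCESS_VECTOR = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
ACCESS_COMPLEXITY = {"high": 0.35, "medium": 0.61, "low": 0.71}
AUTHENTICATION = {"multiple": 0.45, "single": 0.56, "none": 0.704}

def cvss2_base_score(av, ac, au, c, i, a):
    """CVSS v2 base score from the standard base equation."""
    impact = 10.41 * (1 - (1 - IMPACT_WEIGHTS[c])
                        * (1 - IMPACT_WEIGHTS[i])
                        * (1 - IMPACT_WEIGHTS[a]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]
    f_impact = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

# Hypothetical e-learning flaw: remotely exploitable, low complexity, no authentication,
# partial confidentiality and integrity impact, no availability impact -> 6.4
print(cvss2_base_score("network", "low", "none", "partial", "partial", "none"))
```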

  7. Index of cyber integrity

    NASA Astrophysics Data System (ADS)

    Anderson, Gustave

    2014-05-01

    Unfortunately, there is no metric, nor set of metrics, that is both general enough to encompass all possible types of applications yet specific enough to capture application- and attack-specific details. As a result we are left with ad hoc methods for generating evaluations of the security of our systems. Current state-of-the-art methods for evaluating the security of systems include penetration testing and cyber evaluation tests. For these evaluations, security professionals simulate an attack from malicious outsiders and malicious insiders. These evaluations are very productive and are able to discover potential vulnerabilities resulting from improper system configuration, hardware and software flaws, or operational weaknesses. We therefore propose the index of cyber integrity (ICI), which is modeled after the index of biological integrity (IBI) to provide a holistic measure of the health of a system under test in a cyber environment. The ICI provides a broad-based measure through a collection of application- and system-specific metrics. In this paper, following the example of the IBI, we demonstrate how a multi-metric index may be used as a holistic measure of the health of a system under test in a cyber environment.
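
    As a rough illustration of the multi-metric index idea (not the paper's actual metric set), the sketch below normalizes a few hypothetical application- and system-specific measurements against assumed reference ranges and averages them into one composite score, in the spirit of the IBI-style aggregation described above.

```python
# Minimal composite-index sketch in the spirit of the ICI/IBI. The metric names,
# reference ranges, and equal weighting are assumptions for illustration only.

def score_metric(value, lo, hi):
    """Map a raw metric onto [0, 1], where lo..hi is the assumed healthy reference range."""
    if hi == lo:
        return 1.0
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def cyber_integrity_index(observations, reference_ranges, weights=None):
    """Weighted mean of normalized sub-metrics (higher = healthier system under test)."""
    names = list(observations)
    weights = weights or {n: 1.0 for n in names}
    total = sum(weights[n] for n in names)
    return sum(weights[n] * score_metric(observations[n], *reference_ranges[n])
               for n in names) / total

# Hypothetical measurements for a system under test.
obs = {"patch_coverage": 0.82, "auth_success_rate": 0.97, "ids_alert_free_hours": 20.0}
ranges = {"patch_coverage": (0.5, 1.0), "auth_success_rate": (0.9, 1.0),
          "ids_alert_free_hours": (0.0, 24.0)}
print(round(cyber_integrity_index(obs, ranges), 3))
```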

  8. The Price of Uncertainty in Security Games

    NASA Astrophysics Data System (ADS)

    Grossklags, Jens; Johnson, Benjamin; Christin, Nicolas

    In the realm of information security, lack of information about other users' incentives in a network can lead to inefficient security choices and reductions in individuals' payoffs. We propose, contrast and compare three metrics for measuring the price of uncertainty due to the departure from the payoff-optimal security outcomes under complete information. Per the analogy with other efficiency metrics, such as the price of anarchy, we define the price of uncertainty as the maximum discrepancy in expected payoff in a complete information environment versus the payoff in an incomplete information environment. We consider difference, payoff-ratio, and cost-ratio metrics as canonical nontrivial measurements of the price of uncertainty. We conduct an algebraic, numerical, and graphical analysis of these metrics applied to different well-studied security scenarios proposed in prior work (i.e., best shot, weakest-link, and total effort). In these scenarios, we study how a fully rational expert agent could utilize the metrics to decide whether to gather information about the economic incentives of multiple nearsighted and naïve agents. We find substantial differences between the various metrics and evaluate the appropriateness for security choices in networked systems.
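
    A minimal sketch of the three measurement styles named above (difference, payoff-ratio, cost-ratio), computed for a single pair of outcomes; the payoff numbers and the initial-endowment convention used for the cost ratio are assumptions for illustration, and the paper's formal definitions additionally maximize over game parameters.

```python
# Illustrative sketch of the three price-of-uncertainty measurements. In the paper the
# payoffs would come from equilibrium analysis of best-shot, weakest-link, or
# total-effort games; the numbers below are hypothetical.

def price_of_uncertainty(complete_info_payoff, incomplete_info_payoff, initial_endowment):
    diff = complete_info_payoff - incomplete_info_payoff            # difference metric
    payoff_ratio = complete_info_payoff / incomplete_info_payoff    # payoff-ratio metric
    # cost-ratio metric: compare losses measured against an initial endowment (assumption)
    cost_ratio = ((initial_endowment - incomplete_info_payoff)
                  / (initial_endowment - complete_info_payoff))
    return diff, payoff_ratio, cost_ratio

print(price_of_uncertainty(complete_info_payoff=8.0,
                           incomplete_info_payoff=5.0,
                           initial_endowment=10.0))     # (3.0, 1.6, 2.5)
```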

  9. Assessment of Performance Measures for Security of the Maritime Transportation Network, Port Security Metrics : Proposed Measurement of Deterrence Capability

    DOT National Transportation Integrated Search

    2007-01-03

    This report is the third in a series describing the development of performance measures pertaining to the security of the maritime transportation network (port security metrics). The development of measures to guide improvements in maritime security ...

  10. Flexible Energy Scheduling Tool for Integrating Variable Generation | Grid

    Science.gov Websites

    ... commitment, security-constrained economic dispatch, and automatic generation control sub-models. Different sub-model resolutions and operating strategies can be explored. FESTIV produces not only economic metrics but also ...

  11. Analyzing Cases of Resilience Success and Failure - A Research Study

    DTIC Science & Technology

    2012-12-01

    controls [NIST 2012, NIST 2008] ... ISO 27002 and ISO 27004: Guidelines for initiating, implementing, maintaining, and improving information security ... Commission (ISO/IEC). Information technology—Security techniques—Code of practice for information security management (ISO/IEC 27002:2005). ISO/IEC, 2005 ... security management system and controls or groups of controls [ISO/IEC 2005, ISO/IEC 2009]; CIS Security Metrics: outcome and practice metrics measuring

  12. Predicting Catastrophic BGP Routing Instabilities

    DTIC Science & Technology

    2004-03-01

    predict a BGP routing instability confine their focus to either macro- or micro-level metrics, but not to both. The inherent limitations of each of ... -Level and Micro-Level Metrics Correlation; Worm Attack Studies ... micro-level metrics, but not to both. The inherent limitations of each of these forms of metric gives rise to an excessive rate of spurious alerts

  13. What are we assessing when we measure food security? A compendium and review of current metrics.

    PubMed

    Jones, Andrew D; Ngure, Francis M; Pelto, Gretel; Young, Sera L

    2013-09-01

    The appropriate measurement of food security is critical for targeting food and economic aid; supporting early famine warning and global monitoring systems; evaluating nutrition, health, and development programs; and informing government policy across many sectors. This important work is complicated by the multiple approaches and tools for assessing food security. In response, we have prepared a compendium and review of food security assessment tools in which we review issues of terminology, measurement, and validation. We begin by describing the evolving definition of food security and use this discussion to frame a review of the current landscape of measurement tools available for assessing food security. We critically assess the purpose/s of these tools, the domains of food security assessed by each, the conceptualizations of food security that underpin each metric, as well as the approaches that have been used to validate these metrics. Specifically, we describe measurement tools that 1) provide national-level estimates of food security, 2) inform global monitoring and early warning systems, 3) assess household food access and acquisition, and 4) measure food consumption and utilization. After describing a number of outstanding measurement challenges that might be addressed in future research, we conclude by offering suggestions to guide the selection of appropriate food security metrics.

  14. What Are We Assessing When We Measure Food Security? A Compendium and Review of Current Metrics

    PubMed Central

    Jones, Andrew D.; Ngure, Francis M.; Pelto, Gretel; Young, Sera L.

    2013-01-01

    The appropriate measurement of food security is critical for targeting food and economic aid; supporting early famine warning and global monitoring systems; evaluating nutrition, health, and development programs; and informing government policy across many sectors. This important work is complicated by the multiple approaches and tools for assessing food security. In response, we have prepared a compendium and review of food security assessment tools in which we review issues of terminology, measurement, and validation. We begin by describing the evolving definition of food security and use this discussion to frame a review of the current landscape of measurement tools available for assessing food security. We critically assess the purpose/s of these tools, the domains of food security assessed by each, the conceptualizations of food security that underpin each metric, as well as the approaches that have been used to validate these metrics. Specifically, we describe measurement tools that 1) provide national-level estimates of food security, 2) inform global monitoring and early warning systems, 3) assess household food access and acquisition, and 4) measure food consumption and utilization. After describing a number of outstanding measurement challenges that might be addressed in future research, we conclude by offering suggestions to guide the selection of appropriate food security metrics. PMID:24038241

  15. Bayesian performance metrics and small system integration in recent homeland security and defense applications

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz; Kostrzewski, Andrew; Patton, Edward; Pradhan, Ranjit; Shih, Min-Yi; Walter, Kevin; Savant, Gajendra; Shie, Rick; Forrester, Thomas

    2010-04-01

    In this paper, Bayesian inference is applied to the definition of performance metrics for an important class of recent Homeland Security and defense systems called binary sensors, covering both (internal) system performance and (external) CONOPS. A medical analogy is used to define the PPV (Positive Predictive Value), the basic Bayesian metric parameter of binary sensors. Small System Integration (SSI) is also discussed in the context of recent Homeland Security and defense applications, emphasizing a highly multi-technological approach within a broad range of clusters ("nexus") of electronics, optics, X-ray physics, γ-ray physics, and other disciplines.
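
    The PPV itself follows directly from Bayes' rule, so a small sketch can make the metric concrete; the sensitivity, specificity, and target prevalence below are hypothetical values, not figures from the paper.

```python
# Minimal sketch of the Bayesian PPV metric for a binary sensor: the posterior
# probability that a target is present given an alarm. All numbers are hypothetical.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """PPV = P(target present | alarm) via Bayes' rule."""
    true_alarms = sensitivity * prevalence
    false_alarms = (1.0 - specificity) * (1.0 - prevalence)
    return true_alarms / (true_alarms + false_alarms)

# Even a sensitive, specific sensor yields a modest PPV when targets are rare.
print(round(positive_predictive_value(0.95, 0.99, 0.001), 3))   # ~0.087
```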

  16. Continuous Security Metrics for Prevalent Network Threats: Introduction and First Four Metrics

    DTIC Science & Technology

    2012-05-22

    cyber attack. Recently, high-profile successful attacks have been detected against the International Monetary Fund, Citibank, Lockheed Martin, Google, RSA Security, Sony, and Oak Ridge National Laboratory [13]. These and other attacks have made securing networks a high priority for many ... of high-severity vulnerabilities found by network vulnerability scanners (e.g., [40]) and the numbers or percentages of hosts that are not

  17. Security camera resolution measurements: Horizontal TV lines versus modulation transfer function measurements.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-01-01

    The horizontal television lines (HTVL) metric has been the primary quantity used by division 6000 related to camera resolution for high consequence security systems. This document shows that HTVL measurements are fundamentally insufficient as a metric to determine camera resolution, and proposes a quantitative, standards-based methodology: measuring the camera system modulation transfer function (MTF), the most common and accepted metric of resolution in the optical science community. Because HTVL calculations are easily misinterpreted or poorly defined, we present several scenarios in which HTVL is frequently reported and discuss their problems. The MTF metric is discussed, and scenarios are presented with calculations showing the application of such a metric.
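
    A minimal sketch of the MTF measurement idea (differentiate an edge spread function to obtain the line spread function, then take the magnitude of its Fourier transform); the synthetic error-function edge and pixel pitch below are stand-ins for a real slanted-edge capture, not data from the report.

```python
# Sketch of MTF estimation from an edge profile; the edge data here is synthetic.
import numpy as np
from math import erf

def mtf_from_esf(esf, pixel_pitch_mm):
    lsf = np.gradient(esf)                               # LSF is the derivative of the ESF
    lsf = lsf / lsf.sum()                                # normalize so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)  # spatial frequency in cycles/mm
    return freqs, mtf

x = np.linspace(-2.0, 2.0, 256)                          # position across the edge, in mm
esf = np.array([0.5 * (1.0 + erf(v / 0.3)) for v in x])  # synthetic blurred edge profile
freqs, mtf = mtf_from_esf(esf, pixel_pitch_mm=4.0 / 256)
print(f"MTF at {freqs[2]:.2f} cyc/mm = {mtf[2]:.2f}")    # contrast retained at 0.50 cyc/mm
```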

  18. AgMIP 1.5°C Assessment: Mitigation and Adaptation at Coordinated Global and Regional Scales

    NASA Astrophysics Data System (ADS)

    Rosenzweig, C.

    2016-12-01

    The AgMIP 1.5°C Coordinated Global and Regional Integrated Assessments of Climate Change and Food Security (AgMIP 1.5 CGRA) is linking site-based crop and livestock models with similar models run on global grids, and then links these biophysical components with economics models and nutrition metrics at regional and global scales. The AgMIP 1.5 CGRA assessment brings together experts in climate, crop, livestock, economics, nutrition, and food security to define the 1.5°C Protocols and guide the process throughout the assessment. Scenarios are designed to consistently combine elements of intertwined storylines of future society including socioeconomic development (Shared Socioeconomic Pathways), greenhouse gas concentrations (Representative Concentration Pathways), and specific pathways of agricultural sector development (Representative Agricultural Pathways). Shared Climate Policy Assumptions will be extended to provide additional agricultural detail on mitigation and adaptation strategies. The multi-model, multi-disciplinary, multi-scale integrated assessment framework is using scenarios of economic development, adaptation, mitigation, food policy, and food security. These coordinated assessments are grounded in the expertise of AgMIP partners around the world, leading to more consistent results and messages for stakeholders, policymakers, and the scientific community. The early inclusion of nutrition and food security experts has helped to ensure that assessment outputs include important metrics upon which investment and policy decisions may be based. The CGRA builds upon existing AgMIP research groups (e.g., the AgMIP Wheat Team and the AgMIP Global Gridded Crop Modeling Initiative; GGCMI) and regional programs (e.g., AgMIP Regional Teams in Sub-Saharan Africa and South Asia), with new protocols for cross-scale and cross-disciplinary linkages to ensure the propagation of expert judgment and consistent assumptions.

  19. Centralized Cryptographic Key Management and Critical Risk Assessment - CRADA Final Report For CRADA Number NFE-11-03562

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, R. K.; Peters, Scott

    The Department of Energy Office of Electricity Delivery and Energy Reliability (DOE-OE) Cyber Security for Energy Delivery Systems (CSEDS) industry-led program (DE-FOA-0000359), entitled "Innovation for Increasing Cyber Security for Energy Delivery Systems (12CSEDS)," awarded a contract to Sypris Electronics LLC to develop a Cryptographic Key Management System for the smart grid (Scalable Key Management Solutions for Critical Infrastructure Protection). As a result of that award, Oak Ridge National Laboratory (ORNL) and Sypris Electronics, LLC entered into a CRADA (NFE-11-03562). ORNL provided its Cyber Security Econometrics System (CSES) as a tool to be modified and used as a metric to address risks and vulnerabilities in the management of cryptographic keys within the Advanced Metering Infrastructure (AMI) domain of the electric sector. ORNL concentrated its analysis on the AMI domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) Working Group 1 (WG1) has documented 29 failure scenarios. The computational infrastructure of this metric involves system stakeholders, security requirements, system components and security threats. To compute this metric, we estimated the stakes that each stakeholder associates with each security requirement, as well as stochastic matrices that represent the probability of a threat causing a component failure and the probability of a component failure causing a security requirement violation. We applied this model to estimate the security of the AMI by leveraging the recently established National Institute of Standards and Technology Interagency Report (NISTIR) 7628 guidelines for smart grid security and the International Electrotechnical Commission (IEC) 62351, Part 9, to identify the life cycle for cryptographic key management, resulting in a vector that assigns to each stakeholder an estimate of their average loss in terms of dollars per day of system operation. To further address probabilities of threats, information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. The strategy for the game was developed by analyzing five representative electric sector failure scenarios contained in the AMI functional domain from NESCOR WG1. From these five selected scenarios, we characterized three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrated how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.
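
    A minimal sketch of the chained computation described above, in which the stakes and the two stochastic matrices combine with a threat vector to yield an expected dollars-per-day loss for each stakeholder; the dimensions, stakeholder/component/threat names, and all numeric values below are hypothetical placeholders, not CSES data.

```python
# Sketch of a stakes x probability chain producing per-stakeholder expected loss ($/day).
# All labels and numbers are hypothetical placeholders.
import numpy as np

stakes = np.array([                   # $/day each stakeholder loses if a requirement fails
    [1000.0, 500.0],                  # stakeholder A: [confidentiality, availability]
    [200.0, 2000.0],                  # stakeholder B
])
req_given_component = np.array([      # P(requirement violated | component fails)
    [0.8, 0.1],                       # confidentiality vs. [meter, head-end] failure
    [0.2, 0.9],                       # availability
])
component_given_threat = np.array([   # P(component fails | threat occurs)
    [0.6, 0.0],                       # meter vs. [key compromise, denial of service]
    [0.3, 0.7],                       # head-end
])
threat_prob = np.array([0.01, 0.05])  # P(threat occurs on a given day)

loss_per_day = stakes @ req_given_component @ component_given_threat @ threat_prob
print(loss_per_day)                   # expected $/day loss for stakeholders A and B
```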

  20. Cryptographic Key Management and Critical Risk Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K

    The Department of Energy Office of Electricity Delivery and Energy Reliability (DOE-OE) CyberSecurity for Energy Delivery Systems (CSEDS) industry-led program (DE-FOA-0000359), entitled "Innovation for Increasing CyberSecurity for Energy Delivery Systems (12CSEDS)," awarded a contract to Sypris Electronics LLC to develop a Cryptographic Key Management System for the smart grid (Scalable Key Management Solutions for Critical Infrastructure Protection). As a result of that award, Oak Ridge National Laboratory (ORNL) and Sypris Electronics, LLC entered into a CRADA (NFE-11-03562). ORNL provided its Cyber Security Econometrics System (CSES) as a tool to be modified and used as a metric to address risks and vulnerabilities in the management of cryptographic keys within the Advanced Metering Infrastructure (AMI) domain of the electric sector. ORNL concentrated its analysis on the AMI domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) Working Group 1 (WG1) has documented 29 failure scenarios. The computational infrastructure of this metric involves system stakeholders, security requirements, system components and security threats. To compute this metric, we estimated the stakes that each stakeholder associates with each security requirement, as well as stochastic matrices that represent the probability of a threat causing a component failure and the probability of a component failure causing a security requirement violation. We applied this model to estimate the security of the AMI by leveraging the recently established National Institute of Standards and Technology Interagency Report (NISTIR) 7628 guidelines for smart grid security and the International Electrotechnical Commission (IEC) 62351, Part 9, to identify the life cycle for cryptographic key management, resulting in a vector that assigns to each stakeholder an estimate of their average loss in terms of dollars per day of system operation. To further address probabilities of threats, information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. The strategy for the game was developed by analyzing five representative electric sector failure scenarios contained in the AMI functional domain from NESCOR WG1. From these five selected scenarios, we characterized three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrated how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.

  1. Predicting Attack-Prone Components with Source Code Static Analyzers

    DTIC Science & Technology

    2009-05-01

    models to determine if additional metrics are required to increase the accuracy of the model: non-security SCSA warnings, code churn and size, the count of faults found manually during development, and the measure of coupling between components. The dependent variable ... is the count of vulnerabilities reported by testing and those found in the field. We evaluated our model on three commercial telecommunications

  2. An approach for quantitative image quality analysis for CT

    NASA Astrophysics Data System (ADS)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assess image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and to objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB based image analysis tool kit to analyze CT generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method to generate a modified set of PCA components as compared to the standard principal component analysis (PCA) with sparse loadings in conjunction with the Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.

  3. Bootstrapping Security Policies for Wearable Apps Using Attributed Structural Graphs.

    PubMed

    González-Tablas, Ana I; Tapiador, Juan E

    2016-05-11

    We address the problem of bootstrapping security and privacy policies for newly-deployed apps in wireless body area networks (WBAN) composed of smartphones, sensors and other wearable devices. We introduce a framework to model such a WBAN as an undirected graph whose vertices correspond to devices, apps and app resources, while edges model structural relationships among them. This graph is then augmented with attributes capturing the features of each entity together with user-defined tags. We then adapt available graph-based similarity metrics to find the closest app to a new one to be deployed, with the aim of reusing, and possibly adapting, its security policy. We illustrate our approach through a detailed smartphone ecosystem case study. Our results suggest that the scheme can provide users with a reasonably good policy that is consistent with the user's security preferences implicitly captured by policies already in place.
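
    As a rough illustration (not the graph similarity metrics actually adapted in the paper), the sketch below scores a new app against already-deployed apps using Jaccard overlap of user tags and accessed resources, then reuses the policy of the closest match; the apps, tags, resources, and weights are hypothetical.

```python
# Toy attributed-similarity sketch for policy bootstrapping; all data is hypothetical.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def app_similarity(app1, app2, w_tags=0.5, w_resources=0.5):
    return (w_tags * jaccard(app1["tags"], app2["tags"])
            + w_resources * jaccard(app1["resources"], app2["resources"]))

deployed = {
    "hr_monitor":   {"tags": {"health", "sensor"}, "resources": {"heart_rate", "bluetooth"},
                     "policy": "share-with-clinician-only"},
    "step_counter": {"tags": {"fitness"},          "resources": {"accelerometer"},
                     "policy": "share-aggregates"},
}
new_app = {"tags": {"health", "sleep"}, "resources": {"heart_rate", "accelerometer"}}

closest = max(deployed, key=lambda name: app_similarity(new_app, deployed[name]))
print(closest, "->", deployed[closest]["policy"])   # bootstrap from the closest app's policy
```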

  4. Bootstrapping Security Policies for Wearable Apps Using Attributed Structural Graphs

    PubMed Central

    González-Tablas, Ana I.; Tapiador, Juan E.

    2016-01-01

    We address the problem of bootstrapping security and privacy policies for newly-deployed apps in wireless body area networks (WBAN) composed of smartphones, sensors and other wearable devices. We introduce a framework to model such a WBAN as an undirected graph whose vertices correspond to devices, apps and app resources, while edges model structural relationships among them. This graph is then augmented with attributes capturing the features of each entity together with user-defined tags. We then adapt available graph-based similarity metrics to find the closest app to a new one to be deployed, with the aim of reusing, and possibly adapting, its security policy. We illustrate our approach through a detailed smartphone ecosystem case study. Our results suggest that the scheme can provide users with a reasonably good policy that is consistent with the user’s security preferences implicitly captured by policies already in place. PMID:27187385

  5. Data fusion in cyber security: first order entity extraction from common cyber data

    NASA Astrophysics Data System (ADS)

    Giacobe, Nicklaus A.

    2012-06-01

    The Joint Directors of Labs Data Fusion Process Model (JDL Model) provides a framework for how to handle sensor data to develop higher levels of inference in a complex environment. Beginning from a call to leverage data fusion techniques in intrusion detection, there have been a number of advances in the use of data fusion algorithms in this subdomain of cyber security. While it is tempting to jump directly to situation-level or threat-level refinement (levels 2 and 3) for more exciting inferences, a proper fusion process starts with lower levels of fusion in order to provide a basis for the higher fusion levels. The process begins with first order entity extraction, or the identification of important entities represented in the sensor data stream. Current cyber security operational tools and their associated data are explored for potential exploitation, identifying the first order entities that exist in the data and the properties of these entities that are described by the data. Cyber events that are represented in the data stream are added to the first order entities as their properties. This work explores typical cyber security data and the inferences that can be made at the lower fusion levels (0 and 1) with simple metrics. Depending on the types of events that are expected by the analyst, these relatively simple metrics can provide insight on their own, or could be used in fusion algorithms as a basis for higher levels of inference.
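
    A minimal sketch of level-0/1 processing in this spirit: extract first-order entities (here, IP addresses) from raw log lines and attach simple per-entity event counts as properties; the log format, event categories, and metric are illustrative assumptions rather than the paper's data sources.

```python
# First-order entity extraction from a toy cyber data stream; log lines are hypothetical.
import re
from collections import Counter, defaultdict

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

logs = [
    "2012-06-01T10:00:01 IDS alert: scan from 10.1.2.3 to 192.168.0.5",
    "2012-06-01T10:00:04 firewall DROP 10.1.2.3 -> 192.168.0.7",
    "2012-06-01T10:00:09 IDS alert: scan from 10.1.2.3 to 192.168.0.9",
]

entities = defaultdict(Counter)          # entity -> {event type: count}
for line in logs:
    event_type = "ids_alert" if "IDS alert" in line else "firewall_drop"
    for ip in IP_RE.findall(line):       # each IP address is a first-order entity
        entities[ip][event_type] += 1

# A simple level-1 metric: total events per entity, a basis for higher-level inference.
for ip, counts in sorted(entities.items(), key=lambda kv: -sum(kv[1].values())):
    print(ip, dict(counts))
```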

  6. 20 CFR 435.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Metric system of measurement. 435.15 Section 435.15 Employees' Benefits SOCIAL SECURITY ADMINISTRATION UNIFORM ADMINISTRATIVE REQUIREMENTS FOR... metric system is the preferred measurement system for U.S. trade and commerce. The Act requires each...

  7. Proceedings Second Annual Cyber Security and Information Infrastructure Research Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheldon, Frederick T; Krings, Axel; Yoo, Seong-Moo

    2006-01-01

    The workshop theme is "Cyber Security: Beyond the Maginot Line." Recently the FBI reported that computer crime has skyrocketed, costing over $67 billion in 2005 alone and affecting 2.8M+ businesses and organizations. Attack sophistication is unprecedented, along with the availability of open source concomitant tools. Private, academic, and public sectors invest significant resources in cyber security. Industry primarily performs cyber security research as an investment in future products and services. While the public sector also funds cyber security R&D, the majority of this activity focuses on the specific mission(s) of the funding agency. Thus, broad areas of cyber security remain neglected or underdeveloped. Consequently, this workshop endeavors to explore issues involving cyber security and related technologies toward strengthening such areas and enabling the development of new tools and methods for securing our information infrastructure critical assets. We aim to assemble new ideas and proposals about robust models on which we can build the architecture of a secure cyberspace, including but not limited to: * Knowledge discovery and management * Critical infrastructure protection * De-obfuscating tools for the validation and verification of tamper-proofed software * Computer network defense technologies * Scalable information assurance strategies * Assessment-driven design for trust * Security metrics and testing methodologies * Validation of security and survivability properties * Threat assessment and risk analysis * Early accurate detection of the insider threat * Security hardened sensor networks and ubiquitous computing environments * Mobile software authentication protocols * A new "model" of the threat to replace the "Maginot Line" model and more . . .

  8. Designing and Operating Through Compromise: Architectural Analysis of CKMS for the Advanced Metering Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duren, Mike; Aldridge, Hal; Abercrombie, Robert K

    2013-01-01

    Compromises attributable to the Advanced Persistent Threat (APT) highlight the necessity for constant vigilance. The APT provides a new perspective on security metrics (e.g., statistics-based cyber security) and quantitative risk assessments. We consider design principles and models/tools that provide high assurance for energy delivery systems (EDS) operations regardless of the state of compromise. Cryptographic keys must be securely exchanged, then held and protected on either end of a communications link. This is challenging for a utility with numerous substations that must secure the intelligent electronic devices (IEDs) that may comprise a complex control system of systems. For example, distribution and management of keys among the millions of intelligent meters within the Advanced Metering Infrastructure (AMI) is being implemented as part of the National Smart Grid initiative. Without a means for a secure cryptographic key management system (CKMS), no cryptographic solution can be widely deployed to protect the EDS infrastructure from cyber-attack. We consider 1) how security modeling is applied to key management and cyber security concerns on a continuous basis from design through operation, 2) how trusted models and key management architectures greatly impact failure scenarios, and 3) how hardware-enabled trust is a critical element in detecting, surviving, and recovering from attack.

  9. Quantifying and measuring cyber resiliency

    NASA Astrophysics Data System (ADS)

    Cybenko, George

    2016-05-01

    Cyber resiliency has become an increasingly attractive research and operational concept in cyber security. While several metrics have been proposed for quantifying cyber resiliency, a considerable gap remains between those metrics and operationally measurable and meaningful concepts that can be empirically determined in a scientific manner. This paper describes a concrete notion of cyber resiliency that can be tailored to meet specific needs of organizations that seek to introduce resiliency into their assessment of their cyber security posture.

  10. The impact of the topology on cascading failures in a power grid model

    NASA Astrophysics Data System (ADS)

    Koç, Yakup; Warnier, Martijn; Mieghem, Piet Van; Kooij, Robert E.; Brazier, Frances M. T.

    2014-05-01

    Cascading failures are one of the main reasons for large-scale blackouts in power transmission grids. Secure electrical power supply requires, together with careful operation, a robust design of the electrical power grid topology. Currently, the impact of the topology on grid robustness is mainly assessed by purely topological approaches, which fail to capture the essence of electric power flow. This paper proposes a metric, the effective graph resistance, to relate the topology of a power grid to its robustness against cascading failures caused by deliberate attacks, while also taking fundamental characteristics of the electric power grid into account, such as power flow allocation according to Kirchhoff's laws. Experimental verification on synthetic power systems shows that the proposed metric reflects grid robustness accurately. The proposed metric is used to optimize a grid topology for a higher level of robustness. To demonstrate its applicability, the metric is applied to the IEEE 118-bus power system to improve its robustness against cascading failures.
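
    For a connected graph the effective graph resistance can be computed from the nonzero Laplacian eigenvalues, which makes the metric easy to sketch; the two 4-node toy topologies below are illustrative only and are not the IEEE 118-bus system studied in the paper.

```python
# Effective graph resistance (Kirchhoff index) from the Laplacian spectrum; lower values
# indicate a better-connected topology by this measure. Example graphs are toy data.
import numpy as np

def effective_graph_resistance(adjacency):
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian
    eig = np.linalg.eigvalsh(L)
    nonzero = eig[eig > 1e-9]                      # drop the single zero eigenvalue
    return len(A) * np.sum(1.0 / nonzero)

ring4 = [[0, 1, 0, 1],
         [1, 0, 1, 0],
         [0, 1, 0, 1],
         [1, 0, 1, 0]]
ring4_with_chord = [[0, 1, 1, 1],                  # adding a chord lowers the resistance
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [1, 0, 1, 0]]
print(effective_graph_resistance(ring4),           # 5.0
      effective_graph_resistance(ring4_with_chord))  # 4.0
```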

  11. Integration of the SSPM and STAGE with the MPACT Virtual Facility Distributed Test Bed.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cipiti, Benjamin B.; Shoman, Nathan

    The Material Protection Accounting and Control Technologies (MPACT) program within DOE NE is working toward a 2020 milestone to demonstrate a Virtual Facility Distributed Test Bed. The goal of the Virtual Test Bed is to link all MPACT modeling tools, technology development, and experimental work to create a Safeguards and Security by Design capability for fuel cycle facilities. The Separation and Safeguards Performance Model (SSPM) forms the core safeguards analysis tool, and the Scenario Toolkit and Generation Environment (STAGE) code forms the core physical security tool. These models are used to design and analyze safeguards and security systems and to generate performance metrics. Work over the past year has focused on how these models will integrate with the other capabilities in the MPACT program and on specific model changes to enable more streamlined integration in the future. This report describes the model changes and plans for how the models will be used more collaboratively. The Virtual Facility is not designed to integrate all capabilities into one master code, but rather to maintain stand-alone capabilities that communicate results between codes more effectively.

  12. Nonmanufacturing Businesses. U.S. Metric Study Interim Report.

    ERIC Educational Resources Information Center

    Cornog, June R.; Bunten, Elaine D.

    This fifth interim report on the feasibility of a United States changeover to the metric system stems from the U.S. Metric Study. A primary stratified sample of 2,828 nonmanufacturing firms was randomly selected from 28,184 businesses taken from Social Security files, and a secondary sample of 2,258 firms was randomly selected for replacement…

  13. Securing health sensing using integrated circuit metric.

    PubMed

    Tahir, Ruhma; Tahir, Hasan; McDonald-Maier, Klaus

    2015-10-20

    Convergence of technologies from several domains of computing and healthcare has aided in the creation of devices that can help health professionals in monitoring their patients remotely. An increase in networked healthcare devices has resulted in incidents related to data theft, medical identity theft and insurance fraud. In this paper, we discuss the design and implementation of a secure lightweight wearable health sensing system. The proposed system is based on an emerging security technology called Integrated Circuit Metric (ICMetric) that extracts the inherent features of a device to generate a unique device identification. In this paper, we provide details of how the physical characteristics of a health sensor can be used for the generation of hardware "fingerprints". The obtained fingerprints are used to deliver security services like authentication, confidentiality, secure admission and symmetric key generation. The generated symmetric key is used to securely communicate the health records and data of the patient. Based on experimental results and the security analysis of the proposed scheme, it is apparent that the proposed system enables high levels of security for health monitoring in a resource-optimized manner.

  14. Securing Health Sensing Using Integrated Circuit Metric

    PubMed Central

    Tahir, Ruhma; Tahir, Hasan; McDonald-Maier, Klaus

    2015-01-01

    Convergence of technologies from several domains of computing and healthcare has aided in the creation of devices that can help health professionals in monitoring their patients remotely. An increase in networked healthcare devices has resulted in incidents related to data theft, medical identity theft and insurance fraud. In this paper, we discuss the design and implementation of a secure lightweight wearable health sensing system. The proposed system is based on an emerging security technology called Integrated Circuit Metric (ICMetric) that extracts the inherent features of a device to generate a unique device identification. In this paper, we provide details of how the physical characteristics of a health sensor can be used for the generation of hardware “fingerprints”. The obtained fingerprints are used to deliver security services like authentication, confidentiality, secure admission and symmetric key generation. The generated symmetric key is used to securely communicate the health records and data of the patient. Based on experimental results and the security analysis of the proposed scheme, it is apparent that the proposed system enables high levels of security for health monitoring in a resource-optimized manner. PMID:26492250

  15. MODELING AND PERFORMANCE EVALUATION FOR AVIATION SECURITY CARGO INSPECTION QUEUING SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allgood, Glenn O; Olama, Mohammed M; Rose, Terri A

    Beginning in 2010, the U.S. will require that all cargo loaded on passenger aircraft be inspected. This will require more efficient processing of cargo and will have a significant impact on the inspection protocols and business practices of government agencies and the airlines. In this paper, we conduct a performance evaluation study of an aviation security cargo inspection queuing system for material flow and accountability. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered, such as system capacity, residual capacity, and throughput. These metrics are performance indicators of the system's ability to service current needs and its response capacity to additional requests. The increased physical understanding resulting from execution of the queuing model utilizing these vetted performance measures will reduce the overall cost and shipping delays associated with the new inspection requirements.
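
    The abstract does not specify the paper's queuing model, so the sketch below uses a single M/M/1 station merely to illustrate the kinds of measures named (utilization, throughput, residual capacity, delay); the arrival and service rates are hypothetical.

```python
# Minimal single-station M/M/1 sketch of capacity-style inspection metrics; rates are
# hypothetical and the paper's actual model is a more detailed queuing system.

def inspection_station_metrics(arrival_rate, service_rate):
    """Rates in pallets/hour; requires arrival_rate < service_rate for stability."""
    rho = arrival_rate / service_rate              # utilization
    return {
        "utilization": rho,
        "throughput_per_hour": arrival_rate,       # every arrival is eventually served
        "residual_capacity_per_hour": service_rate - arrival_rate,
        "mean_pallets_in_system": rho / (1.0 - rho),
        "mean_time_in_system_hours": 1.0 / (service_rate - arrival_rate),
    }

print(inspection_station_metrics(arrival_rate=40.0, service_rate=50.0))
```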

  16. Strengthening DoD Cyber Security with the Vulnerability Market

    DTIC Science & Technology

    2014-01-01

    ... $50,000–$100,000; Windows $60,000–$120,000; Firefox or Safari $60,000–$150,000; Chrome or Internet Explorer $80,000–$200,000; iOS $100,000– ... the CTB metric for the Google Chrome OS at $110,000. Accordingly, this metric could be used by Google to compare its security to other operating ... Mozilla Foundation. (n.d.). Mozilla. Retrieved from https://www.mozilla.org/en-US/foundation/ Thomson, I. (2013, March 8). Pwn2Own: IE10, Firefox

  17. 15 CFR 714.2 - Annual reporting requirements for exports and imports in excess of 30 metric tons of Schedule 3...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS ACTIVITIES INVOLVING SCHEDULE 3 CHEMICALS § 714.2 Annual reporting requirements for exports and imports in excess of 30 metric tons of... exports and imports in excess of 30 metric tons of Schedule 3 chemicals. 714.2 Section 714.2 Commerce and...

  18. 15 CFR 714.2 - Annual reporting requirements for exports and imports in excess of 30 metric tons of Schedule 3...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS ACTIVITIES INVOLVING SCHEDULE 3 CHEMICALS § 714.2 Annual reporting requirements for exports and imports in excess of 30 metric tons of... exports and imports in excess of 30 metric tons of Schedule 3 chemicals. 714.2 Section 714.2 Commerce and...

  19. Measuring Distribution Performance? Benchmarking Warrants Your Attention

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ericson, Sean J; Alvarez, Paul

    Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.

  20. Leveraging multi-channel x-ray detector technology to improve quality metrics for industrial and security applications

    NASA Astrophysics Data System (ADS)

    Jimenez, Edward S.; Thompson, Kyle R.; Stohn, Adriana; Goodner, Ryan N.

    2017-09-01

    Sandia National Laboratories has recently developed the capability to acquire multi-channel radiographs for multiple research and development applications in industry and security. This capability allows x-ray radiographs or sinogram data to be acquired at up to 300 keV with up to 128 channels per pixel. This work will investigate whether multiple quality metrics for computed tomography can actually benefit from binned projection data compared to traditionally acquired grayscale sinogram data. Features and metrics to be evaluated include the ability to distinguish between two different materials with similar absorption properties, artifact reduction, and signal-to-noise for both raw data and reconstructed volumetric data. The impact of this technology on non-destructive evaluation, national security, and industry is wide-ranging and has the potential to improve upon many inspection methods such as dual-energy methods, material identification, object segmentation, and computer vision on radiographs.

  1. Wide-area, real-time monitoring and visualization system

    DOEpatents

    Budhraja, Vikram S.; Dyer, James D.; Martinez Morales, Carlos A.

    2013-03-19

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  2. Wide-area, real-time monitoring and visualization system

    DOEpatents

    Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA

    2011-11-15

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  3. Real-time performance monitoring and management system

    DOEpatents

    Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA

    2007-06-19

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  4. Cyberspace security system

    DOEpatents

    Abercrombie, Robert K; Sheldon, Frederick T; Ferragut, Erik M

    2014-06-24

    A system evaluates reliability, performance and/or safety by automatically assessing the targeted system's requirements. A cost metric quantifies the impact of failures as a function of failure cost per unit of time. The metrics or measurements may render real-time (or near real-time) outcomes by initiating active response against one or more high ranked threats. The system may support or may be executed in many domains including physical domains, cyber security domains, cyber-physical domains, infrastructure domains, etc. or any other domains that are subject to a threat or a loss.
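
    One plausible (assumed, not taken from the patent) reading of the "failure cost per unit of time" metric is an expected cost rate summed over threats, which also yields a ranking for prioritizing active response; the threat names, rates, and costs below are hypothetical.

```python
# Illustrative cost-rate sketch: expected failure cost per day, summed over threats,
# plus a simple ranking of threats by their individual cost rates. All data is hypothetical.

def cost_rate_per_day(threats):
    """threats: iterable of (failures_per_day, cost_per_failure_dollars)."""
    return sum(rate * cost for rate, cost in threats)

threats = [
    (0.02, 50_000.0),    # data exfiltration
    (0.10, 2_000.0),     # service outage
    (0.50, 100.0),       # phishing-driven credential reset
]
ranked = sorted(threats, key=lambda t: t[0] * t[1], reverse=True)  # highest cost rate first
print("total $/day:", cost_rate_per_day(threats), "| top-ranked threat:", ranked[0])
```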

  5. Three tenets for secure cyber-physical system design and assessment

    NASA Astrophysics Data System (ADS)

    Hughes, Jeff; Cybenko, George

    2014-06-01

    This paper presents a threat-driven quantitative mathematical framework for secure cyber-physical system design and assessment. Called The Three Tenets, this originally empirical approach has been used by the US Air Force Research Laboratory (AFRL) for secure system research and development. The Tenets were first documented in 2005 as a teachable methodology. The Tenets are motivated by a system threat model that itself consists of three elements which must exist for successful attacks to occur: system susceptibility; threat accessibility; and threat capability. The Three Tenets arise naturally by countering each threat element individually. Specifically, the tenets are: Tenet 1: Focus on What's Critical - systems should include only essential functions (to reduce susceptibility); Tenet 2: Move Key Assets Out-of-Band - make mission essential elements and security controls difficult for attackers to reach logically and physically (to reduce accessibility); Tenet 3: Detect, React, Adapt - confound the attacker by implementing sensing system elements with dynamic response technologies (to counteract the attackers' capabilities). As a design methodology, the Tenets mitigate reverse engineering and subsequent attacks on complex systems. Quantified by a Bayesian analysis and further justified by analytic properties of attack graph models, the Tenets suggest concrete cyber security metrics for system assessment.
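
    A minimal sketch, under an independence assumption that is ours rather than the paper's, of how the three threat-model elements combine and how each Tenet targets one factor; all probabilities below are hypothetical.

```python
# Toy combination of the three threat elements: an attack succeeds only if the system is
# susceptible AND the threat can reach it AND the threat is capable. Independence of the
# three factors is an assumption made here for illustration; numbers are hypothetical.

def attack_success_probability(p_susceptible, p_accessible, p_capable):
    return p_susceptible * p_accessible * p_capable

baseline = attack_success_probability(0.6, 0.5, 0.8)
# Each Tenet reduces one factor: trim non-essential functions, move key assets
# out-of-band, and detect/react/adapt against the attacker's capability.
hardened = attack_success_probability(0.3, 0.1, 0.4)
print(baseline, hardened)     # 0.24 vs. 0.012
```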

  6. Dipole Models for UXO Discrimination at Live Sites

    DTIC Science & Technology

    2017-05-01

    ... fraction of the anomalies as arising from non-hazardous items that could be safely left in the ground. Of particular note, the contractor EM-61-MK2 cart ... use of classification metrics applied to production-quality EM-61 data, it was possible to significantly reduce the number of clutter items excavated

  7. Optical nano artifact metrics using silicon random nanostructures

    NASA Astrophysics Data System (ADS)

    Matsumoto, Tsutomu; Yoshida, Naoki; Nishio, Shumpei; Hoga, Morihisa; Ohyagi, Yasuyuki; Tate, Naoya; Naruse, Makoto

    2016-08-01

    Nano-artifact metrics exploit unique physical attributes of nanostructured matter for authentication and clone resistance, which is vitally important in the age of Internet-of-Things where securing identities is critical. However, expensive and huge experimental apparatuses, such as scanning electron microscopy, have been required in the former studies. Herein, we demonstrate an optical approach to characterise the nanoscale-precision signatures of silicon random structures towards realising low-cost and high-value information security technology. Unique and versatile silicon nanostructures are generated via resist collapse phenomena, which contains dimensions that are well below the diffraction limit of light. We exploit the nanoscale precision ability of confocal laser microscopy in the height dimension; our experimental results demonstrate that the vertical precision of measurement is essential in satisfying the performances required for artifact metrics. Furthermore, by using state-of-the-art nanostructuring technology, we experimentally fabricate clones from the genuine devices. We demonstrate that the statistical properties of the genuine and clone devices are successfully exploited, showing that the liveness-detection-type approach, which is widely deployed in biometrics, is valid in artificially-constructed solid-state nanostructures. These findings pave the way for reasonable and yet sufficiently secure novel principles for information security based on silicon random nanostructures and optical technologies.

  8. A Dynamic Intrusion Detection System Based on Multivariate Hotelling's T2 Statistics Approach for Network Environments

    PubMed Central

    Avalappampatty Sivasamy, Aneetha; Sundan, Bose

    2015-01-01

    The ever expanding communication requirements in today's world demand extensive and efficient network systems with equally efficient and reliable security features integrated for safe, confident, and secured communication and data transfer. Providing effective security protocols for any network environment, therefore, assumes paramount importance. Attempts are made continuously for designing more efficient and dynamic network intrusion detection models. In this work, an approach based on Hotelling's T2 method, a multivariate statistical analysis technique, has been employed for intrusion detection, especially in network environments. Components such as preprocessing, multivariate statistical analysis, and attack detection have been incorporated in developing the multivariate Hotelling's T2 statistical model and necessary profiles have been generated based on the T-square distance metrics. With a threshold range obtained using the central limit theorem, observed traffic profiles have been classified either as normal or attack types. Performance of the model, as evaluated through validation and testing using KDD Cup'99 dataset, has shown very high detection rates for all classes with low false alarm rates. Accuracy of the model presented in this work, in comparison with the existing models, has been found to be much better. PMID:26357668

  9. A Dynamic Intrusion Detection System Based on Multivariate Hotelling's T2 Statistics Approach for Network Environments.

    PubMed

    Sivasamy, Aneetha Avalappampatty; Sundan, Bose

    2015-01-01

    The ever expanding communication requirements in today's world demand extensive and efficient network systems with equally efficient and reliable security features integrated for safe, confident, and secured communication and data transfer. Providing effective security protocols for any network environment, therefore, assumes paramount importance. Attempts are made continuously for designing more efficient and dynamic network intrusion detection models. In this work, an approach based on Hotelling's T(2) method, a multivariate statistical analysis technique, has been employed for intrusion detection, especially in network environments. Components such as preprocessing, multivariate statistical analysis, and attack detection have been incorporated in developing the multivariate Hotelling's T(2) statistical model and necessary profiles have been generated based on the T-square distance metrics. With a threshold range obtained using the central limit theorem, observed traffic profiles have been classified either as normal or attack types. Performance of the model, as evaluated through validation and testing using KDD Cup'99 dataset, has shown very high detection rates for all classes with low false alarm rates. Accuracy of the model presented in this work, in comparison with the existing models, has been found to be much better.
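
    A small sketch of the T²-style scoring at the heart of the approach: the squared Mahalanobis distance of an observed traffic-feature vector from the profile of normal traffic. The synthetic features and the fixed threshold below are illustrative; the paper derives its threshold range from the central limit theorem and evaluates on the KDD Cup'99 dataset.

```python
# Hotelling's T2-style anomaly score for a traffic-feature vector; data is synthetic.
import numpy as np

def t_squared(observation, normal_profile):
    """normal_profile: array of shape (n_samples, n_features) of normal traffic."""
    mean = normal_profile.mean(axis=0)
    cov = np.cov(normal_profile, rowvar=False)
    diff = observation - mean
    return float(diff @ np.linalg.inv(cov) @ diff)

rng = np.random.default_rng(0)
# Hypothetical features: packets/sec, SYN ratio, mean payload size.
normal = rng.normal(loc=[100.0, 0.5, 20.0], scale=[10.0, 0.05, 3.0], size=(500, 3))
threshold = 16.0                                     # illustrative cut-off

for x in ([102.0, 0.52, 21.0], [240.0, 0.95, 60.0]):  # normal-looking vs. attack-like
    score = t_squared(np.array(x), normal)
    print(x, round(score, 1), "attack" if score > threshold else "normal")
```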

  10. Metrics for the National SCADA Test Bed Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craig, Philip A.; Mortensen, J.; Dagle, Jeffery E.

    2008-12-05

    The U.S. Department of Energy Office of Electricity Delivery and Energy Reliability (DOE-OE) National SCADA Test Bed (NSTB) Program is providing valuable inputs into the electric industry by performing topical research and development (R&D) to secure next generation and legacy control systems. In addition, the program conducts vulnerability and risk analysis, develops tools, and performs industry liaison, outreach and awareness activities. These activities will enhance the secure and reliable delivery of energy for the United States. This report will describe metrics that could be utilized to provide feedback to help enhance the effectiveness of the NSTB Program.

  11. Reconciliation of the cloud computing model with US federal electronic health record regulations

    PubMed Central

    2011-01-01

    Cloud computing refers to subscription-based, fee-for-service utilization of computer hardware and software over the Internet. The model is gaining acceptance for business information technology (IT) applications because it allows capacity and functionality to increase on the fly without major investment in infrastructure, personnel or licensing fees. Large IT investments can be converted to a series of smaller operating expenses. Cloud architectures could potentially be superior to traditional electronic health record (EHR) designs in terms of economy, efficiency and utility. A central issue for EHR developers in the US is that these systems are constrained by federal regulatory legislation and oversight. These laws focus on security and privacy, which are well-recognized challenges for cloud computing systems in general. EHRs built with the cloud computing model can achieve acceptable privacy and security through business associate contracts with cloud providers that specify compliance requirements, performance metrics and liability sharing. PMID:21727204

  12. Reconciliation of the cloud computing model with US federal electronic health record regulations.

    PubMed

    Schweitzer, Eugene J

    2012-01-01

    Cloud computing refers to subscription-based, fee-for-service utilization of computer hardware and software over the Internet. The model is gaining acceptance for business information technology (IT) applications because it allows capacity and functionality to increase on the fly without major investment in infrastructure, personnel or licensing fees. Large IT investments can be converted to a series of smaller operating expenses. Cloud architectures could potentially be superior to traditional electronic health record (EHR) designs in terms of economy, efficiency and utility. A central issue for EHR developers in the US is that these systems are constrained by federal regulatory legislation and oversight. These laws focus on security and privacy, which are well-recognized challenges for cloud computing systems in general. EHRs built with the cloud computing model can achieve acceptable privacy and security through business associate contracts with cloud providers that specify compliance requirements, performance metrics and liability sharing.

  13. Comparative Assessment of Physical and Social Determinants of Water Quantity and Water Quality Concerns

    NASA Astrophysics Data System (ADS)

    Gunda, T.; Hornberger, G. M.

    2017-12-01

    Concerns over water resources have evolved over time, from physical availability to economic access and, recently, to a more comprehensive study of "water security," which is inherently interdisciplinary because a secure water system is influenced by and affects both physical and social components. The concept of water security carries connotations of both an adequate supply of water as well as water that meets certain quality standards. Although the term "water security" has many interpretations in the literature, the research field has not yet developed a synthetic analysis of water security as both a quantity (availability) and quality (contamination) issue. Using qualitative comparative and multi-regression analyses, we evaluate the primary physical and social factors influencing U.S. states' water security from a quantity perspective and from a quality perspective. Water system characteristics are collated from academic and government sources and include access/use, governance, sociodemographic, and ecosystem metrics. Our analysis indicates differences in variables driving availability and contamination concerns; for example, climate is a more significant determinant in water quantity-based security analyses than in water quality-based security analyses. We will also discuss coevolution of system traits and the merits of constructing a robust water security index based on the relative importance of metrics from our analyses. These insights will improve understanding of the complex interactions between quantity and quality aspects and, thus, the overall security of water systems.

  14. Optimal resource allocation for defense of targets based on differing measures of attractiveness.

    PubMed

    Bier, Vicki M; Haphuriwat, Naraphorn; Menoyo, Jaime; Zimmerman, Rae; Culpen, Alison M

    2008-06-01

    This article describes the results of applying a rigorous computational model to the problem of the optimal defensive resource allocation among potential terrorist targets. In particular, our study explores how the optimal budget allocation depends on the cost effectiveness of security investments, the defender's valuations of the various targets, and the extent of the defender's uncertainty about the attacker's target valuations. We use expected property damage, expected fatalities, and two metrics of critical infrastructure (airports and bridges) as our measures of target attractiveness. Our results show that the cost effectiveness of security investment has a large impact on the optimal budget allocation. Also, different measures of target attractiveness yield different optimal budget allocations, emphasizing the importance of developing more realistic terrorist objective functions for use in budget allocation decisions for homeland security.

  15. Stochastic Multi-Timescale Power System Operations With Variable Wind Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hongyu; Krad, Ibrahim; Florita, Anthony

    This paper describes a novel set of stochastic unit commitment and economic dispatch models that consider stochastic loads and variable generation at multiple operational timescales. The stochastic model includes four distinct stages: stochastic day-ahead security-constrained unit commitment (SCUC), stochastic real-time SCUC, stochastic real-time security-constrained economic dispatch (SCED), and deterministic automatic generation control (AGC). These sub-models are integrated together such that they are continually updated with decisions passed from one to another. The progressive hedging algorithm (PHA) is applied to solve the stochastic models to maintain the computational tractability of the proposed models. Comparative case studies with deterministic approaches are conducted in low wind and high wind penetration scenarios to highlight the advantages of the proposed methodology, one with perfect forecasts and the other with current state-of-the-art but imperfect deterministic forecasts. The effectiveness of the proposed method is evaluated with sensitivity tests using both economic and reliability metrics to provide a broader view of its impact.

  16. 2015 Annual Report on Security Clearance Determinations

    DTIC Science & Technology

    2016-06-28

    completed or pending security clearance determinations for government employees and contractors during the preceding fiscal year that have taken longer...each level during the preceding fiscal year. Similar data pertaining to USG contractors is also required. Also, for each element of the Intelligence...for USG Employees and USG Contractors Security Clearance Determination Processing Metrics for the Seven IC Agencies The number of individuals

  17. An analysis of security price risk and return among publicly traded pharmacy corporations.

    PubMed

    Gilligan, Adrienne M; Skrepnek, Grant H

    2013-01-01

    Community pharmacies have been subject to intense and increasing competition in the past several decades. The objective was to determine the security price risk and rate of return of publicly traded pharmacy corporations present on the major U.S. stock exchanges from 1930 to 2009. The Center for Research in Security Prices (CRSP) database was used to examine monthly security-level stock market prices in this observational retrospective study. The primary outcome of interest was the equity risk premium, with analyses focusing upon financial metrics associated with risk and return based upon modern portfolio theory (MPT), including: abnormal returns (i.e., alpha), volatility (i.e., beta), and percentage of returns explained (i.e., adjusted R(2)). Three equilibrium models were estimated using random-effects generalized least squares (GLS): 1) the Capital Asset Pricing Model (CAPM); 2) Fama-French Three-Factor Model; and 3) Carhart Four-Factor Model. Seventy-five companies were examined from 1930 to 2009, with overall adjusted R(2) values ranging from 0.13 with the CAPM to 0.16 with the Four-Factor model. Alpha was not significant within any of the equilibrium models across the entire 80-year time period, though was found from 1999 to 2009 in the Three- and Four-Factor models to be associated with large, significant, and negative risk-adjusted abnormal returns of -33.84%. Volatility varied across specific time periods based upon the financial model employed. This investigation of risk and return within publicly listed pharmacy corporations from 1930 to 2009 found that substantial losses were incurred particularly from 1999 to 2009, with risk-adjusted security valuations decreasing by one-third. Copyright © 2013 Elsevier Inc. All rights reserved.
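
    As a pointer to the kind of estimation behind the reported alphas and betas, the following is a hedged sketch (ordinary least squares on a single security, not the study's random-effects GLS; the return series are simulated placeholders).

      import numpy as np

      def capm_ols(excess_market, excess_security):
          # Regress security excess returns on market excess returns: r_i = alpha + beta * r_m + e
          X = np.column_stack([np.ones_like(excess_market), excess_market])
          coef, *_ = np.linalg.lstsq(X, excess_security, rcond=None)
          alpha, beta = coef
          resid = excess_security - X @ coef
          r_squared = 1.0 - resid.var() / excess_security.var()
          return alpha, beta, r_squared

      # Hypothetical monthly excess returns in decimal form.
      rng = np.random.default_rng(1)
      r_m = rng.normal(0.005, 0.04, 120)
      r_i = 0.001 + 1.2 * r_m + rng.normal(0, 0.03, 120)
      print(capm_ols(r_m, r_i))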

  18. Achieving sustainable irrigation water withdrawals: global impacts on food security and land use

    NASA Astrophysics Data System (ADS)

    Liu, Jing; Hertel, Thomas W.; Lammers, Richard B.; Prusevich, Alexander; Baldos, Uris Lantz C.; Grogan, Danielle S.; Frolking, Steve

    2017-10-01

    Unsustainable water use challenges the capacity of water resources to ensure food security and continued growth of the economy. Adaptation policies targeting future water security can easily overlook its interaction with other sustainability metrics and unanticipated local responses to the larger-scale policy interventions. Using a global partial equilibrium grid-resolving model SIMPLE-G, and coupling it with the global Water Balance Model, we simulate the consequences of reducing unsustainable irrigation for food security, land use change, and terrestrial carbon. A variety of future (2050) scenarios are considered that combine irrigation productivity with two policy interventions: inter-basin water transfers and international commodity market integration. We find that pursuing sustainable irrigation may erode other development and environmental goals due to higher food prices and cropland expansion. This results in over 800 000 more undernourished people and 0.87 GtC additional emissions. Faster total factor productivity growth in irrigated sectors will encourage more aggressive irrigation water use in the basins where irrigation vulnerability is expected to be reduced by inter-basin water transfer. By allowing for a systematic comparison of these alternative adaptations to future irrigation vulnerability, the global gridded modeling approach offers unique insights into the multiscale nature of the water scarcity challenge.

  19. Peru Water Resources: Integrating NASA Earth Observations into Water Resource Planning and Management in Peru's La Libertad Region

    NASA Technical Reports Server (NTRS)

    Padgett-Vasquez, Steve; Steentofte, Catherine; Holbrook, Abigail

    2014-01-01

    Developing countries often struggle with providing water security and sanitation services to their populations. An important aspect of improving security and sanitation is developing a comprehensive understanding of the country's water budget. Water For People, a non-profit organization dedicated to providing clean drinking water, is working with the Peruvian government to develop a water budget for the La Libertad region of Peru, which includes the creation of an extensive watershed management plan. Currently, the data archive of the necessary variables to create the water management plan is extremely limited. Implementing NASA Earth observations has bolstered the dataset being used by Water For People, and the METRIC (Mapping EvapoTranspiration at High Resolution and Internalized Calibration) model has allowed for the estimation of the evapotranspiration values for the region. Landsat 8 imagery and the DEM (Digital Elevation Model) from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) sensor onboard Terra were used to derive the land cover information, and were used in conjunction with local weather data of Cascas from Peru's National Meteorological and Hydrological Service (SENAMHI). Python was used to combine input variables and METRIC model calculations to approximate the evapotranspiration values for the Ochape sub-basin of the Chicama River watershed. Once calculated, the evapotranspiration values and methodology were shared with Water For People to help supplement their decision support tools in the La Libertad region of Peru and potentially apply the methodology in other areas of need.
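
    The core energy-balance idea behind a METRIC-style evapotranspiration estimate can be sketched as follows (a simplified illustration under stated assumptions, not the project's actual Python code): latent heat flux is taken as the residual of net radiation, soil heat flux, and sensible heat flux, then converted to a water depth.

      LAMBDA_V = 2.45e6  # latent heat of vaporization, J/kg (approximate value near 20 C)

      def et_rate_mm_per_hour(net_radiation, soil_heat_flux, sensible_heat_flux):
          # All fluxes in W/m^2; returns instantaneous ET in mm per hour.
          latent_heat_flux = net_radiation - soil_heat_flux - sensible_heat_flux
          kg_per_m2_per_s = latent_heat_flux / LAMBDA_V  # 1 kg/m^2 of water equals a 1 mm depth
          return max(kg_per_m2_per_s, 0.0) * 3600.0

      # Hypothetical midday values for a single pixel.
      print(et_rate_mm_per_hour(net_radiation=600.0, soil_heat_flux=80.0, sensible_heat_flux=200.0))  # about 0.47 mm/h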

  20. Bayesian performance metrics of binary sensors in homeland security applications

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz P.; Forrester, Thomas C.

    2008-04-01

    Bayesian performance metrics, based on such parameters as prior probability, probability of detection (or accuracy), false alarm rate, and positive predictive value, characterize the performance of binary sensors; i.e., sensors that have only a binary response: true target/false target. Such binary sensors, very common in Homeland Security, produce an alarm that can be true or false. They include X-ray airport inspection, IED inspections, product quality control, cancer medical diagnosis, parts of ATR, and many others. In this paper, we analyze direct and inverse conditional probabilities in the context of Bayesian inference and binary sensors, using X-ray luggage inspection statistical results as a guideline.
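
    The inverse conditional probability in question is the positive predictive value obtained from Bayes' rule; a small sketch with hypothetical inspection numbers:

      def positive_predictive_value(prior, p_detection, false_alarm_rate):
          # P(true target | alarm) for a binary sensor, via Bayes' rule
          p_alarm = p_detection * prior + false_alarm_rate * (1.0 - prior)
          return (p_detection * prior) / p_alarm if p_alarm > 0 else 0.0

      # Hypothetical luggage-inspection numbers: rare threats, good detector.
      print(positive_predictive_value(prior=1e-4, p_detection=0.95, false_alarm_rate=0.02))
      # ~0.0047 -- even a good binary sensor yields mostly false alarms when the prior is very small.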

  1. Physical-layer security analysis of a quantum-noise randomized cipher based on the wire-tap channel model.

    PubMed

    Jiao, Haisong; Pu, Tao; Zheng, Jilin; Xiang, Peng; Fang, Tao

    2017-05-15

    The physical-layer security of a quantum-noise randomized cipher (QNRC) system is, for the first time, quantitatively evaluated with secrecy capacity employed as the performance metric. Considering quantum noise as a channel advantage for legitimate parties over eavesdroppers, the specific wire-tap models for both channels of the key and data are built with channel outputs yielded by quantum heterodyne measurement; the general expressions of secrecy capacities for both channels are derived, where the matching codes are proved to be uniformly distributed. The maximal achievable secrecy rate of the system is proposed, under which secrecy of both the key and data is guaranteed. The influences of various system parameters on secrecy capacities are assessed in detail. The results indicate that QNRC combined with proper channel codes is a promising framework of secure communication for long distance with high speed, which can be orders of magnitude higher than the perfect secrecy rates of other encryption systems. Even if the eavesdropper intercepts more signal power than the legitimate receiver, secure communication (up to Gb/s) can still be achievable. Moreover, the secrecy of running key is found to be the main constraint to the systemic maximal secrecy rate.
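
    The secrecy-capacity notion used as the performance metric here follows the standard wire-tap relation; for a degraded wire-tap channel it can be written as below (a general reminder, not the paper's specific derivation for quantum heterodyne measurement):

      C_s = \max_{p(x)} \bigl[\, I(X;Y_B) - I(X;Y_E) \,\bigr]^{+}, \qquad [a]^{+} \equiv \max(a, 0)

    where Y_B and Y_E denote the channel outputs seen by the legitimate receiver and the eavesdropper, respectively.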

  2. Model Classes, Approximation, and Metrics for Dynamic Processing of Urban Terrain Data

    DTIC Science & Technology

    2013-01-01

    Sensing,” DARPA IPTO Retreat, Annapolis, 2008. R. Baraniuk, “Compressive Sensing, Wavelets, and Sparsity,” SPIE Defense + Security (acceptance speech ... Speech and Signal Processing (ICASSP). 2011/05/22 00:00:00, Prague, Czech Republic. : , 08/31/2011 33.00 Sang-Mook Lee, Jeong Joon Im, Bo-Hee Lee... KNN ) points to define a local intrinsic coordinate system using PCA and to construct the manifold and function locally using least squares. Local

  3. Influence of three common calibration metrics on the diagnosis of climate change impacts on water resources

    NASA Astrophysics Data System (ADS)

    Seiller, G.; Roy, R.; Anctil, F.

    2017-04-01

    Uncertainties associated with the evaluation of the impacts of climate change on water resources are broad, from multiple sources, and lead to diagnoses sometimes difficult to interpret. Quantification of these uncertainties is a key element to yield confidence in the analyses and to provide water managers with valuable information. This work specifically evaluates the influence of hydrological modeling calibration metrics on future water resources projections, on thirty-seven watersheds in the Province of Québec, Canada. Twelve lumped hydrologic models, representing a wide range of operational options, are calibrated with three common objective functions derived from the Nash-Sutcliffe efficiency. The hydrologic models are forced with climate simulations corresponding to two RCPs, twenty-nine GCMs from CMIP5 (Coupled Model Intercomparison Project phase 5) and two post-treatment techniques, leading to future projections in the 2041-2070 period. Results show that the diagnosis of the impacts of climate change on water resources is quite affected by the hydrologic model selection and calibration metrics. Indeed, for the four selected hydrological indicators, dedicated to water management, parameters from the three objective functions can provide different interpretations in terms of absolute and relative changes, as well as projected change direction and climatic ensemble consensus. The GR4J model and a multimodel approach offer the best modeling options, based on calibration performance and robustness. Overall, these results illustrate the need to provide water managers with detailed information on relative changes analysis, but also absolute change values, especially for hydrological indicators acting as security policy thresholds.
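
    For reference, the Nash-Sutcliffe efficiency from which the three objective functions are derived can be computed as in the short sketch below (standard definition; the streamflow values are hypothetical). A value of 1 is a perfect fit, and 0 means the model is no better than the observed mean.

      import numpy as np

      def nash_sutcliffe(observed, simulated):
          observed = np.asarray(observed, dtype=float)
          simulated = np.asarray(simulated, dtype=float)
          return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

      # Hypothetical daily streamflow series (m^3/s).
      print(nash_sutcliffe([12.0, 15.0, 30.0, 22.0, 18.0], [11.0, 16.0, 27.0, 24.0, 17.0]))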

  4. Adapting an Agent-Based Model of Socio-Technical Systems to Analyze System and Security Failures

    DTIC Science & Technology

    2016-05-09

    statistically significant amount, which it did with a p-value < 0.0003 on a simulation of 3125 iterations; the data is shown in the Delegation 1 column of...Blackout metric to a statistically significant amount, with a p-value < 0.0003 on a simulation of 3125 iterations; the data is shown in the Delegation 2...Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: volume 1-Volume 1, pp. 1007-1014. International Foundation

  5. Biometric template transformation: a security analysis

    NASA Astrophysics Data System (ADS)

    Nagar, Abhishek; Nandakumar, Karthik; Jain, Anil K.

    2010-01-01

    One of the critical steps in designing a secure biometric system is protecting the templates of the users that are stored either in a central database or on smart cards. If a biometric template is compromised, it leads to serious security and privacy threats because, unlike passwords, it is not possible for a legitimate user to revoke his biometric identifiers and switch to another set of uncompromised identifiers. One methodology for biometric template protection is the template transformation approach, where the template, consisting of the features extracted from the biometric trait, is transformed using parameters derived from a user-specific password or key. Only the transformed template is stored and matching is performed directly in the transformed domain. In this paper, we formally investigate the security strength of template transformation techniques and define six metrics that facilitate a holistic security evaluation. Furthermore, we analyze the security of two well-known template transformation techniques, namely Biohashing and cancelable fingerprint templates, based on the proposed metrics. Our analysis indicates that both these schemes are vulnerable to intrusion and linkage attacks because it is relatively easy to obtain either a close approximation of the original template (Biohashing) or a pre-image of the transformed template (cancelable fingerprints). We argue that the security strength of template transformation techniques must also consider the computational complexity of obtaining a complete pre-image of the transformed template, in addition to the complexity of recovering the original biometric template.
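
    To make the transformation idea concrete, a hedged Biohashing-style sketch is given below (illustrative, not the exact scheme analyzed in the paper): the extracted feature vector is projected onto a user-key-seeded random orthonormal basis and binarized, and only the resulting bit string is stored. Matching happens in the transformed (Hamming) domain, and revoking the key revokes the template.

      import numpy as np

      def biohash(features, user_key, n_bits=32):
          rng = np.random.default_rng(user_key)                    # key/password-derived seed
          projection = rng.normal(size=(features.size, n_bits))
          q, _ = np.linalg.qr(projection)                          # orthonormalize the projection columns
          return (features @ q[:, :n_bits] > 0).astype(np.uint8)   # threshold at zero

      features = np.random.default_rng(7).normal(size=64)          # hypothetical extracted biometric features
      template = biohash(features, user_key=123456)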

  6. Real time biometric surveillance with gait recognition

    NASA Astrophysics Data System (ADS)

    Mohapatra, Subasish; Swain, Anisha; Das, Manaswini; Mohanty, Subhadarshini

    2018-04-01

    Biometric surveillance has become indispensable for every system in recent years. Biometric authentication, identification, and screening are widely used in various domains for preventing unauthorized access. A large amount of data needs to be updated, segregated, and safeguarded from malicious software and misuse. Biometrics are the intrinsic characteristics of each individual. Fingerprints, iris patterns, passwords, unique keys, and cards are commonly used for authentication purposes, but these methods have various issues related to security and confidentiality, and such systems are not yet fully automated to provide safety and security. Gait recognition is an alternative that overcomes the drawbacks of these biometric-based authentication systems; it is newer and has not yet been deployed in real-world scenarios. It is an unintrusive approach that requires no knowledge or cooperation of the subject. Gait is a unique behavioral characteristic of every human being that is hard to imitate: the walking style of an individual, together with the orientation of the joints in the skeletal structure and the inclinations between them, imparts this unique characteristic, and a person can alter external appearance but not skeletal structure. Gait-based systems are real-time, automatic systems that can process even low-resolution images and video frames. In this paper, we propose a gait recognition system and compare its performance with conventional biometric identification systems.

  7. Smart Grid Status and Metrics Report Appendices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balducci, Patrick J.; Antonopoulos, Chrissi A.; Clements, Samuel L.

    A smart grid uses digital power control and communication technology to improve the reliability, security, flexibility, and efficiency of the electric system, from large generation through the delivery systems to electricity consumers and a growing number of distributed generation and storage resources. To convey progress made in achieving the vision of a smart grid, this report uses a set of six characteristics derived from the National Energy Technology Laboratory Modern Grid Strategy. The Smart Grid Status and Metrics Report defines and examines 21 metrics that collectively provide insight into the grid’s capacity to embody these characteristics. This appendix presents papers covering each of the 21 metrics identified in Section 2.1 of the Smart Grid Status and Metrics Report. These metric papers were prepared in advance of the main body of the report and collectively form its informational backbone.

  8. Capacity utilization study for aviation security cargo inspection queuing system

    NASA Astrophysics Data System (ADS)

    Allgood, Glenn O.; Olama, Mohammed M.; Lake, Joe E.; Brumback, Daryl

    2010-04-01

    In this paper, we conduct a performance evaluation study for an aviation security cargo inspection queuing system for material flow and accountability. The queuing model employed in our study is based on discrete-event simulation and processes various types of cargo simultaneously. Onsite measurements are collected in an airport facility to validate the queuing model. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered such as system capacity, residual capacity, throughput, capacity utilization, subscribed capacity utilization, resources capacity utilization, subscribed resources capacity utilization, and number of cargo pieces (or pallets) in the different queues. These metrics are performance indicators of the system's ability to service current needs and its response capacity to additional requests. We studied and analyzed different scenarios by changing various model parameters such as number of pieces per pallet, number of TSA inspectors and ATS personnel, number of forklifts, number of explosives trace detection (ETD) and explosives detection system (EDS) inspection machines, inspection modality distribution, alarm rate, and cargo closeout time. The increased physical understanding resulting from execution of the queuing model utilizing these vetted performance measures should reduce the overall cost and shipping delays associated with new inspection requirements.
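
    A minimal sketch of the capacity-utilization calculation underlying these measures (a back-of-the-envelope illustration, not the discrete-event simulation itself; the station parameters are hypothetical):

      def station_utilization(arrival_rate, service_rate, n_servers):
          # arrival_rate and service_rate in pallets/hour; returns (utilization, residual capacity)
          capacity = service_rate * n_servers
          return arrival_rate / capacity, max(capacity - arrival_rate, 0.0)

      # Hypothetical ETD inspection station: 3 machines at 20 pallets/hour each, 45 pallets/hour arriving.
      print(station_utilization(arrival_rate=45.0, service_rate=20.0, n_servers=3))  # (0.75, 15.0)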

  9. Capacity Utilization Study for Aviation Security Cargo Inspection Queuing System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allgood, Glenn O; Olama, Mohammed M; Lake, Joe E

    In this paper, we conduct a performance evaluation study for an aviation security cargo inspection queuing system for material flow and accountability. The queuing model employed in our study is based on discrete-event simulation and processes various types of cargo simultaneously. Onsite measurements are collected in an airport facility to validate the queuing model. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered such as system capacity, residual capacity, throughput, capacity utilization, subscribed capacity utilization, resources capacity utilization, subscribed resources capacity utilization, and number of cargo pieces (or pallets) in the different queues. These metrics are performance indicators of the system's ability to service current needs and its response capacity to additional requests. We studied and analyzed different scenarios by changing various model parameters such as number of pieces per pallet, number of TSA inspectors and ATS personnel, number of forklifts, number of explosives trace detection (ETD) and explosives detection system (EDS) inspection machines, inspection modality distribution, alarm rate, and cargo closeout time. The increased physical understanding resulting from execution of the queuing model utilizing these vetted performance measures should reduce the overall cost and shipping delays associated with new inspection requirements.

  10. Evaluating Security Controls Based on Key Performance Indicators and Stakeholder Mission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheldon, Frederick T; Abercrombie, Robert K; Mili, Ali

    2008-01-01

    Good security metrics are required to make good decisions about how to design security countermeasures, to choose between alternative security architectures, and to improve security during operations. Therefore, in essence, measurement can be viewed as a decision aid. The lack of sound practical security metrics is severely hampering progress in the development of secure systems. The Cyberspace Security Econometrics System (CSES) offers the following advantages over traditional measurement systems: (1) CSES reflects the variances that exist amongst different stakeholders of the same system. Different stakeholders will typically attach different stakes to the same requirement or service (e.g., a service may be provided by an information technology system or process control system, etc.). (2) For a given stakeholder, CSES reflects the variance that may exist among the stakes she/he attaches to meeting each requirement. The same stakeholder may attach different stakes to satisfying different requirements within the overall system specification. (3) For a given compound specification (e.g., combination(s) of commercial off the shelf software and/or hardware), CSES reflects the variance that may exist amongst the levels of verification and validation (i.e., certification) performed on components of the specification. The certification activity may produce higher levels of assurance across different components of the specification than others. Consequently, this paper introduces the basis, objectives and capabilities for the CSES including inputs/outputs and the basic structural and mathematical underpinnings.

  11. Defense Technology Security Administration Strategic Plan 2009-2010

    DTIC Science & Technology

    2008-12-22

    NUMBER 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) Defense Technology Security Administration (DTSA), Washington, DC 8. PERFORMING ORGANIZATION...Security Administration This document is unclassified in its entirety. Photography courtesy of Defense Link and DTSA. Document printed 2009. DTSA...STRATEGIC PLAN 2009-2010 CONTENTS Message from the Director 2 Envisioning 2010 3 Our Way Ahead 5 We Are DTSA 18 Metrics Matrix 24 DTSA

  12. Exploring the association of urban or rural county status and environmental, nutrition- and lifestyle-related resources with the efficacy of SNAP-Ed (Supplemental Nutrition Assistance Program-Education) to improve food security.

    PubMed

    Rivera, Rebecca L; Dunne, Jennifer; Maulding, Melissa K; Wang, Qi; Savaiano, Dennis A; Nickols-Richardson, Sharon M; Eicher-Miller, Heather A

    2018-04-01

    To investigate the association of policy, systems and environmental factors with improvement in household food security among low-income Indiana households with children after a Supplemental Nutrition Assistance Program-Education (SNAP-Ed) direct nutrition education intervention. Household food security scores measured by the eighteen-item US Household Food Security Survey Module in a longitudinal randomized and controlled SNAP-Ed intervention study conducted from August 2013 to April 2015 were the response variable. Metrics to quantify environmental factors including classification of urban or rural county status; the number of SNAP-authorized stores, food pantries and recreational facilities; average fair market housing rental price; and natural amenity rank were collected from government websites and data sets covering the years 2012-2016 and used as covariates in mixed multiple linear regression modelling. Thirty-seven Indiana counties, USA, 2012-2016. SNAP-Ed eligible adults from households with children (n 328). None of the environmental factors investigated were significantly associated with changes in household food security in this exploratory study. SNAP-Ed improves food security regardless of urban or rural location or the environmental factors investigated. Expansion of SNAP-Ed in rural areas may support food access among the low-income population and reduce the prevalence of food insecurity in rural compared with urban areas. Further investigation into policy, systems and environmental factors of the Social Ecological Model are warranted to better understand their relationship with direct SNAP-Ed and their impact on diet-related behaviours and food security.

  13. A Graph Analytic Metric for Mitigating Advanced Persistent Threat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, John R.; Hogan, Emilie A.

    2013-06-04

    This paper introduces a novel graph analytic metric that can be used to measure the potential vulnerability of a cyber network to specific types of attacks that use lateral movement and privilege escalation, such as the well-known Pass-the-Hash (PTH). The metric is computed from an oriented subgraph of the underlying cyber network induced by selecting only those edges for which a given property holds between the two vertices of the edge. The metric with respect to a select node on the subgraph is defined as the likelihood that the select node is reachable from another arbitrary node in the graph. This metric can be calculated dynamically from the authorization and auditing layers during the network security authorization phase and will potentially enable predictive deterrence against attacks such as PTH.
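
    One plausible reading of this metric, sketched below with networkx (an assumed formulation, not the paper's exact algorithm; the credential edges are hypothetical): the fraction of other hosts from which the selected host is reachable in the directed subgraph induced by the chosen edge property.

      import networkx as nx

      def reachability_metric(edges, target):
          # edges: (src, dst) pairs where a credential cached on src grants access to dst
          g = nx.DiGraph(edges)
          others = [n for n in g.nodes if n != target]
          return sum(nx.has_path(g, n, target) for n in others) / len(others) if others else 0.0

      # Hypothetical admin-credential edges harvested from authorization/audit data.
      edges = [("wks1", "srv1"), ("srv1", "dc1"), ("wks2", "srv1"), ("wks3", "wks2")]
      print(reachability_metric(edges, target="dc1"))   # 1.0 -- every other host can reach dc1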

  14. Quantifying Availability in SCADA Environments Using the Cyber Security Metric MFC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aissa, Anis Ben; Rabai, Latifa Ben Arfa; Abercrombie, Robert K

    2014-01-01

    Supervisory Control and Data Acquisition (SCADA) systems are distributed networks dispersed over large geographic areas that aim to monitor and control industrial processes from remote areas and/or a centralized location. They are used in the management of critical infrastructures such as electric power generation, transmission and distribution, water and sewage, and industrial manufacturing, as well as oil and gas production. The availability of SCADA systems is tantamount to assuring safety, security and profitability. SCADA systems are the backbone of the national cyber-physical critical infrastructure. Herein, we explore the definition and quantification of an econometric measure of availability, as it applies to SCADA systems; our metric is a specialization of the generic measure of mean failure cost.

  15. Anticipating the unintended consequences of security dynamics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Overfelt, James Robert; Malczynski, Leonard A.

    2010-01-01

    In a globalized world, dramatic changes within any one nation cause ripple or even tsunamic effects within neighbor nations and nations geographically far removed. Multinational interventions to prevent or mitigate detrimental changes can easily cause secondary unintended consequences more detrimental and enduring than the feared change instigating the intervention. This LDRD research developed the foundations for a flexible geopolitical and socioeconomic simulation capability that focuses on the dynamic national security implications of natural and man-made trauma for a nation-state and the states linked to it through trade or treaty. The model developed contains a database for simulating all 229 recognized nation-states and sovereignties with the detail of 30 economic sectors including consumers and natural resources. The model explicitly simulates the interactions among the countries and their governments. Decisions among governments and populations are based on expectation formation. In the simulation model, failed expectations are used as a key metric for tension across states, among ethnic groups, and between population factions. This document provides the foundational documentation for the model.

  16. Information Operations & Security

    DTIC Science & Technology

    2012-03-05

    Fred B. Schneider, Cornell The Promise of Security Metrics • Users: Purchasing decisions – Which system is the better value? • Builders ...Engineering University of Maryland, College Park DISTRIBUTION A: Approved for public release; distribution is unlimited. Digital Multimedia Anti...fingerprints for multimedia content: • Determine the time and place of recordings • Detect tampering in the multimedia content; bind video and

  17. Distinguishability of quantum states and shannon complexity in quantum cryptography

    NASA Astrophysics Data System (ADS)

    Arbekov, I. M.; Molotkov, S. N.

    2017-07-01

    The proof of the security of quantum key distribution is a rather complex problem. Security is defined in terms different from the requirements imposed on keys in classical cryptography. In quantum cryptography, the security of keys is expressed in terms of the closeness of the quantum state of an eavesdropper after key distribution to an ideal quantum state that is uncorrelated to the key of legitimate users. A metric of closeness between two quantum states is given by the trace metric. In classical cryptography, the security of keys is understood in terms of, say, the complexity of key search in the presence of side information. In quantum cryptography, side information for the eavesdropper is given by the whole volume of information on keys obtained from both quantum and classical channels. The fact that the mathematical apparatuses used in the proof of key security in classical and quantum cryptography are essentially different leads to misunderstanding and emotional discussions [1]. Therefore, one should be able to answer the question of how different cryptographic robustness criteria are related to each other. In the present study, it is shown that there is a direct relationship between the security criterion in quantum cryptography, which is based on the trace distance determining the distinguishability of quantum states, and the criterion in classical cryptography, which uses guesswork on the determination of a key in the presence of side information.
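
    For reference, the closeness criterion referred to above is usually written with the trace distance between the real and ideal joint states, bounded by a security parameter (standard form, not specific to this paper):

      D(\rho_{\mathrm{real}}, \rho_{\mathrm{ideal}}) = \tfrac{1}{2}\,\mathrm{Tr}\,\bigl|\rho_{\mathrm{real}} - \rho_{\mathrm{ideal}}\bigr| \le \varepsilon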

  18. Indicators and Methods for Evaluating Economic, Ecosystem ...

    EPA Pesticide Factsheets

    The U.S. Human Well-being Index (HWBI) is a composite measure that incorporates economic, environmental, and societal well-being elements through the eight domains of connection to nature, cultural fulfillment, education, health, leisure time, living standards, safety and security, and social cohesion (USEPA 2012a; Smith et al. 2013). Twenty-eight services, represented by a collection of indicators and metrics, have been identified as influencing these domains of human well-being. By taking an inventory of stocks or measuring the results of a service, a relationship function can be derived to understand how changes in the provisioning of that service can influence the HWBI. An extensive review of existing services was performed to identify current services, indicators and metrics in use. This report describes the indicators and methods we have selected to evaluate the provisioning of economic, ecosystem, and social services related to human well-being. It also provides the metadata and methods for calculating service provisioning scores for the HWBI modeling framework.

  19. Determining the multi-scale hedge ratios of stock index futures using the lower partial moments method

    NASA Astrophysics Data System (ADS)

    Dai, Jun; Zhou, Haigang; Zhao, Shaoquan

    2017-01-01

    This paper considers a multi-scale futures hedge strategy that minimizes lower partial moments (LPM). To do this, wavelet analysis is adopted to decompose time series data into different components. Next, different parametric estimation methods with known distributions are applied to calculate the LPM of hedged portfolios, which is the key to determining multi-scale hedge ratios over different time scales. Then these parametric methods are compared with the prevailing nonparametric kernel metric method. Empirical results indicate that in the China Securities Index 300 (CSI 300) index futures and spot markets, hedge ratios and hedge efficiency estimated by the nonparametric kernel metric method are inferior to those estimated by parametric hedging models based on the features of the sequence distributions. In addition, if minimum-LPM is selected as a hedge target, the hedging periods, degree of risk aversion, and target returns can affect the multi-scale hedge ratios and hedge efficiency, respectively.
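
    A small sketch of the lower-partial-moment objective (empirical, order-2 by default; the CSI 300 return series here are simulated placeholders, not market data) and a brute-force scan for the hedge ratio that minimizes it:

      import numpy as np

      def lower_partial_moment(spot_returns, futures_returns, hedge_ratio, target=0.0, order=2):
          portfolio = np.asarray(spot_returns) - hedge_ratio * np.asarray(futures_returns)
          shortfall = np.maximum(target - portfolio, 0.0)   # only returns below the target count
          return np.mean(shortfall ** order)

      rng = np.random.default_rng(3)
      futures = rng.normal(0, 0.015, 250)
      spot = 0.9 * futures + rng.normal(0, 0.005, 250)      # simulated co-movement of about 0.9
      ratios = np.linspace(0.0, 1.5, 151)
      print(min(ratios, key=lambda h: lower_partial_moment(spot, futures, h)))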

  20. Risk-Based Prioritization of Research for Aviation Security Using Logic-Evolved Decision Analysis

    NASA Technical Reports Server (NTRS)

    Eisenhawer, S. W.; Bott, T. F.; Sorokach, M. R.; Jones, F. P.; Foggia, J. R.

    2004-01-01

    The National Aeronautics and Space Administration is developing advanced technologies to reduce terrorist risk for the air transportation system. Decision support tools are needed to help allocate assets to the most promising research. An approach to rank ordering technologies (using logic-evolved decision analysis), with risk reduction as the metric, is presented. The development of a spanning set of scenarios using a logic-gate tree is described. Baseline risk for these scenarios is evaluated with an approximate reasoning model. Illustrative risk and risk reduction results are presented.

  1. Intersubject variability and intrasubject reproducibility of 12-lead ECG metrics: Implications for human verification.

    PubMed

    Jekova, Irena; Krasteva, Vessela; Leber, Remo; Schmid, Ramun; Twerenbold, Raphael; Müller, Christian; Reichlin, Tobias; Abächerli, Roger

    Electrocardiogram (ECG) biometrics is an advanced technology, not yet covered by guidelines on criteria, features and leads for maximal authentication accuracy. This study aims to define the minimal set of morphological metrics in 12-lead ECG by optimization towards high reliability and security, and validation in a person verification model across a large population. A standard 12-lead resting ECG database from 574 non-cardiac patients with two remote recordings (>1 year apart) was used. A commercial ECG analysis module (Schiller AG) measured 202 morphological features, including lead-specific amplitudes, durations, ST-metrics, and axes. Coefficient of variation (CV, intersubject variability) and percent mean absolute difference (PMAD, intrasubject reproducibility) defined the optimization (PMAD/CV→min) and restriction (CV<30%) criteria for selection of the most stable and distinctive features. Linear discriminant analysis (LDA) validated the non-redundant feature set for person verification. Maximal LDA verification sensitivity (85.3%) and specificity (86.4%) were validated for 11 optimal features: R-amplitude (I,II,V1,V2,V3,V5), S-amplitude (V1,V2), negative T-wave amplitude (aVR), and R-duration (aVF,V1). Copyright © 2016 Elsevier Inc. All rights reserved.
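
    The selection criteria can be sketched as follows (assumed formulas consistent with the abstract, not the study's code; the feature arrays hold per-subject values at the two recordings):

      import numpy as np

      def feature_stability(rec1, rec2):
          # CV: percent variability across all measurements; PMAD: percent mean absolute
          # difference between a subject's two recordings, both relative to the grand mean.
          values = np.concatenate([rec1, rec2])
          cv = 100.0 * values.std() / abs(values.mean())
          pmad = 100.0 * np.mean(np.abs(np.asarray(rec1) - np.asarray(rec2))) / abs(values.mean())
          return cv, pmad

      def select_features(features_rec1, features_rec2, cv_limit=30.0):
          # Keep features with CV below the limit and rank them by PMAD/CV (smaller is better).
          kept = {}
          for name in features_rec1:
              cv, pmad = feature_stability(features_rec1[name], features_rec2[name])
              if cv < cv_limit:
                  kept[name] = pmad / cv
          return sorted(kept, key=kept.get)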

  2. Day, night and all-weather security surveillance automation synergy from combining two powerful technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morellas, Vassilios; Johnson, Andrew; Johnston, Chris

    2006-07-01

    Thermal imaging is rightfully a real-world technology proven to bring confidence to daytime, night-time and all weather security surveillance. Automatic image processing intrusion detection algorithms are also a real world technology proven to bring confidence to system surveillance security solutions. Together, day, night and all weather video imagery sensors and automated intrusion detection software systems create the real power to protect early against crime, providing real-time global homeland protection, rather than simply being able to monitor and record activities for post event analysis. These solutions, whether providing automatic security system surveillance at airports (to automatically detect unauthorized aircraft takeoff and landing activities) or at high risk private, public or government facilities (to automatically detect unauthorized people or vehicle intrusion activities), are on the move to provide end users the power to protect people, capital equipment and intellectual property against acts of vandalism and terrorism. As with any technology, infrared sensors and automatic image intrusion detection systems for global homeland security protection have clear technological strengths and limitations compared to other more common day and night vision technologies or more traditional manual man-in-the-loop intrusion detection security systems. This paper addresses these strength and limitation capabilities. False alarm rate (FAR) and false positive rate (FPR) are examples of key customer system acceptability metrics, and noise equivalent temperature difference (NETD) and minimum resolvable temperature are examples of sensor-level performance acceptability metrics.

  3. Target Scattering Metrics: Model-Model and Model-Data Comparisons

    DTIC Science & Technology

    2017-12-13

    measured synthetic aperture sonar (SAS) data or from numerical models is investigated. Metrics are needed for quantitative comparisons for signals...candidate metrics for model-model comparisons are examined here with a goal to consider raw data prior to its reduction to data products, which may...be suitable for input to classification schemes. The investigated metrics are then applied to model-data comparisons. INTRODUCTION Metrics for

  4. Target Scattering Metrics: Model-Model and Model Data comparisons

    DTIC Science & Technology

    2017-12-13

    measured synthetic aperture sonar (SAS) data or from numerical models is investigated. Metrics are needed for quantitative comparisons for signals...candidate metrics for model-model comparisons are examined here with a goal to consider raw data prior to its reduction to data products, which may...be suitable for input to classification schemes. The investigated metrics are then applied to model-data comparisons. INTRODUCTION Metrics for

  5. Near-Real-Time Cloud Auditing for Rapid Response

    DTIC Science & Technology

    2013-10-01

    cloud auditing , which provides timely evaluation results and rapid response, is the key to assuring the cloud. In this paper, we discuss security and...providers with possible automation of the audit , assertion, assessment, and assurance of their services. The Cloud Security Alliance (CSA [15]) was formed...monitoring tools, research literature, standards, and other resources related to IA (Information Assurance ) metrics and IT auditing . In the following

  6. Security Vulnerability and Patch Management in Electric Utilities: A Data-Driven Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Qinghua; Zhang, Fengli

    This paper explores a real security vulnerability and patch management dataset from an electric utility in order to shed light on characteristics of the vulnerabilities that electric utility assets have and how they are remediated in practice. Specifically, it first analyzes the distribution of vulnerabilities over software, assets, and other metrics. Then it analyzes how vulnerability features affect remediation actions.

  7. 77 FR 39559 - In the Matter of AngelCiti Entertainment, Inc., BodyTel Scientific, Inc., Clearant, Inc...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-03

    ...., BodyTel Scientific, Inc., Clearant, Inc., DataMetrics Corp., and Green Energy Group, Inc. (a/k/a eCom eCom.Com, Inc.); Order of Suspension of Trading June 29, 2012. It appears to the Securities and... of current and accurate information concerning the securities of Green Energy Group, Inc. (a/k/a eCom...

  8. DLA Energy Biofuel Feedstock Metrics Study

    DTIC Science & Technology

    2012-12-11

    mission is to “provide the Department of Defense [DoD] and other government agencies with comprehensive energy solutions in the most effective and...strategic imperative to consider energy security and climate change because of their potential effect on national security and mission readiness.8 For...mobility fuel costs have an adverse effect on military service programs and capabilities, particularly when combined with a tough appropriations

  9. Incident Management Capability Metrics Version 0.1

    DTIC Science & Technology

    2007-04-01

    guidance for proactively securing and hardening their infrastructure, for example6: • ISO 17799 (ISO/IEC 17799:2005) [ISO 2005a] • ISO 27001 (ISO/IEC 27001:2005) [ISO 2005b] • Control Objectives for Information and related Technology (COBIT) [ITGI 2006...security management systems – Requirements (ISO/IEC 27001:2005). http://www.iso.org/ iso /en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=421 03 (2005

  10. Physical-layer security analysis of PSK quantum-noise randomized cipher in optically amplified links

    NASA Astrophysics Data System (ADS)

    Jiao, Haisong; Pu, Tao; Xiang, Peng; Zheng, Jilin; Fang, Tao; Zhu, Huatao

    2017-08-01

    The quantitative security of quantum-noise randomized cipher (QNRC) in optically amplified links is analyzed from the perspective of physical-layer advantage. Establishing the wire-tap channel models for both key and data, we derive the general expressions of secrecy capacities for the key against ciphertext-only attack and known-plaintext attack, and that for the data, which serve as the basic performance metrics. Further, the maximal achievable secrecy rate of the system is proposed, under which secrecy of both the key and data is guaranteed. Based on the same framework, the secrecy capacities of various cases can be assessed and compared. The results indicate perfect secrecy is potentially achievable for data transmission, and an elementary principle of setting proper number of photons and bases is given to ensure the maximal data secrecy capacity. But the key security is asymptotically perfect, which tends to be the main constraint of systemic maximal secrecy rate. Moreover, by adopting cascaded optical amplification, QNRC can realize long-haul transmission with secure rate up to Gb/s, which is orders of magnitude higher than the perfect secrecy rates of other encryption systems.

  11. A New Nonmonetary Metric for Indicating Environmental Benefits from Ecosystem Restoration Projects of the U.S. Army Corps of Engineers, Report 2

    DTIC Science & Technology

    2010-07-01

    Research and Development Center (ERDC), Vicksburg, MS. At the time of publication, Glenn Rhett served as EMRPP Program Manager. COL Gary E. Johnston was...However, surveys done by Stein et al. (2000), Chaplin et al. (2000), Cole (2010) and others indicate that need is unevenly distributed among regions...relative security in a linear way; i.e., G5 species are not five times as secure as G1 species. The security rank is determined primarily by species

  12. Resilient National Security and Emergency Preparedness Communications: Service Metrics

    DTIC Science & Technology

    2015-10-01

    the East Coast of the United States after a tornado . Emergency management services are immediately called into duty to protect, provide, and secure...channel, radios that are vulnerable to congestion during heavy usage. During a tornado emergency prior to the hurricane, in a period of only 10...In terms of Survivability, the probability of a tornado occurring in any given year in this location is P( tornado ) = 0.05. The

  13. Seeking Balance in Cyber Education

    DTIC Science & Technology

    2015-02-01

    properties that can be applied to computer systems, networks, and software. For example, in our Introduction to Cyber Security Course, given to...Below is the submittal schedule for the areas of emphasis we are looking for: Data Mining in Metrics? Jul/ JAug 2015 Issue Submission Deadline: Feb...Phone Arena. PhoneArena.com, 12 Nov. 2013. Web. 08 Aug. 2014. 8. Various. “SI110: Introduction to Cyber Security, Technical Foundations.” SI110

  14. Augmented Performance Environment for Enhancing Interagency Coordination in Stability, Security, Transition, and Reconstruction (SSTR) Operations

    DTIC Science & Technology

    2009-02-01

    assessments and meeting rehearsal and individual learning materials • Specify the metrics to be used to capture the quality of interagency...government; • Improve security; and • Promote reconstruction (Barno, 2004; Dziedzic & Siedl, 2005; Center for Army Lessons Learned (CALL), 2007). The...their orientations and creating group-level, hierarchical orientations out of the aggregated individual orientations (Wan, Chiu, Peng, & Tam , 2007

  15. Design and Analysis of Optimization Algorithms to Minimize Cryptographic Processing in BGP Security Protocols.

    PubMed

    Sriram, Vinay K; Montgomery, Doug

    2017-07-01

    The Internet is subject to attacks due to vulnerabilities in its routing protocols. One proposed approach to attain greater security is to cryptographically protect network reachability announcements exchanged between Border Gateway Protocol (BGP) routers. This study proposes and evaluates the performance and efficiency of various optimization algorithms for validation of digitally signed BGP updates. In particular, this investigation focuses on the BGPSEC (BGP with SECurity extensions) protocol, currently under consideration for standardization in the Internet Engineering Task Force. We analyze three basic BGPSEC update processing algorithms: Unoptimized, Cache Common Segments (CCS) optimization, and Best Path Only (BPO) optimization. We further propose and study cache management schemes to be used in conjunction with the CCS and BPO algorithms. The performance metrics used in the analyses are: (1) routing table convergence time after BGPSEC peering reset or router reboot events and (2) peak-second signature verification workload. Both analytical modeling and detailed trace-driven simulation were performed. Results show that the BPO algorithm is 330% to 628% faster than the unoptimized algorithm for routing table convergence in a typical Internet core-facing provider edge router.
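
    The Cache Common Segments idea can be illustrated with a simple memoization sketch (illustrative only, not the protocol implementation; verify_fn stands in for a real signature verification callback):

      class SegmentVerifier:
          def __init__(self, verify_fn, max_entries=100_000):
              self._verify = verify_fn          # e.g., an ECDSA verification callback
              self._cache = {}                  # (signed_bytes, signature) -> bool
              self._max = max_entries

          def verify_segment(self, signed_bytes, signature):
              key = (signed_bytes, signature)
              if key not in self._cache:
                  if len(self._cache) >= self._max:
                      self._cache.pop(next(iter(self._cache)))   # crude FIFO-style eviction
                  self._cache[key] = self._verify(signed_bytes, signature)
              return self._cache[key]

          def verify_update(self, segments):
              # A BGPSEC-style update validates only if every path-segment signature verifies;
              # segments repeated across updates skip the cryptographic check.
              return all(self.verify_segment(b, s) for b, s in segments)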

  16. A study on strategic provisioning of cloud computing services.

    PubMed

    Whaiduzzaman, Md; Haque, Mohammad Nazmul; Rejaul Karim Chowdhury, Md; Gani, Abdullah

    2014-01-01

    Cloud computing is currently emerging as an ever-changing, growing paradigm that models "everything-as-a-service." Virtualised physical resources, infrastructure, and applications are supplied by service provisioning in the cloud. The evolution in the adoption of cloud computing is driven by clear and distinct promising features for both cloud users and cloud providers. However, the increasing number of cloud providers and the variety of service offerings have made it difficult for the customers to choose the best services. By employing successful service provisioning, the essential services required by customers, such as agility and availability, pricing, security and trust, and user metrics can be guaranteed by service provisioning. Hence, continuous service provisioning that satisfies the user requirements is a mandatory feature for the cloud user and vitally important in cloud computing service offerings. Therefore, we aim to review the state-of-the-art service provisioning objectives, essential services, topologies, user requirements, necessary metrics, and pricing mechanisms. We synthesize and summarize different provision techniques, approaches, and models through a comprehensive literature review. A thematic taxonomy of cloud service provisioning is presented after the systematic review. Finally, future research directions and open research issues are identified.

  17. A Study on Strategic Provisioning of Cloud Computing Services

    PubMed Central

    Rejaul Karim Chowdhury, Md

    2014-01-01

    Cloud computing is currently emerging as an ever-changing, growing paradigm that models “everything-as-a-service.” Virtualised physical resources, infrastructure, and applications are supplied by service provisioning in the cloud. The evolution in the adoption of cloud computing is driven by clear and distinct promising features for both cloud users and cloud providers. However, the increasing number of cloud providers and the variety of service offerings have made it difficult for the customers to choose the best services. By employing successful service provisioning, the essential services required by customers, such as agility and availability, pricing, security and trust, and user metrics can be guaranteed by service provisioning. Hence, continuous service provisioning that satisfies the user requirements is a mandatory feature for the cloud user and vitally important in cloud computing service offerings. Therefore, we aim to review the state-of-the-art service provisioning objectives, essential services, topologies, user requirements, necessary metrics, and pricing mechanisms. We synthesize and summarize different provision techniques, approaches, and models through a comprehensive literature review. A thematic taxonomy of cloud service provisioning is presented after the systematic review. Finally, future research directions and open research issues are identified. PMID:25032243

  18. A framework for linking cybersecurity metrics to the modeling of macroeconomic interdependencies.

    PubMed

    Santos, Joost R; Haimes, Yacov Y; Lian, Chenyang

    2007-10-01

    Hierarchical decision making is a multidimensional process involving management of multiple objectives (with associated metrics and tradeoffs in terms of costs, benefits, and risks), which span various levels of a large-scale system. The nation is a hierarchical system as it consists of multiple classes of decisionmakers and stakeholders ranging from national policymakers to operators of specific critical infrastructure subsystems. Critical infrastructures (e.g., transportation, telecommunications, power, banking, etc.) are highly complex and interconnected. These interconnections take the form of flows of information, shared security, and physical flows of commodities, among others. In recent years, economic and infrastructure sectors have become increasingly dependent on networked information systems for efficient operations and timely delivery of products and services. In order to ensure the stability, sustainability, and operability of our critical economic and infrastructure sectors, it is imperative to understand their inherent physical and economic linkages, in addition to their cyber interdependencies. An interdependency model based on a transformation of the Leontief input-output (I-O) model can be used for modeling: (1) the steady-state economic effects triggered by a consumption shift in a given sector (or set of sectors); and (2) the resulting ripple effects to other sectors. The inoperability metric is calculated for each sector; this is achieved by converting the economic impact (typically in monetary units) into a percentage value relative to the size of the sector. Disruptive events such as terrorist attacks, natural disasters, and large-scale accidents have historically shown cascading effects on both consumption and production. Hence, a dynamic model extension is necessary to demonstrate the interplay between combined demand and supply effects. The result is a foundational framework for modeling cybersecurity scenarios for the oil and gas sector. A hypothetical case study examines a cyber attack that causes a 5-week shortfall in the crude oil supply in the Gulf Coast area.
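
    A hedged sketch of the inoperability input-output formulation referenced above, using generic notation from the Leontief-based interdependency literature rather than this paper's exact symbols:

```latex
% q_i : inoperability of sector i, i.e., economic loss normalized by the sector's as-planned output
% A^* : interdependency matrix, c^* : demand-side perturbation vector
q = A^{*} q + c^{*}
\quad\Longrightarrow\quad
q = (I - A^{*})^{-1} c^{*},
\qquad
q_i = \frac{\text{economic loss in sector } i}{\text{as-planned output of sector } i}
```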

  19. Creating "Intelligent" Climate Model Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, N. C.; Taylor, P. C.

    2014-12-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is often used to add value to model projections: consensus projections have been shown to consistently outperform individual models. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, certain models reproduce climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequal weighting multi-model ensembles. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument and surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing weighted and unweighted model ensembles. For example, one tested metric weights the ensemble by how well models reproduce the time-series probability distribution of the cloud forcing component of reflected shortwave radiation. The weighted ensemble for this metric indicates lower simulated precipitation (up to .7 mm/day) in tropical regions than the unweighted ensemble: since CMIP5 models have been shown to overproduce precipitation, this result could indicate that the metric is effective in identifying models which simulate more realistic precipitation. Ultimately, the goal of the framework is to identify performance metrics for advising better methods for ensemble averaging models and create better climate predictions.
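
    As an illustration of the unequal-weighting step described above, the sketch below maps per-model skill scores to normalized weights; the skill-to-weight mapping is one plausible choice, not the weighting scheme used in this study.

```python
import numpy as np

def weighted_ensemble(projections: np.ndarray, skill: np.ndarray) -> np.ndarray:
    """projections: (n_models, ...) array of model fields; skill: (n_models,)
    non-negative scores, larger meaning better agreement with the chosen
    process-based metric (e.g., an observed OLR-surface temperature relationship)."""
    weights = skill / skill.sum()                      # normalize scores to weights
    return np.tensordot(weights, projections, axes=1)  # weighted ensemble mean

models = np.random.rand(5, 10, 10)                     # 5 models on a 10 x 10 grid
equal_mean = models.mean(axis=0)                       # conventional equal weighting
weighted_mean = weighted_ensemble(models, np.array([0.9, 0.7, 0.8, 0.4, 0.6]))
```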

  20. Using principal component analysis for selecting network behavioral anomaly metrics

    NASA Astrophysics Data System (ADS)

    Gregorio-de Souza, Ian; Berk, Vincent; Barsamian, Alex

    2010-04-01

    This work addresses new approaches to behavioral analysis of networks and hosts for the purposes of security monitoring and anomaly detection. Most commonly used approaches simply implement anomaly detectors for one, or a few, simple metrics and those metrics can exhibit unacceptable false alarm rates. For instance, the anomaly score of network communication is defined as the reciprocal of the likelihood that a given host uses a particular protocol (or destination); this definition may result in an unrealistically high threshold for alerting to avoid being flooded by false positives. We demonstrate that selecting and adapting the metrics and thresholds, on a host-by-host or protocol-by-protocol basis, can be done by established multivariate analyses such as PCA. We will show how to determine one or more metrics, for each network host, that record the highest available amount of information regarding the baseline behavior and show relevant deviances reliably. We describe the methodology used to pick from a large selection of available metrics, and illustrate a method for comparing the resulting classifiers. Using our approach we are able to reduce the resources required to properly identify misbehaving hosts, protocols, or networks, by dedicating system resources to only those metrics that actually matter in detecting network deviations.
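
    A minimal sketch of the PCA-based selection described above, assuming a per-host matrix whose columns are candidate behavioral metrics; the metric set and the 90% variance cutoff are illustrative assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# rows = time windows for one host, columns = candidate metrics
# (e.g., flow count, bytes out, distinct destinations, distinct protocols)
X = np.random.rand(500, 8)

Z = StandardScaler().fit_transform(X)          # metrics live on very different scales
pca = PCA().fit(Z)

# Keep enough components to explain ~90% of the baseline behavioral variance,
# then rank the original metrics by their absolute loadings on those components.
n = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.90)) + 1
loadings = np.abs(pca.components_[:n]).sum(axis=0)
ranked_metrics = np.argsort(loadings)[::-1]    # indices of the most informative metrics
```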

  1. Limited view angle iterative CT reconstruction

    NASA Astrophysics Data System (ADS)

    Kisner, Sherman J.; Haneda, Eri; Bouman, Charles A.; Skatter, Sondre; Kourinny, Mikhail; Bedford, Simon

    2012-03-01

    Computed Tomography (CT) is widely used for transportation security to screen baggage for potential threats. For example, many airports use X-ray CT to scan the checked baggage of airline passengers. The resulting reconstructions are then used for both automated and human detection of threats. Recently, there has been growing interest in the use of model-based reconstruction techniques for application in CT security systems. Model-based reconstruction offers a number of potential advantages over more traditional direct reconstruction such as filtered backprojection (FBP). Perhaps one of the greatest advantages is the potential to reduce reconstruction artifacts when non-traditional scan geometries are used. For example, FBP tends to produce very severe streaking artifacts when applied to limited-view data, which can adversely affect subsequent processing such as segmentation and detection. In this paper, we investigate the use of model-based reconstruction in conjunction with limited-view scanning architectures, and we illustrate the value of these methods using transportation security examples. The advantage of a limited-view architecture is that it has the potential to reduce the cost and complexity of a scanning system, but its disadvantage is that limited-view data can result in structured artifacts in reconstructed images. Our method of reconstruction depends on the formulation of both a forward projection model for the system, and a prior model that accounts for the contents and densities of typical baggage. In order to evaluate our new method, we use realistic models of baggage with randomly inserted simple simulated objects. Using this approach, we show that model-based reconstruction can substantially reduce artifacts and improve important metrics of image quality such as the accuracy of the estimated CT numbers.
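
    The model-based reconstruction described above is commonly posed as a MAP estimation problem; a hedged sketch of the generic cost function follows (symbols assumed, not the paper's exact notation): y denotes the measured projections, A the forward projection model, \Lambda a diagonal noise-weighting matrix, and U(x) a Markov-random-field prior over neighboring voxels.

```latex
\hat{x} = \arg\min_{x \ge 0}
\left\{ \tfrac{1}{2}\,(y - A x)^{\top} \Lambda \,(y - A x) + U(x) \right\},
\qquad
U(x) = \sum_{\{i,j\} \in \mathcal{N}} b_{ij}\, \rho(x_i - x_j)
```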

  2. Trust models for efficient communication in Mobile Cloud Computing and their applications to e-Commerce

    NASA Astrophysics Data System (ADS)

    Pop, Florin; Dobre, Ciprian; Mocanu, Bogdan-Costel; Citoteanu, Oana-Maria; Xhafa, Fatos

    2016-11-01

    Managing the large dimensions of data processed in distributed systems that are formed by datacentres and mobile devices has become a challenging issue with an important impact on the end-user. Therefore, the management process of such systems can be achieved efficiently by using uniform overlay networks, interconnected through secure and efficient routing protocols. The aim of this article is to advance our previous work with a novel trust model based on a reputation metric that actively uses the social links between users and the model of interaction between them. We present and evaluate an adaptive model for the trust management in structured overlay networks, based on a Mobile Cloud architecture and considering a honeycomb overlay. Such a model can be useful for supporting advanced mobile market-share e-Commerce platforms, where users collaborate and exchange reliable information about, for example, products of interest and supporting ad-hoc business campaigns

  3. United States Foreign Assistance Programs: The Requirement of Metrics for Security Assistance and Security Cooperation Programs

    DTIC Science & Technology

    2012-06-01

    Czech Republic, Estonia, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, and Slovenia. The seven PfP Eastern European countries, as... The Soviet experience has left an indelible mark in Ukrainian “identity, politics, economics and even religion” and this experience looms large in... from Tsarist Russia to the Soviet Union from 1918 to 1921. The Russian and Soviet influences, along with previous Persian and Ottoman cultures

  4. Sensor data security level estimation scheme for wireless sensor networks.

    PubMed

    Ramos, Alex; Filho, Raimir Holanda

    2015-01-19

    Due to their increasing dissemination, wireless sensor networks (WSNs) have become the target of more and more sophisticated attacks, even capable of circumventing both attack detection and prevention mechanisms. This may cause WSN users, who totally trust these security mechanisms, to think that a sensor reading is secure, even when an adversary has corrupted it. For that reason, a scheme capable of estimating the security level (SL) that these mechanisms provide to sensor data is needed, so that users can be aware of the actual security state of this data and can make better decisions on its use. However, existing security estimation schemes proposed for WSNs fully ignore detection mechanisms and analyze solely the security provided by prevention mechanisms. In this context, this work presents the sensor data security estimator (SDSE), a new comprehensive security estimation scheme for WSNs. SDSE is designed for estimating the sensor data security level based on security metrics that analyze both attack prevention and detection mechanisms. In order to validate our proposed scheme, we have carried out extensive simulations that show the high accuracy of SDSE estimates.

  5. Sensor Data Security Level Estimation Scheme for Wireless Sensor Networks

    PubMed Central

    Ramos, Alex; Filho, Raimir Holanda

    2015-01-01

    Due to their increasing dissemination, wireless sensor networks (WSNs) have become the target of more and more sophisticated attacks, even capable of circumventing both attack detection and prevention mechanisms. This may cause WSN users, who totally trust these security mechanisms, to think that a sensor reading is secure, even when an adversary has corrupted it. For that reason, a scheme capable of estimating the security level (SL) that these mechanisms provide to sensor data is needed, so that users can be aware of the actual security state of this data and can make better decisions on its use. However, existing security estimation schemes proposed for WSNs fully ignore detection mechanisms and analyze solely the security provided by prevention mechanisms. In this context, this work presents the sensor data security estimator (SDSE), a new comprehensive security estimation scheme for WSNs. SDSE is designed for estimating the sensor data security level based on security metrics that analyze both attack prevention and detection mechanisms. In order to validate our proposed scheme, we have carried out extensive simulations that show the high accuracy of SDSE estimates. PMID:25608215

  6. Medical data sheet in safe havens - A tri-layer cryptic solution.

    PubMed

    Praveenkumar, Padmapriya; Amirtharajan, Rengarajan; Thenmozhi, K; Balaguru Rayappan, John Bosco

    2015-07-01

    Secured sharing of the diagnostic reports and scan images of patients among doctors with complementary expertise for collaborative treatment will help to provide maximum care through faster and decisive decisions. In this context, a tri-layer cryptic solution has been proposed and implemented on Digital Imaging and Communications in Medicine (DICOM) images to establish a secured communication for effective referrals among peers without compromising the privacy of patients. In this approach, a blend of three cryptic schemes, namely Latin square image cipher (LSIC), discrete Gould transform (DGT) and Rubik's encryption, has been adopted. Among them, LSIC provides better substitution, confusion and shuffling of the image blocks; DGT incorporates tamper proofing with authentication; and Rubik renders a permutation of DICOM image pixels. The developed algorithm has been successfully implemented and tested in both software (MATLAB 7) and hardware (Universal Software Radio Peripheral, USRP) environments. Specifically, the encrypted data were tested by transmitting them through an additive white Gaussian noise (AWGN) channel model. Furthermore, the robustness of the implemented algorithm was validated by employing standard metrics such as the unified average changing intensity (UACI), number of pixels change rate (NPCR), correlation values and histograms. The estimated metrics were also compared with those of existing methods and show advantages in terms of a large key space to defy brute-force attacks, resistance to cropping attacks, strong key sensitivity, and a uniform pixel-value distribution after encryption. Copyright © 2015 Elsevier Ltd. All rights reserved.
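
    The NPCR and UACI figures of merit mentioned above have standard definitions; a minimal sketch for 8-bit grayscale images, assuming two cipher images produced from plaintexts that differ in a single pixel:

```python
import numpy as np

def npcr_uaci(c1: np.ndarray, c2: np.ndarray):
    """c1, c2: uint8 cipher images of identical shape."""
    c1 = c1.astype(np.float64)
    c2 = c2.astype(np.float64)
    npcr = 100.0 * np.mean(c1 != c2)                  # number of pixels change rate (%)
    uaci = 100.0 * np.mean(np.abs(c1 - c2) / 255.0)   # unified average changing intensity (%)
    return npcr, uaci

# For a strong image cipher, NPCR is typically expected above ~99.6% and UACI near ~33.4%.
```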

  7. Metrics for Identifying Food Security Status and the Population with Potential to Benefit from Nutrition Interventions in the Lives Saved Tool (LiST).

    PubMed

    Jackson, Bianca D; Walker, Neff; Heidkamp, Rebecca

    2017-11-01

    Background: The Lives Saved Tool (LiST) uses the poverty head-count ratio at $1.90/d as a proxy for food security to identify the percentage of the population with the potential to benefit from balanced energy supplementation and complementary feeding (CF) interventions, following the approach used for the Lancet's 2008 series on Maternal and Child Undernutrition. Because much work has been done in the development of food security indicators, a re-evaluation of the use of this indicator was warranted. Objective: The aim was to re-evaluate the use of the poverty head-count ratio at $1.90/d as the food security proxy indicator in LiST. Methods: We carried out a desk review to identify available indicators of food security. We identified 3 indicators and compared them by using scatterplots, Spearman's correlations, and Bland-Altman plot analysis. We generated LiST projections to compare the modeled impact results with the use of the different indicators. Results: There are many food security indicators available, but only 3 additional indicators were identified with the data availability requirements to be used as the food security indicator in LiST. As expected, analyzed food security indicators were significantly positively correlated (P < 0.001), but there was generally poor agreement between them. The disparity between the indicators also increases as the values of the indicators increase. Consequently, the choice of indicator can have a considerable effect on the impact of interventions modeled in LiST, especially in food-insecure contexts. Conclusions: There was no single indicator identified that is ideal for measuring the percentage of the population who is food insecure for LiST. Thus, LiST will use the food security indicators that were used in the meta-analyses that produced the effect estimates. These are the poverty head-count ratio at $1.90/d for CF interventions and the prevalence of a low body mass index in women of reproductive age for balanced energy supplementation interventions. © 2017 American Society for Nutrition.

  8. Report: Improvements Needed in CSB’s Identity and Access Management and Incident Response Security Functions

    EPA Pesticide Factsheets

    Report #18-P-0030, October 30, 2017. Weaknesses in the Identity and Access Management and Incident Response metric domains leave the CSB vulnerable to attacks occurring and not being detected in a timely manner.

  9. R2U2: Monitoring and Diagnosis of Security Threats for Unmanned Aerial Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Moosbruger, Patrick; Rozier, Kristin Y.

    2015-01-01

    We present R2U2, a novel framework for runtime monitoring of security properties and diagnosing of security threats on-board Unmanned Aerial Systems (UAS). R2U2, implemented in FPGA hardware, is a real-time, REALIZABLE, RESPONSIVE, UNOBTRUSIVE Unit for security threat detection. R2U2 is designed to continuously monitor inputs from the GPS and the ground control station, sensor readings, actuator outputs, and flight software status. By simultaneously monitoring and performing statistical reasoning, attack patterns and post-attack discrepancies in the UAS behavior can be detected. R2U2 uses runtime observer pairs for linear and metric temporal logics for property monitoring and Bayesian networks for diagnosis of security threats. We discuss the design and implementation that now enables R2U2 to handle security threats and present simulation results of several attack scenarios on the NASA DragonEye UAS.
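
    A heavily simplified sketch of the kind of bounded-response temporal property such a runtime monitor evaluates; the property and signal names are invented for illustration, and R2U2 itself realizes its observers in FPGA hardware rather than software.

```python
class BoundedResponseMonitor:
    """Checks the metric-temporal property: whenever the GPS fix is lost,
    it must be re-acquired within n_steps monitoring cycles."""

    def __init__(self, n_steps: int):
        self.n_steps = n_steps
        self.steps_without_fix = 0

    def step(self, gps_fix_ok: bool) -> bool:
        """Call once per cycle; returns False as soon as the property is violated."""
        self.steps_without_fix = 0 if gps_fix_ok else self.steps_without_fix + 1
        return self.steps_without_fix <= self.n_steps

monitor = BoundedResponseMonitor(n_steps=5)
trace = [True, True, False, False, False, False, False, False]
verdicts = [monitor.step(s) for s in trace]  # the final entry reports a violation
```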

  10. Creating "Intelligent" Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, Noel; Taylor, Patrick

    2014-05-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequal weighting multi-model ensembles. The intention is to produce improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric. Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and several radiative forcing Representative Concentration Pathway (RCP) scenarios. Ultimately, the goal of the framework is to advise better methods for ensemble averaging models and create better climate predictions.

  11. A Complex Systems Approach to More Resilient Multi-Layered Security Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Nathanael J. K.; Jones, Katherine A.; Bandlow, Alisa

    In July 2012, protestors cut through security fences and gained access to the Y-12 National Security Complex. This was believed to be a highly reliable, multi-layered security system. This report documents the results of a Laboratory Directed Research and Development (LDRD) project that created a consistent, robust mathematical framework using complex systems analysis algorithms and techniques to better understand the emergent behavior, vulnerabilities and resiliency of multi-layered security systems subject to budget constraints and competing security priorities. Because there are several dimensions to security system performance and a range of attacks that might occur, the framework is multi-objective for a performance frontier to be estimated. This research explicitly uses probability of intruder interruption given detection (P_I) as the primary resilience metric. We demonstrate the utility of this framework with both notional as well as real-world examples of Physical Protection Systems (PPSs) and validate using a well-established force-on-force simulation tool, Umbra.

  12. Exploring objective climate classification for the Himalayan arc and adjacent regions using gridded data sources

    NASA Astrophysics Data System (ADS)

    Forsythe, N.; Blenkinsop, S.; Fowler, H. J.

    2015-05-01

    A three-step climate classification was applied to a spatial domain covering the Himalayan arc and adjacent plains regions using input data from four global meteorological reanalyses. Input variables were selected based on an understanding of the climatic drivers of regional water resource variability and crop yields. Principal component analysis (PCA) of those variables and k-means clustering on the PCA outputs revealed a reanalysis ensemble consensus for eight macro-climate zones. Spatial statistics of input variables for each zone revealed consistent, distinct climatologies. This climate classification approach has potential for enhancing assessment of climatic influences on water resources and food security as well as for characterising the skill and bias of gridded data sets, both meteorological reanalyses and climate models, for reproducing subregional climatologies. Through their spatial descriptors (area, geographic centroid, elevation mean range), climate classifications also provide metrics, beyond simple changes in individual variables, with which to assess the magnitude of projected climate change. Such sophisticated metrics are of particular interest for regions, including mountainous areas, where natural and anthropogenic systems are expected to be sensitive to incremental climate shifts.
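
    A minimal sketch of the three-step classification described above (standardize, PCA, k-means clustering on the PCA outputs); the variable names, the number of retained components, and the use of scikit-learn are illustrative assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# rows = grid cells over the Himalayan arc domain,
# columns = selected climatic input variables (e.g., seasonal precipitation, temperature)
X = np.random.rand(5000, 12)

scores = PCA(n_components=4).fit_transform(StandardScaler().fit_transform(X))
zones = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(scores)
# 'zones' assigns each grid cell to one of eight macro-climate zones,
# analogous to the reanalysis-ensemble consensus reported above.
```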

  13. Extraction of Rice Phenological Differences under Heavy Metal Stress Using EVI Time-Series from HJ-1A/B Data.

    PubMed

    Liu, Shuyuan; Liu, Xiangnan; Liu, Meiling; Wu, Ling; Ding, Chao; Huang, Zhi

    2017-05-30

    An effective method to monitor heavy metal stress in crops is of critical importance to assure agricultural production and food security. Phenology, as a sensitive indicator of environmental change, can respond to heavy metal stress in crops and remote sensing is an effective method to detect plant phenological changes. This study focused on identifying the rice phenological differences under varied heavy metal stress using EVI (enhanced vegetation index) time-series, which was obtained from HJ-1A/B CCD images and fitted with asymmetric Gaussian model functions. We extracted three phenological periods using first derivative analysis: the tillering period, heading period, and maturation period; and constructed two kinds of metrics with phenological characteristics: date-intervals and time-integrated EVI, to explore the rice phenological differences under mild and severe stress levels. Results indicated that under severe stress the values of the metrics for presenting rice phenological differences in the experimental areas of heavy metal stress were smaller than the ones under mild stress. This finding represents a new method for monitoring heavy metal contamination through rice phenology.
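
    A hedged sketch of extracting phenological transition dates from a fitted EVI curve by first-derivative analysis; the asymmetric Gaussian form and the mapping of derivative extrema to the tillering, heading, and maturation periods are illustrative, not the study's exact model functions.

```python
import numpy as np
from scipy.optimize import curve_fit

def asym_gauss(t, base, amp, t_peak, s_left, s_right):
    """Asymmetric Gaussian: separate widths before (s_left) and after (s_right) the peak."""
    s = np.where(t < t_peak, s_left, s_right)
    return base + amp * np.exp(-((t - t_peak) ** 2) / (2.0 * s ** 2))

doy = np.arange(0, 200, 10.0)                                     # acquisition days of year
evi = 0.2 + 0.5 * np.exp(-((doy - 110.0) ** 2) / (2 * 25.0 ** 2)) \
      + np.random.normal(0.0, 0.01, doy.size)                     # synthetic EVI time series

params, _ = curve_fit(asym_gauss, doy, evi, p0=[0.2, 0.5, 100.0, 20.0, 20.0])

t = np.arange(0.0, 200.0, 1.0)
d_evi = np.gradient(asym_gauss(t, *params), t)

tillering = t[np.argmax(d_evi)]    # fastest green-up
heading = params[2]                # peak of the fitted curve
maturation = t[np.argmin(d_evi)]   # fastest decline
```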

  14. Metrics for evaluating performance and uncertainty of Bayesian network models

    Treesearch

    Bruce G. Marcot

    2012-01-01

    This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...

  15. Assessing Nutritional Diversity of Cropping Systems in African Villages

    PubMed Central

    DeClerck, Fabrice; Diru, Willy; Fanzo, Jessica; Gaynor, Kaitlyn; Lambrecht, Isabel; Mudiope, Joseph; Mutuo, Patrick K.; Nkhoma, Phelire; Siriri, David; Sullivan, Clare; Palm, Cheryl A.

    2011-01-01

    Background: In Sub-Saharan Africa, 40% of children under five years of age are chronically undernourished. As new investments and attention galvanize action on African agriculture to reduce hunger, there is an urgent need for metrics that monitor agricultural progress beyond calories produced per capita and address nutritional diversity essential for human health. In this study we demonstrate how an ecological tool, functional diversity (FD), has potential to address this need and provide new insights on nutritional diversity of cropping systems in rural Africa. Methods and Findings: Data on edible plant species diversity, food security and diet diversity were collected for 170 farms in three rural settings in Sub-Saharan Africa. Nutritional FD metrics were calculated based on farm species composition and species nutritional composition. Iron and vitamin A deficiency were determined from blood samples of 90 adult women. Nutritional FD metrics summarized the diversity of nutrients provided by the farm and showed variability between farms and villages. Regression of nutritional FD against species richness and expected FD enabled identification of key species that add nutrient diversity to the system and assessed the degree of redundancy for nutrient traits. Nutritional FD analysis demonstrated that depending on the original composition of species on farm or village, adding or removing individual species can have radically different outcomes for nutritional diversity. While correlations between nutritional FD, food and nutrition indicators were not significant at household level, associations between these variables were observed at village level. Conclusion: This study provides novel metrics to address nutritional diversity in farming systems and examples of how these metrics can help guide agricultural interventions towards adequate nutrient diversity. New hypotheses on the link between agro-diversity, food security and human nutrition are generated and strategies for future research are suggested calling for integration of agriculture, ecology, nutrition, and socio-economics. PMID:21698127

  16. Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.

    PubMed

    Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen

    2017-06-01

    The article proposes a set of metrics for evaluation of patient performance in physical therapy exercises. Taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, in reference to whether the evaluation employs the raw measurements of patient performed motions, or whether the evaluation is based on a mathematical model of the motions. The reviewed metrics include root-mean square distance, Kullback Leibler divergence, log-likelihood, heuristic consistency, Fugl-Meyer Assessment, and similar. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity in human performed therapy assessment, and it can increase the adherence to prescribed therapy plans, and reduce healthcare costs.
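
    Two of the model-less metrics listed above can be sketched directly, assuming motion sequences represented as equal-shape arrays of joint measurements; the Gaussian summary used for the KL term is a deliberate simplification.

```python
import numpy as np

def rms_distance(patient: np.ndarray, reference: np.ndarray) -> float:
    """Root-mean-square distance between patient and reference sequences (frames x joints)."""
    return float(np.sqrt(np.mean((patient - reference) ** 2)))

def kl_divergence_gaussian(patient: np.ndarray, reference: np.ndarray) -> float:
    """KL divergence D(patient || reference), summarizing each sequence by a univariate Gaussian."""
    mu_p, var_p = patient.mean(), patient.var() + 1e-12
    mu_r, var_r = reference.mean(), reference.var() + 1e-12
    return float(0.5 * (np.log(var_r / var_p) + (var_p + (mu_p - mu_r) ** 2) / var_r - 1.0))
```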

  17. Evaluation of Two Crew Module Boilerplate Tests Using Newly Developed Calibration Metrics

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.

    2012-01-01

    The paper discusses an application of multi-dimensional calibration metrics to evaluate pressure data from water drop tests of the Max Launch Abort System (MLAS) crew module boilerplate. Specifically, three metrics are discussed: 1) a metric to assess the probability of enveloping the measured data with the model, 2) a multi-dimensional orthogonality metric to assess model adequacy between test and analysis, and 3) a prediction error metric to conduct sensor placement to minimize pressure prediction errors. Data from similar (nearly repeated) capsule drop tests show significant variability in the measured pressure responses. When compared to expected variability using model predictions, it is demonstrated that the measured variability cannot be explained by the model under the current uncertainty assumptions.

  18. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

    A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
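
    The claim that the common metrics follow from a simple linear error model can be made explicit; a hedged sketch in generic notation (not necessarily the authors' symbols), with \mu_x and \sigma_x^2 the mean and variance of the reference x:

```latex
y = a + b\,x + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^{2}), \quad \varepsilon \perp x
\\[4pt]
\mathrm{bias} = \mathbb{E}[y - x] = a + (b - 1)\,\mu_x, \qquad
\mathrm{MSE} = (b - 1)^{2}\sigma_x^{2} + \sigma^{2} + \mathrm{bias}^{2}, \qquad
\rho_{xy} = \frac{b\,\sigma_x}{\sqrt{b^{2}\sigma_x^{2} + \sigma^{2}}}
```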

  19. Improving Climate Projections Using "Intelligent" Ensembles

    NASA Technical Reports Server (NTRS)

    Baker, Noel C.; Taylor, Patrick C.

    2015-01-01

    Recent changes in the climate system have led to growing concern, especially in communities which are highly vulnerable to resource shortages and weather extremes. There is an urgent need for better climate information to develop solutions and strategies for adapting to a changing climate. Climate models provide excellent tools for studying the current state of climate and making future projections. However, these models are subject to biases created by structural uncertainties. Performance metrics-or the systematic determination of model biases-succinctly quantify aspects of climate model behavior. Efforts to standardize climate model experiments and collect simulation data-such as the Coupled Model Intercomparison Project (CMIP)-provide the means to directly compare and assess model performance. Performance metrics have been used to show that some models reproduce present-day climate better than others. Simulation data from multiple models are often used to add value to projections by creating a consensus projection from the model ensemble, in which each model is given an equal weight. It has been shown that the ensemble mean generally outperforms any single model. It is possible to use unequal weights to produce ensemble means, in which models are weighted based on performance (called "intelligent" ensembles). Can performance metrics be used to improve climate projections? Previous work introduced a framework for comparing the utility of model performance metrics, showing that the best metrics are related to the variance of top-of-atmosphere outgoing longwave radiation. These metrics improve present-day climate simulations of Earth's energy budget using the "intelligent" ensemble method. The current project identifies several approaches for testing whether performance metrics can be applied to future simulations to create "intelligent" ensemble-mean climate projections. It is shown that certain performance metrics test key climate processes in the models, and that these metrics can be used to evaluate model quality in both current and future climate states. This information will be used to produce new consensus projections and provide communities with improved climate projections for urgent decision-making.

  20. Multi-objective optimization for generating a weighted multi-model ensemble

    NASA Astrophysics Data System (ADS)

    Lee, H.

    2017-12-01

    Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies the multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
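
    A minimal sketch of the multi-objective idea: retain only the non-dominated (Pareto-optimal) models across several error metrics before assigning ensemble weights. The scoring and weighting rule shown here is illustrative, not the optimization algorithm used in the study.

```python
import numpy as np

def pareto_front(errors: np.ndarray) -> np.ndarray:
    """errors: (n_models, n_metrics) array where lower is better for every metric.
    Returns a boolean mask marking non-dominated models."""
    n = errors.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(errors[j] <= errors[i]) and np.any(errors[j] < errors[i]):
                keep[i] = False          # model i is dominated by model j
                break
    return keep

errors = np.array([[0.20, 0.50], [0.30, 0.30], [0.40, 0.60], [0.25, 0.45]])
mask = pareto_front(errors)
# Illustrative weighting: only Pareto-optimal models receive weight,
# inversely proportional to their mean error across the metrics.
weights = np.where(mask, 1.0 / errors.mean(axis=1), 0.0)
weights /= weights.sum()
```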

  1. Metrics for Evaluation of Student Models

    ERIC Educational Resources Information Center

    Pelanek, Radek

    2015-01-01

    Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…

  2. Evaluating Modeled Impact Metrics for Human Health, Agriculture Growth, and Near-Term Climate

    NASA Astrophysics Data System (ADS)

    Seltzer, K. M.; Shindell, D. T.; Faluvegi, G.; Murray, L. T.

    2017-12-01

    Simulated metrics that assess impacts on human health, agriculture growth, and near-term climate were evaluated using ground-based and satellite observations. The NASA GISS ModelE2 and GEOS-Chem models were used to simulate the near-present chemistry of the atmosphere. A suite of simulations that varied by model, meteorology, horizontal resolution, emissions inventory, and emissions year was performed, enabling an analysis of metric sensitivities to various model components. All simulations utilized consistent anthropogenic global emissions inventories (ECLIPSE V5a or CEDS), and an evaluation of simulated results was carried out for 2004-2006 and 2009-2011 over the United States and 2014-2015 over China. Results for O3- and PM2.5-based metrics featured minor differences due to the model resolutions considered here (2.0° × 2.5° and 0.5° × 0.666°), and model, meteorology, and emissions inventory each played larger roles in variances. Surface metrics related to O3 were consistently high-biased, though to varying degrees, demonstrating the need to evaluate particular modeling frameworks before O3 impacts are quantified. Surface metrics related to PM2.5 were diverse, indicating that a multimodel mean with robust results is a valuable tool in predicting PM2.5-related impacts. Oftentimes, the configuration that captured the change of a metric best over time differed from the configuration that captured the magnitude of the same metric best, demonstrating the challenge in skillfully simulating impacts. These results highlight the strengths and weaknesses of these models in simulating impact metrics related to air quality and near-term climate. With such information, the reliability of historical and future simulations can be better understood.

  3. Knot Security of 5 Metric (USP 2) Sutures: Influence of Knotting Technique, Suture Material, and Incubation Time for 14 and 28 Days in Phosphate Buffered Saline and Inflamed Equine Peritoneal Fluid.

    PubMed

    Sanders, Ruth E; Kearney, Clodagh M; Buckley, Conor T; Jenner, Florien; Brama, Pieter A

    2015-08-01

    Objective: To evaluate knot security for 3 knot types created in 3 commonly used 5 metric suture materials incubated in physiological and pathological fluids. Study design: In vitro mechanical study. Sample: Knotted suture loops (n = 5/group). Methods: Loops of 3 different suture materials (glycolide/lactide copolymer; polyglactin 910; polydioxanone) were created around a 20 mm rod using 3 knot types (square [SQ], surgeon's [SK], and triple knot [TK]) and were tested to failure in distraction (6 mm/min) after tying (day 0) and after being incubated for 14 and 28 days in phosphate buffered saline (PBS) or inflamed peritoneal fluid. Failure load (N) and mode were recorded and compared. Results: For polydioxanone, significant differences in force to knot failure were found between SQ and SK/TK but not between SK and TK. The force required to break all constructs increased after incubation in phosphate buffered saline (PBS). With glycolide/lactide copolymer no differences in force to knot failure were observed. With polyglactin 910, a significant difference between SQ and TK was observed, which was not seen between the other knot types. Incubation in inflamed peritoneal fluid caused a larger and more rapid decrease in force required to cause knot failure than incubation in PBS. Conclusions: Mechanical properties of suture materials have significant effects on knot security. For polydioxanone, SQ is insufficient to create a secure knot. Additional wraps above a SK confer extra stability in some materials, but this increase may not be clinically relevant or justifiable. Glycolide/lactide copolymer had excellent knot security. © Copyright 2015 by The American College of Veterinary Surgeons.

  4. Estimating root-zone soil moisture in the West Africa Sahel using remotely sensed rainfall and vegetation

    NASA Astrophysics Data System (ADS)

    McNally, Amy L.

    Agricultural drought is characterized by shortages in precipitation, large differences between actual and potential evapotranspiration, and soil water deficits that impact crop growth and pasture productivity. Rainfall and other agrometeorological gauge networks in Sub-Saharan Africa are inadequate for drought early warning systems and hence, satellite-based estimates of rainfall and vegetation greenness provide the main sources of information. While a number of studies have described the empirical relationship between rainfall and vegetation greenness, these studies lack a process based approach that includes soil moisture storage. In Chapters I and II, I modeled soil moisture using satellite rainfall inputs and developed a new method for estimating soil moisture with NDVI calibrated to in situ and microwave soil moisture observations. By transforming both NDVI and rainfall into estimates of soil moisture I was able to easily compare these two datasets in a physically meaningful way. In Chapter II, I also show how the new NDVI derived soil moisture can be assimilated into a water balance model that calculates an index of crop water stress. Compared to the analogous rainfall derived estimates of soil moisture and crop stress the NDVI derived estimates were better correlated with millet yields. In Chapter III, I developed a metric for defining growing season drought events that negatively impact millet yields. This metric is based on the data and models used in Chapters I and II. I then use this metric to evaluate the ability of a sophisticated land surface model to detect drought events. The analysis showed that this particular land surface model's soil moisture estimates do have the potential to benefit the food security and drought early warning communities. With a focus on soil moisture, this dissertation introduced new methods that utilized a variety of data and models for agricultural drought monitoring applications. These new methods facilitate a more quantitative, transparent 'convergence of evidence' approach to identifying agricultural drought events that lead to food insecurity. Ideally, these new methods will contribute to better famine early warning and the timely delivery of food aid to reduce the human suffering caused by drought.

  5. A software quality model and metrics for risk assessment

    NASA Technical Reports Server (NTRS)

    Hyatt, L.; Rosenberg, L.

    1996-01-01

    A software quality model and its associated attributes are defined and used as the model for the basis for a discussion on risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined by the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric and their usability and applicability are discussed.

  6. Virtual sea border

    NASA Astrophysics Data System (ADS)

    Ferriere, D.; Rucinski, A.; Jankowski, T.

    2007-04-01

    Establishing a Virtual Sea Border involves performing a real-time, satellite-accessible, Internet-based, biometric-supported threat assessment of arriving foreign-flagged cargo ships, their management and ownership, and their arrival terminal operator and owner, and rewarding proven legitimate operators with an economic incentive for their transparency. Doing so will simultaneously improve port security and maritime transportation efficiency.

  7. We've Got High Hopes: Using Hope to Improve Higher Education

    ERIC Educational Resources Information Center

    Acosta, Alan

    2017-01-01

    In recent years, higher education's focus has changed from curricula focused on student learning to curricula intended to meet performance metrics and secure institutional resources. Shifting attention from students' learning to students' performance has left many students, faculty, and staff within higher education feeling isolated and…

  8. Measuring Sustainability: Deriving Metrics From A Secure Human-Environment Relationship

    EPA Science Inventory

    The ability of individuals and institutions to take actions that would achieve sustainability is often lost in rhetoric about what it is or isn't and how to measure progress. Typically, sustainability is viewed as an objective and in this capacity efforts are made to identify i...

  9. Biofuel Supply Chains: Impacts, Indicators and Sustainability Metrics(Presentation)

    EPA Science Inventory

    Biofuel supply chains in the United States are expected to expand considerably, in part due to the Energy Independence and Security Act (EISA) of 2007. This law mandates through the EPA’s Renewable Fuel Standard an expansion to 36 billion gallons of renewable fuels per year by 2...

  10. NEW CATEGORICAL METRICS FOR AIR QUALITY MODEL EVALUATION

    EPA Science Inventory

    Traditional categorical metrics used in model evaluations are "clear-cut" measures in that the model's ability to predict an exceedance is defined by a fixed threshold concentration and the metrics are defined by observation-forecast sets that are paired both in space and time. T...

  11. Geospace Environment Modeling 2008-2009 Challenge: Ground Magnetic Field Perturbations

    NASA Technical Reports Server (NTRS)

    Pulkkinen, A.; Kuznetsova, M.; Ridley, A.; Raeder, J.; Vapirev, A.; Weimer, D.; Weigel, R. S.; Wiltberger, M.; Millward, G.; Rastatter, L.

    2011-01-01

    Acquiring quantitative metrics-based knowledge about the performance of various space physics modeling approaches is central for the space weather community. Quantification of the performance helps the users of the modeling products to better understand the capabilities of the models and to choose the approach that best suits their specific needs. Further, metrics-based analyses are important for addressing the differences between various modeling approaches and for measuring and guiding the progress in the field. In this paper, the metrics-based results of the ground magnetic field perturbation part of the Geospace Environment Modeling 2008-2009 Challenge are reported. Predictions made by 14 different models, including an ensemble model, are compared to geomagnetic observatory recordings from 12 different northern hemispheric locations. Five different metrics are used to quantify the model performances for four storm events. It is shown that the ranking of the models is strongly dependent on the type of metric used to evaluate the model performance. None of the models rank near or at the top systematically for all used metrics. Consequently, one cannot pick the absolute winner: the choice for the best model depends on the characteristics of the signal one is interested in. Model performances vary also from event to event. This is particularly clear for root-mean-square difference and utility metric-based analyses. Further, analyses indicate that for some of the models, increasing the global magnetohydrodynamic model spatial resolution and the inclusion of the ring current dynamics improve the models' capability to generate more realistic ground magnetic field fluctuations.

  12. A novel spatial performance metric for robust pattern optimization of distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Stisen, S.; Demirel, C.; Koch, J.

    2017-12-01

    Evaluation of performance is an integral part of model development and calibration as well as it is of paramount importance when communicating modelling results to stakeholders and the scientific community. There exists a comprehensive and well tested toolbox of metrics to assess temporal model performance in the hydrological modelling community. On the contrary, the experience to evaluate spatial performance is not corresponding to the grand availability of spatial observations readily available and to the sophisticate model codes simulating the spatial variability of complex hydrological processes. This study aims at making a contribution towards advancing spatial pattern oriented model evaluation for distributed hydrological models. This is achieved by introducing a novel spatial performance metric which provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (spaef) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. spaef, its three components individually and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are tested in a spatial pattern oriented model calibration of a catchment model in Denmark. The calibration is constrained by a remote sensing based spatial pattern of evapotranspiration and discharge timeseries at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer which underlines the importance of multi-component metrics. The three spaef components are independent which allows them to complement each other in a meaningful way. This study promotes the use of bias insensitive metrics which allow comparing variables which are related but may differ in unit in order to optimally exploit spatial observations made available by remote sensing platforms. We see great potential of spaef across environmental disciplines dealing with spatially distributed modelling.

  13. The SPAtial EFficiency metric (SPAEF): multiple-component evaluation of spatial patterns for optimization of hydrological models

    NASA Astrophysics Data System (ADS)

    Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon

    2018-05-01

    The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, spatial performance evaluation does not correspond to the grand availability of spatial observations readily available and to the sophisticate model codes simulating the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multiple-component approach is found to be advantageous in order to achieve the complex task of comparing spatial patterns. SPAEF, its three components individually and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. Results suggest the importance of multiple-component metrics because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias insensitive metrics which further allow for a comparison of variables which are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
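
    A sketch of the SPAEF combination described above (pattern correlation, coefficient-of-variation ratio, and histogram overlap of z-scored fields); the exact binning and the handling of missing cells are assumptions of this sketch.

```python
import numpy as np

def spaef(sim: np.ndarray, obs: np.ndarray, bins: int = 100) -> float:
    """SPAtial EFficiency between two spatial patterns; 1 indicates perfect agreement."""
    sim, obs = sim.ravel(), obs.ravel()
    alpha = np.corrcoef(sim, obs)[0, 1]                                  # correlation
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))   # CV ratio
    z_sim = (sim - sim.mean()) / sim.std()
    z_obs = (obs - obs.mean()) / obs.std()
    lo, hi = min(z_sim.min(), z_obs.min()), max(z_sim.max(), z_obs.max())
    h_sim, _ = np.histogram(z_sim, bins=bins, range=(lo, hi))
    h_obs, _ = np.histogram(z_obs, bins=bins, range=(lo, hi))
    gamma = np.minimum(h_sim, h_obs).sum() / h_obs.sum()                 # histogram overlap
    return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)
```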

  14. Persuasive communication: A theoretical model for changing the attitude of preservice elementary teachers toward metric conversion

    NASA Astrophysics Data System (ADS)

    Shrigley, Robert L.

    This study was based on Hovland's four-part statement, Who says what to whom with what effect, the rationale for persuasive communication, a theoretical model for modifying attitudes. Part I was a survey of 139 preservice elementary teachers from which were generated the more credible characteristics of metric instructors, a central element in the who component of Hovland's model. They were: (1) background in mathematics and science, (2) fluency in metrics, (3) capability of thinking metrically, (4) a record of excellent teaching, (5) previous teaching of metric measurement to children, (6) responsibility for teaching metric content in methods courses and (7) an open enthusiasm for metric conversion. Part II was a survey of 45 mathematics educators where belief statements were synthesized for the what component of Hovland's model. It found that math educators support metric measurement because: (1) it is consistent with our monetary system; (2) the conversion of units is easier into metric than English; (3) it is easier to teach and easier to learn than English measurement; there is less need for common fractions; (4) most nations use metric measurement; scientists have used it for decades; (5) American industry has begun to use it; (6) metric measurement will facilitate world trade and communication; and (7) American children will need it as adults; educational agencies are mandating it. With the who and what of Hovland's four-part statement defined, educational researchers now have baseline data to use in testing experimentally the effect of persuasive communication on the attitude of preservice teachers toward metrication.

  15. Comprehensive Quantitative Analysis on Privacy Leak Behavior

    PubMed Central

    Fan, Lejun; Wang, Yuanzhuo; Jin, Xiaolong; Li, Jingyuan; Cheng, Xueqi; Jin, Shuyuan

    2013-01-01

    Privacy information is prone to be leaked by illegal software providers with various motivations. Privacy leak behavior has thus become an important research issue of cyber security. However, existing approaches can only qualitatively analyze privacy leak behavior of software applications. No quantitative approach, to the best of our knowledge, has been developed in the open literature. To fill this gap, in this paper we propose for the first time four quantitative metrics, namely, possibility, severity, crypticity, and manipulability, for privacy leak behavior analysis based on Privacy Petri Net (PPN). In order to compare the privacy leak behavior among different software, we further propose a comprehensive metric, namely, overall leak degree, based on these four metrics. Finally, we validate the effectiveness of the proposed approach using real-world software applications. The experimental results demonstrate that our approach can quantitatively analyze the privacy leak behaviors of various software types and reveal their characteristics from different aspects. PMID:24066046

  16. Comprehensive quantitative analysis on privacy leak behavior.

    PubMed

    Fan, Lejun; Wang, Yuanzhuo; Jin, Xiaolong; Li, Jingyuan; Cheng, Xueqi; Jin, Shuyuan

    2013-01-01

    Privacy information is prone to be leaked by illegal software providers with various motivations. Privacy leak behavior has thus become an important research issue of cyber security. However, existing approaches can only qualitatively analyze privacy leak behavior of software applications. No quantitative approach, to the best of our knowledge, has been developed in the open literature. To fill this gap, in this paper we propose for the first time four quantitative metrics, namely, possibility, severity, crypticity, and manipulability, for privacy leak behavior analysis based on Privacy Petri Net (PPN). In order to compare the privacy leak behavior among different software, we further propose a comprehensive metric, namely, overall leak degree, based on these four metrics. Finally, we validate the effectiveness of the proposed approach using real-world software applications. The experimental results demonstrate that our approach can quantitatively analyze the privacy leak behaviors of various software types and reveal their characteristics from different aspects.

  17. Quantifying the Impact of Unavailability in Cyber-Physical Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aissa, Anis Ben; Abercrombie, Robert K.; Sheldon, Frederick T.

    2014-01-01

    The Supervisory Control and Data Acquisition (SCADA) system discussed in this work manages a distributed control network for the Tunisian Electric & Gas Utility. The network is dispersed over a large geographic area that monitors and controls the flow of electricity/gas from both remote and centralized locations. The availability of the SCADA system in this context is critical to ensuring the uninterrupted delivery of energy, including safety, security, continuity of operations and revenue. Such SCADA systems are the backbone of national critical cyber-physical infrastructures. Herein, we propose adapting the Mean Failure Cost (MFC) metric for quantifying the cost of unavailability. This new metric combines the classic availability formulation with MFC. The resulting metric, so-called Econometric Availability (EA), offers a computational basis to evaluate a system in terms of the gain/loss ($/hour of operation) that affects each stakeholder due to unavailability.
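
    The abstract does not give the exact formulation of EA; purely as an illustrative sketch of how a classic availability term and the Mean Failure Cost could be combined into a $/hour figure (our notation and combination, not necessarily the authors'), one might write:

      A = \frac{\mathrm{MTTF}}{\mathrm{MTTF} + \mathrm{MTTR}}, \qquad EA_i \approx \mathrm{MFC}_i \,(1 - A)

    where A is steady-state availability and MFC_i is stakeholder i's mean failure cost per hour of operation, so EA_i is the expected hourly loss that stakeholder i incurs due to unavailability.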

  18. Effective monitoring of agriculture: a response.

    PubMed

    Sachs, Jeffrey D; Remans, Roseline; Smukler, Sean M; Winowiecki, Leigh; Andelman, Sandy J; Cassman, Kenneth G; Castle, David; DeFries, Ruth; Denning, Glenn; Fanzo, Jessica; Jackson, Louise E; Leemans, Rik; Lehmann, Johannes; Milder, Jeffrey C; Naeem, Shahid; Nziguheba, Generose; Palm, Cheryl A; Pingali, Prabhu L; Reganold, John P; Richter, Daniel D; Scherr, Sara J; Sircely, Jason; Sullivan, Clare; Tomich, Thomas P; Sanchez, Pedro A

    2012-03-01

    The development of effective agricultural monitoring networks is essential to track, anticipate and manage changes in the social, economic and environmental aspects of agriculture. We welcome the perspective of Lindenmayer and Likens (J. Environ. Monit., 2011, 13, 1559) as published in the Journal of Environmental Monitoring on our earlier paper, "Monitoring the World's Agriculture" (Sachs et al., Nature, 2010, 466, 558-560). In this response, we address their three main critiques labeled as 'the passive approach', 'the problem with uniform metrics' and 'the problem with composite metrics'. We expand on specific research questions at the core of the network design, on the distinction between key universal and site-specific metrics to detect change over time and across scales, and on the need for composite metrics in decision-making. We believe that simultaneously measuring indicators of the three pillars of sustainability (environmentally sound, socially responsible and economically viable) in an effectively integrated monitoring system will ultimately allow scientists and land managers alike to find solutions to the most pressing problems facing global food security. This journal is © The Royal Society of Chemistry 2012

  19. A bridge role metric model for nodes in software networks.

    PubMed

    Li, Bo; Feng, Yanli; Ge, Shiyu; Li, Dashe

    2014-01-01

    A bridge role metric model is put forward in this paper. Compared with previous metric models, our solution of a large-scale object-oriented software system as a complex network is inherently more realistic. To acquire nodes and links in an undirected network, a new model that presents the crucial connectivity of a module or the hub instead of only centrality as in previous metric models is presented. Two previous metric models are described for comparison. In addition, it is obvious that the fitting curve between the results and degrees can well be fitted by a power law. The model represents many realistic characteristics of actual software structures, and a hydropower simulation system is taken as an example. This paper makes additional contributions to an accurate understanding of module design of software systems and is expected to be beneficial to software engineering practices.

  20. A Bridge Role Metric Model for Nodes in Software Networks

    PubMed Central

    Li, Bo; Feng, Yanli; Ge, Shiyu; Li, Dashe

    2014-01-01

    A bridge role metric model is put forward in this paper. Compared with previous metric models, our solution of a large-scale object-oriented software system as a complex network is inherently more realistic. To acquire nodes and links in an undirected network, a new model that presents the crucial connectivity of a module or the hub instead of only centrality as in previous metric models is presented. Two previous metric models are described for comparison. In addition, it is obvious that the fitting curve between the results and degrees can well be fitted by a power law. The model represents many realistic characteristics of actual software structures, and a hydropower simulation system is taken as an example. This paper makes additional contributions to an accurate understanding of module design of software systems and is expected to be beneficial to software engineering practices. PMID:25364938

  1. A Discrete Velocity Kinetic Model with Food Metric: Chemotaxis Traveling Waves.

    PubMed

    Choi, Sun-Ho; Kim, Yong-Jung

    2017-02-01

    We introduce a mesoscopic-scale chemotaxis model for traveling wave phenomena induced by a food metric. The organisms of this simplified kinetic model have two discrete velocity modes, [Formula: see text], and a constant tumbling rate. The main feature of the model is that the speed of organisms is constant, [Formula: see text], with respect to the food metric, not the Euclidean metric. The existence and uniqueness of the traveling wave solution of the model are obtained. Unlike the classical logarithmic model case, traveling waves exist under super-linear consumption rates, and infinite-population pulse-type traveling waves are obtained. Numerical simulations are also provided.
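
    For orientation, a generic two-speed (Goldstein-Kac type) kinetic chemotaxis model with densities u^+ and u^- of right- and left-moving organisms, speed s and tumbling rate \mu takes the form below; this is the standard textbook form, not the paper's exact system, whose distinguishing feature is that s is constant with respect to the food metric rather than Euclidean distance:

      \partial_t u^{+} + \partial_x\!\left(s\,u^{+}\right) = \tfrac{\mu}{2}\left(u^{-} - u^{+}\right), \qquad
      \partial_t u^{-} - \partial_x\!\left(s\,u^{-}\right) = \tfrac{\mu}{2}\left(u^{+} - u^{-}\right)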

  2. Using Patient Health Questionnaire-9 item parameters of a common metric resulted in similar depression scores compared to independent item response theory model reestimation.

    PubMed

    Liegl, Gregor; Wahl, Inka; Berghöfer, Anne; Nolte, Sandra; Pieh, Christoph; Rose, Matthias; Fischer, Felix

    2016-03-01

    To investigate the validity of a common depression metric in independent samples. We applied a common metrics approach based on item-response theory for measuring depression to four German-speaking samples that completed the Patient Health Questionnaire (PHQ-9). We compared the PHQ item parameters reported for this common metric to reestimated item parameters that derived from fitting a generalized partial credit model solely to the PHQ-9 items. We calibrated the new model on the same scale as the common metric using two approaches (estimation with shifted prior and Stocking-Lord linking). By fitting a mixed-effects model and using Bland-Altman plots, we investigated the agreement between latent depression scores resulting from the different estimation models. We found different item parameters across samples and estimation methods. Although differences in latent depression scores between different estimation methods were statistically significant, these were clinically irrelevant. Our findings provide evidence that it is possible to estimate latent depression scores by using the item parameters from a common metric instead of reestimating and linking a model. The use of common metric parameters is simple, for example, using a Web application (http://www.common-metrics.org) and offers a long-term perspective to improve the comparability of patient-reported outcome measures. Copyright © 2016 Elsevier Inc. All rights reserved.
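
    The generalized partial credit model used here has a standard form; as background (textbook notation, not specific to this study), the probability that a respondent with latent depression level \theta endorses category k of PHQ-9 item j, with discrimination a_j and step parameters b_{jv}, is

      P(X_j = k \mid \theta) = \frac{\exp\!\left(\sum_{v=1}^{k} a_j(\theta - b_{jv})\right)}{\sum_{c=0}^{m_j}\exp\!\left(\sum_{v=1}^{c} a_j(\theta - b_{jv})\right)}, \qquad \sum_{v=1}^{0}(\cdot) \equiv 0.

    Using the common metric amounts to fixing a_j and b_{jv} at the published values when scoring, rather than reestimating them and linking the resulting scale.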

  3. To Control False Positives in Gene-Gene Interaction Analysis: Two Novel Conditional Entropy-Based Approaches

    PubMed Central

    Lin, Meihua; Li, Haoli; Zhao, Xiaolei; Qin, Jiheng

    2013-01-01

    Genome-wide analysis of gene-gene interactions has been recognized as a powerful avenue to identify the missing genetic components that can not be detected by using current single-point association analysis. Recently, several model-free methods (e.g. the commonly used information based metrics and several logistic regression-based metrics) were developed for detecting non-linear dependence between genetic loci, but they are potentially at the risk of inflated false positive error, in particular when the main effects at one or both loci are salient. In this study, we proposed two conditional entropy-based metrics to challenge this limitation. Extensive simulations demonstrated that the two proposed metrics, provided the disease is rare, could maintain consistently correct false positive rate. In the scenarios for a common disease, our proposed metrics achieved better or comparable control of false positive error, compared to four previously proposed model-free metrics. In terms of power, our methods outperformed several competing metrics in a range of common disease models. Furthermore, in real data analyses, both metrics succeeded in detecting interactions and were competitive with the originally reported results or the logistic regression approaches. In conclusion, the proposed conditional entropy-based metrics are promising as alternatives to current model-based approaches for detecting genuine epistatic effects. PMID:24339984

  4. Validation of sea ice models using an uncertainty-based distance metric for multiple model variables: NEW METRIC FOR SEA ICE MODEL VALIDATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urrego-Blanco, Jorge R.; Hunke, Elizabeth C.; Urban, Nathan M.

    Here, we implement a variance-based distance metric (D n) to objectively assess skill of sea ice models when multiple output variables or uncertainties in both model predictions and observations need to be considered. The metric compares observations and model data pairs on common spatial and temporal grids improving upon highly aggregated metrics (e.g., total sea ice extent or volume) by capturing the spatial character of model skill. The D n metric is a gamma-distributed statistic that is more general than the χ 2 statistic commonly used to assess model fit, which requires the assumption that the model is unbiased and can only incorporate observational error in the analysis. The D n statistic does not assume that the model is unbiased, and allows the incorporation of multiple observational data sets for the same variable and simultaneously for different variables, along with different types of variances that can characterize uncertainties in both observations and the model. This approach represents a step to establish a systematic framework for probabilistic validation of sea ice models. The methodology is also useful for model tuning by using the D n metric as a cost function and incorporating model parametric uncertainty as part of a scheme to optimize model functionality. We apply this approach to evaluate different configurations of the standalone Los Alamos sea ice model (CICE) encompassing the parametric uncertainty in the model, and to find new sets of model configurations that produce better agreement than previous configurations between model and observational estimates of sea ice concentration and thickness.
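
    The abstract does not spell out D_n, but a variance-based distance of the kind described, which reduces to the usual chi-square when the model is unbiased and only observational error is retained, can be sketched (our notation, not necessarily the authors') as

      D_n = \sum_{i=1}^{n} \frac{\left(m_i - o_i\right)^2}{\sigma_{m,i}^{2} + \sigma_{o,i}^{2}}

    where m_i and o_i are model and observation values on the common grid and \sigma_{m,i}^{2}, \sigma_{o,i}^{2} are the corresponding model and observational variances.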

  5. Validation of sea ice models using an uncertainty-based distance metric for multiple model variables: NEW METRIC FOR SEA ICE MODEL VALIDATION

    DOE PAGES

    Urrego-Blanco, Jorge R.; Hunke, Elizabeth C.; Urban, Nathan M.; ...

    2017-04-01

    Here, we implement a variance-based distance metric (D n) to objectively assess skill of sea ice models when multiple output variables or uncertainties in both model predictions and observations need to be considered. The metric compares observations and model data pairs on common spatial and temporal grids improving upon highly aggregated metrics (e.g., total sea ice extent or volume) by capturing the spatial character of model skill. The D n metric is a gamma-distributed statistic that is more general than the χ 2 statistic commonly used to assess model fit, which requires the assumption that the model is unbiased and can only incorporate observational error in the analysis. The D n statistic does not assume that the model is unbiased, and allows the incorporation of multiple observational data sets for the same variable and simultaneously for different variables, along with different types of variances that can characterize uncertainties in both observations and the model. This approach represents a step to establish a systematic framework for probabilistic validation of sea ice models. The methodology is also useful for model tuning by using the D n metric as a cost function and incorporating model parametric uncertainty as part of a scheme to optimize model functionality. We apply this approach to evaluate different configurations of the standalone Los Alamos sea ice model (CICE) encompassing the parametric uncertainty in the model, and to find new sets of model configurations that produce better agreement than previous configurations between model and observational estimates of sea ice concentration and thickness.

  6. On the new metrics for IMRT QA verification.

    PubMed

    Garcia-Romero, Alejandro; Hernandez-Vitoria, Araceli; Millan-Cebrian, Esther; Alba-Escorihuela, Veronica; Serrano-Zabaleta, Sonia; Ortega-Pardina, Pablo

    2016-11-01

    The aim of this work is to search for new metrics that could give more reliable acceptance/rejection criteria for the IMRT verification process and to offer solutions to the discrepancies found among different conventional metrics. Therefore, besides conventional metrics, new ones are proposed and evaluated with new tools to find correlations among them. These new metrics are based on the processing of the dose-volume histogram information, evaluating the absorbed dose differences, the dose constraint fulfillment, or modified biomathematical treatment outcome models such as tumor control probability (TCP) and normal tissue complication probability (NTCP). An additional purpose is to establish whether the new metrics yield the same acceptance/rejection plan distribution as the conventional ones. Fifty-eight treatment plans concerning several patient locations are analyzed. All of them were verified prior to the treatment, using conventional metrics, and retrospectively after the treatment with the new metrics. These new metrics include the definition of three continuous functions, based on dose-volume histograms resulting from measurements evaluated with a reconstructed dose system and also with a Monte Carlo redundant calculation. The 3D gamma function for every volume of interest is also calculated. The information is also processed to obtain ΔTCP or ΔNTCP for the considered volumes of interest. These biomathematical treatment outcome models have been modified to increase their sensitivity to dose changes. A robustness index from a radiobiological point of view is defined to classify plans by robustness against dose changes. Dose difference metrics can be condensed into a single parameter: the dose difference global function, with an optimal cutoff that can be determined from a receiver operating characteristics (ROC) analysis of the metric. It is not always possible to correlate differences in biomathematical treatment outcome models with dose difference metrics. This is because the dose constraint is often far from the dose that has an actual impact on the radiobiological model, and therefore biomathematical treatment outcome models are insensitive to big dose differences between the verification system and the treatment planning system. As an alternative, the use of modified radiobiological models, which provide a better correlation, is proposed. In any case, it is better to choose robust plans from a radiobiological point of view. The robustness index defined in this work is a good predictor of the plan rejection probability according to metrics derived from modified radiobiological models. The global 3D gamma-based metric calculated for each plan volume shows a good correlation with the dose difference metrics and presents a good performance in the acceptance/rejection process. Some discrepancies have been found in dose reconstruction depending on the algorithm employed. Significant and unavoidable discrepancies were found between the conventional metrics and the new ones. The dose difference global function and the 3D gamma for each plan volume are good classifiers regarding dose difference metrics. ROC analysis is useful to evaluate the predictive power of the new metrics. The correlation between biomathematical treatment outcome models and the dose difference-based metrics is enhanced by using modified TCP and NTCP functions that take into account the dose constraints for each plan. The robustness index is useful to evaluate whether a plan is likely to be rejected. Conventional verification should be replaced by the new metrics, which are clinically more relevant.
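
    The 3D gamma functions referenced here build on the standard gamma-index concept; for context (the textbook definition, not the authors' specific variant), the gamma value at an evaluated point r_e compares dose difference and distance-to-agreement against tolerances ΔD and Δd:

      \gamma(\mathbf{r}_e) = \min_{\mathbf{r}_r} \sqrt{ \frac{\lvert \mathbf{r}_r - \mathbf{r}_e \rvert^{2}}{\Delta d^{2}} + \frac{\left[D_r(\mathbf{r}_r) - D_e(\mathbf{r}_e)\right]^{2}}{\Delta D^{2}} }

    where D_r and D_e are the reference and evaluated dose distributions; a point conventionally passes when \gamma \le 1.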

  7. A priori discretization error metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.
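
    The exact formulation of the HRU error metric is not given in the abstract; purely as an illustrative sketch of how aggregation-induced information loss for a categorical property (for example, land cover) might be quantified, one could measure the area fraction that is relabeled when each discretization unit is represented only by its dominant class. The function name and this particular definition are hypothetical, not the paper's:

      from collections import Counter

      def aggregation_info_loss(unit_ids, classes, areas):
          """Fraction of total area whose class label is lost when each
          discretization unit is represented only by its dominant class.
          unit_ids, classes, areas: parallel lists (one entry per raster cell)."""
          per_unit = {}
          for uid, cls, a in zip(unit_ids, classes, areas):
              per_unit.setdefault(uid, Counter())[cls] += a
          total = sum(areas)
          misassigned = sum(sum(cnt.values()) - max(cnt.values()) for cnt in per_unit.values())
          return misassigned / total

      # Example: two units; the second mixes two land-cover classes equally
      print(aggregation_info_loss([1, 1, 2, 2], ["forest", "forest", "crop", "urban"], [1, 1, 1, 1]))  # 0.25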

  8. Metric half-span model support system

    NASA Technical Reports Server (NTRS)

    Jackson, C. M., Jr.; Dollyhigh, S. M.; Shaw, D. S. (Inventor)

    1982-01-01

    A model support system used to support a model in a wind tunnel test section is described. The model comprises a metric, or measured, half-span supported by a nonmetric, or nonmeasured half-span which is connected to a sting support. Moments and forces acting on the metric half-span are measured without interference from the support system during a wind tunnel test.

  9. Testing Strategies for Model-Based Development

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.

  10. Synchronization of multi-agent systems with metric-topological interactions.

    PubMed

    Wang, Lin; Chen, Guanrong

    2016-09-01

    A hybrid multi-agent systems model integrating the advantages of both metric interaction and topological interaction rules, called the metric-topological model, is developed. This model describes planar motions of mobile agents, where each agent can interact with all the agents within a circle of a constant radius, and can furthermore interact with some distant agents to reach a pre-assigned number of neighbors, if needed. Some sufficient conditions imposed only on system parameters and agent initial states are presented, which ensure achieving synchronization of the whole group of agents. The analysis reveals the intrinsic relationships among the interaction range, the speed, the initial heading, and the density of the group. Moreover, robustness against variations of interaction range, density, and speed is investigated by comparing the motion patterns and performances of the hybrid metric-topological interaction model with the conventional metric-only and topological-only interaction models. In practically all cases, the hybrid metric-topological interaction model has the best performance in the sense of achieving the highest frequency of synchronization, the fastest convergence rate, and the smallest heading difference.
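
    A minimal sketch of the hybrid neighbor rule described above (take all agents within radius r, then add the nearest remaining agents until at least k neighbors are reached), assuming 2-D positions stored in a NumPy array; this is our reading of the rule, not the authors' code:

      import numpy as np

      def hybrid_neighbors(positions, r, k):
          """positions: (N, 2) array. Returns a list of neighbor-index arrays:
          all agents within radius r, topped up with the nearest remaining agents
          until each agent has at least k neighbors (when N - 1 >= k)."""
          n = len(positions)
          d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
          np.fill_diagonal(d, np.inf)          # an agent is not its own neighbor
          neighbors = []
          for i in range(n):
              order = np.argsort(d[i])          # nearest first
              within = order[d[i, order] <= r]  # metric neighbors
              need = max(k - len(within), 0)
              extra = [j for j in order if d[i, j] > r][:need]  # topological top-up
              neighbors.append(np.concatenate([within, np.array(extra, dtype=int)]))
          return neighbors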

  11. Information-theoretic model comparison unifies saliency metrics

    PubMed Central

    Kümmerer, Matthias; Wallis, Thomas S. A.; Bethge, Matthias

    2015-01-01

    Learning the properties of an image associated with human gaze placement is important both for understanding how biological systems explore the environment and for computer vision applications. There is a large literature on quantitative eye movement models that seeks to predict fixations from images (sometimes termed “saliency” prediction). A major problem known to the field is that existing model comparison metrics give inconsistent results, causing confusion. We argue that the primary reason for these inconsistencies is because different metrics and models use different definitions of what a “saliency map” entails. For example, some metrics expect a model to account for image-independent central fixation bias whereas others will penalize a model that does. Here we bring saliency evaluation into the domain of information by framing fixation prediction models probabilistically and calculating information gain. We jointly optimize the scale, the center bias, and spatial blurring of all models within this framework. Evaluating existing metrics on these rephrased models produces almost perfect agreement in model rankings across the metrics. Model performance is separated from center bias and spatial blurring, avoiding the confounding of these factors in model comparison. We additionally provide a method to show where and how models fail to capture information in the fixations on the pixel level. These methods are readily extended to spatiotemporal models of fixation scanpaths, and we provide a software package to facilitate their use. PMID:26655340
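
    In this framing a fixation model is a probability density over image locations and, as we understand the approach (notation ours), its information gain over a baseline model is the average log-likelihood ratio of the observed fixations, in bits per fixation:

      IG(\text{model} \mid \text{baseline}) = \frac{1}{N}\sum_{i=1}^{N}\left[\log_2 p(x_i, y_i \mid \text{model}) - \log_2 p(x_i, y_i \mid \text{baseline})\right]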

  12. 46 CFR 30.10-38 - Lightweight-TB/ALL.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    Title 46 (Shipping), § 30.10-38, Lightweight—TB/ALL. COAST GUARD, DEPARTMENT OF HOMELAND SECURITY; TANK VESSELS; GENERAL PROVISIONS; Definitions. The term lightweight means the displacement of a vessel in metric tons without cargo, oil...

  13. 46 CFR 30.10-38 - Lightweight-TB/ALL.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    Title 46 (Shipping), § 30.10-38, Lightweight—TB/ALL. COAST GUARD, DEPARTMENT OF HOMELAND SECURITY; TANK VESSELS; GENERAL PROVISIONS; Definitions. The term lightweight means the displacement of a vessel in metric tons without cargo, oil...

  14. 46 CFR 30.10-38 - Lightweight-TB/ALL.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    Title 46 (Shipping), § 30.10-38, Lightweight—TB/ALL. COAST GUARD, DEPARTMENT OF HOMELAND SECURITY; TANK VESSELS; GENERAL PROVISIONS; Definitions. The term lightweight means the displacement of a vessel in metric tons without cargo, oil...

  15. 46 CFR 30.10-38 - Lightweight-TB/ALL.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Title 46 (Shipping), § 30.10-38, Lightweight—TB/ALL. COAST GUARD, DEPARTMENT OF HOMELAND SECURITY; TANK VESSELS; GENERAL PROVISIONS; Definitions. The term lightweight means the displacement of a vessel in metric tons without cargo, oil...

  16. 46 CFR 30.10-38 - Lightweight-TB/ALL.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    Title 46 (Shipping), § 30.10-38, Lightweight—TB/ALL. COAST GUARD, DEPARTMENT OF HOMELAND SECURITY; TANK VESSELS; GENERAL PROVISIONS; Definitions. The term lightweight means the displacement of a vessel in metric tons without cargo, oil...

  17. 77 FR 42052 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; NYSE Arca, Inc.; Order Instituting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-17

    ... proposed MQP would be a program designed to promote market quality in certain securities listed on NASDAQ... contain the specific quantitative listing requirements for listing on the Global Select, Global Market... pilot (for comparative purposes), volume metrics, NBBO bid/ask spread differentials, LMM participation...

  18. 77 FR 40345 - Proposed Agency Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-09

    ... and Security Act and signed into law on December 19, 2007, was funded for the first time by the ARRA... applied to select 350 grants for study. That random sample of projects will be taken from the six Broad... program impacts and other metrics. Also contains a series of questions related to grant performance not...

  19. Use of Normalized Difference Vegetation Index (NDVI) habitat models to predict breeding birds on the San Pedro River, Arizona

    USGS Publications Warehouse

    McFarland, Tiffany Marie; van Riper, Charles

    2013-01-01

    Successful management practices of avian populations depend on understanding relationships between birds and their habitat, especially in rare habitats, such as riparian areas of the desert Southwest. Remote-sensing technology has become popular in habitat modeling, but most of these models focus on single species, leaving their applicability to understanding broader community structure and function largely untested. We investigated the usefulness of two Normalized Difference Vegetation Index (NDVI) habitat models to model avian abundance and species richness on the upper San Pedro River in southeastern Arizona. Although NDVI was positively correlated with our bird metrics, the amount of explained variation was low. We then investigated the addition of vegetation metrics and other remote-sensing metrics to improve our models. Although both vegetation metrics and remotely sensed metrics increased the power of our models, the overall explained variation was still low, suggesting that general avian community structure may be too complex for NDVI models.

  20. Global groundwater sustainability as a function of reliability, resilience and vulnerability

    NASA Astrophysics Data System (ADS)

    Thomas, B. F.

    2017-12-01

    The world's largest aquifers are a fundamental source of freshwater used for agricultural irrigation and to meet human water needs. Therefore, their stored volume of groundwater is linked with water security, which becomes more relevant during periods of drought. This work focuses on understanding large-scale groundwater changes, and we introduce an approach to evaluate groundwater sustainability at a global scale. We employ a groundwater drought index to assess performance metrics of sustainable use (reliability, resilience, vulnerability) for the largest and most productive global aquifers. Spatiotemporal changes in total water storage are derived from remote sensing observations of gravity anomalies, from which the groundwater drought index is inferred. The performance metrics are then combined into a sustainability index. The results reveal a complex relationship between these sustainable use indicators, while considering monthly variability in groundwater storage. Combining the drought and sustainability indexes, as presented in this work, constitutes a measure for quantifying groundwater sustainability. This framework integrates changes in groundwater resources as a function of human influences and climate changes, thus opening a path to assess both progress towards sustainable use and water security.
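
    The abstract does not define the index; one common reliability-resilience-vulnerability formulation from the water-resources literature (a possibility, not necessarily the one used here) classifies each time step t as satisfactory or failing and sets

      \mathrm{Rel} = \frac{\#\{t:\ \text{satisfactory}\}}{T}, \qquad
      \mathrm{Res} = \frac{\#\{t:\ \text{failure at } t,\ \text{satisfactory at } t+1\}}{\#\{t:\ \text{failure}\}}, \qquad
      \mathrm{SI} = \left[\mathrm{Rel}\cdot\mathrm{Res}\cdot(1-\mathrm{Vul})\right]^{1/3}

    with Vul the average normalized severity of failure episodes, so SI lies between 0 and 1.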

  1. Can metric-based approaches really improve multi-model climate projections? A perfect model framework applied to summer temperature change in France.

    NASA Astrophysics Data System (ADS)

    Boé, Julien; Terray, Laurent

    2014-05-01

    Ensemble approaches for climate change projections have become ubiquitous. Because of large model-to-model variations and, generally, lack of rationale for the choice of a particular climate model against others, it is widely accepted that future climate change and its impacts should not be estimated based on a single climate model. Generally, as a default approach, the multi-model ensemble mean (MMEM) is considered to provide the best estimate of climate change signals. The MMEM approach is based on the implicit hypothesis that all the models provide equally credible projections of future climate change. This hypothesis is unlikely to be true and ideally one would want to give more weight to more realistic models. A major issue with this alternative approach lies in the assessment of the relative credibility of future climate projections from different climate models, as they can only be evaluated against present-day observations: which present-day metric(s) should be used to decide which models are "good" and which models are "bad" in the future climate? Once a supposedly informative metric has been found, other issues arise. What is the best statistical method to combine multiple model results taking into account their relative credibility measured by a given metric? How can one be sure in the end that the metric-based estimate of future climate change is not in fact less realistic than the MMEM? It is impossible to provide strict answers to those questions in the climate change context. Yet, in this presentation, we propose a methodological approach based on a perfect model framework that could bring some useful elements of an answer to the questions previously mentioned. The basic idea is to take a random climate model in the ensemble and treat it as if it were the truth (results of this model, in both past and future climate, are called "synthetic observations"). Then, all the other members from the multi-model ensemble are used to derive, via a metric-based approach, a posterior estimate of climate change based on the synthetic observation of the metric. Finally, it is possible to compare the posterior estimate to the synthetic observation of future climate change to evaluate the skill of the method. The main objective of this presentation is to describe and apply this perfect model framework to test different methodological issues associated with non-uniform model weighting and similar metric-based approaches. The methodology presented is general, but will be applied to the specific case of summer temperature change in France, for which previous works have suggested potentially useful metrics associated with soil-atmosphere and cloud-temperature interactions. The relative performances of different simple statistical approaches to combine multiple model results based on metrics will be tested. The impact of ensemble size, observational errors, internal variability, and model similarity will be characterized. The potential improvements associated with metric-based approaches compared to the MMEM in terms of errors and uncertainties will be quantified.
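
    A minimal sketch of the leave-one-out (perfect model) test described above, assuming each model is reduced to a (metric value, future change) pair and that weights decay with metric distance to the synthetic observation; the Gaussian weighting and its width sigma are our assumptions, not the authors' method:

      import numpy as np

      def perfect_model_test(metric_vals, future_changes, sigma=1.0):
          """Treat each model in turn as 'truth' and compare the metric-weighted
          estimate of future change (from the remaining models) and the plain
          multi-model ensemble mean (MMEM) against the synthetic truth.
          Returns the mean absolute errors (weighted, MMEM)."""
          metric_vals = np.asarray(metric_vals, dtype=float)
          future_changes = np.asarray(future_changes, dtype=float)
          err_w, err_mmem = [], []
          for i in range(len(metric_vals)):
              others = np.arange(len(metric_vals)) != i
              w = np.exp(-0.5 * ((metric_vals[others] - metric_vals[i]) / sigma) ** 2)
              weighted = np.sum(w * future_changes[others]) / np.sum(w)
              mmem = future_changes[others].mean()
              err_w.append(abs(weighted - future_changes[i]))
              err_mmem.append(abs(mmem - future_changes[i]))
          return float(np.mean(err_w)), float(np.mean(err_mmem))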

  2. Metrics for linear kinematic features in sea ice

    NASA Astrophysics Data System (ADS)

    Levy, G.; Coon, M.; Sulsky, D.

    2006-12-01

    The treatment of leads as cracks or discontinuities (see Coon et al. presentation) requires some shift in the procedure of evaluation and comparison of lead-resolving models and their validation against observations. Common metrics used to evaluate ice model skills are by and large an adaptation of a least-squares "metric" adopted from operational numerical weather prediction data assimilation systems and are most appropriate for continuous fields and Eulerian systems where the observations and predictions are commensurate. However, this class of metrics suffers from some flaws in areas of sharp gradients and discontinuities (e.g., leads) and when Lagrangian treatments are more natural. After a brief review of these metrics and their performance in areas of sharp gradients, we present two new metrics specifically designed to measure model accuracy in representing linear features (e.g., leads). The indices developed circumvent the requirement that both the observations and model variables be commensurate (i.e., measured with the same units) by considering the frequencies of the features of interest/importance. We illustrate the metrics by scoring several hypothetical "simulated" discontinuity fields against leads interpreted from RGPS observations.

  3. A new look at mobility metrics for pyroclastic density currents: collection, interpretation, and use

    NASA Astrophysics Data System (ADS)

    Ogburn, S. E.; Lopes, D.; Calder, E. S.

    2012-12-01

    Mitigation of risk associated with pyroclastic density currents (PDCs) depends upon accurate forecasting of possible flow paths, often using empirical models that rely on mobility metrics or the stochastic application of computational flow models. Mobility metrics often inform computational models, sometimes as direct model inputs (e.g. energy cone model), or as estimates for input parameters (e.g. basal friction parameter in TITAN2D). These mobility metrics are often compiled from PDCs at many volcanoes, generalized to reveal empirical constants, or sampled for use in probabilistic models. In practice, however, there are often inconsistencies in how mobility metrics have been collected, reported, and used. For instance, the runout of PDCs often varies depending on the method used (e.g. manually measured from a paper map, automated using GIS software); and the distance traveled by the center of mass of PDCs is rarely reported due to the difficulty in locating it. This work reexamines the way we measure, report, and analyze PDC mobility metrics. Several metrics, such as the Heim coefficient (height dropped/runout, H/L) and the proportionality of inundated area to volume (A ∝ V^(2/3)) have been used successfully with PDC data (Sparks 1976; Nairn and Self 1977; Sheridan 1979; Hayashi and Self 1992; Calder et al. 1999; Widiwijayanti et al. 2008) in addition to the non-volcanic flows they were originally developed for. Other mobility metrics have been investigated by the debris avalanche community but have not yet been extensively applied to pyroclastic flows (e.g. the initial aspect ratio of the collapsing pile). We investigate the relative merits and suitability of contrasting mobility metrics for different types of PDCs (e.g. dome-collapse pyroclastic flows, ash-cloud surges, pumice flows), and indicate certain circumstances under which each model performs optimally. We show that these metrics can be used (with varying success) to predict the runout of a PDC of given volume, or vice versa. The problem of locating the center of mass of PDCs is also investigated by comparing field measurements, geometric centroids, linear thickness models, and computational flow models. Comparing center of mass measurements with runout provides insight into the relative roles of sliding vs. spreading in PDC emplacement. The effect of topography on mobility is explored by comparing mobility metrics to valley morphology measurements, including sinuosity, cross-sectional area, and valley slope. Lastly, we examine the problem of compiling and generalizing mobility data from worldwide databases using a hierarchical Bayes model for weighting mobility metrics for use as model inputs, which offers an improved method over simple space-filling strategies. This is especially useful for calibrating models at data-sparse volcanoes.
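
    The two empirical mobility relations mentioned can be written explicitly (standard forms from the cited literature):

      \frac{H}{L} = \tan\alpha \quad\text{(Heim coefficient: drop height over runout length)}, \qquad A = C\,V^{2/3}

    where \alpha is the angle of the line joining the source top to the deposit toe, A is the inundated area, V the flow volume, and C an empirically calibrated, flow-type-dependent constant.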

  4. Comparing masked target transform volume (MTTV) clutter metric to human observer evaluation of visual clutter

    NASA Astrophysics Data System (ADS)

    Camp, H. A.; Moyer, Steven; Moore, Richard K.

    2010-04-01

    The Night Vision and Electronic Sensors Directorate's current time-limited search (TLS) model, which makes use of the targeting task performance (TTP) metric to describe image quality, does not explicitly account for the effects of visual clutter on observer performance. The TLS model is currently based on empirical fits to describe human performance for a time of day, spectrum and environment. Incorporating a clutter metric into the TLS model may reduce the number of these empirical fits needed. The masked target transform volume (MTTV) clutter metric has been previously presented and compared to other clutter metrics. Using real infrared imagery of rural images with varying levels of clutter, NVESD is currently evaluating the appropriateness of the MTTV metric. NVESD had twenty subject matter experts (SME) rank the amount of clutter in each scene in a series of pair-wise comparisons. MTTV metric values were calculated and then compared to the SME observers rankings. The MTTV metric ranked the clutter in a similar manner to the SME evaluation, suggesting that the MTTV metric may emulate SME response. This paper is a first step in quantifying clutter and measuring the agreement to subjective human evaluation.

  5. Research on quality metrics of wireless adaptive video streaming

    NASA Astrophysics Data System (ADS)

    Li, Xuefei

    2018-04-01

    With the development of wireless networks and intelligent terminals, video traffic has increased dramatically. Adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, a good QoS (Quality of Service) of the wireless network does not always guarantee that all customers have a good experience. Thus, new quality metrics have been widely studied recently. Taking this into account, the objective of this paper is to investigate the quality metrics of wireless adaptive video streaming. In this paper, a wireless video streaming simulation platform with a DASH mechanism and a multi-rate video generator is established. Based on this platform, PSNR, SSIM and Quality Level models are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling and switching frequency, while the PSNR and SSIM models mainly consider the quality of the video. To evaluate the performance of these QoE models, three performance metrics (SROCC, PLCC and RMSE), which are used to compare subjective and predicted MOS (Mean Opinion Score), are calculated. From these performance metrics, the monotonicity, linearity and accuracy of these quality metrics can be observed.
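
    The three performance measures used to compare predicted and subjective MOS are standard; a short sketch using SciPy (the example scores are made up) is:

      import numpy as np
      from scipy.stats import pearsonr, spearmanr

      def evaluate_qoe_model(predicted_mos, subjective_mos):
          """Return (SROCC, PLCC, RMSE): rank correlation (monotonicity),
          linear correlation (linearity) and root-mean-square error (accuracy)."""
          predicted_mos = np.asarray(predicted_mos, dtype=float)
          subjective_mos = np.asarray(subjective_mos, dtype=float)
          srocc = spearmanr(predicted_mos, subjective_mos).correlation
          plcc = pearsonr(predicted_mos, subjective_mos)[0]
          rmse = float(np.sqrt(np.mean((predicted_mos - subjective_mos) ** 2)))
          return srocc, plcc, rmse

      print(evaluate_qoe_model([3.1, 4.0, 2.2, 4.6], [3.0, 4.2, 2.5, 4.4]))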

  6. Modeled hydrologic metrics show links between hydrology and the functional composition of stream assemblages.

    PubMed

    Patrick, Christopher J; Yuan, Lester L

    2017-07-01

    Flow alteration is widespread in streams, but current understanding of the effects of differences in flow characteristics on stream biological communities is incomplete. We tested hypotheses about the effect of variation in hydrology on stream communities by using generalized additive models to relate watershed information to the values of different flow metrics at gauged sites. Flow models accounted for 54-80% of the spatial variation in flow metric values among gauged sites. We then used these models to predict flow metrics in 842 ungauged stream sites in the mid-Atlantic United States that were sampled for fish, macroinvertebrates, and environmental covariates. Fish and macroinvertebrate assemblages were characterized in terms of a suite of metrics that quantified aspects of community composition, diversity, and functional traits that were expected to be associated with differences in flow characteristics. We related modeled flow metrics to biological metrics in a series of stressor-response models. Our analyses identified both drying and base flow instability as explaining 30-50% of the observed variability in fish and invertebrate community composition. Variations in community composition were related to variations in the prevalence of dispersal traits in invertebrates and trophic guilds in fish. The results demonstrate that we can use statistical models to predict hydrologic conditions at bioassessment sites, which, in turn, we can use to estimate relationships between flow conditions and biological characteristics. This analysis provides an approach to quantify the effects of spatial variation in flow metrics using readily available biomonitoring data. © 2017 by the Ecological Society of America.

  7. MJO simulation in CMIP5 climate models: MJO skill metrics and process-oriented diagnosis

    NASA Astrophysics Data System (ADS)

    Ahn, Min-Seop; Kim, Daehyun; Sperber, Kenneth R.; Kang, In-Sik; Maloney, Eric; Waliser, Duane; Hendon, Harry

    2017-12-01

    The Madden-Julian Oscillation (MJO) simulation diagnostics developed by MJO Working Group and the process-oriented MJO simulation diagnostics developed by MJO Task Force are applied to 37 Coupled Model Intercomparison Project phase 5 (CMIP5) models in order to assess model skill in representing amplitude, period, and coherent eastward propagation of the MJO, and to establish a link between MJO simulation skill and parameterized physical processes. Process-oriented diagnostics include the Relative Humidity Composite based on Precipitation (RHCP), Normalized Gross Moist Stability (NGMS), and the Greenhouse Enhancement Factor (GEF). Numerous scalar metrics are developed to quantify the results. Most CMIP5 models underestimate MJO amplitude, especially when outgoing longwave radiation (OLR) is used in the evaluation, and exhibit too fast phase speed while lacking coherence between eastward propagation of precipitation/convection and the wind field. The RHCP-metric, indicative of the sensitivity of simulated convection to low-level environmental moisture, and the NGMS-metric, indicative of the efficiency of a convective atmosphere for exporting moist static energy out of the column, show robust correlations with a large number of MJO skill metrics. The GEF-metric, indicative of the strength of the column-integrated longwave radiative heating due to cloud-radiation interaction, is also correlated with the MJO skill metrics, but shows relatively lower correlations compared to the RHCP- and NGMS-metrics. Our results suggest that modifications to processes associated with moisture-convection coupling and the gross moist stability might be the most fruitful for improving simulations of the MJO. Though the GEF-metric exhibits lower correlations with the MJO skill metrics, the longwave radiation feedback is highly relevant for simulating the weak precipitation anomaly regime that may be important for the establishment of shallow convection and the transition to deep convection.

  8. MJO simulation in CMIP5 climate models: MJO skill metrics and process-oriented diagnosis

    DOE PAGES

    Ahn, Min-Seop; Kim, Daehyun; Sperber, Kenneth R.; ...

    2017-03-23

    The Madden-Julian Oscillation (MJO) simulation diagnostics developed by MJO Working Group and the process-oriented MJO simulation diagnostics developed by MJO Task Force are applied to 37 Coupled Model Intercomparison Project phase 5 (CMIP5) models in order to assess model skill in representing amplitude, period, and coherent eastward propagation of the MJO, and to establish a link between MJO simulation skill and parameterized physical processes. Process-oriented diagnostics include the Relative Humidity Composite based on Precipitation (RHCP), Normalized Gross Moist Stability (NGMS), and the Greenhouse Enhancement Factor (GEF). Numerous scalar metrics are developed to quantify the results. Most CMIP5 models underestimate MJO amplitude, especially when outgoing longwave radiation (OLR) is used in the evaluation, and exhibit too fast phase speed while lacking coherence between eastward propagation of precipitation/convection and the wind field. The RHCP-metric, indicative of the sensitivity of simulated convection to low-level environmental moisture, and the NGMS-metric, indicative of the efficiency of a convective atmosphere for exporting moist static energy out of the column, show robust correlations with a large number of MJO skill metrics. The GEF-metric, indicative of the strength of the column-integrated longwave radiative heating due to cloud-radiation interaction, is also correlated with the MJO skill metrics, but shows relatively lower correlations compared to the RHCP- and NGMS-metrics. Our results suggest that modifications to processes associated with moisture-convection coupling and the gross moist stability might be the most fruitful for improving simulations of the MJO. Though the GEF-metric exhibits lower correlations with the MJO skill metrics, the longwave radiation feedback is highly relevant for simulating the weak precipitation anomaly regime that may be important for the establishment of shallow convection and the transition to deep convection.

  9. MJO simulation in CMIP5 climate models: MJO skill metrics and process-oriented diagnosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Min-Seop; Kim, Daehyun; Sperber, Kenneth R.

    The Madden-Julian Oscillation (MJO) simulation diagnostics developed by MJO Working Group and the process-oriented MJO simulation diagnostics developed by MJO Task Force are applied to 37 Coupled Model Intercomparison Project phase 5 (CMIP5) models in order to assess model skill in representing amplitude, period, and coherent eastward propagation of the MJO, and to establish a link between MJO simulation skill and parameterized physical processes. Process-oriented diagnostics include the Relative Humidity Composite based on Precipitation (RHCP), Normalized Gross Moist Stability (NGMS), and the Greenhouse Enhancement Factor (GEF). Numerous scalar metrics are developed to quantify the results. Most CMIP5 models underestimate MJO amplitude, especially when outgoing longwave radiation (OLR) is used in the evaluation, and exhibit too fast phase speed while lacking coherence between eastward propagation of precipitation/convection and the wind field. The RHCP-metric, indicative of the sensitivity of simulated convection to low-level environmental moisture, and the NGMS-metric, indicative of the efficiency of a convective atmosphere for exporting moist static energy out of the column, show robust correlations with a large number of MJO skill metrics. The GEF-metric, indicative of the strength of the column-integrated longwave radiative heating due to cloud-radiation interaction, is also correlated with the MJO skill metrics, but shows relatively lower correlations compared to the RHCP- and NGMS-metrics. Our results suggest that modifications to processes associated with moisture-convection coupling and the gross moist stability might be the most fruitful for improving simulations of the MJO. Though the GEF-metric exhibits lower correlations with the MJO skill metrics, the longwave radiation feedback is highly relevant for simulating the weak precipitation anomaly regime that may be important for the establishment of shallow convection and the transition to deep convection.

  10. ASN reputation system model

    NASA Astrophysics Data System (ADS)

    Hutchinson, Steve; Erbacher, Robert F.

    2015-05-01

    Network security monitoring is currently challenged by its reliance on human analysts and the inability of tools to generate indications and warnings for previously unknown attacks. We propose a reputation system based on IP address set membership within the Autonomous System Number (ASN) system. Essentially, a metric generated based on the historic behavior, or misbehavior, of nodes within a given ASN can be used to predict future behavior and provide a mechanism to locate network activity requiring inspection. This will provide reinforcement of notifications and warnings and lead to inspection for ASNs known to be problematic even if initial inspection leads to interpretation of the event as innocuous. We developed proof-of-concept capabilities to generate the IP address to ASN set membership and analyze the impact of the results. These results clearly show that while some ASNs are one-offs with individual or small numbers of misbehaving IP addresses, there are definitive ASNs with a history of long-term and widespread misbehaving IP addresses. These ASNs with long histories are what we are especially interested in and will provide an additional correlation metric for the human analyst and lead to new tools to aid remediation of these IP address blocks.
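
    A toy sketch of such a reputation score, assuming per-IP misbehavior event counts and an IP-to-ASN mapping are already available; both the inputs and the scoring formula here are illustrative assumptions, not the authors' system:

      from collections import defaultdict

      def asn_reputation(ip_events, ip_to_asn, ip_totals):
          """ip_events: {ip: number of observed misbehavior events}
          ip_to_asn:  {ip: ASN that announces the ip}
          ip_totals:  {asn: total number of addresses observed for that ASN}
          Returns {asn: score}, where higher means more historically problematic."""
          bad_ips = defaultdict(int)
          events = defaultdict(int)
          for ip, n in ip_events.items():
              asn = ip_to_asn.get(ip)
              if asn is None or n == 0:
                  continue
              bad_ips[asn] += 1
              events[asn] += n
          scores = {}
          for asn, total in ip_totals.items():
              prevalence = bad_ips[asn] / max(total, 1)  # fraction of the ASN's observed IPs that misbehaved
              scores[asn] = prevalence * events[asn]     # weight by how much misbehavior was seen
          return scores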

  11. Relevance of emissions timing in biofuel greenhouse gases and climate impacts.

    PubMed

    Schwietzke, Stefan; Griffin, W Michael; Matthews, H Scott

    2011-10-01

    Employing life cycle greenhouse gas (GHG) emissions as a key performance metric in energy and environmental policy may underestimate actual climate change impacts. Emissions released early in the life cycle cause greater cumulative radiative forcing (CRF) over the next decades than later emissions. Some indicate that ignoring emissions timing in traditional biofuel GHG accounting overestimates the effectiveness of policies supporting corn ethanol by 10-90% due to early land use change (LUC) induced GHGs. We use an IPCC climate model to (1) estimate absolute CRF from U.S. corn ethanol and (2) quantify an emissions timing factor (ETF), which is masked in the traditional GHG accounting. In contrast to earlier analyses, ETF is only 2% (5%) over 100 (50) years of impacts. Emissions uncertainty itself (LUC, fuel production period) is 1-2 orders of magnitude higher, which dwarfs the timing effect. From a GHG accounting perspective, emissions timing adds little to our understanding of the climate impacts of biofuels. However, policy makers should recognize that ETF could significantly decrease corn ethanol's probability of meeting the 20% GHG reduction target in the 2007 Energy Independence and Security Act. The added uncertainty of potentially employing more complex emissions metrics is yet to be quantified.

  12. The influence of vegetation height heterogeneity on forest and woodland bird species richness across the United States.

    PubMed

    Huang, Qiongyu; Swatantran, Anu; Dubayah, Ralph; Goetz, Scott J

    2014-01-01

    Avian diversity is under increasing pressures. It is thus critical to understand the ecological variables that contribute to large scale spatial distribution of avian species diversity. Traditionally, studies have relied primarily on two-dimensional habitat structure to model broad scale species richness. Vegetation vertical structure is increasingly used at local scales. However, the spatial arrangement of vegetation height has never been taken into consideration. Our goal was to examine the efficacies of three-dimensional forest structure, particularly the spatial heterogeneity of vegetation height in improving avian richness models across forested ecoregions in the U.S. We developed novel habitat metrics to characterize the spatial arrangement of vegetation height using the National Biomass and Carbon Dataset for the year 2000 (NBCD). The height-structured metrics were compared with other habitat metrics for statistical association with richness of three forest breeding bird guilds across Breeding Bird Survey (BBS) routes: a broadly grouped woodland guild, and two forest breeding guilds with preferences for forest edge and for interior forest. Parametric and non-parametric models were built to examine the improvement of predictability. Height-structured metrics had the strongest associations with species richness, yielding improved predictive ability for the woodland guild richness models (r² ≈ 0.53 for the parametric models, 0.63 for the non-parametric models) and the forest edge guild models (r² ≈ 0.34 for the parametric models, 0.47 for the non-parametric models). All but one of the linear models incorporating height-structured metrics showed significantly higher adjusted r² values than their counterparts without additional metrics. The interior forest guild richness showed a consistent low association with height-structured metrics. Our results suggest that height heterogeneity, beyond canopy height alone, supplements habitat characterization and richness models of forest bird species. The metrics and models derived in this study demonstrate practical examples of utilizing three-dimensional vegetation data for improved characterization of spatial patterns in species richness.

  13. The Influence of Vegetation Height Heterogeneity on Forest and Woodland Bird Species Richness across the United States

    PubMed Central

    Huang, Qiongyu; Swatantran, Anu; Dubayah, Ralph; Goetz, Scott J.

    2014-01-01

    Avian diversity is under increasing pressures. It is thus critical to understand the ecological variables that contribute to large scale spatial distribution of avian species diversity. Traditionally, studies have relied primarily on two-dimensional habitat structure to model broad scale species richness. Vegetation vertical structure is increasingly used at local scales. However, the spatial arrangement of vegetation height has never been taken into consideration. Our goal was to examine the efficacies of three-dimensional forest structure, particularly the spatial heterogeneity of vegetation height in improving avian richness models across forested ecoregions in the U.S. We developed novel habitat metrics to characterize the spatial arrangement of vegetation height using the National Biomass and Carbon Dataset for the year 2000 (NBCD). The height-structured metrics were compared with other habitat metrics for statistical association with richness of three forest breeding bird guilds across Breeding Bird Survey (BBS) routes: a broadly grouped woodland guild, and two forest breeding guilds with preferences for forest edge and for interior forest. Parametric and non-parametric models were built to examine the improvement of predictability. Height-structured metrics had the strongest associations with species richness, yielding improved predictive ability for the woodland guild richness models (r2 = ∼0.53 for the parametric models, 0.63 the non-parametric models) and the forest edge guild models (r2 = ∼0.34 for the parametric models, 0.47 the non-parametric models). All but one of the linear models incorporating height-structured metrics showed significantly higher adjusted-r2 values than their counterparts without additional metrics. The interior forest guild richness showed a consistent low association with height-structured metrics. Our results suggest that height heterogeneity, beyond canopy height alone, supplements habitat characterization and richness models of forest bird species. The metrics and models derived in this study demonstrate practical examples of utilizing three-dimensional vegetation data for improved characterization of spatial patterns in species richness. PMID:25101782

  14. Development and comparison of metrics for evaluating climate models and estimation of projection uncertainty

    NASA Astrophysics Data System (ADS)

    Ring, Christoph; Pollinger, Felix; Kaspar-Ott, Irena; Hertig, Elke; Jacobeit, Jucundus; Paeth, Heiko

    2017-04-01

    The COMEPRO project (Comparison of Metrics for Probabilistic Climate Change Projections of Mediterranean Precipitation), funded by the Deutsche Forschungsgemeinschaft (DFG), is dedicated to the development of new evaluation metrics for state-of-the-art climate models. Further, we analyze implications for probabilistic projections of climate change. This study focuses on the results of 4-field matrix metrics. Here, six different approaches are compared. We evaluate 24 models of the Coupled Model Intercomparison Project Phase 3 (CMIP3), 40 of CMIP5 and 18 of the Coordinated Regional Downscaling Experiment (CORDEX). In addition to annual and seasonal precipitation, mean temperature is analysed. We consider both the 50-year trend and the climatological mean for the second half of the 20th century. For the probabilistic projections of climate change, the A1B and A2 (CMIP3) and RCP4.5 and RCP8.5 (CMIP5, CORDEX) scenarios are used. The eight main study areas are located in the Mediterranean. However, we apply our metrics to globally distributed regions as well. The metrics show high simulation quality for the temperature trend and for both the precipitation and temperature means for most climate models and study areas. In addition, we find high potential for model weighting in order to reduce uncertainty. These results are in line with other accepted evaluation metrics and studies. The comparison of the different 4-field approaches reveals high correlations for most metrics. The results of the metric-weighted probability density functions of climate change are heterogeneous. We find both increases and decreases of uncertainty for different regions and seasons. The analysis of the global study areas is consistent with that of the regional study areas in the Mediterranean.
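
    The abstract does not define the six 4-field (2x2 contingency) metrics, so the sketch below illustrates the general family with one standard example, the Heidke skill score computed from hit, false alarm, miss, and correct-negative counts; the counts shown are hypothetical.

      # Hedged illustration of a 4-field (2x2 contingency) evaluation metric.
      # The project's six specific metrics are not given in the abstract; the
      # Heidke skill score below is just one standard member of this family.
      def heidke_skill_score(a, b, c, d):
          """a: hits, b: false alarms, c: misses, d: correct negatives."""
          num = 2.0 * (a * d - b * c)
          den = (a + c) * (c + d) + (a + b) * (b + d)
          return num / den if den else float("nan")

      # Example: modeled vs. observed sign of the 50-year precipitation trend in 40 models
      print(heidke_skill_score(a=18, b=6, c=5, d=11))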

  15. A guide to calculating habitat-quality metrics to inform conservation of highly mobile species

    USGS Publications Warehouse

    Bieri, Joanna A.; Sample, Christine; Thogmartin, Wayne E.; Diffendorfer, James E.; Earl, Julia E.; Erickson, Richard A.; Federico, Paula; Flockhart, D. T. Tyler; Nicol, Sam; Semmens, Darius J.; Skraber, T.; Wiederholt, Ruscena; Mattsson, Brady J.

    2018-01-01

    Many metrics exist for quantifying the relative value of habitats and pathways used by highly mobile species. Properly selecting and applying such metrics requires substantial background in mathematics and understanding the relevant management arena. To address this multidimensional challenge, we demonstrate and compare three measurements of habitat quality: graph-, occupancy-, and demographic-based metrics. Each metric provides insights into system dynamics, at the expense of increasing amounts and complexity of data and models. Our descriptions and comparisons of diverse habitat-quality metrics provide means for practitioners to overcome the modeling challenges associated with management or conservation of such highly mobile species. Whereas previous guidance for applying habitat-quality metrics has been scattered in diversified tracks of literature, we have brought this information together into an approachable format including accessible descriptions and a modeling case study for a typical example that conservation professionals can adapt for their own decision contexts and focal populations. Considerations for resource managers: Management objectives, proposed actions, data availability and quality, and model assumptions are all relevant considerations when applying and interpreting habitat-quality metrics. Graph-based metrics answer questions related to habitat centrality and connectivity, are suitable for populations with any movement pattern, quantify basic spatial and temporal patterns of occupancy and movement, and require the least data. Occupancy-based metrics answer questions about likelihood of persistence or colonization, are suitable for populations that undergo localized extinctions, quantify spatial and temporal patterns of occupancy and movement, and require a moderate amount of data. Demographic-based metrics answer questions about relative or absolute population size, are suitable for populations with any movement pattern, quantify demographic processes and population dynamics, and require the most data. More real-world examples applying occupancy-based, agent-based, and continuous-based metrics to seasonally migratory species are needed to better understand challenges and opportunities for applying these metrics more broadly.
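
    As a minimal illustration of the graph-based class of metrics described above, the sketch below computes degree and betweenness centrality for a small hypothetical network of habitat patches; the node names and edge weights are invented for the example and do not come from the paper.

      # Minimal sketch of a graph-based habitat-quality metric, assuming habitat
      # patches are nodes and movement pathways are weighted edges (all names
      # here are hypothetical).
      import networkx as nx

      G = nx.Graph()
      G.add_weighted_edges_from([
          ("breeding_A", "stopover_1", 0.8),
          ("breeding_B", "stopover_1", 0.6),
          ("stopover_1", "wintering", 0.9),
          ("breeding_B", "stopover_2", 0.4),
          ("stopover_2", "wintering", 0.5),
      ])

      # Betweenness centrality highlights patches that funnel movement between
      # seasonal ranges; degree centrality reflects how connected a patch is.
      print(nx.betweenness_centrality(G, weight="weight"))
      print(nx.degree_centrality(G))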

  16. Predicting the difficulty of pure, strict, epistatic models: metrics for simulated model selection.

    PubMed

    Urbanowicz, Ryan J; Kiralis, Jeff; Fisher, Jonathan M; Moore, Jason H

    2012-09-26

    Algorithms designed to detect complex genetic disease associations are initially evaluated using simulated datasets. Typical evaluations vary constraints that influence the correct detection of underlying models (i.e. number of loci, heritability, and minor allele frequency). Such studies neglect to account for model architecture (i.e. the unique specification and arrangement of penetrance values comprising the genetic model), which alone can influence the detectability of a model. In order to design a simulation study which efficiently takes architecture into account, a reliable metric is needed for model selection. We evaluate three metrics as predictors of relative model detection difficulty derived from previous works: (1) Penetrance table variance (PTV), (2) customized odds ratio (COR), and (3) our own Ease of Detection Measure (EDM), calculated from the penetrance values and respective genotype frequencies of each simulated genetic model. We evaluate the reliability of these metrics across three very different data search algorithms, each with the capacity to detect epistatic interactions. We find that a model's EDM and COR are each stronger predictors of model detection success than heritability. This study formally identifies and evaluates metrics which quantify model detection difficulty. We utilize these metrics to intelligently select models from a population of potential architectures. This allows for an improved simulation study design which accounts for differences in detection difficulty attributed to model architecture. We implement the calculation and utilization of EDM and COR into GAMETES, an algorithm which rapidly and precisely generates pure, strict, n-locus epistatic models.
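
    The abstract states that PTV is calculated from penetrance values and genotype frequencies; a hedged reading of that is a genotype-frequency-weighted variance of the penetrance table, sketched below for a two-locus model. The exact formula used in GAMETES may differ, and the penetrance table and allele frequencies are illustrative.

      # Hedged sketch of a penetrance-table-variance (PTV)-style quantity: the
      # genotype-frequency-weighted variance of penetrance values for a
      # two-locus model. The exact formula used in GAMETES may differ.
      import numpy as np

      penetrance = np.array([   # P(disease | genotype) for a 3 x 3 two-locus table
          [0.10, 0.20, 0.10],
          [0.20, 0.60, 0.20],
          [0.10, 0.20, 0.10],
      ])
      freq_a = np.array([0.49, 0.42, 0.09])   # genotype frequencies, locus A (MAF 0.3, HWE)
      freq_b = np.array([0.49, 0.42, 0.09])   # genotype frequencies, locus B
      geno_freq = np.outer(freq_a, freq_b)    # joint frequencies under independence

      mean_pen = np.sum(geno_freq * penetrance)                 # population prevalence
      ptv = np.sum(geno_freq * (penetrance - mean_pen) ** 2)    # weighted variance
      print(round(float(ptv), 4))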

  17. Performance assessment of nitrate leaching models for highly vulnerable soils used in low-input farming based on lysimeter data.

    PubMed

    Groenendijk, Piet; Heinen, Marius; Klammler, Gernot; Fank, Johann; Kupfersberger, Hans; Pisinaras, Vassilios; Gemitzi, Alexandra; Peña-Haro, Salvador; García-Prats, Alberto; Pulido-Velazquez, Manuel; Perego, Alessia; Acutis, Marco; Trevisan, Marco

    2014-11-15

    The agricultural sector faces the challenge of ensuring food security without an excessive burden on the environment. Simulation models provide excellent instruments for researchers to gain more insight into relevant processes and best agricultural practices and provide tools for planners for decision making support. The extent to which models are capable of reliable extrapolation and prediction is important for exploring new farming systems or assessing the impacts of future land and climate changes. A performance assessment was conducted by testing six detailed state-of-the-art models for simulation of nitrate leaching (ARMOSA, COUPMODEL, DAISY, EPIC, SIMWASER/STOTRASIM, SWAP/ANIMO) for lysimeter data of the Wagna experimental field station in Eastern Austria, where the soil is highly vulnerable to nitrate leaching. Three consecutive phases were distinguished to gain insight into the predictive power of the models: 1) a blind test for 2005-2008 in which only soil hydraulic characteristics, meteorological data and information about the agricultural management were accessible; 2) a calibration for the same period in which essential information on field observations was additionally available to the modellers; and 3) a validation for 2009-2011 with the corresponding type of data available as for the blind test. A set of statistical metrics (mean absolute error, root mean squared error, index of agreement, model efficiency, root relative squared error, Pearson's linear correlation coefficient) was applied for testing the results and comparing the models. None of the models performed well for all of the statistical metrics. Models designed for nitrate leaching in high-input farming systems had difficulties in accurately predicting leaching in low-input farming systems that are strongly influenced by the retention of nitrogen in catch crops and nitrogen fixation by legumes. An accurate calibration does not guarantee a good predictive power of the model. Nevertheless, all models were able to identify years and crops with high- and low-leaching rates. Copyright © 2014 Elsevier B.V. All rights reserved.
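
    The statistical metrics listed above have standard textbook forms; the sketch below implements common definitions of several of them (MAE, RMSE, Nash-Sutcliffe model efficiency, index of agreement, Pearson correlation) for a pair of observed and simulated series. The study may use slight variants, and the example data are made up.

      # Standard forms of several of the listed goodness-of-fit metrics, as
      # commonly defined (the study may use variants). obs/sim are paired
      # observed and simulated nitrate leaching values.
      import numpy as np

      def fit_metrics(obs, sim):
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          err = sim - obs
          mae = np.mean(np.abs(err))
          rmse = np.sqrt(np.mean(err ** 2))
          nse = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)      # model efficiency
          d = 1.0 - np.sum(err ** 2) / np.sum(
              (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)     # index of agreement
          r = np.corrcoef(obs, sim)[0, 1]                                     # Pearson correlation
          return {"MAE": mae, "RMSE": rmse, "NSE": nse, "d": d, "r": r}

      obs = [12.0, 18.5, 25.1, 9.7, 14.2, 30.3]   # hypothetical kg N/ha per period
      sim = [10.8, 20.1, 22.9, 11.5, 15.0, 27.6]
      print({k: round(v, 3) for k, v in fit_metrics(obs, sim).items()})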

  18. Learning Compositional Shape Models of Multiple Distance Metrics by Information Projection.

    PubMed

    Luo, Ping; Lin, Liang; Liu, Xiaobai

    2016-07-01

    This paper presents a novel compositional contour-based shape model by incorporating multiple distance metrics to account for varying shape distortions or deformations. Our approach contains two key steps: 1) contour feature generation and 2) generative model pursuit. For each category, we first densely sample an ensemble of local prototype contour segments from a few positive shape examples and describe each segment using three different types of distance metrics. These metrics are diverse and complementary to each other to capture various shape deformations. We regard the parameterized contour segment plus an additive residual ϵ as a basic subspace, namely, an ϵ-ball, in the sense that it represents local shape variance under a given distance metric. Using these ϵ-balls as features, we then propose a generative learning algorithm to pursue the compositional shape model, which greedily selects the most representative features under the information projection principle. In experiments, we evaluate our model on several challenging public data sets, and demonstrate that the integration of multiple shape distance metrics is capable of dealing with various shape deformations, articulations, and background clutter, hence boosting system performance.

  19. Accounting for regional variation in both natural environment and human disturbance to improve performance of multimetric indices of lotic benthic diatoms.

    PubMed

    Tang, Tao; Stevenson, R Jan; Infante, Dana M

    2016-10-15

    Regional variation in both natural environment and human disturbance can influence performance of ecological assessments. In this study we calculated 5 types of benthic diatom multimetric indices (MMIs) with 3 different approaches to account for variation in ecological assessments. We used: site groups defined by ecoregions or diatom typologies; the same or different sets of metrics among site groups; and unmodeled or modeled MMIs, where models accounted for natural variation in metrics within site groups by calculating an expected reference condition for each metric and each site. We used data from the USEPA's National Rivers and Streams Assessment to calculate the MMIs and evaluate changes in MMI performance. MMI performance was evaluated with indices of precision, bias, responsiveness, sensitivity and relevancy which were respectively measured as MMI variation among reference sites, effects of natural variables on MMIs, difference between MMIs at reference and highly disturbed sites, percent of highly disturbed sites properly classified, and relation of MMIs to human disturbance and stressors. All 5 types of MMIs showed considerable discrimination ability. Using different metrics among ecoregions sometimes reduced precision, but it consistently increased responsiveness, sensitivity, and relevancy. Site specific metric modeling reduced bias and increased responsiveness. Combined use of different metrics among site groups and site specific modeling significantly improved MMI performance irrespective of site grouping approach. Compared to ecoregion site classification, grouping sites based on diatom typologies improved precision, but did not improve overall performance of MMIs if we accounted for natural variation in metrics with site specific models. We conclude that using different metrics among ecoregions and site specific metric modeling improve MMI performance, particularly when used together. Applications of these MMI approaches in ecological assessments introduced a tradeoff with assessment consistency when metrics differed across site groups, but they justified the convenient and consistent use of ecoregions. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Empirical Evaluation of Hunk Metrics as Bug Predictors

    NASA Astrophysics Data System (ADS)

    Ferzund, Javed; Ahsan, Syed Nadeem; Wotawa, Franz

    Reducing the number of bugs is a crucial issue during software development and maintenance. Software process and product metrics are good indicators of software complexity. These metrics have been used to build bug predictor models to help developers maintain the quality of software. In this paper we empirically evaluate the use of hunk metrics as predictors of bugs. We present a technique for bug prediction that works at the smallest units of code change, called hunks. We build bug prediction models using random forests, an efficient machine learning classifier. Hunk metrics are used to train the classifier and each hunk metric is evaluated for its bug prediction capabilities. Our classifier can classify individual hunks as buggy or bug-free with 86% accuracy, 83% buggy hunk precision and 77% buggy hunk recall. We find that history-based and change-level hunk metrics are better predictors of bugs than code-level hunk metrics.
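
    A hedged sketch of the general workflow described above: train a random forest on hunk-level metrics and report accuracy, buggy-hunk precision, and recall. The feature set and data below are synthetic stand-ins, not the paper's actual hunk metrics.

      # Illustrative sketch of hunk-level bug prediction with a random forest.
      # Feature names and data are hypothetical; the paper's actual history-,
      # change-, and code-level hunk metrics are only summarized in the abstract.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score, precision_score, recall_score

      rng = np.random.default_rng(42)
      n = 1000
      X = np.column_stack([
          rng.poisson(3, n),        # e.g., lines added in the hunk
          rng.poisson(2, n),        # lines deleted
          rng.integers(0, 50, n),   # prior changes to the file (history-style metric)
          rng.random(n),            # developer experience proxy
      ])
      y = (X[:, 2] + rng.normal(0, 5, n) > 25).astype(int)   # synthetic "buggy" label

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      pred = clf.predict(X_te)
      print("accuracy", round(accuracy_score(y_te, pred), 2),
            "precision", round(precision_score(y_te, pred), 2),
            "recall", round(recall_score(y_te, pred), 2))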

  1. Evaluating hydrological model performance using information theory-based metrics

    USDA-ARS?s Scientific Manuscript database

    The accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use the information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...

  2. Association of airborne moisture-indicating microorganisms with building-related symptoms and water damage in 100 U.S. office buildings: Analyses of the U.S. EPA BASE data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendell, Mark J.; Lei, Quanhong; Cozen, Myrna O.

    2003-10-01

    Metrics of culturable airborne microorganisms for either total organisms or suspected harmful subgroups have generally not been associated with symptoms among building occupants. However, the visible presence of moisture damage or mold in residences and other buildings has consistently been associated with respiratory symptoms and other health effects. This relationship is presumably caused by adverse but uncharacterized exposures to moisture-related microbiological growth. In order to assess this hypothesis, we studied relationships in U.S. office buildings between the prevalence of respiratory and irritant symptoms, the concentrations of airborne microorganisms that require moist surfaces on which to grow, and the presence of visible water damage. For these analyses we used data on buildings, indoor environments, and occupants collected from a representative sample of 100 U.S. office buildings in the U.S. Environmental Protection Agency's Building Assessment Survey and Evaluation (EPA BASE) study. We created 19 alternate metrics, using scales ranging from 3-10 units, that summarized the concentrations of airborne moisture-indicating microorganisms (AMIMOs) as indicators of moisture in buildings. Two were constructed to resemble a metric previously reported to be associated with lung function changes in building occupants; the others were based on another metric from the same group of Finnish researchers, concentration cutpoints from other studies, and professional judgment. We assessed three types of associations: between AMIMO metrics and symptoms in office workers, between evidence of water damage and symptoms, and between water damage and AMIMO metrics. We estimated (as odds ratios (ORs) with 95% confidence intervals) the unadjusted and adjusted associations between the 19 metrics and two types of weekly, work-related symptoms--lower respiratory and mucous membrane--using logistic regression models. Analyses used the original AMIMO metrics and were repeated with simplified dichotomized metrics. The multivariate models adjusted for other potential confounding variables associated with respondents, occupied spaces, buildings, or ventilation systems. Models excluded covariates for moisture-related risks hypothesized to increase AMIMO levels. We also estimated the association of water damage (using variables for specific locations in the study space or building, or summary variables) with the two symptom outcomes. Finally, using selected AMIMO metrics as outcomes, we constructed logistic regression models with observations at the building level to estimate unadjusted and adjusted associations of evident water damage with AMIMO metrics. All original AMIMO metrics showed little overall pattern of unadjusted or adjusted association with either symptom outcome. The 3-category metric resembling that previously used by others, which of all constructed metrics had the largest number of buildings in its top category, was not associated with symptoms in these buildings. However, most metrics with few buildings in their highest category showed increased risk for both symptoms in that category, especially metrics using cutpoints of >100 but <500 colony-forming units (CFU)/m³ for concentration of total culturable fungi. With AMIMO metrics dichotomized to compare the highest category with all lower categories combined, four metrics had unadjusted ORs between 1.4 and 1.6 for both symptom outcomes. The same four metrics had adjusted ORs of 1.7-2.1 for both symptom outcomes. In models of water damage and symptoms, several specific locations of past water damage had significant associations with outcomes, with ORs ranging from 1.4-1.6. In bivariate models of water damage and selected AMIMO metrics, a number of specific types of water damage and several summary variables for water damage were very strongly associated with AMIMO metrics (significant ORs ranging above 15). Multivariate modeling with the dichotomous AMIMO metrics was not possible due to limited numbers of observations.

  3. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Science Inventory

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...

  4. Going Metric: Is It for You? A Planning Model for Small Manufacturing Companies.

    ERIC Educational Resources Information Center

    Beek, C.; And Others

    This booklet is designed to aid small manufacturing companies in ascertaining the meaning of going metric for their unique circumstances and to guide them in making a smooth conversion to the metric system. First is a brief discussion of what the law says about metrics and what the metric system is. Then what is involved in going metric is…

  5. Manifold Preserving: An Intrinsic Approach for Semisupervised Distance Metric Learning.

    PubMed

    Ying, Shihui; Wen, Zhijie; Shi, Jun; Peng, Yaxin; Peng, Jigen; Qiao, Hong

    2017-05-18

    In this paper, we address the semisupervised distance metric learning problem and its applications in classification and image retrieval. First, we formulate a semisupervised distance metric learning model by considering the within-class and between-class metric information. In this model, an adaptive parameter is designed to balance the within-class and between-class metrics by using the data structure. Second, we convert the model to a minimization problem whose variable is a symmetric positive-definite matrix. Third, in implementation, we deduce an intrinsic steepest descent method, which uses the manifold structure of the symmetric positive-definite matrix manifold to ensure that the metric matrix remains strictly symmetric positive-definite at each iteration. Finally, we test the proposed algorithm on conventional data sets, and compare it with four other representative methods. The numerical results validate that the proposed method significantly improves classification with the same computational efficiency.

  6. Model assessment using a multi-metric ranking technique

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.

    2017-12-01

    Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation will require advanced validation metrics and identifying adeptness in extreme events, while maintaining simplicity for management decisions. Flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique which ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's Tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as root mean square difference and rank correlation, were also explored but removed when the information was found to be generally duplicative of other metrics. While equal weights are applied, weights could be altered depending on preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts instead of distance, along-track, and cross-track is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process, but found useful in an independent context, and will be briefly reported.
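
    A minimal sketch of an equally weighted tally-and-rank scheme of the kind described above: each model is ranked under every metric and the ranks are summed. The metric values are hypothetical, and error-like metrics are ranked ascending while skill-like metrics are ranked descending.

      # Minimal sketch of an equally weighted multi-metric ranking. Metric values
      # below are hypothetical stand-ins for the validation statistics named above.
      import numpy as np

      models = ["model_A", "model_B", "model_C"]
      metrics = {                  # metric name -> (values per model, higher_is_better)
          "abs_error":   (np.array([1.2, 0.9, 1.5]), False),
          "bias":        (np.abs(np.array([0.3, -0.5, 0.1])), False),
          "correlation": (np.array([0.82, 0.88, 0.75]), True),
          "kendall_tau": (np.array([0.61, 0.70, 0.58]), True),
      }

      tally = np.zeros(len(models))
      for values, higher_is_better in metrics.values():
          order = np.argsort(-values if higher_is_better else values)
          ranks = np.empty_like(order)
          ranks[order] = np.arange(1, len(models) + 1)   # 1 = best under this metric
          tally += ranks                                 # equal weights; adjust to prefer metrics

      for name, score in sorted(zip(models, tally), key=lambda t: t[1]):
          print(name, score)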

  7. On the use of hidden Markov models for gaze pattern modeling

    NASA Astrophysics Data System (ADS)

    Mannaru, Pujitha; Balasingam, Balakumar; Pattipati, Krishna; Sibley, Ciara; Coyne, Joseph

    2016-05-01

    Some of the conventional metrics derived from gaze patterns (on computer screens) to study visual attention, engagement and fatigue are saccade counts, nearest neighbor index (NNI) and duration of dwells/fixations. Each of these metrics has drawbacks in modeling the behavior of gaze patterns; one such drawback comes from the fact that some portions on the screen are not as important as some other portions on the screen. This is addressed by computing the eye gaze metrics corresponding to important areas of interest (AOI) on the screen. There are some challenges in developing accurate AOI based metrics: firstly, the definition of AOI is always fuzzy; secondly, it is possible that the AOI may change adaptively over time. Hence, there is a need to introduce eye-gaze metrics that are aware of the AOI in the field of view; at the same time, the new metrics should be able to automatically select the AOI based on the nature of the gazes. In this paper, we propose a novel way of computing NNI based on continuous hidden Markov models (HMM) that model the gazes as 2D Gaussian observations (x-y coordinates of the gaze) with the mean at the center of the AOI and covariance that is related to the concentration of gazes. The proposed modeling allows us to accurately compute the NNI metric in the presence of multiple, undefined AOI on the screen in the presence of intermittent casual gazing that is modeled as random gazes on the screen.
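
    The sketch below is a hedged simplification of the idea above: fitting 2D Gaussians to raw gaze coordinates so that AOI centers and gaze concentration emerge from the data. It uses a Gaussian mixture rather than a full continuous HMM, so the temporal transition structure the paper exploits is ignored; the screen layout and gaze data are synthetic.

      # Hedged simplification: fit 2D Gaussians to gaze points so that AOI centers
      # (means) and gaze concentration (covariances) emerge from the data. A full
      # treatment would use a continuous HMM to also model transitions over time.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)
      # Synthetic gazes around two unlabeled AOIs plus scattered casual gazes
      aoi_1 = rng.normal([300, 200], 20, size=(150, 2))
      aoi_2 = rng.normal([900, 600], 35, size=(150, 2))
      casual = rng.uniform([0, 0], [1280, 800], size=(40, 2))
      gazes = np.vstack([aoi_1, aoi_2, casual])

      gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(gazes)
      print(np.round(gmm.means_, 1))     # recovered AOI centers (one component absorbs casual gazes)
      print(np.round(gmm.weights_, 2))   # share of gazes assigned to each component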

  8. 15 CFR 714.1 - Annual declaration requirements for plant sites that produce a Schedule 3 chemical in excess of...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... in the production of a chemical in any units within the same plant through chemical reaction... plant sites that produce a Schedule 3 chemical in excess of 30 metric tons. 714.1 Section 714.1 Commerce... AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS ACTIVITIES INVOLVING...

  9. 15 CFR 714.1 - Annual declaration requirements for plant sites that produce a Schedule 3 chemical in excess of...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... in the production of a chemical in any units within the same plant through chemical reaction... plant sites that produce a Schedule 3 chemical in excess of 30 metric tons. 714.1 Section 714.1 Commerce... AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS ACTIVITIES INVOLVING...

  10. 15 CFR 714.1 - Annual declaration requirements for plant sites that produce a Schedule 3 chemical in excess of...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... in the production of a chemical in any units within the same plant through chemical reaction... plant sites that produce a Schedule 3 chemical in excess of 30 metric tons. 714.1 Section 714.1 Commerce... AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS ACTIVITIES INVOLVING...

  11. 15 CFR 714.1 - Annual declaration requirements for plant sites that produce a Schedule 3 chemical in excess of...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... in the production of a chemical in any units within the same plant through chemical reaction... plant sites that produce a Schedule 3 chemical in excess of 30 metric tons. 714.1 Section 714.1 Commerce... AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS ACTIVITIES INVOLVING...

  12. 15 CFR 714.1 - Annual declaration requirements for plant sites that produce a Schedule 3 chemical in excess of...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... in the production of a chemical in any units within the same plant through chemical reaction... plant sites that produce a Schedule 3 chemical in excess of 30 metric tons. 714.1 Section 714.1 Commerce... AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS ACTIVITIES INVOLVING...

  13. 46 CFR 30.10-20 - Deadweight or DWT-TB/ALL.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 1 2011-10-01 2011-10-01 false Deadweight or DWT-TB/ALL. 30.10-20 Section 30.10-20 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY TANK VESSELS GENERAL PROVISIONS Definitions § 30.10-20 Deadweight or DWT—TB/ALL. The term deadweight or DWT means the difference in metric tons between...

  14. 46 CFR 30.10-20 - Deadweight or DWT-TB/ALL.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Deadweight or DWT-TB/ALL. 30.10-20 Section 30.10-20 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY TANK VESSELS GENERAL PROVISIONS Definitions § 30.10-20 Deadweight or DWT—TB/ALL. The term deadweight or DWT means the difference in metric tons between...

  15. 46 CFR 30.10-20 - Deadweight or DWT-TB/ALL.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 1 2013-10-01 2013-10-01 false Deadweight or DWT-TB/ALL. 30.10-20 Section 30.10-20 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY TANK VESSELS GENERAL PROVISIONS Definitions § 30.10-20 Deadweight or DWT—TB/ALL. The term deadweight or DWT means the difference in metric tons between...

  16. 46 CFR 30.10-20 - Deadweight or DWT-TB/ALL.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 1 2012-10-01 2012-10-01 false Deadweight or DWT-TB/ALL. 30.10-20 Section 30.10-20 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY TANK VESSELS GENERAL PROVISIONS Definitions § 30.10-20 Deadweight or DWT—TB/ALL. The term deadweight or DWT means the difference in metric tons between...

  17. 46 CFR 30.10-20 - Deadweight or DWT-TB/ALL.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 1 2014-10-01 2014-10-01 false Deadweight or DWT-TB/ALL. 30.10-20 Section 30.10-20 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY TANK VESSELS GENERAL PROVISIONS Definitions § 30.10-20 Deadweight or DWT—TB/ALL. The term deadweight or DWT means the difference in metric tons between...

  18. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Pesticide Factsheets

    The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude only, sequence only, and combined magnitude and sequence errors.

  19. A Comparison of Exposure Metrics for Traffic-Related Air Pollutants: Application to Epidemiology Studies in Detroit, Michigan

    PubMed Central

    Batterman, Stuart; Burke, Janet; Isakov, Vlad; Lewis, Toby; Mukherjee, Bhramar; Robins, Thomas

    2014-01-01

    Vehicles are major sources of air pollutant emissions, and individuals living near large roads endure high exposures and health risks associated with traffic-related air pollutants. Air pollution epidemiology, health risk, environmental justice, and transportation planning studies would all benefit from an improved understanding of the key information and metrics needed to assess exposures, as well as the strengths and limitations of alternate exposure metrics. This study develops and evaluates several metrics for characterizing exposure to traffic-related air pollutants for the 218 residential locations of participants in the NEXUS epidemiology study conducted in Detroit (MI, USA). Exposure metrics included proximity to major roads, traffic volume, vehicle mix, traffic density, vehicle exhaust emissions density, and pollutant concentrations predicted by dispersion models. Results presented for each metric include comparisons of exposure distributions, spatial variability, intraclass correlation, concordance and discordance rates, and overall strengths and limitations. While showing some agreement, the simple categorical and proximity classifications (e.g., high diesel/low diesel traffic roads and distance from major roads) do not reflect the range and overlap of exposures seen in the other metrics. Information provided by the traffic density metric, defined as the number of kilometers traveled (VKT) per day within a 300 m buffer around each home, was reasonably consistent with the more sophisticated metrics. Dispersion modeling provided spatially- and temporally-resolved concentrations, along with apportionments that separated concentrations due to traffic emissions and other sources. While several of the exposure metrics showed broad agreement, including traffic density, emissions density and modeled concentrations, these alternatives still produced exposure classifications that differed for a substantial fraction of study participants, e.g., from 20% to 50% of homes, depending on the metric, would be incorrectly classified into “low”, “medium” or “high” traffic exposure classes. These and other results suggest the potential for exposure misclassification and the need for refined and validated exposure metrics. While data and computational demands for dispersion modeling of traffic emissions are non-trivial concerns, once established, dispersion modeling systems can provide exposure information for both on- and near-road environments that would benefit future traffic-related assessments. PMID:25226412
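
    As an illustration of the traffic density metric described above (daily vehicle kilometers traveled within a 300 m buffer of a home), the sketch below sums link-level VKT for road links falling within the buffer, treating links as straight segments in planar meters; the road data are hypothetical and a real implementation would use proper GIS geometries.

      # Sketch of a traffic-density-style metric: sum daily vehicle kilometres
      # travelled (VKT) on road links within a 300 m buffer of a home.
      import math

      def point_segment_distance(p, a, b):
          (px, py), (ax, ay), (bx, by) = p, a, b
          dx, dy = bx - ax, by - ay
          if dx == dy == 0:
              return math.hypot(px - ax, py - ay)
          t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
          return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

      def traffic_density(home, links, buffer_m=300.0):
          """links: list of (segment_start, segment_end, length_km, vehicles_per_day)."""
          total_vkt = 0.0
          for a, b, length_km, aadt in links:
              if point_segment_distance(home, a, b) <= buffer_m:
                  total_vkt += length_km * aadt
          return total_vkt

      links = [((0, 0), (1000, 0), 1.0, 25000),      # hypothetical arterial road
               ((0, 500), (1000, 500), 1.0, 4000)]   # hypothetical local street
      print(traffic_density(home=(500, 100), links=links))   # only the arterial is within 300 m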

  20. Angle and Context Free Grammar Based Precarious Node Detection and Secure Data Transmission in MANETs.

    PubMed

    Veerasamy, Anitha; Madane, Srinivasa Rao; Sivakumar, K; Sivaraman, Audithan

    2016-01-01

    The growing attractiveness of Mobile Ad Hoc Networks (MANETs), their features, and their usage has led to threats and attacks with negative consequences for society. The typical features of MANETs, especially their dynamic topology and open wireless medium, may leave them vulnerable. Trust management using an uncertain reasoning scheme has previously been attempted to solve this problem. However, it produces additional overhead while securing the network. Hence, a Location and Trust-based secure communication scheme (L&TS) is proposed to overcome this limitation. Since securing the design requires more than two data algorithms, the cost of the system goes up. Another mechanism proposed in this paper, Angle and Context Free Grammar (ACFG) based precarious node elimination and secure communication in MANETs, intends to secure data transmission and detect precarious nodes in a MANET at a comparatively lower cost. The Elliptic Curve function is used to isolate a malicious node, thereby incorporating secure data transfer. Simulation results show that the dynamic estimation of the metrics improves throughput by 26% in L&TS when compared to TMUR. ACFG achieves a 33% and 51% throughput increase when compared to the L&TS and TMUR mechanisms, respectively.

  1. Ranking streamflow model performance based on Information theory metrics

    NASA Astrophysics Data System (ADS)

    Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas

    2016-04-01

    The accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow and a finite symbol alphabet was used. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series. Watersheds served as information filters: streamflow time series were less random and more complex than those of precipitation, reflecting the fact that the watershed acts as an information filter in the hydrologic conversion process from precipitation to streamflow. The Nash-Sutcliffe efficiency metric increased as the complexity of models increased, but in many cases several models had efficiency values that were not statistically different from each other. In such cases, ranking models by the closeness of the information theory-based parameters in simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
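
    A hedged sketch of the symbolization step and one of the listed metrics: the series is mapped to quantile symbols and a mean-information-gain-style quantity is estimated as the block-entropy difference H(L+1) - H(L), i.e. the conditional entropy of the next symbol given the preceding L symbols. The study's exact estimators and alphabet size are not given in the abstract, so the choices below are assumptions.

      # Hedged sketch: quantile symbolization of a series and a mean-information-
      # gain-style statistic estimated as a block-entropy difference.
      import numpy as np
      from collections import Counter

      def symbolize(series, n_symbols=4):
          qs = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
          return tuple(int(np.searchsorted(qs, x)) for x in series)

      def mean_information_gain(symbols, L=2):
          words = Counter(symbols[i:i + L] for i in range(len(symbols) - L))
          exts = Counter(symbols[i:i + L + 1] for i in range(len(symbols) - L))
          n = sum(exts.values())
          h_ext = -sum(c / n * np.log2(c / n) for c in exts.values())
          h_word = -sum(c / n * np.log2(c / n) for c in words.values())
          return h_ext - h_word        # H(next symbol | previous L symbols)

      rng = np.random.default_rng(0)
      flow = np.cumsum(rng.normal(0, 1, 3000)) + 100   # synthetic persistent "streamflow"
      noise = rng.normal(0, 1, 3000)                   # white-noise benchmark
      print(round(mean_information_gain(symbolize(flow)), 3),
            round(mean_information_gain(symbolize(noise)), 3))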

  2. Human-centric predictive model of task difficulty for human-in-the-loop control tasks

    PubMed Central

    Majewicz Fey, Ann

    2018-01-01

    Quantitatively measuring the difficulty of a manipulation task in human-in-the-loop control systems is ill-defined. Currently, systems are typically evaluated through task-specific performance measures and post-experiment user surveys; however, these methods do not capture the real-time experience of human users. In this study, we propose to analyze and predict the difficulty of a bivariate pointing task, with a haptic device interface, using human-centric measurement data in terms of cognition, physical effort, and motion kinematics. Noninvasive sensors were used to record the multimodal responses of 14 human subjects performing the task. A data-driven approach for predicting task difficulty was implemented based on several task-independent metrics. We compare four possible models for predicting task difficulty to evaluate the roles of the various types of metrics, including: (I) a movement time model, (II) a fusion model using both physiological and kinematic metrics, (III) a model only with kinematic metrics, and (IV) a model only with physiological metrics. The results show significant correlation between task difficulty and the user sensorimotor response. The fusion model, integrating user physiology and motion kinematics, provided the best estimate of task difficulty (R2 = 0.927), followed by a model using only kinematic metrics (R2 = 0.921). Both models were better predictors of task difficulty than the movement time model (R2 = 0.847), derived from Fitts' law, a well-studied difficulty model for human psychomotor control. PMID:29621301
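
    The comparison described above can be sketched as follows: fit a regressor on kinematic features alone and on fused kinematic plus physiological features, then compare cross-validated R2. All features and the difficulty score below are synthetic stand-ins, not the study's measurements.

      # Hedged sketch of the model comparison: kinematic-only versus fusion
      # (kinematic + physiological) regression of a task-difficulty score.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      n = 140                                   # e.g., trials pooled across subjects
      kinematic = rng.normal(size=(n, 3))       # path length, jerk, movement time (stand-ins)
      physio = rng.normal(size=(n, 2))          # heart-rate variability, EMG level (stand-ins)
      difficulty = (1.2 * kinematic[:, 0] + 0.8 * kinematic[:, 2]
                    + 0.9 * physio[:, 0] + rng.normal(0, 0.5, n))

      r2_kin = cross_val_score(LinearRegression(), kinematic, difficulty, cv=5, scoring="r2").mean()
      r2_fusion = cross_val_score(LinearRegression(), np.hstack([kinematic, physio]),
                                  difficulty, cv=5, scoring="r2").mean()
      print("kinematic-only R2:", round(r2_kin, 3), " fusion R2:", round(r2_fusion, 3))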

  3. Simulated Models Suggest That Price per Calorie Is the Dominant Price Metric That Low-Income Individuals Use for Food Decision Making

    PubMed Central

    2016-01-01

    Background: The price of food has long been considered one of the major factors that affects food choices. However, the price metric (e.g., the price of food per calorie or the price of food per gram) that individuals predominantly use when making food choices is unclear. Understanding which price metric is used is especially important for studying individuals with severe budget constraints because food price then becomes even more important in food choice. Objective: We assessed which price metric is used by low-income individuals in deciding what to eat. Methods: With the use of data from NHANES and the USDA Food and Nutrient Database for Dietary Studies, we created an agent-based model that simulated an environment representing the US population, wherein individuals were modeled as agents with a specific weight, age, and income. In our model, agents made dietary food choices while meeting their budget limits with the use of 1 of 3 different metrics for decision making: energy cost (price per calorie), unit price (price per gram), and serving price (price per serving). The food consumption patterns generated by our model were compared to 3 independent data sets. Results: The food choice behaviors observed in 2 of the data sets were found to be closest to the simulated dietary patterns generated by the price per calorie metric. The behaviors observed in the third data set were equidistant from the patterns generated by price per calorie and price per serving metrics, whereas results generated by the price per gram metric were further away. Conclusions: Our simulations suggest that dietary food choice based on price per calorie best matches actual consumption patterns and may therefore be the most salient price metric for low-income populations. PMID:27655757

  4. Simulated Models Suggest That Price per Calorie Is the Dominant Price Metric That Low-Income Individuals Use for Food Decision Making.

    PubMed

    Beheshti, Rahmatollah; Igusa, Takeru; Jones-Smith, Jessica

    2016-11-01

    The price of food has long been considered one of the major factors that affects food choices. However, the price metric (e.g., the price of food per calorie or the price of food per gram) that individuals predominantly use when making food choices is unclear. Understanding which price metric is used is especially important for studying individuals with severe budget constraints because food price then becomes even more important in food choice. We assessed which price metric is used by low-income individuals in deciding what to eat. With the use of data from NHANES and the USDA Food and Nutrient Database for Dietary Studies, we created an agent-based model that simulated an environment representing the US population, wherein individuals were modeled as agents with a specific weight, age, and income. In our model, agents made dietary food choices while meeting their budget limits with the use of 1 of 3 different metrics for decision making: energy cost (price per calorie), unit price (price per gram), and serving price (price per serving). The food consumption patterns generated by our model were compared to 3 independent data sets. The food choice behaviors observed in 2 of the data sets were found to be closest to the simulated dietary patterns generated by the price per calorie metric. The behaviors observed in the third data set were equidistant from the patterns generated by price per calorie and price per serving metrics, whereas results generated by the price per gram metric were further away. Our simulations suggest that dietary food choice based on price per calorie best matches actual consumption patterns and may therefore be the most salient price metric for low-income populations. © 2016 American Society for Nutrition.

  5. A comparative study of multi-focus image fusion validation metrics

    NASA Astrophysics Data System (ADS)

    Giansiracusa, Michael; Lutz, Adam; Messer, Neal; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Manno, Michael

    2016-05-01

    Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. One way that image fusion can be particularly useful is when fusing imagery data from multiple levels of focus. Different focus levels can create different visual qualities for different regions in the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit a user through automation, which requires the evaluation of the fused images to determine whether they have properly fused the focused regions of each image. Many no-reference metrics, such as information theory-based, image feature-based, and structural similarity-based metrics, have been developed to accomplish comparisons. However, it is hard to scale an accurate assessment of visual quality, which requires the validation of these metrics for different types of applications. To do this, human perception-based validation methods have been developed, particularly ones using receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal-to-noise ratio (PSNR).

  6. Photogrammetry using Apollo 16 orbital photography, part B

    NASA Technical Reports Server (NTRS)

    Wu, S. S. C.; Schafer, F. J.; Jordan, R.; Nakata, G. M.

    1972-01-01

    Discussion is made of the Apollo 15 and 16 metric and panoramic cameras which provided photographs for accurate topographic portrayal of the lunar surface using photogrammetric methods. Nine stereoscopic models of Apollo 16 metric photographs and three models of panoramic photographs were evaluated photogrammetrically in support of the Apollo 16 geologic investigations. Four of the models were used to collect profile data for crater morphology studies; three models were used to collect evaluation data for the frequency distributions of lunar slopes; one model was used to prepare a map of the Apollo 16 traverse area; and one model was used to determine elevations of the Cayley Formation. The remaining three models were used to test photogrammetric techniques using oblique metric and panoramic camera photographs. Two preliminary contour maps were compiled and a high-oblique metric photograph was rectified.

  7. What's with all this peer-review stuff anyway?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warner, J. S.

    2010-01-01

    The Journal of Physical Security was ostensibly started to deal with a perceived lack of peer-reviewed journals related to the field of physical security. In fact, concerns have been expressed that the field of physical security is scarcely a field at all. A typical, well-developed field might include the following: multiple peer-reviewed journals devoted to the subject, rigor and critical thinking, metrics, fundamental principles, models and theories, effective standards and guidelines, R and D conferences, professional societies, certifications, its own academic department (or at least numerous academic experts), widespread granting of degrees in the field from 4-year research universities, mechanisms for easily spotting 'snake oil' products and services, and the practice of professionals organizing to police themselves, provide quality control, and determine best practices. Physical Security seems to come up short in a number of these areas. Many of these attributes are difficult to quantify. This paper seeks to focus on one area that is quantifiable: the number of peer-reviewed journals dedicated to the field of Physical Security. In addition, I want to examine the number of overall periodicals (peer-reviewed and non-peer-reviewed) dedicated to physical security, as well as the number of papers published each year about physical security. These are potentially useful analyses because one can often infer how healthy or active a given field is by its publishing activity. For example, there are 2,754 periodicals dedicated to the (very healthy and active) field of physics. This paper concentrates on trade journals versus peer-reviewed journals. Trade journals typically focus on practice-related topics. A paper appropriate for a trade journal is usually based more on practical experience than rigorous studies or research. Models, theories, or rigorous experimental research results will usually not be included. A trade journal typically targets a specific market in an industry or trade. Such journals are often considered to be news magazines and may contain industry specific advertisements and/or job ads. A peer-reviewed journal, a.k.a. 'refereed journal', in contrast, contains peer-reviewed papers. A peer-reviewed paper is one that has been vetted by the peer review process. In this process, the paper is typically sent to independent experts for review and consideration. A peer-reviewed paper might cover experimental results, and/or a rigorous study, analyses, research efforts, theory, models, or one of many other scholarly endeavors.

  8. Quality assessment for color reproduction using a blind metric

    NASA Astrophysics Data System (ADS)

    Bringier, B.; Quintard, L.; Larabi, M.-C.

    2007-01-01

    This paper deals with image quality assessment. This field nowadays plays an important role in various image processing applications. A number of objective image quality metrics, which may or may not correlate with subjective quality, have been developed during the last decade. Two categories of metrics can be distinguished, the first with full-reference and the second with no-reference. A full-reference metric tries to evaluate the distortion introduced to an image with regard to the reference. A no-reference approach attempts to model the judgment of image quality in a blind way. Unfortunately, a universal image quality model is not on the horizon, and empirical models established through psychophysical experimentation are generally used. In this paper, we focus only on the second category to evaluate the quality of color reproduction, where a blind metric based on human visual system modeling is introduced. The objective results are validated by single-media and cross-media subjective tests.

  9. Towards Application of NASA Standard for Models and Simulations in Aeronautical Design Process

    NASA Astrophysics Data System (ADS)

    Vincent, Luc; Dunyach, Jean-Claude; Huet, Sandrine; Pelissier, Guillaume; Merlet, Joseph

    2012-08-01

    Even powerful computational techniques like simulation endure limitations in their validity domain. Consequently, using simulation models requires caution to avoid making biased design decisions for new aeronautical products on the basis of inadequate simulation results. Thus the fidelity, accuracy and validity of simulation models must be monitored in context throughout the design phases to build confidence in achieving the goals of modelling and simulation. In the CRESCENDO project, we adapt the Credibility Assessment Scale method from the NASA standard for models and simulations, originally developed for space programmes, to aircraft design in order to assess the quality of simulations. The proposed eight quality assurance metrics aggregate information to indicate the levels of confidence in results. They are displayed in a management dashboard and can secure design trade-off decisions at programme milestones. The application of this technique is illustrated in an aircraft design context with a specific thermal finite element analysis. This use case shows how to judge the fitness-for-purpose of simulation as a virtual testing means and then green-light the continuation of the Simulation Lifecycle Management (SLM) process.

  10. Knowledge, transparency, and refutability in groundwater models, an example from the Death Valley regional groundwater flow system

    USGS Publications Warehouse

    Hill, Mary C.; Faunt, Claudia C.; Belcher, Wayne; Sweetkind, Donald; Tiedeman, Claire; Kavetski, Dmitri

    2013-01-01

    This work demonstrates how available knowledge can be used to build more transparent and refutable computer models of groundwater systems. The Death Valley regional groundwater flow system, which surrounds a proposed site for a high-level nuclear waste repository of the United States of America, and the Nevada National Security Site (NNSS), where nuclear weapons were tested, is used to explore model adequacy, identify parameters important to (and informed by) observations, and identify existing old and potential new observations important to predictions. Model development is pursued using a set of fundamental questions addressed with carefully designed metrics. Critical methods include using a hydrogeologic model, managing model nonlinearity by designing models that are robust while maintaining realism, using error-based weighting to combine disparate types of data, and identifying important and unimportant parameters and observations and optimizing parameter values with computationally frugal schemes. The frugal schemes employed in this study require relatively few (tens to thousands of) parallelizable model runs. This is beneficial because models able to approximate the complex site geology defensibly tend to have high computational cost. The issue of model defensibility is particularly important given the contentious political issues involved.

  11. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    NASA Astrophysics Data System (ADS)

    Reyes, Luz M.; Moreno, Claudia; Madriz Aguilar, José Edgar; Bellini, Mauricio

    2012-10-01

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves during the early inflationary stage, in the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We found that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales, but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, which is a natural length scale of the model that indicates when gravity becomes repulsive in nature.

  12. Key metrics for HFIR HEU and LEU models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilas, Germina; Betzler, Benjamin R.; Chandler, David

    This report compares key metrics for two fuel design models of the High Flux Isotope Reactor (HFIR). The first model represents the highly enriched uranium (HEU) fuel currently in use at HFIR, and the second model considers a low-enriched uranium (LEU) interim design fuel. Except for the fuel region, the two models are consistent, and both include an experiment loading that is representative of HFIR's current operation. The considered key metrics are the neutron flux at the cold source moderator vessel, the mass of 252Cf produced in the flux trap target region as a function of cycle time, the fast neutron flux at locations of interest for material irradiation experiments, and the reactor cycle length. These key metrics are a small subset of the overall HFIR performance and safety metrics. They were defined as a means of capturing data essential for HFIR's primary missions, for use in optimization studies assessing the impact of HFIR's conversion from HEU fuel to different types of LEU fuel designs.

  13. Skill Assessment for Coupled Biological/Physical Models of Marine Systems.

    PubMed

    Stow, Craig A; Jolliff, Jason; McGillicuddy, Dennis J; Doney, Scott C; Allen, J Icarus; Friedrichs, Marjorie A M; Rose, Kenneth A; Wallhead, Philip

    2009-02-20

    Coupled biological/physical models of marine systems serve many purposes including the synthesis of information, hypothesis generation, and as a tool for numerical experimentation. However, marine system models are increasingly used for prediction to support high-stakes decision-making. In such applications it is imperative that a rigorous model skill assessment is conducted so that the model's capabilities are tested and understood. Herein, we review several metrics and approaches useful to evaluate model skill. The definition of skill and the determination of the skill level necessary for a given application is context specific and no single metric is likely to reveal all aspects of model skill. Thus, we recommend the use of several metrics, in concert, to provide a more thorough appraisal. The routine application and presentation of rigorous skill assessment metrics will also serve the broader interests of the modeling community, ultimately resulting in improved forecasting abilities as well as helping us recognize our limitations.

  14. A no-reference video quality assessment metric based on ROI

    NASA Astrophysics Data System (ADS)

    Jia, Lixiu; Zhong, Xuefei; Tu, Yan; Niu, Wenjuan

    2015-01-01

    A no-reference video quality assessment metric based on the region of interest (ROI) is proposed in this paper. In the metric, objective video quality is evaluated by integrating the quality of two compression artifacts, i.e. blurring distortion and blocking distortion. A Gaussian kernel function was used to extract human gaze density maps of the H.264-coded videos from the subjective eye tracking data. An objective bottom-up ROI extraction model was built based on the magnitude discrepancy of the discrete wavelet transform between two consecutive frames, a center-weighted color opponent model, a luminance contrast model, and a frequency saliency model based on spectral residual. Then only the objective saliency maps were used to compute the objective blurring and blocking quality. The results indicate that the objective ROI extraction metric has a higher area under the curve (AUC) value. Compared with conventional video quality assessment metrics, which measure all video frames, the metric proposed in this paper not only decreased the computational complexity but also improved the correlation between subjective mean opinion scores (MOS) and objective scores.

  15. A condition metric for Eucalyptus woodland derived from expert evaluations.

    PubMed

    Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D

    2018-02-01

    The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by assessing the model's quality scores of expert-evaluated test sites not used to train the model. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
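
    A minimal sketch of the kind of model described above: an ensemble of 30 bagged regression trees mapping site variables to an expert quality score, which can then be applied to new sites as a condition metric. The variables, data, and scoring scale below are synthetic stand-ins for the paper's 13 site variables.

      # Minimal sketch of an expert-trained condition metric as an ensemble of
      # 30 bagged regression trees (the BaggingRegressor default base estimator
      # is a decision tree). All data here are synthetic.
      import numpy as np
      from sklearn.ensemble import BaggingRegressor

      rng = np.random.default_rng(7)
      n_sites, n_vars = 300, 13                   # e.g., shrub cover, native forb richness, ...
      X = rng.random((n_sites, n_vars))
      quality = 60 * X[:, 0] + 30 * X[:, 1] + rng.normal(0, 5, n_sites)   # synthetic expert score

      metric_model = BaggingRegressor(n_estimators=30, random_state=0).fit(X, quality)

      new_site = rng.random((1, n_vars))          # measured site variables for a reserve
      print("predicted condition score:", round(float(metric_model.predict(new_site)[0]), 1))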

  16. Metric Evaluation Pipeline for 3d Modeling of Urban Scenes

    NASA Astrophysics Data System (ADS)

    Bosch, M.; Leichtman, A.; Chilcott, D.; Goldberg, H.; Brown, M.

    2017-05-01

    Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state-of-the-art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline developed as publicly available open source software to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software are made publicly available to enable further research and planned benchmarking activities.

  17. Multi-metric calibration of hydrological model to capture overall flow regimes

    NASA Astrophysics Data System (ADS)

    Zhang, Yongyong; Shao, Quanxi; Zhang, Shifeng; Zhai, Xiaoyan; She, Dunxian

    2016-08-01

    Flow regimes (e.g., magnitude, frequency, variation, duration, timing and rating of change) play a critical role in water supply and flood control, environmental processes, as well as biodiversity and life history patterns in the aquatic ecosystem. The traditional flow magnitude-oriented calibration of hydrological models was usually inadequate to capture all the characteristics of observed flow regimes. In this study, we simulated multiple flow regime metrics simultaneously by coupling a distributed hydrological model with an equally weighted multi-objective optimization algorithm. Two headwater watersheds in the arid Hexi Corridor were selected for the case study. Sixteen metrics were selected as optimization objectives, which could represent the major characteristics of flow regimes. Model performance was compared with that of the single objective calibration. Results showed that most metrics were better simulated by the multi-objective approach than those of the single objective calibration, especially the low and high flow magnitudes, frequency and variation, duration, maximum flow timing and rating. However, the model performance for middle flow magnitude was not significantly improved because this metric was usually well captured by single objective calibration. The timing of minimum flow was poorly predicted by both the multi-metric and single calibrations due to the uncertainties in model structure and input data. The sensitive parameter values of the hydrological model changed remarkably and the hydrological processes simulated by the multi-metric calibration became more reliable, because more flow characteristics were considered. The study is expected to provide more detailed flow information by hydrological simulation for integrated water resources management, and to improve the simulation performance of overall flow regimes.

  18. The power metric: a new statistically robust enrichment-type metric for virtual screening applications with early recovery capability.

    PubMed

    Lopes, Julio Cesar Dias; Dos Santos, Fábio Mendes; Martins-José, Andrelly; Augustyns, Koen; De Winter, Hans

    2017-01-01

    A new metric for the evaluation of model performance in the field of virtual screening and quantitative structure-activity relationship applications is described. This metric has been termed the power metric and is defined as the fraction of the true positive rate divided by the sum of the true positive and false positive rates, for a given cutoff threshold. The performance of this metric is compared with alternative metrics such as the enrichment factor, the relative enrichment factor, the receiver operating curve enrichment factor, the correct classification rate, Matthews correlation coefficient and Cohen's kappa coefficient. The performance of this new metric is found to be quite robust with respect to variations in the applied cutoff threshold and ratio of the number of active compounds to the total number of compounds, and at the same time being sensitive to variations in model quality. It possesses the correct characteristics for its application in early-recognition virtual screening problems.
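
    A minimal sketch of the metric as defined in this summary, the true positive rate divided by the sum of the true positive and false positive rates at a chosen cutoff; the function name and example data are illustrative assumptions.

    ```python
    # Power metric = TPR / (TPR + FPR) at a given cutoff (per the definition above).
    def power_metric(scores, labels, cutoff):
        """scores: predicted activity scores; labels: 1 = active, 0 = inactive."""
        pred = [s >= cutoff for s in scores]
        tp = sum(p and l == 1 for p, l in zip(pred, labels))
        fp = sum(p and l == 0 for p, l in zip(pred, labels))
        n_act = sum(l == 1 for l in labels)
        n_inact = sum(l == 0 for l in labels)
        tpr = tp / n_act if n_act else 0.0
        fpr = fp / n_inact if n_inact else 0.0
        return tpr / (tpr + fpr) if (tpr + fpr) > 0 else 0.0

    # A perfect ranking of two actives above two inactives scores 1.0 at cutoff 0.5.
    print(power_metric([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0], 0.5))
    ```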

  19. A Geometric Approach to Modeling Microstructurally Small Fatigue Crack Formation. 2; Simulation and Prediction of Crack Nucleation in AA 7075-T651

    NASA Technical Reports Server (NTRS)

    Hochhalter, Jake D.; Littlewood, David J.; Christ, Robert J., Jr.; Veilleux, M. G.; Bozek, J. E.; Ingraffea, A. R.; Maniatty, Antionette M.

    2010-01-01

    The objective of this paper is to develop further a framework for computationally modeling microstructurally small fatigue crack growth in AA 7075-T651 [1]. The focus is on the nucleation event, when a crack extends from within a second-phase particle into a surrounding grain, since this has been observed to be an initiating mechanism for fatigue crack growth in this alloy. It is hypothesized that nucleation can be predicted by computing a non-local nucleation metric near the crack front. The hypothesis is tested by employing a combination of experimentation and finite element modeling in which various slip-based and energy-based nucleation metrics are tested for validity, where each metric is derived from a continuum crystal plasticity formulation. To investigate each metric, a non-local procedure is developed for the calculation of nucleation metrics in the neighborhood of a crack front. Initially, an idealized baseline model consisting of a single grain containing a semi-ellipsoidal surface particle is studied to investigate the dependence of each nucleation metric on lattice orientation, number of load cycles, and non-local regularization method. This is followed by a comparison of experimental observations and computational results for microstructural models constructed by replicating the observed microstructural geometry near second-phase particles in fatigue specimens. It is found that orientation strongly influences the direction of slip localization and, as a result, influences the nucleation mechanism. Also, the baseline models, replication models, and past experimental observation consistently suggest that a set of particular grain orientations is most likely to nucleate fatigue cracks. It is found that a continuum crystal plasticity model and a non-local nucleation metric can be used to predict the nucleation event in AA 7075-T651. However, nucleation metric threshold values that correspond to various nucleation governing mechanisms must be calibrated.

  20. Models and metrics for software management and engineering

    NASA Technical Reports Server (NTRS)

    Basili, V. R.

    1988-01-01

    This paper attempts to characterize and present a state of the art view of several quantitative models and metrics of the software life cycle. These models and metrics can be used to aid in managing and engineering software projects. They deal with various aspects of the software process and product, including resources allocation and estimation, changes and errors, size, complexity and reliability. Some indication is given of the extent to which the various models have been used and the success they have achieved.

  1. 2008 GEM Modeling Challenge: Metrics Study of the Dst Index in Physics-Based Magnetosphere and Ring Current Models and in Statistical and Analytic Specifications

    NASA Technical Reports Server (NTRS)

    Rastaetter, L.; Kuznetsova, M.; Hesse, M.; Pulkkinen, A.; Glocer, A.; Yu, Y.; Meng, X.; Raeder, J.; Wiltberger, M.; Welling, D.; hide

    2011-01-01

    In this paper the metrics-based results of the Dst part of the 2008-2009 GEM Metrics Challenge are reported. The Metrics Challenge asked modelers to submit results for 4 geomagnetic storm events and 5 different types of observations that can be modeled by statistical or climatological or physics-based (e.g. MHD) models of the magnetosphere-ionosphere system. We present the results of over 25 model settings that were run at the Community Coordinated Modeling Center (CCMC) and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations we use comparisons of one-hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of one-minute model data with the one-minute Dst index calculated by the United States Geological Survey (USGS).

  2. Trust in Anonymity Networks

    NASA Astrophysics Data System (ADS)

    Sassone, Vladimiro; Hamadou, Sardaouna; Yang, Mu

    Anonymity is a security property of paramount importance, as we move steadily towards a wired, online community. Its import touches upon subjects as different as eGovernance, eBusiness and eLeisure, as well as personal freedom of speech in authoritarian societies. Trust metrics are used in anonymity networks to support and enhance reliability in the absence of verifiable identities, and a variety of security attacks currently focus on degrading a user's trustworthiness in the eyes of the other users. In this paper, we analyse the privacy guarantees of the Crowds anonymity protocol, with and without onion forwarding, for standard and adaptive attacks against the trust level of honest users.

  3. Validation metrics for turbulent plasma transport

    DOE PAGES

    Holland, C.

    2016-06-22

    Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations is reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. Furthermore, the utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak, as part of a multi-year transport model validation activity.

  5. Community Preparedness: Alternative Approaches to Citizen Engagement in Homeland Security

    DTIC Science & Technology

    2014-06-01

    and largely ignores the social aspects that influence an individual’s beliefs, attitudes, and behaviors. Self-efficacy is defined by Albert Bandura in “Self-efficacy: Toward a Unifying Theory of...”

  6. Jordan Water Project: an interdisciplinary evaluation of freshwater vulnerability and security

    NASA Astrophysics Data System (ADS)

    Gorelick, S.; Yoon, J.; Rajsekhar, D.; Muller, M. F.; Zhang, H.; Gawel, E.; Klauer, B.; Klassert, C. J. A.; Sigel, K.; Thilmant, A.; Avisse, N.; Lachaut, T.; Harou, J. J.; Knox, S.; Selby, P. D.; Mustafa, D.; Talozi, S.; Haddad, Y.; Shamekh, M.

    2016-12-01

    The Jordan Water Project, part of the Belmont Forum projects, is an interdisciplinary, international research effort focused on evaluation of freshwater security in Jordan, one of the most water-vulnerable countries in the world. The team covers hydrology, water resources systems analysis, economics, policy evaluation, geography, risk and remote sensing analyses, and model platform development. The entire project team communally engaged in construction of an integrated hydroeconomic model for water supply policy evaluation. To represent water demand and allocation behavior at multiple levels of decision making, the model integrates biophysical modules that simulate natural and engineered hydrologic phenomena with human behavioral modules. Hydrologic modules include spatially-distributed groundwater and surface-water models for the major aquifers and watersheds throughout Jordan. For the human modules, we adopt a multi-agent modeling approach to represent decision-making processes. The integrated model was developed in Pynsim, a new open-source, object-oriented platform in Python for network-based water resource systems. We continue to explore the impacts of future scenarios and interventions. This project had tremendous encouragement and data support from Jordan's Ministry of Water and Irrigation. Modeling technology is being transferred through a companion NSF/USAID PEER project awarded to Jordan University of Science and Technology. Individual teams have also conducted a range of studies aimed at evaluating Jordanian and transboundary surface water and groundwater systems. Surveys, interviews, and econometric analyses enabled us to better understand the behavior of urban households, farmers, private water resellers, water use pattern of the commercial sector and irrigation water user associations. We analyzed nationwide spatial and temporal statistical trends in rainfall, developed urban and national comparative metrics to quantify water supply vulnerability, improved remote sensing methods to estimate crop-water use, and evaluated the impacts of climate change on future drought severity.

  7. Person Re-Identification via Distance Metric Learning With Latent Variables.

    PubMed

    Sun, Chong; Wang, Dong; Lu, Huchuan

    2017-01-01

    In this paper, we propose an effective person re-identification method with latent variables, which represents a pedestrian as the mixture of a holistic model and a number of flexible models. Three types of latent variables are introduced to model uncertain factors in the re-identification problem, including vertical misalignments, horizontal misalignments and leg posture variations. The distance between two pedestrians can be determined by minimizing a given distance function with respect to latent variables, and then be used to conduct the re-identification task. In addition, we develop a latent metric learning method for learning the effective metric matrix, which can be solved in an iterative manner: once latent information is specified, the metric matrix can be obtained based on some typical metric learning methods; with the computed metric matrix, the latent variables can be determined by searching the state space exhaustively. Finally, extensive experiments are conducted on seven databases to evaluate the proposed method. The experimental results demonstrate that our method achieves better performance than other competing algorithms.

  8. Using system-of-systems simulation modeling and analysis to measure energy KPP impacts for brigade combat team missions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawton, Craig R.; Welch, Kimberly M.; Kerper, Jessica

    2010-06-01

    The Department of Defense's (DoD) Energy Posture identified dependence of the US military on fossil fuel energy as a key issue facing the military. Inefficient energy consumption leads to increased costs and affects operational performance and warfighter protection through large and vulnerable logistics support infrastructures. The military's use of energy is a critical national security problem. DoD's proposed metrics, the Fully Burdened Cost of Fuel and the Energy Efficiency Key Performance Parameter (FBCF and Energy KPP), are a positive step toward forcing energy use accountability onto military programs. The ability to measure the impacts of sustainment is required to fully assess the Energy KPP. Sandia's work with the Army demonstrates the capability to measure performance that includes energy constraints.

  9. A Comprehensive Validation Methodology for Sparse Experimental Data

    NASA Technical Reports Server (NTRS)

    Norman, Ryan B.; Blattnig, Steve R.

    2010-01-01

    A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.

  10. A Locally Weighted Fixation Density-Based Metric for Assessing the Quality of Visual Saliency Predictions

    NASA Astrophysics Data System (ADS)

    Gide, Milind S.; Karam, Lina J.

    2016-08-01

    With the increased focus on visual attention (VA) in the last decade, a large number of computational visual saliency methods have been developed over the past few years. These models are traditionally evaluated by using performance evaluation metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Though a considerable number of such metrics have been proposed in the literature, there are notable problems in them. In this work, we discuss shortcomings in existing metrics through illustrative examples and propose a new metric that uses local weights based on fixation density which overcomes these flaws. To compare the performance of our proposed metric at assessing the quality of saliency prediction with other existing metrics, we construct a ground-truth subjective database in which saliency maps obtained from 17 different VA models are evaluated by 16 human observers on a 5-point categorical scale in terms of their visual resemblance with corresponding ground-truth fixation density maps obtained from eye-tracking data. The metrics are evaluated by correlating metric scores with the human subjective ratings. The correlation results show that the proposed evaluation metric outperforms all other popular existing metrics. Additionally, the constructed database and corresponding subjective ratings provide an insight into which of the existing metrics and future metrics are better at estimating the quality of saliency prediction and can be used as a benchmark.

  11. Secure Wake-Up Scheme for WBANs

    NASA Astrophysics Data System (ADS)

    Liu, Jing-Wei; Ameen, Moshaddique Al; Kwak, Kyung-Sup

    Network life time and hence device life time is one of the fundamental metrics in wireless body area networks (WBAN). To prolong it, especially those of implanted sensors, each node must conserve its energy as much as possible. While a variety of wake-up/sleep mechanisms have been proposed, the wake-up radio potentially serves as a vehicle to introduce vulnerabilities and attacks to WBAN, eventually resulting in its malfunctions. In this paper, we propose a novel secure wake-up scheme, in which a wake-up authentication code (WAC) is employed to ensure that a BAN Node (BN) is woken up by the correct BAN Network Controller (BNC) rather than unintended users or malicious attackers. The scheme is thus particularly implemented by a two-radio architecture. We show that our scheme provides higher security while consuming less energy than the existing schemes.

  12. Updating stand-level forest inventories using airborne laser scanning and Landsat time series data

    NASA Astrophysics Data System (ADS)

    Bolton, Douglas K.; White, Joanne C.; Wulder, Michael A.; Coops, Nicholas C.; Hermosilla, Txomin; Yuan, Xiaoping

    2018-04-01

    Vertical forest structure can be mapped over large areas by combining samples of airborne laser scanning (ALS) data with wall-to-wall spatial data, such as Landsat imagery. Here, we use samples of ALS data and Landsat time-series metrics to produce estimates of top height, basal area, and net stem volume for two timber supply areas near Kamloops, British Columbia, Canada, using an imputation approach. Both single-year and time series metrics were calculated from annual, gap-free Landsat reflectance composites representing 1984-2014. Metrics included long-term means of vegetation indices, as well as measures of the variance and slope of the indices through time. Terrain metrics, generated from a 30 m digital elevation model, were also included as predictors. We found that imputation models improved with the inclusion of Landsat time series metrics when compared to single-year Landsat metrics (relative RMSE decreased from 22.8% to 16.5% for top height, from 32.1% to 23.3% for basal area, and from 45.6% to 34.1% for net stem volume). Landsat metrics that characterized 30-years of stand history resulted in more accurate models (for all three structural attributes) than Landsat metrics that characterized only the most recent 10 or 20 years of stand history. To test model transferability, we compared imputed attributes against ALS-based estimates in nearby forest blocks (>150,000 ha) that were not included in model training or testing. Landsat-imputed attributes correlated strongly to ALS-based estimates in these blocks (R2 = 0.62 and relative RMSE = 13.1% for top height, R2 = 0.75 and relative RMSE = 17.8% for basal area, and R2 = 0.67 and relative RMSE = 26.5% for net stem volume), indicating model transferability. These findings suggest that in areas containing spatially-limited ALS data acquisitions, imputation models, and Landsat time series and terrain metrics can be effectively used to produce wall-to-wall estimates of key inventory attributes, providing an opportunity to update estimates of forest attributes in areas where inventory information is either out of date or non-existent.
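
    The per-pixel time-series predictors described above (long-term mean, variance, and slope of a vegetation index across annual composites) can be sketched as follows; the array shapes, the stand-in index stack, and the slope computation are illustrative assumptions rather than the authors' processing chain.

    ```python
    # Illustrative per-pixel Landsat time-series metrics: mean, variance, and
    # the slope of a least-squares linear trend of an index through time.
    import numpy as np

    years = np.arange(1984, 2015)                  # 31 annual composites
    ndvi = np.random.rand(len(years), 100, 100)    # stand-in index stack (time, y, x)

    ts_mean = ndvi.mean(axis=0)
    ts_var = ndvi.var(axis=0)

    x = years - years.mean()                       # centered time axis
    ts_slope = (x[:, None, None] * (ndvi - ts_mean)).sum(axis=0) / (x ** 2).sum()

    predictors = np.stack([ts_mean, ts_var, ts_slope])  # inputs to the imputation model
    print(predictors.shape)                             # (3, 100, 100)
    ```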

  13. App Usage Factor: A Simple Metric to Compare the Population Impact of Mobile Medical Apps.

    PubMed

    Lewis, Thomas Lorchan; Wyatt, Jeremy C

    2015-08-19

    One factor when assessing the quality of mobile apps is quantifying the impact of a given app on a population. There is currently no metric which can be used to compare the population impact of a mobile app across different health care disciplines. The objective of this study is to create a novel metric to characterize the impact of a mobile app on a population. We developed the simple novel metric, app usage factor (AUF), defined as the logarithm of the product of the number of active users of a mobile app with the median number of daily uses of the app. The behavior of this metric was modeled using simulations written in Python, a general-purpose programming language. Three simulations were conducted to explore the temporal and numerical stability of our metric, together with a simulated app ecosystem model based on a dataset of 20,000 apps. Simulations confirmed the metric was stable between predicted usage limits and remained stable at extremes of these limits. Analysis of a simulated dataset of 20,000 apps calculated an average value for the app usage factor of 4.90 (SD 0.78). A temporal simulation showed that the metric remained stable over time and suitable limits for its use were identified. A key component when assessing app risk and potential harm is understanding the potential population impact of each mobile app. Our metric has many potential uses for a wide range of stakeholders in the app ecosystem, including users, regulators, developers, and health care professionals. Furthermore, this metric forms part of the overall estimate of risk and potential for harm or benefit posed by a mobile medical app. We identify the merits and limitations of this metric, as well as potential avenues for future validation and research.
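
    A minimal sketch of the app usage factor as defined above; the base-10 logarithm and the example numbers are assumptions made here for illustration.

    ```python
    # AUF = log(active users x median daily uses); base 10 assumed in this sketch.
    import math
    import statistics

    def app_usage_factor(active_users, daily_use_counts):
        """daily_use_counts: sampled per-user daily use counts for the app."""
        return math.log10(active_users * statistics.median(daily_use_counts))

    # Hypothetical app: 50,000 active users, median of 3 uses per day -> ~5.18
    print(round(app_usage_factor(50_000, [1, 2, 3, 4, 5]), 2))
    ```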

  14. Information Geometry for Landmark Shape Analysis: Unifying Shape Representation and Deformation

    PubMed Central

    Peter, Adrian M.; Rangarajan, Anand

    2010-01-01

    Shape matching plays a prominent role in the comparison of similar structures. We present a unifying framework for shape matching that uses mixture models to couple both the shape representation and deformation. The theoretical foundation is drawn from information geometry wherein information matrices are used to establish intrinsic distances between parametric densities. When a parameterized probability density function is used to represent a landmark-based shape, the modes of deformation are automatically established through the information matrix of the density. We first show that given two shapes parameterized by Gaussian mixture models (GMMs), the well-known Fisher information matrix of the mixture model is also a Riemannian metric (actually, the Fisher-Rao Riemannian metric) and can therefore be used for computing shape geodesics. The Fisher-Rao metric has the advantage of being an intrinsic metric and invariant to reparameterization. The geodesic—computed using this metric—establishes an intrinsic deformation between the shapes, thus unifying both shape representation and deformation. A fundamental drawback of the Fisher-Rao metric is that it is not available in closed form for the GMM. Consequently, shape comparisons are computationally very expensive. To address this, we develop a new Riemannian metric based on generalized ϕ-entropy measures. In sharp contrast to the Fisher-Rao metric, the new metric is available in closed form. Geodesic computations using the new metric are considerably more efficient. We validate the performance and discriminative capabilities of these new information geometry-based metrics by pairwise matching of corpus callosum shapes. We also study the deformations of fish shapes that have various topological properties. A comprehensive comparative analysis is also provided using other landmark-based distances, including the Hausdorff distance, the Procrustes metric, landmark-based diffeomorphisms, and the bending energies of the thin-plate (TPS) and Wendland splines. PMID:19110497

  15. Detection and quantification of flow consistency in business process models.

    PubMed

    Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel; Soffer, Pnina; Weber, Barbara

    2018-01-01

    Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics addressing these challenges, each following a different view of flow consistency. We then report the results of an empirical evaluation, which indicates which metric is more effective in predicting the human perception of this feature. Moreover, two other automatic evaluations describing the performance and the computational capabilities of our metrics are reported as well.

  16. Implementation and comparison of a suite of heat stress metrics within the Community Land Model version 4.5

    NASA Astrophysics Data System (ADS)

    Buzan, J. R.; Oleson, K.; Huber, M.

    2014-08-01

    We implement and analyze 13 different metrics (4 moist thermodynamic quantities and 9 heat stress metrics) in the Community Land Model (CLM4.5), the land surface component of the Community Earth System Model (CESM). We call these routines the HumanIndexMod. These heat stress metrics embody three philosophical approaches: comfort, physiology, and empirically based algorithms. The metrics are directly connected to CLM4.5 BareGroundFluxesMod, CanopyFluxesMod, SlakeFluxesMod, and UrbanMod modules in order to differentiate between the distinct regimes even within one gridcell. This allows CLM4.5 to calculate the instantaneous heat stress at every model time step, for every land surface type, capturing all aspects of non-linearity in moisture-temperature covariance. Secondary modules for initialization and archiving are modified to generate the metrics as standard output. All of the metrics implemented depend on the covariance of near surface atmospheric variables: temperature, pressure, and humidity. Accurate wet bulb temperatures are critical for quantifying heat stress (used by 5 of the 9 heat stress metrics). Unfortunately, moist thermodynamic calculations for calculating accurate wet bulb temperatures are not in CLM4.5. To remedy this, we incorporated comprehensive water vapor calculations into CLM4.5. The three advantages of adding these metrics to CLM4.5 are (1) improved thermodynamic calculations within climate models, (2) quantifying human heat stress, and (3) that these metrics may be applied to other animals as well as industrial applications. Additionally, an offline version of the HumanIndexMod is available for applications with weather and climate datasets. Examples of such applications are the high temporal resolution CMIP5 archived data, weather and research forecasting models, CLM4.5 flux tower simulations (or other land surface model validation studies), and local weather station data analysis. To demonstrate the capabilities of the HumanIndexMod, we analyze the top 1% of heat stress events from 1901-2010 at a 4 × daily resolution from a global CLM4.5 simulation. We cross compare these events to the input moisture and temperature conditions, and with each metric. Our results show that heat stress may be divided into two regimes: arid and non-arid. The highest heat stress values are in areas with strong convection (±30° latitude). Equatorial regions have low variability in heat stress values (±20° latitude). Arid regions have large variability in extreme heat stress as compared to the low latitudes.
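
    Since the discussion above stresses the role of accurate wet-bulb temperatures, the sketch below shows one common empirical wet-bulb approximation (Stull, 2011), valid near standard surface pressure; it is offered only as a stand-in illustration and is not the moist thermodynamic calculation implemented in the HumanIndexMod.

    ```python
    # Stull (2011) empirical wet-bulb approximation (illustrative only; the
    # HumanIndexMod uses a more comprehensive moist thermodynamic calculation).
    import math

    def wet_bulb_stull(t_c, rh_pct):
        """Approximate wet-bulb temperature (deg C) from air temperature (deg C)
        and relative humidity (%)."""
        return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
                + math.atan(t_c + rh_pct)
                - math.atan(rh_pct - 1.676331)
                + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
                - 4.686035)

    print(round(wet_bulb_stull(35.0, 60.0), 1))  # hot, humid heat-stress conditions
    ```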

  17. Hydrologic Model Development and Calibration: Contrasting a Single- and Multi-Objective Approach for Comparing Model Performance

    NASA Astrophysics Data System (ADS)

    Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.

    2009-05-01

    Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). The MESH model, currently under development by Environment Canada, is a coupled land-surface and hydrologic model. Results will demonstrate the conclusions a modeller might make regarding the value of additional watershed spatial discretization under both an aggregated (single-objective) and multi-objective model comparison framework.
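
    For concreteness, a sketch of the kind of aggregation described above: Nash-Sutcliffe (NS) efficiencies computed separately for high and low flows and then combined with a weighted sum into a single calibration objective. The data, the median-based flow split, and the equal weights are illustrative assumptions, not the study's configuration.

    ```python
    # Nash-Sutcliffe efficiency and a weighted-sum aggregation of two metrics.
    import numpy as np

    def nash_sutcliffe(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    obs = np.array([5.0, 8.0, 20.0, 60.0, 15.0, 6.0])   # observed streamflow
    sim = np.array([6.0, 7.5, 18.0, 55.0, 17.0, 5.0])   # simulated streamflow

    high = obs >= np.median(obs)                        # crude high/low flow split
    ns_high = nash_sutcliffe(obs[high], sim[high])
    ns_low = nash_sutcliffe(obs[~high], sim[~high])

    # Single-objective target: one point on the tradeoff between the two metrics.
    aggregated = 0.5 * ns_high + 0.5 * ns_low
    print(ns_high, ns_low, aggregated)
    ```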

  18. A neural net-based approach to software metrics

    NASA Technical Reports Server (NTRS)

    Boetticher, G.; Srinivas, Kankanahalli; Eichmann, David A.

    1992-01-01

    Software metrics provide an effective method for characterizing software. Metrics have traditionally been composed through the definition of an equation. This approach is limited by the requirement that all the interrelationships among the parameters be fully understood. This paper explores an alternative, neural network approach to modeling metrics. Experiments performed on two widely accepted metrics, McCabe and Halstead, indicate that the approach is sound, thus serving as the groundwork for further exploration into the analysis and design of software metrics.

  19. Estimation of the fraction of absorbed photosynthetically active radiation (fPAR) in maize canopies using LiDAR data and hyperspectral imagery.

    PubMed

    Qin, Haiming; Wang, Cheng; Zhao, Kaiguang; Xi, Xiaohuan

    2018-01-01

    Accurate estimation of the fraction of absorbed photosynthetically active radiation (fPAR) for maize canopies is important for maize growth monitoring and yield estimation. The goal of this study is to explore the potential of using airborne LiDAR and hyperspectral data to better estimate maize fPAR. This study focuses on estimating maize fPAR from (1) height and coverage metrics derived from airborne LiDAR point cloud data; (2) vegetation indices derived from hyperspectral imagery; and (3) a combination of these metrics. Pearson correlation analyses were conducted to evaluate the relationships among LiDAR metrics, hyperspectral metrics, and field-measured fPAR values. Then, multiple linear regression (MLR) models were developed using these metrics. Results showed that (1) LiDAR height and coverage metrics provided good explanatory power (i.e., R2 = 0.81); (2) hyperspectral vegetation indices provided moderate interpretability (i.e., R2 = 0.50); and (3) the combination of LiDAR metrics and hyperspectral metrics improved the LiDAR model (i.e., R2 = 0.88). These results indicate that the LiDAR model seems to offer a reliable method for estimating maize fPAR at a high spatial resolution and it can be used for farmland management. Combining LiDAR and hyperspectral metrics led to better performance of maize fPAR estimation than LiDAR or hyperspectral metrics alone, which means that maize fPAR retrieval can benefit from the complementary nature of LiDAR-detected canopy structure characteristics and hyperspectral-captured vegetation spectral information.
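
    A minimal sketch of the multiple-linear-regression step described above, combining LiDAR and hyperspectral predictors of fPAR; the predictor names, synthetic data, and scikit-learn usage are illustrative assumptions.

    ```python
    # Illustrative MLR model: fPAR regressed on combined LiDAR and spectral metrics.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    n = 60
    lidar = rng.random((n, 2))    # e.g., canopy height and cover metrics
    hyper = rng.random((n, 2))    # e.g., two vegetation indices
    fpar = (0.5 * lidar[:, 0] + 0.3 * lidar[:, 1] + 0.1 * hyper[:, 0]
            + 0.05 * rng.standard_normal(n))

    X = np.hstack([lidar, hyper])             # combined predictor set
    model = LinearRegression().fit(X, fpar)
    print(round(r2_score(fpar, model.predict(X)), 2))
    ```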

  20. The field-space metric in spiral inflation and related models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erlich, Joshua; Olsen, Jackson; Wang, Zhen

    2016-09-22

    Multi-field inflation models include a variety of scenarios for how inflation proceeds and ends. Models with the same potential but different kinetic terms are common in the literature. We compare spiral inflation and Dante’s inferno-type models, which differ only in their field-space metric. We justify a single-field effective description in these models and relate the single-field description to a mass-matrix formalism. We note the effects of the nontrivial field-space metric on inflationary observables, and consequently on the viability of these models. We also note a duality between spiral inflation and Dante’s inferno models with different potentials.

  1. Process-oriented Observational Metrics for CMIP6 Climate Model Assessments

    NASA Astrophysics Data System (ADS)

    Jiang, J. H.; Su, H.

    2016-12-01

    Observational metrics based on satellite observations have been developed and effectively applied during post-CMIP5 model evaluation and improvement projects. As new physics and parameterizations continue to be included in models for the upcoming CMIP6, it is important to continue objective comparisons between observations and model results. This talk will summarize the process-oriented observational metrics and methodologies for constraining climate models with A-Train satellite observations and support CMIP6 model assessments. We target parameters and processes related to atmospheric clouds and water vapor, which are critically important for Earth's radiative budget, climate feedbacks, and water and energy cycles, and thus reduce uncertainties in climate models.

  2. Nuclear Security in the 21st Century

    NASA Astrophysics Data System (ADS)

    Archer, Daniel E.

    2006-10-01

    Nuclear security has been a priority for the United States, starting in the 1940s with the secret cities of the Manhattan Project. In the 1970s, the United States placed radiation monitoring equipment at nuclear facilities to detect nuclear material diversion. Following the breakup of the Soviet Union, cooperative Russian/U.S. programs were launched in Russia to secure the estimated 600+ metric tons of fissionable materials against diversion (Materials Protection, Control, and Accountability -- MPC&A). Furthermore, separate programs were initiated to detect nuclear materials at the country's borders in the event that these materials had been stolen (Second Line of Defense - SLD). In the 2000s, new programs have been put in place in the United States for radiation detection, and research is being funded for more advanced systems. This talk will briefly touch on the history of nuclear security and then focus on some recent research efforts in radiation detection. Specifically, a new breed of radiation monitors will be examined along with the concept of sensor networks.

  3. Investigation of Two Models to Set and Evaluate Quality Targets for HbA1c: Biological Variation and Sigma-metrics

    PubMed Central

    Weykamp, Cas; John, Garry; Gillery, Philippe; English, Emma; Ji, Linong; Lenters-Westra, Erna; Little, Randie R.; Roglic, Gojka; Sacks, David B.; Takei, Izumi

    2016-01-01

    Background A major objective of the IFCC Task Force on implementation of HbA1c standardization is to develop a model to define quality targets for HbA1c. Methods Two generic models, the Biological Variation and Sigma-metrics model, are investigated. Variables in the models were selected for HbA1c and data of EQA/PT programs were used to evaluate the suitability of the models to set and evaluate quality targets within and between laboratories. Results In the biological variation model 48% of individual laboratories and none of the 26 instrument groups met the minimum performance criterion. In the Sigma-metrics model, with a total allowable error (TAE) set at 5 mmol/mol (0.46% NGSP) 77% of the individual laboratories and 12 of 26 instrument groups met the 2 sigma criterion. Conclusion The Biological Variation and Sigma-metrics model were demonstrated to be suitable for setting and evaluating quality targets within and between laboratories. The Sigma-metrics model is more flexible as both the TAE and the risk of failure can be adjusted to requirements related to e.g. use for diagnosis/monitoring or requirements of (inter)national authorities. With the aim of reaching international consensus on advice regarding quality targets for HbA1c, the Task Force suggests the Sigma-metrics model as the model of choice with default values of 5 mmol/mol (0.46%) for TAE, and risk levels of 2 and 4 sigma for routine laboratories and laboratories performing clinical trials, respectively. These goals should serve as a starting point for discussion with international stakeholders in the field of diabetes. PMID:25737535
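
    The sketch below assumes the standard laboratory sigma-metric formula, sigma = (allowable total error - |bias|) / imprecision, which is conventional in this field but not spelled out in the summary above; the bias and CV values are hypothetical.

    ```python
    # Assumes the conventional sigma-metric formula: (TAE - |bias|) / CV,
    # with all quantities in mmol/mol HbA1c. Bias and CV values are hypothetical.
    def sigma_metric(tae, bias, cv):
        return (tae - abs(bias)) / cv

    tae = 5.0    # allowable total error used in the paper (5 mmol/mol)
    bias = 1.0   # hypothetical laboratory bias
    cv = 1.5     # hypothetical within-laboratory imprecision

    sigma = sigma_metric(tae, bias, cv)
    print(round(sigma, 2), "meets 2-sigma criterion:", sigma >= 2)
    ```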

  4. Localized Multi-Model Extremes Metrics for the Fourth National Climate Assessment

    NASA Astrophysics Data System (ADS)

    Thompson, T. R.; Kunkel, K.; Stevens, L. E.; Easterling, D. R.; Biard, J.; Sun, L.

    2017-12-01

    We have performed localized analysis of scenario-based datasets for the Fourth National Climate Assessment (NCA4). These datasets include CMIP5-based Localized Constructed Analogs (LOCA) downscaled simulations at daily temporal resolution and 1/16th-degree spatial resolution. Over 45 temperature and precipitation extremes metrics have been processed using LOCA data, including threshold, percentile, and degree-days calculations. The localized analysis calculates trends in the temperature and precipitation extremes metrics for relatively small regions such as counties, metropolitan areas, climate zones, administrative areas, or economic zones. For NCA4, we are currently addressing metropolitan areas as defined by U.S. Census Bureau Metropolitan Statistical Areas. Such localized analysis provides essential information for adaptation planning at scales relevant to local planning agencies and businesses. Nearly 30 such regions have been analyzed to date. Each locale is defined by a closed polygon that is used to extract LOCA-based extremes metrics specific to the area. For each metric, single-model data at each LOCA grid location are first averaged over several 30-year historical and future periods. Then, for each metric, the spatial average across the region is calculated using model weights based on both model independence and reproducibility of current climate conditions. The range of single-model results is also captured on the same localized basis, and then combined with the weighted ensemble average for each region and each metric. For example, Boston-area cooling degree days and maximum daily temperature are shown below for RCP8.5 (red) and RCP4.5 (blue) scenarios. We also discuss inter-regional comparison of these metrics, as well as their relevance to risk analysis for adaptation planning.
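
    As a small worked example of one of the degree-day calculations mentioned above, the sketch below accumulates cooling degree days from daily mean temperatures; the 65 deg F base temperature is a common convention assumed here, not a value taken from the abstract.

    ```python
    # Cooling degree days: sum of daily-mean exceedance above a base temperature.
    def cooling_degree_days(daily_mean_temps_f, base_f=65.0):
        return sum(max(0.0, t - base_f) for t in daily_mean_temps_f)

    july_week = [72.0, 80.5, 68.0, 90.0, 77.0, 64.0, 85.5]  # hypothetical values
    print(cooling_degree_days(july_week))                   # -> 83.0 degree days
    ```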

  5. Coverage Metrics for Model Checking

    NASA Technical Reports Server (NTRS)

    Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)

    2001-01-01

    When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.

  6. Counterterm counterexamples

    NASA Astrophysics Data System (ADS)

    Pope, C. N.; Sohnius, M. F.; Stelle, K. S.

    We show that, contrary to previous conjectures, there exist acceptable counterterms for Ricci-flat N = 1 and N = 2 super-symmetric σ-models. In the N = 1 case we present infinite sequences of counterterms, starting from the 7-loop order, that do not vanish for general riemannian Ricci-flat metrics but do vanish when the metric is also Kähler. We then investigate the counterterms for theories with Ricci-flat Kähler metrics (i.e. N = 2 models). Acceptable counterterms must vanish for hyper-Kähler metrics (the N = 4 case), and must respect the principle of universality; i.e. that counterterms to the metric can be expressed without the use of complex structures or other special tensors, which do not exist for general riemannian spaces. We show that a recently proposed 4-loop counterterm for the N = 2 models does indeed satisfy these two conditions, despite the apparent stringency of the universality principle. Hence the finiteness of Ricci-flat N = 1 and N = 2 supersymmetric σ-models seems unlikely to persist beyond the 3-loop order.

  7. Optimization of Regression Models of Experimental Data Using Confirmation Points

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2010-01-01

    A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less than or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance is used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that was already tested at regression model independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
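
    The search metric described above can be sketched as follows for an ordinary least-squares model: fit on the data points, form PRESS (leave-one-out) residuals there, form response residuals at the confirmation points, and take the larger of the two standard deviations. The data and the specific linear-algebra shortcuts are illustrative assumptions, not the author's implementation.

    ```python
    # Search metric = max( std(PRESS residuals of fit points),
    #                      std(residuals at confirmation points) ).
    import numpy as np

    def search_metric(X_fit, y_fit, X_conf, y_conf):
        coef, *_ = np.linalg.lstsq(X_fit, y_fit, rcond=None)
        resid_fit = y_fit - X_fit @ coef
        # PRESS residuals for linear least squares: e_i / (1 - h_ii)
        hat_diag = np.einsum('ij,jk,ik->i', X_fit,
                             np.linalg.pinv(X_fit.T @ X_fit), X_fit)
        press = resid_fit / (1.0 - hat_diag)
        resid_conf = y_conf - X_conf @ coef
        return max(np.std(press, ddof=1), np.std(resid_conf, ddof=1))

    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(20), rng.random(20)])      # intercept + regressor
    y = 2.0 + 3.0 * X[:, 1] + 0.1 * rng.standard_normal(20)
    print(search_metric(X[:14], y[:14], X[14:], y[14:]))    # candidate model's score
    ```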

  8. A Model and a Metric for the Analysis of Status Attainment Processes. Discussion Paper No. 492-78.

    ERIC Educational Resources Information Center

    Sorensen, Aage B.

    This paper proposes a theory of the status attainment process, and specifies it in a mathematical model. The theory justifies a transformation of the conventional status scores to a metric that produces an exponential distribution of attainments, and a transformation of educational attainments to a metric that reflects the competitive advantage…

  9. Performance assessment of geospatial simulation models of land-use change--a landscape metric-based approach.

    PubMed

    Sakieh, Yousef; Salmanmahiny, Abdolrassoul

    2016-03-01

    Performance evaluation is a critical step when developing land-use and cover change (LUCC) models. The present study proposes a spatially explicit model performance evaluation method, adopting a landscape metric-based approach. To quantify GEOMOD model performance, a set of composition- and configuration-based landscape metrics including number of patches, edge density, mean Euclidean nearest neighbor distance, largest patch index, class area, landscape shape index, and splitting index were employed. The model takes advantage of three decision rules including neighborhood effect, persistence of change direction, and urbanization suitability values. According to the results, while class area, largest patch index, and splitting indices demonstrated insignificant differences between spatial pattern of ground truth and simulated layers, there was a considerable inconsistency between simulation results and real dataset in terms of the remaining metrics. Specifically, simulation outputs were simplistic and the model tended to underestimate number of developed patches by producing a more compact landscape. Landscape-metric-based performance evaluation produces more detailed information (compared to conventional indices such as the Kappa index and overall accuracy) on the model's behavior in replicating spatial heterogeneity features of a landscape such as frequency, fragmentation, isolation, and density. Finally, as the main characteristic of the proposed method, landscape metrics employ the maximum potential of observed and simulated layers for a performance evaluation procedure, provide a basis for more robust interpretation of a calibration process, and also deepen modeler insight into the main strengths and pitfalls of a specific land-use change model when simulating a spatiotemporal phenomenon.

  10. Impact of distance-based metric learning on classification and visualization model performance and structure-activity landscapes.

    PubMed

    Kireeva, Natalia V; Ovchinnikova, Svetlana I; Kuznetsov, Sergey L; Kazennov, Andrey M; Tsivadze, Aslan Yu

    2014-02-01

    This study concerns the large margin nearest neighbor classifier and its multi-metric extension as efficient approaches for metric learning, which aims to learn an appropriate distance/similarity function for the case studies considered. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve the performance in classification, clustering and retrieval tasks. The paper describes application of the metric learning approach to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process; in silico assessment of chemical liabilities is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied for in silico assessment of chemical liabilities, the impact of metric learning on structure-activity landscapes and the predictive performance of the developed models has been analyzed, and the learned metric was used in support vector machines. The metric learning results have been illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metrics affected nearest neighbor relations and descriptor space.

  12. The data quality analyzer: a quality control program for seismic data

    USGS Publications Warehouse

    Ringler, Adam; Hagerty, M.T.; Holland, James F.; Gonzales, A.; Gee, Lind S.; Edwards, J.D.; Wilson, David; Baker, Adam

    2015-01-01

    The quantification of data quality is based on the evaluation of various metrics (e.g., timing quality, daily noise levels relative to long-term noise models, and comparisons between broadband data and event synthetics). Users may select which metrics contribute to the assessment and those metrics are aggregated into a “grade” for each station. The DQA is being actively used for station diagnostics and evaluation based on the completed metrics (availability, gap count, timing quality, deviation from a global noise model, deviation from a station noise model, coherence between co-located sensors, and comparison between broadband data and synthetics for earthquakes) on stations in the Global Seismographic Network and Advanced National Seismic System.

  13. 40 CFR 98.343 - Calculating GHG emissions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... potential (metric tons CH4/metric ton waste) = MCF × DOC × DOCF × F × 16/12. MCF = Methane correction factor... = Methane emissions from the landfill in the reporting year (metric tons CH4). GCH4 = Modeled methane...). Emissions = Methane emissions from the landfill in the reporting year (metric tons CH4). R = Quantity of...
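
    The only complete relationship recoverable from the excerpt above is the methane generation potential term; the sketch below evaluates just that product, with input values that are purely hypothetical rather than regulatory defaults.

    ```python
    # Methane generation potential (metric tons CH4 / metric ton waste)
    # = MCF x DOC x DOC_F x F x 16/12, per the excerpt above. Inputs are hypothetical.
    def methane_generation_potential(mcf, doc, doc_f, f):
        return mcf * doc * doc_f * f * 16.0 / 12.0

    print(methane_generation_potential(mcf=1.0, doc=0.20, doc_f=0.5, f=0.5))
    ```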

  14. Validation of Metrics as Error Predictors

    NASA Astrophysics Data System (ADS)

    Mendling, Jan

    In this chapter, we test the validity of metrics that were defined in the previous chapter for predicting errors in EPC business process models. In Section 5.1, we provide an overview of how the analysis data is generated. Section 5.2 describes the sample of EPCs from practice that we use for the analysis. Here we discuss a disaggregation by the EPC model group and by error as well as a correlation analysis between metrics and error. Based on this sample, we calculate a logistic regression model for predicting error probability with the metrics as input variables in Section 5.3. In Section 5.4, we then test the regression function for an independent sample of EPC models from textbooks as a cross-validation. Section 5.5 summarizes the findings.

  15. Decision-relevant evaluation of climate models: A case study of chill hours in California

    NASA Astrophysics Data System (ADS)

    Jagannathan, K. A.; Jones, A. D.; Kerr, A. C.

    2017-12-01

    The past decade has seen a proliferation of different climate datasets, with over 60 climate models currently in use. Comparative evaluation and validation of models can assist practitioners in choosing the most appropriate models for adaptation planning. However, such assessments are usually conducted for 'climate metrics' such as seasonal temperature, while sectoral decisions are often based on 'decision-relevant outcome metrics' such as growing degree days or chill hours. Since climate models predict different metrics with varying skill, the goal of this research is to conduct a bottom-up evaluation of model skill for 'outcome-based' metrics. Using chill hours (the number of hours in winter months where the temperature is less than 45 °F) in Fresno, CA as a case, we assess how well different GCMs predict the historical mean and slope of chill hours, and whether and to what extent projections differ based on model selection. We then compare our results with other climate-based evaluations of the region to identify similarities and differences. For the model skill evaluation, historically observed chill hours were compared with simulations from 27 GCMs (and multiple ensembles). Model skill scores were generated based on a statistical hypothesis test of the comparative assessment. Future projections from RCP 8.5 runs were evaluated, and a simple bias correction was also conducted. Our analysis indicates that model skill in predicting the chill hour slope is dependent on skill in predicting mean chill hours, which results from the non-linear nature of the chill metric. However, there was no clear relationship between the models that performed well for the chill hour metric and those that performed well in other temperature-based evaluations (such as winter minimum temperature or diurnal temperature range). Further, contrary to conclusions from other studies, we also found that the multi-model mean or large ensemble mean results may not always be most appropriate for this outcome metric. Our assessment sheds light on key differences between global versus local skill, and broad versus specific skill of climate models, highlighting that decision-relevant model evaluation may be crucial for providing practitioners with the best available climate information for their specific needs.
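
    The chill-hours outcome metric used in this study (hours in winter months with temperature below 45 °F) can be computed directly from an hourly temperature series; the sketch below uses synthetic data, and the choice of winter months is illustrative.

    ```python
    # Sketch of the chill-hours outcome metric: count winter hours with
    # temperature below 45 °F. Assumes a pandas Series of hourly temperatures
    # (°F) indexed by timestamp; the data and month selection are illustrative.
    import numpy as np
    import pandas as pd

    idx = pd.date_range("2000-11-01", "2001-02-28 23:00", freq="h")
    temps_f = pd.Series(40 + 15 * np.random.default_rng(1).random(len(idx)), index=idx)

    winter = temps_f[temps_f.index.month.isin([11, 12, 1, 2])]   # Nov-Feb, as an example
    chill_hours = int((winter < 45.0).sum())
    print("chill hours:", chill_hours)
    ```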

  16. Measuring the Impact of Longitudinal Faculty Development: A Study of Academic Achievement.

    PubMed

    Newman, Lori R; Pelletier, Stephen R; Lown, Beth A

    2016-12-01

    Although faculty development programs in medical education have increased over the past two decades, there is a lack of rigorous program evaluation. The aim of this study was to determine quantifiable outcomes of Harvard Medical School's (HMS's) Fellowship in Medical Education and evaluate attainment of its goals. In 2005 and 2009 the authors collected curricula vitae (CVs) and conducted within-subject analysis of 42 fellowship graduates and also conducted comparison analysis between 12 academic year 2005 fellows and 12 faculty who did not participate in the program. The authors identified 10 metrics of academic advancement. CV analysis for the 42 graduates started 2 years prior to fellowship enrollment and continued for 2-year intervals until June 2009 (10 years of data collection). CV analysis for the comparison group was from 2003 to 2009. The authors also analyzed the association between gender and academic outcomes. Fellowship graduates demonstrated significant changes in 4 of 10 academic metrics by the end of the fellowship year: academic promotion, educational leadership, education committees, and education funding. Two metrics, educational leadership and committees, showed increased outcomes two years post-fellowship, with a positive trend for promotions. Fellowship graduates significantly outpaced the comparison group in 6 of 10 metrics. Women did significantly more committee work, secured more education funding, and were promoted more often than men. Findings indicate that the HMS Fellowship in Medical Education meets programmatic goals and produces positive, measurable academic outcomes. Standardized evaluation metrics of longitudinal faculty development programs would aid cross-institutional comparisons.

  17. Towards a Visual Quality Metric for Digital Video

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1998-01-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.

  18. Validation metrics for turbulent plasma transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, C., E-mail: chholland@ucsd.edu

    Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations is reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. The utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak [J. L. Luxon, Nucl. Fusion 42, 614 (2002)], as part of a multi-year transport model validation activity.
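
    In the spirit of the local, uncertainty-aware comparisons advocated above, the sketch below defines a simple normalized discrepancy between a predicted and an observed quantity; it is an illustration only, not the specific metric of the paper.

    ```python
    # Illustrative (not the paper's metric): a normalized discrepancy between a
    # predicted and an observed quantity that folds both uncertainties into the
    # comparison.
    import numpy as np

    def normalized_discrepancy(pred, pred_err, obs, obs_err):
        """|pred - obs| scaled by the combined 1-sigma uncertainty."""
        return np.abs(pred - obs) / np.sqrt(pred_err**2 + obs_err**2)

    # Example: predicted vs. measured heat flux (arbitrary units and errors).
    print(normalized_discrepancy(pred=2.4, pred_err=0.3, obs=2.0, obs_err=0.2))
    ```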

  19. The Complexity Analysis Tool

    DTIC Science & Technology

    1988-10-01

    overview of the complexity analysis tool (CAT), an automated tool which will analyze mission critical computer resources (MCCR) software. CAT is based... CAT automates the metric for BASIC (HP-71), ATLAS (EQUATE), Ada (subset...UNIX 5.2). CAT analyzes source code and computes complexity on a module basis. CAT also generates graphic representations of the logic flow paths and

  20. A Paradigm for Security Metrics in Counterinsurgency

    DTIC Science & Technology

    2011-06-10

    unhealthy form of micromanagement. Both sides can cite examples to support their positions; however, a balanced position might incorporate considerations of...comprehensive assessments of a country or region. The framework includes a useful presentation of drivers of tipping points but does not use the... lifestyle. Native Algerians referred to these people as the black foot, or pieds noir in French. This group wielded disproportionate political power

  1. Determining Clinically Relevant Changes in Community Walking Metrics to Be Tracked by the VA as Part of Routine Care in Lower Limb Amputee Veterans

    DTIC Science & Technology

    2016-10-01

    total of 52 subjects have enrolled in the study: Veteran subjects n=34 and University of Utah subjects n=18. Preliminary analysis indicates that daily...relevant change

  2. Future of Military Health Care Final Report

    DTIC Science & Technology

    2007-12-20

    Population Health Navigator. Service programs are supported by the Military Health System Population Health Portal (MHSPHP), a centralized, secure...planning is due to Congress on March 1, 2008. ...MTFs are monitoring HEDIS metrics using the MHS Population Health Portal and reporting in the service systems and the Tri-Service Business Planning

  3. Satellite Data and Machine Learning for Weather Risk Management and Food Security.

    PubMed

    Biffis, Enrico; Chavez, Erik

    2017-08-01

    The increase in frequency and severity of extreme weather events poses challenges for the agricultural sector in developing economies and for food security globally. In this article, we demonstrate how machine learning can be used to mine satellite data and identify pixel-level optimal weather indices that can be used to inform the design of risk transfers and the quantification of the benefits of resilient production technology adoption. We implement the model to study maize production in Mozambique, and show how the approach can be used to produce countrywide risk profiles resulting from the aggregation of local, heterogeneous exposures to rainfall precipitation and excess temperature. We then develop a framework to quantify the economic gains from technology adoption by using insurance costs as the relevant metric, where insurance is broadly understood as the transfer of weather-driven crop losses to a dedicated facility. We consider the case of irrigation in detail, estimating a reduction in insurance costs of at least 30%, which is robust to different configurations of the model. The approach offers a robust framework to understand the costs versus benefits of investment in irrigation infrastructure, but could clearly be used to explore in detail the benefits of more advanced input packages, allowing, for example, for different crop varieties, sowing dates, or fertilizers. © 2017 Society for Risk Analysis.

  4. Validity of the two-level model for Viterbi decoder gap-cycle performance

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Arnold, S.

    1990-01-01

    A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.

  5. Improvements to the design process for a real-time passive millimeter-wave imager to be used for base security and helicopter navigation in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Anderton, Rupert N.; Cameron, Colin D.; Burnett, James G.; Güell, Jeff J.; Sanders-Reed, John N.

    2014-06-01

    This paper discusses the design of an improved passive millimeter wave imaging system intended to be used for base security in degraded visual environments. The discussion starts with the selection of the optimum frequency band. The trade-offs between requirements on detection, recognition and identification ranges and optical aperture are discussed with reference to the Johnson Criteria. It is shown that these requirements also affect image sampling, receiver numbers and noise temperature, frame rate, field of view, focusing requirements and mechanisms, and tolerance budgets. The effect of image quality degradation is evaluated and a single testable metric is derived that best describes the effects of degradation on meeting the requirements. The discussion is extended to tolerance budgeting constraints if significant degradation is to be avoided, including surface roughness, receiver position errors and scan conversion errors. Although the reflective twist-polarization imager design proposed is potentially relatively low cost and high performance, there is a significant problem with obscuration of the beam by the receiver array. Methods of modeling this accurately and thus designing for best performance are given.

  6. Validation Metrics for Improving Our Understanding of Turbulent Transport - Moving Beyond Proof by Pretty Picture and Loud Assertion

    NASA Astrophysics Data System (ADS)

    Holland, C.

    2013-10-01

    Developing validated models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. This tutorial will present an overview of the key guiding principles and practices for state-of-the-art validation studies, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. The primary focus of the talk will be the development of quantitative validation metrics, which are essential for moving beyond qualitative and subjective assessments of model performance and fidelity. Particular emphasis and discussion are given to (i) the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment, and (ii) the importance of robust uncertainty quantification and its inclusion within the metrics. To illustrate these concepts, we first review the structure and key insights gained from commonly used "global" transport model metrics (e.g. predictions of incremental stored energy or radially-averaged temperature), as well as their limitations. Building upon these results, a new form of turbulent transport metrics is then proposed, which focuses upon comparisons of predicted local gradients and fluctuation characteristics against observation. We demonstrate the utility of these metrics by applying them to simulations and modeling of a newly developed "validation database" derived from the results of a systematic, multi-year turbulent transport validation campaign on the DIII-D tokamak, in which comprehensive profile and fluctuation measurements have been obtained from a wide variety of heating and confinement scenarios. Finally, we discuss extensions of these metrics and their underlying design concepts to other areas of plasma confinement research, including both magnetohydrodynamic stability and integrated scenario modeling. Supported by the US DOE under DE-FG02-07ER54917 and DE-FC02-08ER54977.

  7. Prediction of Hydrologic Characteristics for Ungauged Catchments to Support Hydroecological Modeling

    NASA Astrophysics Data System (ADS)

    Bond, Nick R.; Kennard, Mark J.

    2017-11-01

    Hydrologic variability is a fundamental driver of ecological processes and species distribution patterns within river systems, yet the paucity of gauges in many catchments means that streamflow data are often unavailable for ecological survey sites. Filling this data gap is an important challenge in hydroecological research. To address this gap, we first test the ability to spatially extrapolate hydrologic metrics calculated from gauged streamflow data to ungauged sites as a function of stream distance and catchment area. Second, we examine the ability of statistical models to predict flow regime metrics based on climate and catchment physiographic variables. Our assessment focused on Australia's largest catchment, the Murray-Darling Basin (MDB). We found that hydrologic metrics were predictable only between sites within ˜25 km of one another. Beyond this, correlations between sites declined quickly. We found less than 40% of fish survey sites from a recent basin-wide monitoring program (n = 777 sites) to fall within this 25 km range, thereby greatly limiting the ability to utilize gauge data for direct spatial transposition of hydrologic metrics to biological survey sites. In contrast, statistical model-based transposition proved effective in predicting ecologically relevant aspects of the flow regime (including metrics describing central tendency, high- and low-flows intermittency, seasonality, and variability) across the entire gauge network (median R2 ˜ 0.54, range 0.39-0.94). Modeled hydrologic metrics thus offer a useful alternative to empirical data when examining biological survey data from ungauged sites. More widespread use of these statistical tools and modeled metrics could expand our understanding of flow-ecology relationships.

  8. ARM Data-Oriented Metrics and Diagnostics Package for Climate Model Evaluation Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chengzhu; Xie, Shaocheng

    A Python-based metrics and diagnostics package is currently being developed by the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Infrastructure Team at Lawrence Livermore National Laboratory (LLNL) to facilitate the use of long-term, high-frequency measurements from the ARM Facility in evaluating the regional climate simulation of clouds, radiation, and precipitation. This metrics and diagnostics package computes climatological means of targeted climate model simulation and generates tables and plots for comparing the model simulation with ARM observational data. The Coupled Model Intercomparison Project (CMIP) model data sets are also included in the package to enable model intercomparison as demonstrated in Zhang et al. (2017). The mean of the CMIP model can serve as a reference for individual models. Basic performance metrics are computed to measure the accuracy of mean state and variability of climate models. The evaluated physical quantities include cloud fraction, temperature, relative humidity, cloud liquid water path, total column water vapor, precipitation, sensible and latent heat fluxes, and radiative fluxes, with plan to extend to more fields, such as aerosol and microphysics properties. Process-oriented diagnostics focusing on individual cloud- and precipitation-related phenomena are also being developed for the evaluation and development of specific model physical parameterizations. The version 1.0 package is designed based on data collected at ARM’s Southern Great Plains (SGP) Research Facility, with the plan to extend to other ARM sites. The metrics and diagnostics package is currently built upon standard Python libraries and additional Python packages developed by DOE (such as CDMS and CDAT). The ARM metrics and diagnostic package is available publicly with the hope that it can serve as an easy entry point for climate modelers to compare their models with ARM data. In this report, we first present the input data, which constitutes the core content of the metrics and diagnostics package in section 2, and a user's guide documenting the workflow/structure of the version 1.0 codes, and including step-by-step instruction for running the package in section 3.

  9. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven Karl

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
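
    The accuracy-ratio metrics recommended above can be sketched as follows; the formulas used here (the median of the log accuracy ratio as a bias measure, and 100*(exp(median(|ln Q|)) - 1) as the median symmetric accuracy) should be checked against the report, and the prediction/observation values are made up.

    ```python
    # Sketch of accuracy-ratio-based metrics in the spirit of the report above:
    # the log accuracy ratio ln(Q) = ln(pred/obs), its median as a bias measure,
    # and the median symmetric accuracy 100*(exp(median(|ln Q|)) - 1).
    import numpy as np

    def accuracy_ratio_metrics(pred, obs):
        log_q = np.log(np.asarray(pred, float) / np.asarray(obs, float))
        median_log_accuracy_ratio = np.median(log_q)                            # bias
        median_symmetric_accuracy = 100.0 * (np.exp(np.median(np.abs(log_q))) - 1.0)
        return median_log_accuracy_ratio, median_symmetric_accuracy

    bias, msa = accuracy_ratio_metrics(pred=[1.2, 0.8, 2.5], obs=[1.0, 1.0, 2.0])
    print(f"median log accuracy ratio: {bias:.3f}, median symmetric accuracy: {msa:.1f}%")
    ```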

  10. Mainstreaming nutrition metrics in household surveys--toward a multidisciplinary convergence of data systems.

    PubMed

    Pingali, Prabhu L; Ricketts, Katie D

    2014-12-01

    Since the 2008 food price crisis, food and nutrition security are back on the global development agenda, with particular emphasis on agricultural pathways toward improved nutrition. Parallel efforts are being promoted to improve the data and metrics for monitoring progress toward positive nutritional outcomes, especially for women and children. Despite the increased investment in tracking nutritional outcomes, these efforts are often made in silos, which create challenges for integrating nutritional data with other sectoral data, such as those related to agriculture. This paper proposes a minimum set of nutrition indicators to be included in nationally representative agricultural (and multitopic) household surveys. Building multisectoral convergence across existing surveys will allow us to better identify priority interventions and to monitor progress toward improved nutrition targets. © 2014 New York Academy of Sciences.

  11. Rank Order Entropy: why one metric is not enough

    PubMed Central

    McLellan, Margaret R.; Ryan, M. Dominic; Breneman, Curt M.

    2011-01-01

    The use of Quantitative Structure-Activity Relationship models to address problems in drug discovery has a mixed history, generally resulting from the mis-application of QSAR models that were either poorly constructed or used outside of their domains of applicability. This situation has motivated the development of a variety of model performance metrics (r2, PRESS r2, F-tests, etc) designed to increase user confidence in the validity of QSAR predictions. In a typical workflow scenario, QSAR models are created and validated on training sets of molecules using metrics such as Leave-One-Out or many-fold cross-validation methods that attempt to assess their internal consistency. However, few current validation methods are designed to directly address the stability of QSAR predictions in response to changes in the information content of the training set. Since the main purpose of QSAR is to quickly and accurately estimate a property of interest for an untested set of molecules, it makes sense to have a means at hand to correctly set user expectations of model performance. In fact, the numerical value of a molecular prediction is often less important to the end user than knowing the rank order of that set of molecules according to their predicted endpoint values. Consequently, a means for characterizing the stability of predicted rank order is an important component of predictive QSAR. Unfortunately, none of the many validation metrics currently available directly measure the stability of rank order prediction, making the development of an additional metric that can quantify model stability a high priority. To address this need, this work examines the stabilities of QSAR rank order models created from representative data sets, descriptor sets, and modeling methods that were then assessed using Kendall Tau as a rank order metric, upon which the Shannon Entropy was evaluated as a means of quantifying rank-order stability. Random removal of data from the training set, also known as Data Truncation Analysis (DTA), was used as a means for systematically reducing the information content of each training set while examining both rank order performance and rank order stability in the face of training set data loss. The premise for DTA ROE model evaluation is that the response of a model to incremental loss of training information will be indicative of the quality and sufficiency of its training set, learning method, and descriptor types to cover a particular domain of applicability. This process is termed a “rank order entropy” evaluation, or ROE. By analogy with information theory, an unstable rank order model displays a high level of implicit entropy, while a QSAR rank order model which remains nearly unchanged during training set reductions would show low entropy. In this work, the ROE metric was applied to 71 data sets of different sizes, and was found to reveal more information about the behavior of the models than traditional metrics alone. Stable, or consistently performing models, did not necessarily predict rank order well. Models that performed well in rank order did not necessarily perform well in traditional metrics. In the end, it was shown that ROE metrics suggested that some QSAR models that are typically used should be discarded. ROE evaluation helps to discern which combinations of data set, descriptor set, and modeling methods lead to usable models in prioritization schemes, and provides confidence in the use of a particular model within a specific domain of applicability. 
PMID:21875058
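
    A heavily simplified sketch of the data truncation idea behind ROE is given below: refit a model on progressively truncated training sets and track the Kendall tau of the predicted rank order on a fixed test set. The data, model choice, and truncation fractions are placeholders, and no entropy summarization of the tau values is attempted here.

    ```python
    # Simplified sketch of Data Truncation Analysis for rank-order stability
    # (not the authors' ROE implementation): drop part of the training set,
    # refit, and record Kendall tau between predicted and observed rank orders.
    import numpy as np
    from scipy.stats import kendalltau
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 10))
    y = X @ rng.normal(size=10) + 0.5 * rng.normal(size=150)
    X_test, y_test, X_train, y_train = X[:50], y[:50], X[50:], y[50:]

    taus = []
    fractions = (0.9, 0.7, 0.5, 0.3)              # fraction of training data kept
    for frac in fractions:
        keep = rng.choice(len(X_train), size=int(frac * len(X_train)), replace=False)
        model = Ridge().fit(X_train[keep], y_train[keep])
        tau, _ = kendalltau(model.predict(X_test), y_test)
        taus.append(round(float(tau), 3))

    print("Kendall tau vs. kept fraction:", dict(zip(fractions, taus)))
    ```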

  12. Effects of metric hierarchy and rhyme predictability on word duration in The Cat in the Hat.

    PubMed

    Breen, Mara

    2018-05-01

    Word durations convey many types of linguistic information, including intrinsic lexical features like length and frequency and contextual features like syntactic and semantic structure. The current study was designed to investigate whether hierarchical metric structure and rhyme predictability account for durational variation over and above other features in productions of a rhyming, metrically-regular children's book: The Cat in the Hat (Dr. Seuss, 1957). One-syllable word durations and inter-onset intervals were modeled as functions of segment number, lexical frequency, word class, syntactic structure, repetition, and font emphasis. Consistent with prior work, factors predicting longer word durations and inter-onset intervals included more phonemes, lower frequency, first mention, alignment with a syntactic boundary, and capitalization. A model parameter corresponding to metric grid height improved model fit of word durations and inter-onset intervals. Specifically, speakers realized five levels of metric hierarchy with inter-onset intervals such that interval duration increased linearly with increased height in the metric hierarchy. Conversely, speakers realized only three levels of metric hierarchy with word duration, demonstrating that they shortened the highly predictable rhyme resolutions. These results further understanding of the factors that affect spoken word duration, and demonstrate the myriad cues that children receive about linguistic structure from nursery rhymes. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Cyber-Physical Security Assessment (CyPSA) Toolset

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Luis; Patapanchala, Panini; Zonouz, Saman

    CyPSA seeks to organize and gain insight into the diverse sets of data that a critical infrastructure provider must manage. Specifically, CyPSA inventories, manages, and analyzes assets and relations among those assets. A variety of interfaces are provided. CyPSA inventories assets (both cyber and physical). This may include the cataloging of assets through a common interface. Data sources used to generate a catalogue of assets include PowerWorld, NPView, NMap Scans, and device configurations. Depending upon the role of the person using the tool, the types of assets accessed, as well as the data sources through which asset information is accessed, may vary. CyPSA allows practitioners to catalogue relations among assets and these may either be manually or programmatically generated. For example, some common relations among assets include the following: Topological Network Data: Which devices and assets are connected and how? Data sources for this kind of information include NMap scans, NPView topologies (via Firewall rule analysis). Security Metrics Outputs: The output of various security metrics such as overall exposure. Configure Assets: CyPSA may eventually include the ability to configure assets including relays and switches. For example, a system administrator would be able to configure and alter the state of a relay via the CyPSA interface. Annotate Assets: CyPSA also allows practitioners to manually and programmatically annotate assets. Sources of information with which to annotate assets include provenance metadata regarding the data source from which the asset was loaded, vulnerability information from vulnerability databases, configuration information, and the output of an analysis in general.

  14. Using a safety forecast model to calculate future safety metrics.

    DOT National Transportation Integrated Search

    2017-05-01

    This research sought to identify a process to improve long-range planning prioritization by using forecasted safety metrics in place of the existing Utah Department of Transportation Safety Index, a metric based on historical crash data. The res...

  15. A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output

    PubMed Central

    Stevanovic, Stefan; Pervan, Boris

    2018-01-01

    We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS phase-lock loop (PLL) linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250

  16. Using Geometry-Based Metrics as Part of Fitness-for-Purpose Evaluations of 3D City Models

    NASA Astrophysics Data System (ADS)

    Wong, K.; Ellul, C.

    2016-10-01

    Three-dimensional geospatial information is being increasingly used in a range of tasks beyond visualisation. 3D datasets, however, are often being produced without exact specifications and at mixed levels of geometric complexity. This leads to variations within the models' geometric and semantic complexity as well as the degree of deviation from the corresponding real world objects. Existing descriptors and measures of 3D data such as CityGML's level of detail are perhaps only partially sufficient in communicating data quality and fitness-for-purpose. This study investigates whether alternative, automated, geometry-based metrics describing the variation of complexity within 3D datasets could provide additional relevant information as part of a process of fitness-for-purpose evaluation. The metrics include: mean vertex/edge/face counts per building; vertex/face ratio; minimum 2D footprint area; and minimum feature length. Each metric was tested on six 3D city models from international locations. The results show that geometry-based metrics can provide additional information on 3D city models as part of fitness-for-purpose evaluations. The metrics, while they cannot be used in isolation, may provide a complement to enhance existing data descriptors if backed up with local knowledge, where possible.
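
    The geometry-based metrics listed above reduce to simple aggregations once per-building counts are available; the sketch below assumes hypothetical building records with vertex/edge/face counts and footprint areas.

    ```python
    # Sketch of the simple geometry-based metrics listed above, over hypothetical
    # per-building records (counts and footprint areas are made up).
    import statistics

    buildings = [
        {"vertices": 24, "edges": 36, "faces": 14, "footprint_m2": 85.0},
        {"vertices": 8,  "edges": 12, "faces": 6,  "footprint_m2": 40.0},
        {"vertices": 60, "edges": 90, "faces": 32, "footprint_m2": 210.0},
    ]

    mean_vertices = statistics.mean(b["vertices"] for b in buildings)
    mean_faces = statistics.mean(b["faces"] for b in buildings)
    vertex_face_ratio = mean_vertices / mean_faces
    min_footprint = min(b["footprint_m2"] for b in buildings)

    print(mean_vertices, mean_faces, round(vertex_face_ratio, 2), min_footprint)
    ```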

  17. Stability and Performance Metrics for Adaptive Flight Control

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmanje; Nguyen, Nhan; VanEykeren, Luarens

    2009-01-01

    This paper addresses the problem of verifying adaptive control techniques for enabling safe flight in the presence of adverse conditions. Since the adaptive systems are non-linear by design, the existing control verification metrics are not applicable to adaptive controllers. Moreover, these systems are in general highly uncertain. Hence, the system's characteristics cannot be evaluated by relying on the available dynamical models. This necessitates the development of control verification metrics based on the system's input-output information. From this point of view, a set of metrics is introduced that compares the uncertain aircraft's input-output behavior under the action of an adaptive controller to that of a closed-loop linear reference model to be followed by the aircraft. This reference model is constructed for each specific maneuver using the exact aerodynamic and mass properties of the aircraft to meet the stability and performance requirements commonly accepted in flight control. The proposed metrics are unified in the sense that they are model independent and not restricted to any specific adaptive control methods. As an example, we present simulation results for a wing-damaged generic transport aircraft with several existing adaptive controllers.

  18. New t-gap insertion-deletion-like metrics for DNA hybridization thermodynamic modeling.

    PubMed

    D'yachkov, Arkadii G; Macula, Anthony J; Pogozelski, Wendy K; Renz, Thomas E; Rykov, Vyacheslav V; Torney, David C

    2006-05-01

    We discuss the concept of t-gap block isomorphic subsequences and use it to describe new abstract string metrics that are similar to the Levenshtein insertion-deletion metric. Some of the metrics that we define can be used to model a thermodynamic distance function on single-stranded DNA sequences. Our model captures a key aspect of the nearest neighbor thermodynamic model for hybridized DNA duplexes. One version of our metric gives the maximum number of stacked pairs of hydrogen bonded nucleotide base pairs that can be present in any secondary structure in a hybridized DNA duplex without pseudoknots. Thermodynamic distance functions are important components in the construction of DNA codes, and DNA codes are important components in biomolecular computing, nanotechnology, and other biotechnical applications that employ DNA hybridization assays. We show how our new distances can be calculated by using a dynamic programming method, and we derive a Varshamov-Gilbert-like lower bound on the size of some of codes using these distance functions as constraints. We also discuss software implementation of our DNA code design methods.
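
    For background, the classic insertion-deletion edit distance that the t-gap block-isomorphic metrics generalize can be computed with a standard dynamic program, as sketched below; this is not the paper's t-gap metric itself.

    ```python
    # Background sketch only: the classic insertion-deletion edit distance
    # computed by dynamic programming. The t-gap metrics described above
    # generalize this kind of distance; this is not the paper's metric itself.
    def indel_distance(s, t):
        """Minimum number of insertions/deletions turning s into t."""
        m, n = len(s), len(t)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            dp[i][0] = i
        for j in range(n + 1):
            dp[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if s[i - 1] == t[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1]
                else:
                    dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1])
        return dp[m][n]

    print(indel_distance("ACGTAC", "AGTACC"))  # small DNA-like example
    ```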

  19. App Usage Factor: A Simple Metric to Compare the Population Impact of Mobile Medical Apps

    PubMed Central

    Wyatt, Jeremy C

    2015-01-01

    Background One factor when assessing the quality of mobile apps is quantifying the impact of a given app on a population. There is currently no metric which can be used to compare the population impact of a mobile app across different health care disciplines. Objective The objective of this study is to create a novel metric to characterize the impact of a mobile app on a population. Methods We developed the simple novel metric, app usage factor (AUF), defined as the logarithm of the product of the number of active users of a mobile app with the median number of daily uses of the app. The behavior of this metric was modeled using simulated modeling in Python, a general-purpose programming language. Three simulations were conducted to explore the temporal and numerical stability of our metric and a simulated app ecosystem model using a simulated dataset of 20,000 apps. Results Simulations confirmed the metric was stable between predicted usage limits and remained stable at extremes of these limits. Analysis of a simulated dataset of 20,000 apps calculated an average value for the app usage factor of 4.90 (SD 0.78). A temporal simulation showed that the metric remained stable over time and suitable limits for its use were identified. Conclusions A key component when assessing app risk and potential harm is understanding the potential population impact of each mobile app. Our metric has many potential uses for a wide range of stakeholders in the app ecosystem, including users, regulators, developers, and health care professionals. Furthermore, this metric forms part of the overall estimate of risk and potential for harm or benefit posed by a mobile medical app. We identify the merits and limitations of this metric, as well as potential avenues for future validation and research. PMID:26290093
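
    The metric itself is a one-line computation; the sketch below transcribes AUF as described above, assuming a base-10 logarithm (the abstract does not state the base) and using made-up usage figures.

    ```python
    # App usage factor as described above: the logarithm of (active users *
    # median daily uses per user). Base 10 is assumed here; inputs are made up.
    import math
    import statistics

    def app_usage_factor(active_users, daily_uses):
        """daily_uses: per-user counts of app uses per day."""
        return math.log10(active_users * statistics.median(daily_uses))

    print(round(app_usage_factor(active_users=50_000, daily_uses=[1, 2, 2, 3, 5]), 2))
    ```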

  20. The data quality analyzer: A quality control program for seismic data

    NASA Astrophysics Data System (ADS)

    Ringler, A. T.; Hagerty, M. T.; Holland, J.; Gonzales, A.; Gee, L. S.; Edwards, J. D.; Wilson, D.; Baker, A. M.

    2015-03-01

    The U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL) has several initiatives underway to enhance and track the quality of data produced from ASL seismic stations and to improve communication about data problems to the user community. The Data Quality Analyzer (DQA) is one such development and is designed to characterize seismic station data quality in a quantitative and automated manner. The DQA consists of a metric calculator, a PostgreSQL database, and a Web interface: The metric calculator, SEEDscan, is a Java application that reads and processes miniSEED data and generates metrics based on a configuration file. SEEDscan compares hashes of metadata and data to detect changes in either and performs subsequent recalculations as needed. This ensures that the metric values are up to date and accurate. SEEDscan can be run as a scheduled task or on demand. The PostgreSQL database acts as a central hub where metric values and limited station descriptions are stored at the channel level with one-day granularity. The Web interface dynamically loads station data from the database and allows the user to make requests for time periods of interest, review specific networks and stations, plot metrics as a function of time, and adjust the contribution of various metrics to the overall quality grade of the station. The quantification of data quality is based on the evaluation of various metrics (e.g., timing quality, daily noise levels relative to long-term noise models, and comparisons between broadband data and event synthetics). Users may select which metrics contribute to the assessment and those metrics are aggregated into a "grade" for each station. The DQA is being actively used for station diagnostics and evaluation based on the completed metrics (availability, gap count, timing quality, deviation from a global noise model, deviation from a station noise model, coherence between co-located sensors, and comparison between broadband data and synthetics for earthquakes) on stations in the Global Seismographic Network and Advanced National Seismic System.
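
    The user-adjustable aggregation of metrics into a station grade can be illustrated with a simple weighted average, as below; the metric names, scores, and weights are hypothetical and this is not SEEDscan code.

    ```python
    # Illustrative sketch only (not SEEDscan): aggregate per-metric scores into a
    # station "grade" using user-adjustable weights, as the web interface
    # described above allows. Metric names, scores, and weights are hypothetical.
    metric_scores = {"availability": 0.98, "timing_quality": 0.92, "noise_model_dev": 0.75}
    weights = {"availability": 1.0, "timing_quality": 1.0, "noise_model_dev": 2.0}

    grade = sum(metric_scores[m] * weights[m] for m in metric_scores) / sum(weights.values())
    print(f"station grade: {100 * grade:.1f}")
    ```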

  1. An Introduction to the SI Metric System. Inservice Guide for Teaching Measurement, Kindergarten Through Grade Eight.

    ERIC Educational Resources Information Center

    California State Dept. of Education, Sacramento.

    This handbook was designed to serve as a reference for teacher workshops that: (1) introduce the metric system and help teachers gain confidence with metric measurement, and (2) develop classroom measurement activities. One chapter presents the history and basic features of SI metrics. A second chapter presents a model for the measurement program.…

  2. An annual plant growth proxy in the Mojave Desert using MODIS-EVI data

    USGS Publications Warehouse

    Wallace, C.S.A.; Thomas, K.A.

    2008-01-01

    In the arid Mojave Desert, the phenological response of vegetation is largely dependent upon the timing and amount of rainfall, and maps of annual plant cover at any one point in time can vary widely. Our study developed relative annual plant growth models as proxies for annual plant cover using metrics that captured phenological variability in Moderate-Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI) satellite images. We used landscape phenologies revealed in MODIS data together with ecological knowledge of annual plant seasonality to develop a suite of metrics to describe annual growth on a yearly basis. Each of these metrics was applied to temporally-composited MODIS-EVI images to develop a relative model of annual growth. Each model was evaluated by testing how well it predicted field estimates of annual cover collected during 2003 and 2005 at the Mojave National Preserve. The best performing metric was the spring difference metric, which compared the average of three spring MODIS-EVI composites of a given year to that of 2002, a year of record drought. The spring difference metric showed correlations with annual plant cover of R2 = 0.61 for 2005 and R 2 = 0.47 for 2003. Although the correlation is moderate, we consider it supportive given the characteristics of the field data, which were collected for a different study in a localized area and are not ideal for calibration to MODIS pixels. A proxy for annual growth potential was developed from the spring difference metric of 2005 for use as an environmental data layer in desert tortoise habitat modeling. The application of the spring difference metric to other imagery years presents potential for other applications such as fuels, invasive species, and dust-emission monitoring in the Mojave Desert.

  3. An Annual Plant Growth Proxy in the Mojave Desert Using MODIS-EVI Data.

    PubMed

    Wallace, Cynthia S A; Thomas, Kathryn A

    2008-12-03

    In the arid Mojave Desert, the phenological response of vegetation is largely dependent upon the timing and amount of rainfall, and maps of annual plant cover at any one point in time can vary widely. Our study developed relative annual plant growth models as proxies for annual plant cover using metrics that captured phenological variability in Moderate-Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI) satellite images. We used landscape phenologies revealed in MODIS data together with ecological knowledge of annual plant seasonality to develop a suite of metrics to describe annual growth on a yearly basis. Each of these metrics was applied to temporally-composited MODIS-EVI images to develop a relative model of annual growth. Each model was evaluated by testing how well it predicted field estimates of annual cover collected during 2003 and 2005 at the Mojave National Preserve. The best performing metric was the spring difference metric, which compared the average of three spring MODIS-EVI composites of a given year to that of 2002, a year of record drought. The spring difference metric showed correlations with annual plant cover of R² = 0.61 for 2005 and R² = 0.47 for 2003. Although the correlation is moderate, we consider it supportive given the characteristics of the field data, which were collected for a different study in a localized area and are not ideal for calibration to MODIS pixels. A proxy for annual growth potential was developed from the spring difference metric of 2005 for use as an environmental data layer in desert tortoise habitat modeling. The application of the spring difference metric to other imagery years presents potential for other applications such as fuels, invasive species, and dust-emission monitoring in the Mojave Desert.
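
    The spring difference metric described above amounts to a per-pixel subtraction of the 2002 drought baseline from the spring EVI mean of the target year; the sketch below uses tiny placeholder arrays in place of MODIS-EVI composites.

    ```python
    # Sketch of the spring difference metric: the mean of three spring MODIS-EVI
    # composites for a target year minus the same mean for the 2002 drought
    # baseline, per pixel. Arrays here are tiny placeholders, not real EVI data.
    import numpy as np

    spring_mean_2005 = np.array([[0.21, 0.30], [0.28, 0.35]])   # hypothetical composite means
    spring_mean_2002 = np.array([[0.15, 0.18], [0.20, 0.22]])   # 2002 drought baseline

    spring_difference = spring_mean_2005 - spring_mean_2002     # proxy for annual plant growth
    print(spring_difference)
    ```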

  4. Quality metrics for sensor images

    NASA Technical Reports Server (NTRS)

    Ahumada, AL

    1993-01-01

    Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftoning optimizing methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. Most image quality models are designed for static imagery. Watson has been developing a general spatial-temporal vision model to optimize video compression techniques. These models need to be adapted and calibrated for AVID applications.

  5. Multi-Dimensional Calibration of Impact Dynamic Models

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.

    2011-01-01

    NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is on-going to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test at only a few critical locations. Although this approach provides for a direct measure of the model predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time based metrics and orthogonality multi-dimensional metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters to reconcile test with analysis. The process is illustrated using simulated experiment data.

  6. Adjustment of Adaptive Gain with Bounded Linear Stability Analysis to Improve Time-Delay Margin for Metrics-Driven Adaptive Control

    NASA Technical Reports Server (NTRS)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas

    2009-01-01

    This paper presents the application of Bounded Linear Stability Analysis (BLSA) method for metrics driven adaptive control. The bounded linear stability analysis method is used for analyzing stability of adaptive control models, without linearizing the adaptive laws. Metrics-driven adaptive control introduces a notion that adaptation should be driven by some stability metrics to achieve robustness. By the application of bounded linear stability analysis method the adaptive gain is adjusted during the adaptation in order to meet certain phase margin requirements. Analysis of metrics-driven adaptive control is evaluated for a linear damaged twin-engine generic transport model of aircraft. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics or time delay.

  7. Static models with conformal symmetry

    NASA Astrophysics Data System (ADS)

    Manjonjo, A. M.; Maharaj, S. D.; Moopanar, S.

    2018-02-01

    We study static spherically symmetric spacetimes with a spherical conformal symmetry and a nonstatic conformal factor associated with the conformal Killing field. With these assumptions we find an explicit relationship relating two metric components of the metric tensor field. This leads to the general solution of the Einstein field equations with a conformal symmetry in a static spherically symmetric spacetime. For perfect fluids we can find all metrics explicitly and show that the models always admit a barotropic equation of state. Contained within this class of spacetimes are the well known metrics of (interior) Schwarzschild, Tolman, Kuchowicz, Korkina and Orlyanskii, Patwardhan and Vaidya, and Buchdahl and Land. The isothermal metric of Saslaw et al also admits a conformal symmetry. For imperfect fluids an infinite family of exact solutions to the field equations can be generated.
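
    For reference, the conformal symmetry assumed above is the statement that the spacetime admits a vector field X satisfying the standard conformal Killing equation; the specific relation between metric components derived in the paper is not reproduced here.

    ```latex
    % Standard conformal Killing equation (background only, not the paper's
    % derived relation between metric components); X is the conformal Killing
    % field and \psi its associated, here nonstatic, conformal factor:
    \mathcal{L}_{X}\, g_{ab} = 2\,\psi\, g_{ab}
    ```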

  8. Metric-driven harm: an exploration of unintended consequences of performance measurement.

    PubMed

    Rambur, Betty; Vallett, Carol; Cohen, Judith A; Tarule, Jill Mattuck

    2013-11-01

    Performance measurement is an increasingly common element of the US health care system. Typically a proxy for high quality outcomes, there has been little systematic investigation of the potential negative unintended consequences of performance metrics, including metric-driven harm. This case study details an incidence of post-surgical metric-driven harm and offers Smith's 1995 work and a patient centered, context sensitive metric model for potential adoption by nurse researchers and clinicians. Implications for further research are discussed. © 2013.

  9. Comparison of watershed disturbance predictive models for stream benthic macroinvertebrates for three distinct ecoregions in western US

    USGS Publications Warehouse

    Waite, Ian R.; Brown, Larry R.; Kennen, Jonathan G.; May, Jason T.; Cuffney, Thomas F.; Orlando, James L.; Jones, Kimberly A.

    2010-01-01

    The successful use of macroinvertebrates as indicators of stream condition in bioassessments has led to heightened interest throughout the scientific community in the prediction of stream condition. For example, predictive models are increasingly being developed that use measures of watershed disturbance, including urban and agricultural land-use, as explanatory variables to predict various metrics of biological condition such as richness, tolerance, percent predators, index of biotic integrity, functional species traits, or even ordination axes scores. Our primary intent was to determine if effective models could be developed using watershed characteristics of disturbance to predict macroinvertebrate metrics among disparate and widely separated ecoregions. We aggregated macroinvertebrate data from universities and state and federal agencies in order to assemble stream data sets of high enough density appropriate for modeling in three distinct ecoregions in Oregon and California. Extensive review and quality assurance of macroinvertebrate sampling protocols, laboratory subsample counts and taxonomic resolution was completed to assure data comparability. We used widely available digital coverages of land-use and land-cover data summarized at the watershed and riparian scale as explanatory variables to predict macroinvertebrate metrics commonly used by state resource managers to assess stream condition. The “best” multiple linear regression models from each region required only two or three explanatory variables to model macroinvertebrate metrics and explained 41–74% of the variation. In each region the best model contained some measure of urban and/or agricultural land-use, yet often the model was improved by including a natural explanatory variable such as mean annual precipitation or mean watershed slope. Two macroinvertebrate metrics were common among all three regions, the metric that summarizes the richness of tolerant macroinvertebrates (RICHTOL) and some form of EPT (Ephemeroptera, Plecoptera, and Trichoptera) richness. Best models were developed for the same two invertebrate metrics even though the geographic regions reflect distinct differences in precipitation, geology, elevation, slope, population density, and land-use. With further development, models like these can be used to elicit better causal linkages to stream biological attributes or condition and can be used by researchers or managers to predict biological indicators of stream condition at unsampled sites.
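
    A minimal sketch of the kind of two-predictor multiple linear regression described above is given below, with a RICHTOL-like response modeled from percent urban land use and mean annual precipitation; all data values are hypothetical.

    ```python
    # Minimal sketch (not the authors' models): regress a macroinvertebrate metric
    # such as RICHTOL on percent urban land use and mean annual precipitation.
    # All data values are hypothetical.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    X = np.array([[2, 900], [10, 850], [35, 700], [5, 1200], [50, 650], [20, 800]])  # [% urban, precip mm]
    y = np.array([28, 24, 15, 30, 10, 18])                                           # RICHTOL-like metric

    model = LinearRegression().fit(X, y)
    print("R^2:", round(model.score(X, y), 2), "coefficients:", np.round(model.coef_, 3))
    ```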

  10. When to Make Mountains out of Molehills: The Pros and Cons of Simple and Complex Model Calibration Procedures

    NASA Astrophysics Data System (ADS)

    Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.

    2017-12-01

    Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the models results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit `equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against more low flows specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin Hypercube Sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought specific metrics. This presentation compares the two approaches, and demonstrates that the inbuilt automatic calibration can outperform the Latin Hypercube experiment approach in single metric assessed performance. However, it is also shown that there are many merits of the more comprehensive assessment, which allows for probabilistic model results, multi-objective optimisation, and better tailoring to calibrate the model for specific applications such as drought event characterisation. Modellers and decision-makers may be constrained in their choice of calibration method, so it is important that they recognise the strengths and limitations of their chosen approach.
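
    The Latin hypercube experiment described above can be sketched with SciPy's QMC sampler: draw candidate parameter sets, run the hydrological model for each, and score against observations. The model below is a stub, not GR4J, and the parameter bounds and error metric (Nash-Sutcliffe efficiency) are illustrative.

    ```python
    # Sketch of a Latin hypercube calibration experiment: sample four-parameter
    # sets, run a (stubbed) rainfall-runoff model, and score each set with
    # Nash-Sutcliffe efficiency. GR4J is not implemented here; bounds and data
    # are illustrative.
    import numpy as np
    from scipy.stats import qmc

    def run_model(params, forcing):
        """Placeholder for a GR4J-like simulation returning a flow series."""
        return forcing * params[0] / (params[0] + params[3])

    def nse(sim, obs):
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    rng = np.random.default_rng(0)
    forcing = rng.gamma(2.0, 2.0, size=365)                      # synthetic rainfall series
    obs = forcing * 0.4 + rng.normal(0, 0.2, size=365)           # synthetic observed flow

    sampler = qmc.LatinHypercube(d=4, seed=1)
    lower, upper = [1, -5, 1, 0.5], [2000, 5, 300, 10]           # illustrative bounds
    param_sets = qmc.scale(sampler.random(n=1000), lower, upper)

    scores = [nse(run_model(p, forcing), obs) for p in param_sets]
    print("best NSE:", round(max(scores), 3))
    ```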

  11. Distributed intelligent monitoring and reporting facilities

    NASA Astrophysics Data System (ADS)

    Pavlou, George; Mykoniatis, George; Sanchez-P, Jorge-A.

    1996-06-01

    Distributed intelligent monitoring and reporting facilities are of paramount importance in both service and network management as they provide the capability to monitor quality of service and utilization parameters and notify degradation so that corrective action can be taken. By intelligent, we refer to the capability of performing the monitoring tasks in a way that has the smallest possible impact on the managed network, facilitates the observation and summarization of information according to a number of criteria and, in its most advanced form, permits the specification of these criteria dynamically to suit the particular policy at hand. In addition, intelligent monitoring facilities should minimize the design and implementation effort involved in such activities. The ISO/ITU Metric, Summarization and Performance management functions provide models that only partially satisfy the above requirements. This paper describes our extensions to the proposed models to support further capabilities, with the intention of eventually leading to fully dynamically defined monitoring policies. The concept of distributing intelligence is also discussed, including the consideration of security issues and the applicability of the model in ODP-based distributed processing environments.

  12. Assessing Security Cooperation: Improving Methods to Maximize Effects

    DTIC Science & Technology

    2013-03-01

    Carl Builder notes, “Institutional and personal interest are not intrinsically bad; but they may be made so if they are always cloaked in altruism and...cooperation professional may be able to lay the groundwork for validating or disqualifying actions based on both countable metrics and anecdotal...Pre-decisional Working Draft), (Washington DC: Multi-Agency, July 31, 2012), 37. 6 Carl H. Builder, The Masks of War: American Military Styles in

  13. Creating a National Framework for Cybersecurity: An Analysis of Issues and Options

    DTIC Science & Technology

    2005-02-22

    of those measures; and the associated field of professional endeavor. Virtually any element of cyberspace can be at risk, and the degree of...weaknesses in U.S. cybersecurity is an area of some controversy. However, some components appear to be sources of potentially significant risk because either...security into enterprise architecture, using risk management, and using metrics. These different approaches all have different strengths and weaknesses

  14. Task Force on the Future of Military Health Care

    DTIC Science & Technology

    2007-12-01

    Navigator. Service programs are supported by the Military Health System Population Health Portal (MHSPHP), a centralized, secure, web-based population...Congress on March 1, 2008.66 64 Air Force Medical Support Agency, Population Health Support Division. MHS Population Health Portal Methods. July 2007...HEDIS metrics using the MHS Population Health Portal and reporting in the service systems and the Tri-Service Business Planning tool. DoD has several

  15. Grid Frequency Extreme Event Analysis and Modeling: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Florita, Anthony R; Clark, Kara; Gevorgian, Vahan

    Sudden losses of generation or load can lead to instantaneous changes in electric grid frequency and voltage. Extreme frequency events pose a major threat to grid stability. As renewable energy sources supply power to grids in increasing proportions, it becomes increasingly important to examine when and why extreme events occur to prevent destabilization of the grid. To better understand frequency events, including extrema, historic data were analyzed to fit probability distribution functions to various frequency metrics. Results showed that a standard Cauchy distribution fit the difference between the frequency nadir and prefault frequency (f_(C-A)) metric well, a standard Cauchy distribution fit the settling frequency (f_B) metric well, and a standard normal distribution fit the difference between the settling frequency and frequency nadir (f_(B-C)) metric very well. Results were inconclusive for the frequency nadir (f_C) metric, meaning it likely has a more complex distribution than those tested. This probabilistic modeling should facilitate more realistic modeling of grid faults.
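
    The distribution fitting described above can be reproduced in outline with SciPy. The synthetic sample below stands in for the historic frequency-event records, which are not reproduced here; only the fitting pattern is illustrated.

        # Sketch: fit a Cauchy distribution to a frequency-event metric and check the fit.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        f_c_minus_a = stats.cauchy.rvs(loc=0.0, scale=0.02, size=5000, random_state=rng)  # stand-in data

        loc, scale = stats.cauchy.fit(f_c_minus_a)
        ks = stats.kstest(f_c_minus_a, "cauchy", args=(loc, scale))
        print(f"Cauchy fit: loc={loc:.4f}, scale={scale:.4f}, KS p-value={ks.pvalue:.3f}")

        # The settling-minus-nadir metric, reported above to be near normal, would use stats.norm.fit instead.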

  16. On Rosen's theory of gravity and cosmology

    NASA Technical Reports Server (NTRS)

    Barnes, R. C.

    1980-01-01

    Formal similarities between general relativity and Rosen's bimetric theory of gravity were used to analyze various bimetric cosmologies. The following results were found: (1) Physically plausible model universes which have a flat static background metric, have a Robertson-Walker fundamental metric, and which allow co-moving coordinates do not exist in bimetric cosmology. (2) It is difficult to use the Robertson-Walker metric for both the background metric (γ_μν) and the fundamental metric tensor of Riemannian geometry (g_μν) and require that g_μν and γ_μν have different time dependences. (3) A consistency relation for using co-moving coordinates in bimetric cosmology was derived. (4) Certain spatially flat bimetric cosmologies of Babala were tested for the presence of particle horizons. (5) An analytic solution for Rosen's k = +1 model was found. (6) Rosen's singularity-free k = +1 model arises from what appears to be an arbitrary choice for the time-dependent part of γ_μν.

  17. Automation of Ocean Product Metrics

    DTIC Science & Technology

    2008-09-30

    Presented in: Ocean Sciences 2008 Conf., 5 Mar 2008. Shriver, J., J. D. Dykes, and J. Fabre: Automation of Operational Ocean Product Metrics. Presented in 2008 EGU General Assembly, 14 April 2008. ...processing (multiple data cuts per day) and multiple-nested models. Routines for generating automated evaluations of model forecast statistics will be...developed and pre-existing tools will be collected to create a generalized tool set, which will include user-interface tools to the metrics data

  18. Ability of Landsat-8 OLI Derived Texture Metrics in Estimating Aboveground Carbon Stocks of Coppice Oak Forests

    NASA Astrophysics Data System (ADS)

    Safari, A.; Sohrabi, H.

    2016-06-01

    The role of forests as a reservoir for carbon has prompted the need for timely and reliable estimation of aboveground carbon stocks. Since measurement of aboveground carbon stocks of forests is a destructive, costly and time-consuming activity, aerial and satellite remote sensing techniques have attracted much attention in this field. Although using aerial data for predicting aboveground carbon stocks has proved to be a highly accurate method, there are challenges related to high acquisition costs, small area coverage, and limited availability of these data. These challenges are more critical for non-commercial forests located in low-income countries. The Landsat program provides repetitive acquisition of high-resolution multispectral data, which are freely available. The aim of this study was to assess the potential of multispectral Landsat 8 Operational Land Imager (OLI) derived texture metrics in quantifying aboveground carbon stocks of coppice Oak forests in the Zagros Mountains, Iran. We used four different window sizes (3×3, 5×5, 7×7, and 9×9), and four different offsets ([0,1], [1,1], [1,0], and [1,-1]) to derive nine texture metrics (angular second moment, contrast, correlation, dissimilarity, entropy, homogeneity, inverse difference, mean, and variance) from four bands (blue, green, red, and infrared). In total, 124 sample plots in two different forests were measured and carbon was calculated using species-specific allometric models. Stepwise regression analysis was applied to estimate biomass from the derived metrics. Results showed that, in general, larger window sizes for deriving texture metrics resulted in models with better fit. In addition, the correlation of the spectral bands for deriving texture metrics in regression models was ranked as b4>b3>b2>b5. The best offset was [1,-1]. Amongst the different metrics, mean and entropy were entered in most of the regression models. Overall, different models based on derived texture metrics were able to explain about half of the variation in aboveground carbon stocks. These results demonstrated that Landsat 8 derived texture metrics can be applied for mapping aboveground carbon stocks of coppice Oak forests over large areas.
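
    The texture metrics above are standard grey-level co-occurrence matrix (GLCM) statistics and can be derived with scikit-image, as in the sketch below. The 9×9 stand-in window, the use of four angles at distance 1 in place of the paper's offsets, and the manual entropy calculation are simplifications for illustration.

        # Sketch: GLCM texture metrics for one image window of a single Landsat 8 band.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        band = np.random.randint(0, 256, size=(9, 9), dtype=np.uint8)   # stand-in 9x9 window

        glcm = graycomatrix(band, distances=[1], angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                            levels=256, symmetric=True, normed=True)

        for prop in ("contrast", "correlation", "dissimilarity", "homogeneity", "ASM"):
            print(prop, graycoprops(glcm, prop).mean())

        p = glcm.astype(float)                                          # entropy computed by hand
        print("entropy", -np.sum(p * np.log2(p, where=p > 0, out=np.zeros_like(p))))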

  19. Resistance and Security Index of Networks: Structural Information Perspective of Network Security

    NASA Astrophysics Data System (ADS)

    Li, Angsheng; Hu, Qifu; Liu, Jun; Pan, Yicheng

    2016-06-01

    Recently, Li and Pan defined the metric of the K-dimensional structure entropy of a structured noisy dataset G to be the information that controls the formation of the K-dimensional structure of G that is evolved by the rules, order and laws of G, excluding the random variations that occur in G. Here, we propose the notion of resistance of networks based on the one- and two-dimensional structural information of graphs. Given a graph G, we define the resistance of G as the greatest overall number of bits required to determine the code of the module that is accessible via random walks with stationary distribution in G, from which the random walks cannot escape. We show that the resistance of networks follows the resistance law of networks, that is, for a network G, the resistance of G is determined by the one- and two-dimensional structure entropies of G. Based on the resistance law, we define the security index of a network G to be the normalised resistance of G. We show that the resistance and security index are both well-defined measures for the security of the networks.

  20. Resistance and Security Index of Networks: Structural Information Perspective of Network Security.

    PubMed

    Li, Angsheng; Hu, Qifu; Liu, Jun; Pan, Yicheng

    2016-06-03

    Recently, Li and Pan defined the metric of the K-dimensional structure entropy of a structured noisy dataset G to be the information that controls the formation of the K-dimensional structure of G that is evolved by the rules, order and laws of G, excluding the random variations that occur in G. Here, we propose the notion of resistance of networks based on the one- and two-dimensional structural information of graphs. Given a graph G, we define the resistance of G as the greatest overall number of bits required to determine the code of the module that is accessible via random walks with stationary distribution in G, from which the random walks cannot escape. We show that the resistance of networks follows the resistance law of networks, that is, for a network G, the resistance of G is determined by the one- and two-dimensional structure entropies of G. Based on the resistance law, we define the security index of a network G to be the normalised resistance of G. We show that the resistance and security index are both well-defined measures for the security of the networks.

  1. Resistance and Security Index of Networks: Structural Information Perspective of Network Security

    PubMed Central

    Li, Angsheng; Hu, Qifu; Liu, Jun; Pan, Yicheng

    2016-01-01

    Recently, Li and Pan defined the metric of the K-dimensional structure entropy of a structured noisy dataset G to be the information that controls the formation of the K-dimensional structure of G that is evolved by the rules, order and laws of G, excluding the random variations that occur in G. Here, we propose the notion of resistance of networks based on the one- and two-dimensional structural information of graphs. Given a graph G, we define the resistance of G as the greatest overall number of bits required to determine the code of the module that is accessible via random walks with stationary distribution in G, from which the random walks cannot escape. We show that the resistance of networks follows the resistance law of networks, that is, for a network G, the resistance of G is determined by the one- and two-dimensional structure entropies of G. Based on the resistance law, we define the security index of a network G to be the normalised resistance of G. We show that the resistance and security index are both well-defined measures for the security of the networks. PMID:27255783
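
    The quantities in this record lend themselves to a small computational sketch. The helper below computes the one-dimensional structure entropy and the two-dimensional structure entropy for a given community partition (an upper bound on the true two-dimensional entropy, which minimises over partitions), and then applies the resistance law assuming it takes the form R(G) = H1(G) - H2(G) with the security index as R(G)/H1(G); both formulas are assumptions here and should be checked against the paper before serious use.

        # Exploratory sketch only; the resistance and security index formulas are assumptions.
        import math
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        def h1(G):
            """One-dimensional structure entropy: entropy of the stationary degree distribution."""
            two_m = 2 * G.number_of_edges()
            return -sum((d / two_m) * math.log2(d / two_m) for _, d in G.degree() if d > 0)

        def h2(G, partition):
            """Two-dimensional structure entropy of G for a given node partition (list of sets)."""
            two_m = 2 * G.number_of_edges()
            total = 0.0
            for module in partition:
                vol = sum(d for _, d in G.degree(module))
                cut = sum(1 for u, v in G.edges() if (u in module) != (v in module))
                for _, d in G.degree(module):
                    total -= (d / two_m) * math.log2(d / vol)
                total -= (cut / two_m) * math.log2(vol / two_m)
            return total

        G = nx.karate_club_graph()
        parts = [set(c) for c in greedy_modularity_communities(G)]
        resistance = h1(G) - h2(G, parts)                    # assumed form of the resistance law
        print("resistance:", resistance, "security index:", resistance / h1(G))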

  2. Detecting eavesdropping activity in fiber optic networks

    NASA Astrophysics Data System (ADS)

    MacDonald, Gregory G.

    The secure transmission of data is critical to governments, military organizations, financial institutions, health care providers and other enterprises. The primary method of securing in-transit data is through data encryption. A number of encryption methods exist but the fundamental approach is to assume an eavesdropper has access to the encrypted message but does not have the computing capability to decrypt the message in a timely fashion. Essentially, the strength of security depends on the complexity of the encryption method and the resources available to the eavesdropper. The development of future technologies, most notably quantum computers and quantum computing, is often cited as a direct threat to traditional encryption schemes. It seems reasonable that additional effort should be placed on prohibiting the eavesdropper from coming into possession of the encrypted message in the first place. One strategy for denying possession of the encrypted message is to secure the physical layer of the communications path. Because the majority of transmitted information is over fiber-optic networks, it seems appropriate to consider ways of enhancing the integrity and security of the fiber-based physical layer. The purpose of this research is to investigate the properties of light, as they are manifested in single mode fiber, as a means of ensuring the integrity and security of the physical layer of a fiber-optic based communication link. Specifically, the approach focuses on the behavior of polarization in single mode fiber, as it is shown to be especially sensitive to fiber geometry. Fiber geometry is necessarily modified during the placement of optical taps. The problem of detecting activity associated with the placement of an optical tap is herein approached as a supervised machine learning anomaly identification task. The inputs include raw polarization measurements along with additional features derived from various visualizations of the raw data (the inputs are collectively referred to as “features”). Extreme Value Theory (EVT) is proposed as a means of characterizing normal polarization fluctuations in optical fiber. New uses (as anomaly detectors) are proposed for some long-established statistics (Ripley’s K function, its variant the L function, and the Hopkins statistic). These metrics are shown to have good discriminating qualities when identifying anomalous polarization measurements. The metrics perform well enough that only simple algorithms are necessary for identifying modifications to fiber geometry.
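
    Of the statistics mentioned above, the Hopkins statistic is simple enough to sketch directly. The two-dimensional synthetic sample below stands in for the polarization-derived feature vectors; values near 0.5 indicate spatial randomness and values approaching 1 indicate clustering, which in this setting would flag anomalous structure.

        # Illustrative Hopkins statistic; the stand-in data are not real polarization measurements.
        import numpy as np
        from scipy.spatial import cKDTree

        def hopkins(X, m=None, seed=0):
            rng = np.random.default_rng(seed)
            n, d = X.shape
            m = m or max(1, n // 10)
            tree = cKDTree(X)

            uniform = rng.uniform(X.min(axis=0), X.max(axis=0), size=(m, d))
            u = tree.query(uniform, k=1)[0]              # uniform points -> nearest datum

            sample = X[rng.choice(n, size=m, replace=False)]
            w = tree.query(sample, k=2)[0][:, 1]         # data points -> nearest *other* datum

            return u.sum() / (u.sum() + w.sum())

        X = np.random.default_rng(1).normal(size=(500, 2))
        print("Hopkins statistic:", hopkins(X))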

  3. Empirical Analysis and Automated Classification of Security Bug Reports

    NASA Technical Reports Server (NTRS)

    Tyo, Jacob P.

    2016-01-01

    With the ever-expanding amount of sensitive data being placed into computer systems, the need for effective cybersecurity is of utmost importance. However, there is a shortage of detailed empirical studies of security vulnerabilities from which cybersecurity metrics and best practices could be determined. This thesis has two main research goals: (1) to explore the distribution and characteristics of security vulnerabilities based on the information provided in bug tracking systems and (2) to develop data analytics approaches for automatic classification of bug reports as security or non-security related. This work is based on using three NASA datasets as case studies. The empirical analysis showed that the majority of software vulnerabilities belong only to a small number of types. Addressing these types of vulnerabilities will consequently lead to cost efficient improvement of software security. Since this analysis requires labeling of each bug report in the bug tracking system, we explored using machine learning to automate the classification of each bug report as security related or non-security related (two-class classification), as well as the classification of each security related bug report into a specific security type (multiclass classification). In addition to using supervised machine learning algorithms, a novel unsupervised machine learning approach is proposed. An accuracy of 92%, recall of 96%, precision of 92%, probability of false alarm of 4%, F-Score of 81% and G-Score of 90% were the best results achieved during two-class classification. Furthermore, an accuracy of 80%, recall of 80%, precision of 94%, and F-score of 85% were the best results achieved during multiclass classification.
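
    A generic version of the two-class classification step can be sketched with scikit-learn. The TF-IDF plus Naive Bayes pipeline and the toy report texts below are illustrative stand-ins, not the thesis's actual feature extraction, algorithms, or NASA data.

        # Hypothetical sketch: classify bug report text as security related (1) or not (0).
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        reports = ["buffer overflow in telemetry parser", "typo in help text",
                   "privilege escalation via config file", "button misaligned after resize"]
        labels = [1, 0, 1, 0]

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
        clf.fit(reports, labels)
        print(clf.predict(["possible heap overflow when parsing uplink packet"]))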

  4. Interactive Mapping of Inundation Metrics Using Cloud Computing for Improved Floodplain Conservation and Management

    NASA Astrophysics Data System (ADS)

    Bulliner, E. A., IV; Lindner, G. A.; Bouska, K.; Paukert, C.; Jacobson, R. B.

    2017-12-01

    Within large-river ecosystems, floodplains serve a variety of important ecological functions. A recent survey of 80 managers of floodplain conservation lands along the Upper and Middle Mississippi and Lower Missouri Rivers in the central United States found that the most critical information needed to improve floodplain management centered on metrics for characterizing depth, extent, frequency, duration, and timing of inundation. These metrics can be delivered to managers efficiently through cloud-based interactive maps. To calculate these metrics, we interpolated an existing one-dimensional hydraulic model for the Lower Missouri River, which simulated water surface elevations at cross sections spaced (<1 km) to sufficiently characterize water surface profiles along an approximately 800 km stretch upstream from the confluence with the Mississippi River over an 80-year record at a daily time step. To translate these water surface elevations to inundation depths, we subtracted a merged terrain model consisting of floodplain LIDAR and bathymetric surveys of the river channel. This approach resulted in a 29000+ day time series of inundation depths across the floodplain using grid cells with 30 m spatial resolution. Initially, we used these data on a local workstation to calculate a suite of nine spatially distributed inundation metrics for the entire model domain. These metrics are calculated on a per pixel basis and encompass a variety of temporal criteria generally relevant to flora and fauna of interest to floodplain managers, including, for example, the average number of days inundated per year within a growing season. Using a local workstation, calculating these metrics for the entire model domain requires several hours. However, for the needs of individual floodplain managers working at site scales, these metrics may be too general and inflexible. Instead of creating a priori a suite of inundation metrics able to satisfy all user needs, we present the usage of Google's cloud-based Earth Engine API to allow users to define and query their own inundation metrics from our dataset and produce maps nearly instantaneously. This approach allows users to select the time periods and inundation depths germane to managing local species, potentially facilitating conservation of floodplain ecosystems.
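
    One of the per-pixel metrics described above (average days inundated per year within a growing season) can be written as a few array operations. The array sizes, the stand-in depth values, and the April-September growing season are assumptions for illustration; the real grids are far larger and would be processed in chunks or in Earth Engine.

        # Conceptual sketch of a per-pixel inundation metric from a daily depth stack.
        import numpy as np

        days, rows, cols = 3650, 50, 50                                      # ~10 years of daily grids (toy size)
        rng = np.random.default_rng(0)
        depth = np.maximum(rng.normal(-0.2, 0.3, (days, rows, cols)), 0.0)   # stand-in depths (m), 0 = dry
        doy = np.tile(np.arange(1, 366), days // 365 + 1)[:days]             # day-of-year per time step

        growing = (doy >= 91) & (doy <= 273)                                 # assumed Apr-Sep growing season
        inundated = depth[growing] > 0.0
        avg_days_per_year = inundated.sum(axis=0) / (days / 365.0)           # one value per pixel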

  5. Correlation between centrality metrics and their application to the opinion model

    NASA Astrophysics Data System (ADS)

    Li, Cong; Li, Qian; Van Mieghem, Piet; Stanley, H. Eugene; Wang, Huijuan

    2015-03-01

    In recent decades, a number of centrality metrics describing network properties of nodes have been proposed to rank the importance of nodes. In order to understand the correlations between centrality metrics and to approximate a high-complexity centrality metric by a strongly correlated low-complexity metric, we first study the correlation between centrality metrics in terms of their Pearson correlation coefficient and their similarity in ranking of nodes. In addition to considering the widely used centrality metrics, we introduce a new centrality measure, the degree mass. The mth-order degree mass of a node is the sum of the weighted degree of the node and its neighbors no further than m hops away. We find that the betweenness, the closeness, and the components of the principal eigenvector of the adjacency matrix are strongly correlated with the degree, the 1st-order degree mass and the 2nd-order degree mass, respectively, in both network models and real-world networks. We then theoretically prove that the Pearson correlation coefficient between the principal eigenvector and the 2nd-order degree mass is larger than that between the principal eigenvector and a lower order degree mass. Finally, we investigate the effect of the inflexible contrarians selected based on different centrality metrics in helping one opinion to compete with another in the inflexible contrarian opinion (ICO) model. Interestingly, we find that selecting the inflexible contrarians based on the leverage, the betweenness, or the degree is more effective in opinion-competition than using other centrality metrics in all types of networks. This observation is supported by our previous observations, i.e., that there is a strong linear correlation between the degree and the betweenness, as well as a high centrality similarity between the leverage and the degree.
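
    The degree mass defined above is straightforward to compute. The sketch below evaluates the 1st-order degree mass on a random graph and checks its correlation with closeness centrality, the pairing reported in the abstract; the graph and its parameters are arbitrary stand-ins.

        # Sketch: 1st-order degree mass (node degree plus degrees of its 1-hop neighbours).
        import networkx as nx
        from scipy.stats import pearsonr

        G = nx.erdos_renyi_graph(200, 0.05, seed=1)
        deg = dict(G.degree())

        degree_mass_1 = {v: deg[v] + sum(deg[u] for u in G.neighbors(v)) for v in G}
        closeness = nx.closeness_centrality(G)

        nodes = list(G)
        r, _ = pearsonr([degree_mass_1[v] for v in nodes], [closeness[v] for v in nodes])
        print(f"Pearson correlation (1st-order degree mass vs. closeness): {r:.2f}")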

  6. Retinal Vascular and Oxygen Temporal Dynamic Responses to Light Flicker in Humans

    PubMed Central

    Felder, Anthony E.; Wanek, Justin; Blair, Norman P.

    2017-01-01

    Purpose: To mathematically model the temporal dynamic responses of retinal vessel diameter (D), oxygen saturation (SO2), and inner retinal oxygen extraction fraction (OEF) to light flicker and to describe their responses to its cessation in humans. Methods: In 16 healthy subjects (age: 60 ± 12 years), retinal oximetry was performed before, during, and after light flicker stimulation. At each time point, five metrics were measured: retinal arterial and venous D (DA, DV) and SO2 (SO2A, SO2V), and OEF. Intra- and intersubject variability of metrics was assessed by coefficient of variation of measurements before flicker within and among subjects, respectively. Metrics during flicker were modeled by exponential functions to determine the flicker-induced steady state metric values and the time constants of changes. Metrics after the cessation of flicker were compared to those before flicker. Results: Intra- and intersubject variability for all metrics were less than 6% and 16%, respectively. At the flicker-induced steady state, DA and DV increased by 5%, SO2V increased by 7%, and OEF decreased by 13%. The time constants of DA and DV (14, 15 seconds) were twofold smaller than those of SO2V and OEF (39, 34 seconds). Within 26 seconds after the cessation of flicker, all metrics were not significantly different from before flicker values (P ≥ 0.07). Conclusions: Mathematical modeling revealed considerable differences in the time courses of changes among metrics during flicker, indicating flicker duration should be considered separately for each metric. Future application of this method may be useful to elucidate alterations in temporal dynamic responses to light flicker due to retinal diseases. PMID:29098297
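
    The exponential modelling of the flicker time courses can be illustrated with a simple curve fit. The functional form, the synthetic diameter-like data, and the starting values below are assumptions for demonstration only, not the study's fitting procedure or patient data.

        # Sketch: fit an exponential approach-to-steady-state model to one metric's time course.
        import numpy as np
        from scipy.optimize import curve_fit

        def flicker_response(t, baseline, delta, tau):
            return baseline + delta * (1.0 - np.exp(-t / tau))

        t = np.arange(0, 120, 5.0)                                          # seconds after flicker onset
        y = 120 + 6 * (1 - np.exp(-t / 15)) + np.random.default_rng(2).normal(0, 0.5, t.size)

        (baseline, delta, tau), _ = curve_fit(flicker_response, t, y, p0=(120, 5, 10))
        print(f"steady-state change = {delta:.1f}, time constant = {tau:.1f} s")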

  7. Bayesian model evidence as a model evaluation metric

    NASA Astrophysics Data System (ADS)

    Guthke, Anneli; Höge, Marvin; Nowak, Wolfgang

    2017-04-01

    When building environmental systems models, we are typically confronted with the questions of how to choose an appropriate model (i.e., which processes to include or neglect) and how to measure its quality. Various metrics have been proposed that shall guide the modeller towards a most robust and realistic representation of the system under study. Criteria for evaluation often address aspects of accuracy (absence of bias) or of precision (absence of unnecessary variance) and need to be combined in a meaningful way in order to address the inherent bias-variance dilemma. We suggest using Bayesian model evidence (BME) as a model evaluation metric that implicitly performs a tradeoff between bias and variance. BME is typically associated with model weights in the context of Bayesian model averaging (BMA). However, it can also be seen as a model evaluation metric in a single-model context or in model comparison. It combines a measure for goodness of fit with a penalty for unjustifiable complexity. Unjustifiable refers to the fact that the appropriate level of model complexity is limited by the amount of information available for calibration. Derived in a Bayesian context, BME naturally accounts for measurement errors in the calibration data as well as for input and parameter uncertainty. BME is therefore perfectly suitable to assess model quality under uncertainty. We will explain in detail and with schematic illustrations what BME measures, i.e. how complexity is defined in the Bayesian setting and how this complexity is balanced with goodness of fit. We will further discuss how BME compares to other model evaluation metrics that address accuracy and precision such as the predictive logscore or other model selection criteria such as the AIC, BIC or KIC. Although computationally more expensive than other metrics or criteria, BME represents an appealing alternative because it provides a global measure of model quality. Even if not applicable to each and every case, we aim at stimulating discussion about how to judge the quality of hydrological models in the presence of uncertainty in general by dissecting the mechanism behind BME.
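
    The idea that BME is a prior-weighted average likelihood can be made concrete with a brute-force Monte Carlo estimate. The one-parameter constant "model", the Gaussian error assumption, and the uniform prior below are deliberately trivial stand-ins used only to show the mechanics.

        # Schematic BME estimate: average the likelihood over samples drawn from the prior.
        import numpy as np

        rng = np.random.default_rng(3)
        obs = 2.0 + rng.normal(0, 0.3, size=20)                 # synthetic calibration data
        sigma = 0.3                                             # assumed measurement error

        def log_likelihood(theta):
            resid = obs - theta                                 # trivial model: predict a constant
            return -0.5 * np.sum((resid / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))

        prior_samples = rng.uniform(0.0, 5.0, size=20_000)      # samples from the prior
        bme = np.mean(np.exp([log_likelihood(t) for t in prior_samples]))
        print("Monte Carlo estimate of Bayesian model evidence:", bme)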

  8. Performance of the METRIC model in estimating evapotranspiration fluxes over an irrigated field in Saudi Arabia using Landsat-8 images

    NASA Astrophysics Data System (ADS)

    Madugundu, Rangaswamy; Al-Gaadi, Khalid A.; Tola, ElKamil; Hassaballa, Abdalhaleem A.; Patil, Virupakshagouda C.

    2017-12-01

    Accurate estimation of evapotranspiration (ET) is essential for hydrological modeling and efficient crop water management in hyper-arid climates. In this study, we applied the METRIC algorithm on Landsat-8 images, acquired from June to October 2013, for the mapping of ET of a 50 ha center-pivot irrigated alfalfa field in the eastern region of Saudi Arabia. The METRIC-estimated energy balance components and ET were evaluated against the data provided by an eddy covariance (EC) flux tower installed in the field. Results indicated that the METRIC algorithm provided accurate ET estimates over the study area, with RMSE values of 0.13 and 4.15 mm d-1. The METRIC algorithm was observed to perform better in full canopy conditions compared to partial canopy conditions. On average, the METRIC algorithm overestimated the hourly ET by 6.6 % in comparison to the EC measurements; however, the daily ET was underestimated by 4.2 %.

  9. An evaluation of the regional supply of biomass at three midwestern sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    English, B.C.; Dillivan, K.D.; Ojo, M.A.

    1993-12-31

    Research has been conducted on both the agronomy and the conversion of biomass. However, few studies have been initiated that combine the knowledge of growing biomass with site specific resource availability information. An economic appraisal of how much biomass might be grown in a specific area for a given price has only just been initiated. This paper examines the economics of introducing biomass production to three midwest representative areas centered on the following counties: Orange County, Indiana; Olmsted County, Minnesota; and Cass County, North Dakota. Using a regional linear programming model, estimates of economic feasibility as well as environmental impacts are made. At a price of $53 per metric ton the biomass supplied to the plant gate is equal to 183,251 metric tons. At $62 per metric ton the biomass supply has increased to almost 1 million metric tons. The model predicts a maximum price of $88 per metric ton and at this price, 2,748,476 metric tons of biomass are produced.

  10. Automated Assessment of Visual Quality of Digital Video

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)

    1997-01-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images[1-4]. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.

  11. Evaluating true BCI communication rate through mutual information and language models.

    PubMed

    Speier, William; Arnold, Corey; Pouratian, Nader

    2013-01-01

    Brain-computer interface (BCI) systems are a promising means for restoring communication to patients suffering from "locked-in" syndrome. Research to improve system performance primarily focuses on means to overcome the low signal-to-noise ratio of electroencephalographic (EEG) recordings. However, the literature and methods are difficult to compare due to the array of evaluation metrics and assumptions underlying them, including that: 1) all characters are equally probable, 2) character selection is memoryless, and 3) errors occur completely at random. The standardization of evaluation metrics that more accurately reflect the amount of information contained in BCI language output is critical to make progress. We present a mutual information-based metric that incorporates prior information and a model of systematic errors. The parameters of a system used in one study were re-optimized, showing that the metric used in optimization significantly affects the parameter values chosen and the resulting system performance. The results of 11 BCI communication studies were then evaluated using different metrics, including those previously used in BCI literature and the newly advocated metric. Six studies' results varied based on the metric used for evaluation and the proposed metric produced results that differed from those originally published in two of the studies. Standardizing metrics to accurately reflect the rate of information transmission is critical to properly evaluate and compare BCI communication systems and advance the field in an unbiased manner.
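
    The contrast the authors draw can be sketched numerically: the classic Wolpaw bit rate assumes equiprobable, memoryless selections with uniformly distributed errors, whereas a mutual-information estimate from an intended-versus-selected confusion matrix drops those assumptions. The 3-symbol confusion counts below are made up for illustration and are not from any of the cited studies.

        # Sketch: Wolpaw bits per selection vs. mutual information from a confusion matrix.
        import numpy as np

        def wolpaw_bits(n, p):
            return np.log2(n) + p * np.log2(p) + (1 - p) * np.log2((1 - p) / (n - 1))

        def mutual_information(confusion):
            joint = confusion / confusion.sum()
            px = joint.sum(axis=1, keepdims=True)
            py = joint.sum(axis=0, keepdims=True)
            nz = joint > 0
            return np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz]))

        confusion = np.array([[90, 5, 5], [10, 80, 10], [5, 15, 80]])   # toy intended-vs-selected counts
        print(wolpaw_bits(3, 250 / 300), mutual_information(confusion))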

  12. Tools for quantifying isotopic niche space and dietary variation at the individual and population level.

    USGS Publications Warehouse

    Newsome, Seth D.; Yeakel, Justin D.; Wheatley, Patrick V.; Tinker, M. Tim

    2012-01-01

    Ecologists are increasingly using stable isotope analysis to inform questions about variation in resource and habitat use from the individual to community level. In this study we investigate data sets from 2 California sea otter (Enhydra lutris nereis) populations to illustrate the advantages and potential pitfalls of applying various statistical and quantitative approaches to isotopic data. We have subdivided these tools, or metrics, into 3 categories: IsoSpace metrics, stable isotope mixing models, and DietSpace metrics. IsoSpace metrics are used to quantify the spatial attributes of isotopic data that are typically presented in bivariate (e.g., δ13C versus δ15N) 2-dimensional space. We review IsoSpace metrics currently in use and present a technique by which uncertainty can be included to calculate the convex hull area of consumers or prey, or both. We then apply a Bayesian-based mixing model to quantify the proportion of potential dietary sources to the diet of each sea otter population and compare this to observational foraging data. Finally, we assess individual dietary specialization by comparing a previously published technique, variance components analysis, to 2 novel DietSpace metrics that are based on mixing model output. As the use of stable isotope analysis in ecology continues to grow, the field will need a set of quantitative tools for assessing isotopic variance at the individual to community level. Along with recent advances in Bayesian-based mixing models, we hope that the IsoSpace and DietSpace metrics described here will provide another set of interpretive tools for ecologists.
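
    The convex hull area with uncertainty mentioned above can be illustrated directly. The isotope values, the analytical standard deviation, and the resampling scheme below are stand-ins meant only to show an IsoSpace-style calculation, not the authors' exact procedure or sea otter data.

        # Sketch: convex hull area of bivariate isotope data with simple uncertainty resampling.
        import numpy as np
        from scipy.spatial import ConvexHull

        rng = np.random.default_rng(4)
        d13c = rng.normal(-14, 1.0, 30)                  # stand-in carbon isotope values
        d15n = rng.normal(12, 0.8, 30)                   # stand-in nitrogen isotope values
        sd = 0.2                                         # assumed analytical standard deviation

        areas = []
        for _ in range(1000):
            pts = np.column_stack([d13c + rng.normal(0, sd, 30), d15n + rng.normal(0, sd, 30)])
            areas.append(ConvexHull(pts).volume)         # for 2-D points, .volume is the hull area
        print("hull area:", np.mean(areas), "95% interval:", np.percentile(areas, [2.5, 97.5]))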

  13. A New Metric for Quantifying Performance Impairment on the Psychomotor Vigilance Test

    DTIC Science & Technology

    2012-01-01

    used the coefficient of determination (R2) and the P-values based on Bartels' test of randomness of the residual error to quantify the goodness-of-fit ...we used the goodness-of-fit between each metric and the corresponding individualized two-process model output (Rajaraman et al., 2008, 2009) to assess...individualized two-process model fits for each of the 12 subjects using the five metrics. The P-values are for Bartels'

  14. Economic Incentives for Cybersecurity: Using Economics to Design Technologies Ready for Deployment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vishik, Claire; Sheldon, Frederick T; Ott, David

    Cybersecurity practice lags behind cyber technology achievements. Solutions designed to address many problems may and do exist but frequently cannot be broadly deployed due to economic constraints. Whereas security economics focuses on the cost/benefit analysis and supply/demand, we believe that more sophisticated theoretical approaches, such as economic modeling, which are rarely utilized, would deliver greater societal benefits. Unfortunately, today technologists pursuing interesting and elegant solutions have little knowledge of the feasibility of broad deployment of their results and cannot anticipate the influences of other technologies, existing infrastructure, and technology evolution, nor bring the solutions lifecycle into the equation. Additionally, potentially viable solutions are not adopted because the risk perceptions by potential providers and users far outweigh the economic incentives to support introduction/adoption of new best practices and technologies that are not well enough defined. In some cases, there is no alignment with predominant and future business models as well as regulatory and policy requirements. This paper provides an overview of the economics of security, reviewing work that helped to define economic models for the Internet economy from the 1990s. We bring forward examples of potential use of theoretical economics in defining metrics for emerging technology areas, positioning infrastructure investment, and building real-time response capability as part of software development. These diverse examples help us understand the gaps in current research. Filling these gaps will be instrumental for defining viable economic incentives, economic policies, and regulations, as well as early-stage technology development approaches, that can speed up commercialization and deployment of new technologies in cybersecurity.

  15. Enhancing the Simplified Surface Energy Balance (SSEB) Approach for Estimating Landscape ET: Validation with the METRIC model

    USGS Publications Warehouse

    Senay, Gabriel B.; Budde, Michael E.; Verdin, James P.

    2011-01-01

    Evapotranspiration (ET) can be derived from satellite data using surface energy balance principles. METRIC (Mapping EvapoTranspiration at high Resolution with Internalized Calibration) is one of the most widely used models available in the literature to estimate ET from satellite imagery. The Simplified Surface Energy Balance (SSEB) model is much easier and less expensive to implement. The main purpose of this research was to present an enhanced version of the Simplified Surface Energy Balance (SSEB) model and to evaluate its performance using the established METRIC model. In this study, SSEB and METRIC ET fractions were compared using 7 Landsat images acquired for south central Idaho during the 2003 growing season. The enhanced SSEB model compared well with the METRIC model output exhibiting an r2 improvement from 0.83 to 0.90 in less complex topography (elevation less than 2000 m) and with an improvement of r2 from 0.27 to 0.38 in more complex (mountain) areas with elevation greater than 2000 m. Independent evaluation showed that both models exhibited higher variation in complex topographic regions, although more with SSEB than with METRIC. The higher ET fraction variation in the complex mountainous regions highlighted the difficulty of capturing the radiation and heat transfer physics on steep slopes having variable aspect with the simple index model, and the need to conduct more research. However, the temporal consistency of the results suggests that the SSEB model can be used on a wide range of elevation (more successfully up to 2000 m) to detect anomalies in space and time for water resources management and monitoring such as for drought early warning systems in data scarce regions. SSEB has a potential for operational agro-hydrologic applications to estimate ET with inputs of surface temperature, NDVI, DEM and reference ET.

  16. Enhancing the Simplified Surface Energy Balance (SSEB) approach for estimating landscape ET: Validation with the METRIC model

    USGS Publications Warehouse

    Senay, G.B.; Budde, M.E.; Verdin, J.P.

    2011-01-01

    Evapotranspiration (ET) can be derived from satellite data using surface energy balance principles. METRIC (Mapping EvapoTranspiration at high Resolution with Internalized Calibration) is one of the most widely used models available in the literature to estimate ET from satellite imagery. The Simplified Surface Energy Balance (SSEB) model is much easier and less expensive to implement. The main purpose of this research was to present an enhanced version of the Simplified Surface Energy Balance (SSEB) model and to evaluate its performance using the established METRIC model. In this study, SSEB and METRIC ET fractions were compared using 7 Landsat images acquired for south central Idaho during the 2003 growing season. The enhanced SSEB model compared well with the METRIC model output exhibiting an r2 improvement from 0.83 to 0.90 in less complex topography (elevation less than 2000m) and with an improvement of r2 from 0.27 to 0.38 in more complex (mountain) areas with elevation greater than 2000m. Independent evaluation showed that both models exhibited higher variation in complex topographic regions, although more with SSEB than with METRIC. The higher ET fraction variation in the complex mountainous regions highlighted the difficulty of capturing the radiation and heat transfer physics on steep slopes having variable aspect with the simple index model, and the need to conduct more research. However, the temporal consistency of the results suggests that the SSEB model can be used on a wide range of elevation (more successfully up to 2000m) to detect anomalies in space and time for water resources management and monitoring such as for drought early warning systems in data scarce regions. SSEB has a potential for operational agro-hydrologic applications to estimate ET with inputs of surface temperature, NDVI, DEM and reference ET. © 2010.

  17. Counter-terrorism threat prediction architecture

    NASA Astrophysics Data System (ADS)

    Lehman, Lynn A.; Krause, Lee S.

    2004-09-01

    This paper will evaluate the feasibility of constructing a system to support intelligence analysts engaged in counter-terrorism. It will discuss the use of emerging techniques to evaluate a large-scale threat data repository (or Infosphere) and comparing analyst-developed models to identify and discover potential threat-related activity with an uncertainty metric used to evaluate the threat. This system will also employ the use of psychological (or intent) modeling to incorporate combatant (i.e. terrorist) beliefs and intent. The paper will explore the feasibility of constructing a hetero-hierarchical (a hierarchy of more than one kind or type characterized by loose connection/feedback among elements of the hierarchy) agent based framework or "family of agents" to support "evidence retrieval" defined as combing, or searching the threat data repository and returning information with an uncertainty metric. The counter-terrorism threat prediction architecture will be guided by a series of models, constructed to represent threat operational objectives, potential targets, or terrorist objectives. The approach would compare model representations against information retrieved by the agent family to isolate or identify patterns that match within reasonable measures of proximity. The central areas of discussion will be the construction of an agent framework to search the available threat related information repository, evaluation of results against models that will represent the cultural foundations, mindset, sociology and emotional drive of typical threat combatants (i.e. the mind and objectives of a terrorist), and the development of evaluation techniques to compare result sets with the models representing threat behavior and threat targets. The applicability of concepts surrounding Modeling Field Theory (MFT) will be discussed as the basis of this research into development of proximity measures between the models and result sets and to provide feedback in support of model adaptation (learning). The increasingly complex demands facing analysts evaluating activity threatening to the security of the United States make the family-of-agents approach to data collection (fusion) a promising area. This paper will discuss a system to support the collection and evaluation of potential threat activity as well as an approach for presentation of the information.

  18. Healthcare4VideoStorm: Making Smart Decisions Based on Storm Metrics.

    PubMed

    Zhang, Weishan; Duan, Pengcheng; Chen, Xiufeng; Lu, Qinghua

    2016-04-23

    Storm-based stream processing is widely used for real-time large-scale distributed processing. Knowing the run-time status and ensuring performance is critical to providing expected dependability for some applications, e.g., continuous video processing for security surveillance. Existing scheduling strategies are too coarse-grained to achieve good performance, and they mainly consider network resources while neglecting computing resources during scheduling. In this paper, we propose Healthcare4Storm, a framework that finds Storm insights based on Storm metrics to gain knowledge from the health status of an application, finally ending up with smart scheduling decisions. It takes into account both network and computing resources and conducts scheduling at a fine-grained level using tuples instead of topologies. The comprehensive evaluation shows that the proposed framework has good performance and can improve the dependability of the Storm-based applications.

  19. Enhancing security of fingerprints through contextual biometric watermarking.

    PubMed

    Noore, Afzel; Singh, Richa; Vatsa, Mayank; Houck, Max M

    2007-07-04

    This paper presents a novel digital watermarking technique using face and demographic text data as multiple watermarks for verifying the chain of custody and protecting the integrity of a fingerprint image. The watermarks are embedded in selected texture regions of a fingerprint image using discrete wavelet transform. Experimental results show that modifications in these locations are visually imperceptible and maintain the minutiae details. The integrity of the fingerprint image is verified through the high matching scores obtained from an automatic fingerprint identification system. There is also a high degree of visual correlation between the embedded images and the images extracted from the watermarked fingerprint. The degree of similarity is computed using pixel-based metrics and human visual system metrics. The results also show that the proposed watermarked fingerprint and the extracted images are resilient to common attacks such as compression, filtering, and noise.
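
    A stripped-down version of DWT-domain watermark embedding can be sketched with PyWavelets. The Haar wavelet, the uniform embedding strength, and embedding into all diagonal detail coefficients are simplifications; the paper selects textured regions and embeds face and text data rather than random bits.

        # Simplified sketch: embed and recover a binary watermark in DWT detail coefficients.
        import numpy as np
        import pywt

        fingerprint = np.random.default_rng(5).random((256, 256))          # stand-in grayscale image
        watermark = np.random.default_rng(6).random((128, 128)) > 0.5      # stand-in binary watermark

        cA, (cH, cV, cD) = pywt.dwt2(fingerprint, "haar")
        alpha = 0.01                                                        # assumed embedding strength
        watermarked = pywt.idwt2((cA, (cH, cV, cD + alpha * watermark)), "haar")

        cD_recovered = pywt.dwt2(watermarked, "haar")[1][2]
        extracted = (cD_recovered - cD) / alpha > 0.5
        print("bit agreement:", (extracted == watermark).mean())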

  20. Planning for the Introduction of the Metric System into Occupational Education Programs

    ERIC Educational Resources Information Center

    Low, A. W.

    1974-01-01

    A three-dimensional planning model for introducing the International System of Metric Units into Canadian occupational education curricula includes employment level, career area, and metric topics. A fourth dimension, time, is considered in four separate phases: familiarization, adoption, conversion, and compulsory usage.

  1. Numerical studies and metric development for validation of magnetohydrodynamic models on the HIT-SI experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, C., E-mail: hansec@uw.edu; Columbia University, New York, New York 10027; Victor, B.

    We present application of three scalar metrics derived from the Biorthogonal Decomposition (BD) technique to evaluate the level of agreement between macroscopic plasma dynamics in different data sets. BD decomposes large data sets, as produced by distributed diagnostic arrays, into principal mode structures without assumptions on spatial or temporal structure. These metrics have been applied to validation of the Hall-MHD model using experimental data from the Helicity Injected Torus with Steady Inductive helicity injection experiment. Each metric provides a measure of correlation between mode structures extracted from experimental data and simulations for an array of 192 surface-mounted magnetic probes. Numerical validation studies have been performed using the NIMROD code, where the injectors are modeled as boundary conditions on the flux conserver, and the PSI-TET code, where the entire plasma volume is treated. Initial results from a comprehensive validation study of high performance operation with different injector frequencies are presented, illustrating application of the BD method. Using a simplified (constant, uniform density and temperature) Hall-MHD model, simulation results agree with experimental observation for two of the three defined metrics when the injectors are driven with a frequency of 14.5 kHz.
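
    At its core, a Biorthogonal Decomposition of probe-array data is a singular value decomposition of the space-time data matrix, which the following sketch illustrates on synthetic signals; the comparison metrics themselves, and anything specific to HIT-SI or the NIMROD/PSI-TET output, are not reproduced here.

        # Sketch: BD/SVD of a (probes x time) data matrix into spatial and temporal modes.
        import numpy as np

        n_probes, n_times = 192, 2000
        data = np.random.default_rng(7).normal(size=(n_probes, n_times))   # stand-in probe signals

        topos, weights, chronos = np.linalg.svd(data, full_matrices=False) # spatial modes, amplitudes, temporal modes
        energy_fraction = weights**2 / np.sum(weights**2)
        print("energy captured by first 3 modes:", energy_fraction[:3].sum())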

  2. Wireless Sensors Grouping Proofs for Medical Care and Ambient Assisted-Living Deployment

    PubMed Central

    Trček, Denis

    2016-01-01

    Internet of Things (IoT) devices are rapidly penetrating e-health and assisted living domains, and an increasing proportion of them are computationally weak devices, where security and privacy provisioning alone are demanding tasks, not to mention grouping proofs. This paper, therefore, gives an extensive analysis of such proofs and states lessons learnt to avoid possible pitfalls in future designs. It sticks with prudent engineering techniques in this field and deploys in a novel way the so-called non-deterministic principle to provide not only grouping proofs, but (among other properties) also privacy. The developed solution is analyzed by means of a tangible metric, is shown to be lightweight, and is formally analyzed for security. PMID:26729131

  3. Wireless Sensors Grouping Proofs for Medical Care and Ambient Assisted-Living Deployment.

    PubMed

    Trček, Denis

    2016-01-02

    Internet of Things (IoT) devices are rapidly penetrating e-health and assisted living domains, and an increasing proportion of them are computationally weak devices, where security and privacy provisioning alone are demanding tasks, not to mention grouping proofs. This paper, therefore, gives an extensive analysis of such proofs and states lessons learnt to avoid possible pitfalls in future designs. It sticks with prudent engineering techniques in this field and deploys in a novel way the so-called non-deterministic principle to provide not only grouping proofs, but (among other properties) also privacy. The developed solution is analyzed by means of a tangible metric, is shown to be lightweight, and is formally analyzed for security.

  4. Developing a Framework for Evaluating Organizational Information Assurance Metrics Programs

    DTIC Science & Technology

    2007-03-01

    least cost. Standards such as ISO/IEC 17799 and ISO/IEC 27001 provide guidance on the domains that security management should consider when... ISO/IEC 17799, 2000; ISO/IEC 27001, 2005). In order to attempt to find this optimal mix, organizations can make risk decisions weighing...Electronic version]. International Organization of Standards. (2000). ISO/IEC 27001. Information Technology Security Techniques: Information

  5. Hydrologic modeling for monitoring water availability in Africa and the Middle East

    NASA Astrophysics Data System (ADS)

    McNally, A.; Getirana, A.; Arsenault, K. R.; Peters-Lidard, C. D.; Verdin, J. P.

    2015-12-01

    Drought impacts water resources required by crops and communities, in turn threatening lives and livelihoods. Early warning systems, which rely on inputs from hydro-climate models, are used to help manage risk and provide humanitarian assistance to the right place at the right time. However, translating advancements in hydro-climate science into action is a persistent and time-consuming challenge: scientists and decision-makers need to work together to enhance the salience, credibility, and legitimacy of the hydrological data products being produced. One organization that tackles this challenge is the Famine Early Warning Systems Network (FEWS NET), which has been using evidence-based approaches to address food security since the 1980s.In this presentation, we describe the FEWS NET Land Data Assimilation System (FLDAS), developed by FEWS NET and NASA hydrologic scientists to maximize the use of limited hydro-climatic observations for humanitarian applications. The FLDAS, an instance of the NASA Land Information System (LIS), is comprised of land surface models driven by satellite rainfall inputs already familiar to FEWS NET food security analysts. First, we evaluate the quality of model outputs over parts of the Middle East and Africa using remotely sensed soil moisture and vegetation indices. We then describe derived water availability indices that have been identified by analysts as potentially useful sources of information. Specifically, we demonstrate how the Baseline Water Stress and Drought Severity Index detect recent water availability crisis events in the Tigris-Euphrates Basin and the Gaborone Reservoir, Botswana. Finally we discuss ongoing work to deliver this information to FEWS NET analysts in a timely and user-friendly manner, with the ultimate goal of integrating these water availability metrics into regular decision-making activities.

  6. Spatial modelling of landscape aesthetic potential in urban-rural fringes.

    PubMed

    Sahraoui, Yohan; Clauzel, Céline; Foltête, Jean-Christophe

    2016-10-01

    The aesthetic potential of landscape has to be modelled to provide tools for land-use planning. This involves identifying landscape attributes and revealing individuals' landscape preferences. Landscape aesthetic judgments of individuals (n = 1420) were studied by means of a photo-based survey. A set of landscape visibility metrics was created to measure landscape composition and configuration in each photograph using spatial data. These metrics were used as explanatory variables in multiple linear regressions to explain aesthetic judgments. We demonstrate that landscape aesthetic judgments may be synthesized in three consensus groups. The statistical results obtained show that landscape visibility metrics have good explanatory power. Ultimately, we propose a spatial modelling of landscape aesthetic potential based on these results combined with systematic computation of visibility metrics. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. [Perceptual sharpness metric for visible and infrared color fusion images].

    PubMed

    Gao, Shao-Shu; Jin, Wei-Qi; Wang, Xia; Wang, Ling-Xue; Luo, Yuan

    2012-12-01

    For visible and infrared color fusion images, an objective sharpness assessment model is proposed to measure the clarity of detail and edge definition of the fusion image. Firstly, the contrast sensitivity function (CSF) of the human visual system is used to reduce insensitive frequency components under certain viewing conditions. Secondly, a perceptual contrast model, which takes the human luminance masking effect into account, is proposed based on a local band-limited contrast model. Finally, the perceptual contrast is calculated in the region of interest (which contains image details and edges) in the fusion image to evaluate image perceptual sharpness. Experimental results show that the proposed perceptual sharpness metric provides better predictions, which are more closely matched to human perceptual evaluations, than five existing sharpness (blur) metrics for color images. The proposed perceptual sharpness metric can evaluate the perceptual sharpness of color fusion images effectively.

  8. Two-dimensional habitat modeling in the Yellowstone/Upper Missouri River system

    USGS Publications Warehouse

    Waddle, T. J.; Bovee, K.D.; Bowen, Z.H.

    1997-01-01

    This study is being conducted to provide the aquatic biology component of a decision support system being developed by the U.S. Bureau of Reclamation. In an attempt to capture the habitat needs of Great Plains fish communities we are looking beyond previous habitat modeling methods. Traditional habitat modeling approaches have relied on one-dimensional hydraulic models and lumped compositional habitat metrics to describe aquatic habitat. A broader range of habitat descriptors is available when both composition and configuration of habitats is considered. Habitat metrics that consider both composition and configuration can be adapted from terrestrial biology. These metrics are most conveniently accessed with spatially explicit descriptors of the physical variables driving habitat composition. Two-dimensional hydrodynamic models have advanced to the point that they may provide the spatially explicit description of physical parameters needed to address this problem. This paper reports progress to date on applying two-dimensional hydraulic and habitat models on the Yellowstone and Missouri Rivers and uses examples from the Yellowstone River to illustrate the configurational metrics as a new tool for assessing riverine habitats.

  9. A Metrics-Based Approach to Intrusion Detection System Evaluation for Distributed Real-Time Systems

    DTIC Science & Technology

    2002-04-01

    Based Approach to Intrusion Detection System Evaluation for Distributed Real-Time Systems Authors: G. A. Fink, B. L. Chappell, T. G. Turner, and...Distributed, Security. 1 Introduction Processing and cost requirements are driving future naval combat platforms to use distributed, real-time systems of...distributed, real-time systems. As these systems grow more complex, the timing requirements do not diminish; indeed, they may become more constrained

  10. A Go-to-Market Strategy: Promoting Private Sector Solutions to the Threat of Proliferation

    DTIC Science & Technology

    2013-04-01

    indicators reveal that these problems, often subsumed under the seemingly innocuous heading of “transnational threats,” are a growing cancer on the...trade is worth an estimated $322 billion annually with 52,356 metric tons of opium, cannabis, cocaine, and amphetamine-type stimulant (ATS...of medical isotopes to the sites that secure the material. 30 Regulators are also now starting to consider another critical component in the

  11. A Bibliography of Selected Publications: Project Air Force, 5th Edition

    DTIC Science & Technology

    1989-05-01

    Dyna- R-3028-AF. A Dynamic Retention Model for Air Force Officers: METRIC’s DL and Pipeline Variability. M. J. Carrillo. Theory and Estimates. G...Theorem and Dyna- and Support. METRIC’s Demand and Pipeline Variability. R-3255-AF. Aircraft Airframe Cost Estimating Relationships: N-2283/1-AF...U). 1970-1985. N-2409-AF. Tanker Splitting Across the SIOP Bomber Force R-3389-AF. Dyna-METRIC Version 4: Modeling Worldwide (U). Logistics Support of

  12. Quality metrics in high-dimensional data visualization: an overview and systematization.

    PubMed

    Bertini, Enrico; Tatu, Andrada; Keim, Daniel

    2011-12-01

    In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research. © 2010 IEEE

  13. Stream macroinvertebrate response models for bioassessment metrics: addressing the issue of spatial scale

    USGS Publications Warehouse

    Waite, Ian R.; Kennen, Jonathan G.; May, Jason T.; Brown, Larry R.; Cuffney, Thomas F.; Jones, Kimberly A.; Orlando, James L.

    2014-01-01

    We developed independent predictive disturbance models for a full regional data set and four individual ecoregions (Full Region vs. Individual Ecoregion models) to evaluate effects of spatial scale on the assessment of human landscape modification, on predicted response of stream biota, and the effect of other possible confounding factors, such as watershed size and elevation, on model performance. We selected macroinvertebrate sampling sites for model development (n = 591) and validation (n = 467) that met strict screening criteria from four proximal ecoregions in the northeastern U.S.: North Central Appalachians, Ridge and Valley, Northeastern Highlands, and Northern Piedmont. Models were developed using boosted regression tree (BRT) techniques for four macroinvertebrate metrics; results were compared among ecoregions and metrics. Comparing within a region but across the four macroinvertebrate metrics, the average richness of tolerant taxa (RichTOL) had the highest R2 for BRT models. Across the four metrics, final BRT models had between four and seven explanatory variables and always included a variable related to urbanization (e.g., population density, percent urban, or percent manmade channels), and either a measure of hydrologic runoff (e.g., minimum April, average December, or maximum monthly runoff) and(or) a natural landscape factor (e.g., riparian slope, precipitation, and elevation), or a measure of riparian disturbance. Contrary to our expectations, Full Region models explained nearly as much variance in the macroinvertebrate data as Individual Ecoregion models, and taking into account watershed size or elevation did not appear to improve model performance. As a result, it may be advantageous for bioassessment programs to develop large regional models as a preliminary assessment of overall disturbance conditions as long as the range in natural landscape variability is not excessive.
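
    For readers unfamiliar with boosted regression trees (BRT), the sketch below illustrates the general modeling step using scikit-learn's gradient boosting as a stand-in for the BRT implementation used in the study; the predictor names, synthetic data, and tuning values are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1058  # roughly the 591 development + 467 validation sites
X = rng.uniform(size=(n, 4))  # e.g., pop_density, pct_urban, runoff, elevation (assumed)
rich_tol = 5 + 10 * X[:, 0] + 4 * X[:, 1] + rng.normal(scale=1.0, size=n)  # synthetic RichTOL

X_dev, X_val, y_dev, y_val = train_test_split(X, rich_tol, test_size=467, random_state=0)
brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01, max_depth=3)
brt.fit(X_dev, y_dev)
print("validation R^2:", round(brt.score(X_val, y_val), 3))
print("relative influence of predictors:", brt.feature_importances_.round(3))
```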

  14. Stream Macroinvertebrate Response Models for Bioassessment Metrics: Addressing the Issue of Spatial Scale

    PubMed Central

    Waite, Ian R.; Kennen, Jonathan G.; May, Jason T.; Brown, Larry R.; Cuffney, Thomas F.; Jones, Kimberly A.; Orlando, James L.

    2014-01-01

    We developed independent predictive disturbance models for a full regional data set and four individual ecoregions (Full Region vs. Individual Ecoregion models) to evaluate effects of spatial scale on the assessment of human landscape modification, on predicted response of stream biota, and the effect of other possible confounding factors, such as watershed size and elevation, on model performance. We selected macroinvertebrate sampling sites for model development (n = 591) and validation (n = 467) that met strict screening criteria from four proximal ecoregions in the northeastern U.S.: North Central Appalachians, Ridge and Valley, Northeastern Highlands, and Northern Piedmont. Models were developed using boosted regression tree (BRT) techniques for four macroinvertebrate metrics; results were compared among ecoregions and metrics. Comparing within a region but across the four macroinvertebrate metrics, the average richness of tolerant taxa (RichTOL) had the highest R2 for BRT models. Across the four metrics, final BRT models had between four and seven explanatory variables and always included a variable related to urbanization (e.g., population density, percent urban, or percent manmade channels), and either a measure of hydrologic runoff (e.g., minimum April, average December, or maximum monthly runoff) and(or) a natural landscape factor (e.g., riparian slope, precipitation, and elevation), or a measure of riparian disturbance. Contrary to our expectations, Full Region models explained nearly as much variance in the macroinvertebrate data as Individual Ecoregion models, and taking into account watershed size or elevation did not appear to improve model performance. As a result, it may be advantageous for bioassessment programs to develop large regional models as a preliminary assessment of overall disturbance conditions as long as the range in natural landscape variability is not excessive. PMID:24675770

  15. Defining metrics of the Quasi-Biennial Oscillation in global climate models

    NASA Astrophysics Data System (ADS)

    Schenzinger, Verena; Osprey, Scott; Gray, Lesley; Butchart, Neal

    2017-06-01

    As the dominant mode of variability in the tropical stratosphere, the Quasi-Biennial Oscillation (QBO) has been subject to extensive research. Though there is a well-developed theory of this phenomenon being forced by wave-mean flow interaction, simulating the QBO adequately in global climate models still remains difficult. This paper presents a set of metrics to characterize the morphology of the QBO using a number of different reanalysis datasets and the FU Berlin radiosonde observation dataset. The same metrics are then calculated from Coupled Model Intercomparison Project 5 and Chemistry-Climate Model Validation Activity 2 simulations which included a representation of QBO-like behaviour to evaluate which aspects of the QBO are well captured by the models and which ones remain a challenge for future model development.

  16. Network Metamodeling: The Effect of Correlation Metric Choice on Phylogenomic and Transcriptomic Network Topology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weighill, Deborah A; Jacobson, Daniel A

    We explore the use of a network meta-modeling approach to compare the effects of similarity metrics used to construct biological networks on the topology of the resulting networks. This work reviews various similarity metrics for the construction of networks and various topology measures for the characterization of resulting network topology, demonstrating the use of these metrics in the construction and comparison of phylogenomic and transcriptomic networks.

  17. Quantifying seascape structure: Extending terrestrial spatial pattern metrics to the marine realm

    USGS Publications Warehouse

    Wedding, L.M.; Christopher, L.A.; Pittman, S.J.; Friedlander, A.M.; Jorgensen, S.

    2011-01-01

    Spatial pattern metrics have routinely been applied to characterize and quantify structural features of terrestrial landscapes and have demonstrated great utility in landscape ecology and conservation planning. The important role of spatial structure in ecology and management is now commonly recognized, and recent advances in marine remote sensing technology have facilitated the application of spatial pattern metrics to the marine environment. However, it is not yet clear whether concepts, metrics, and statistical techniques developed for terrestrial ecosystems are relevant for marine species and seascapes. To address this gap in our knowledge, we reviewed, synthesized, and evaluated the utility and application of spatial pattern metrics in the marine science literature over the past 30 yr (1980 to 2010). In total, 23 studies characterized seascape structure, of which 17 quantified spatial patterns using a 2-dimensional patch-mosaic model and 5 used a continuously varying 3-dimensional surface model. Most seascape studies followed terrestrial-based studies in their search for ecological patterns and applied or modified existing metrics. Only 1 truly unique metric was found (hydrodynamic aperture applied to Pacific atolls). While there are still relatively few studies using spatial pattern metrics in the marine environment, they have suffered from similar misuse as reported for terrestrial studies, such as the lack of a priori considerations or the problem of collinearity between metrics. Spatial pattern metrics offer great potential for ecological research and environmental management in marine systems, and future studies should focus on (1) the dynamic boundary between the land and sea; (2) quantifying 3-dimensional spatial patterns; and (3) assessing and monitoring seascape change. © Inter-Research 2011.

  18. A New Metric for Land-Atmosphere Coupling Strength: Applications on Observations and Modeling

    NASA Astrophysics Data System (ADS)

    Tang, Q.; Xie, S.; Zhang, Y.; Phillips, T. J.; Santanello, J. A., Jr.; Cook, D. R.; Riihimaki, L.; Gaustad, K.

    2017-12-01

    A new metric is proposed to quantify the land-atmosphere (LA) coupling strength and is elaborated by correlating the surface evaporative fraction and impacting land and atmosphere variables (e.g., soil moisture, vegetation, and radiation). Based upon multiple linear regression, this approach simultaneously considers multiple factors and thus represents complex LA coupling mechanisms better than existing single variable metrics. The standardized regression coefficients quantify the relative contributions from individual drivers in a consistent manner, avoiding the potential inconsistency in relative influence of conventional metrics. Moreover, the unique expandable feature of the new method allows us to verify and explore potentially important coupling mechanisms. Our observation-based application of the new metric shows moderate coupling with large spatial variations at the U.S. Southern Great Plains. The relative importance of soil moisture vs. vegetation varies by location. We also show that LA coupling strength is generally underestimated by single variable methods due to their incompleteness. We also apply this new metric to evaluate the representation of LA coupling in the Accelerated Climate Modeling for Energy (ACME) V1 Contiguous United States (CONUS) regionally refined model (RRM). This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-734201
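
    A minimal sketch of the core calculation implied by the abstract -- standardized (beta) coefficients from a multiple linear regression of evaporative fraction on several drivers -- assuming z-scored ordinary least squares; the driver names and synthetic data are placeholders, not the study's inputs.

```python
import numpy as np

def standardized_coefficients(y, X):
    # Standardized (beta) coefficients: z-score the response and each predictor,
    # then solve the ordinary least-squares problem; the magnitudes give the
    # relative contribution of each driver on a common scale.
    zy = (y - y.mean()) / y.std()
    zX = (X - X.mean(axis=0)) / X.std(axis=0)
    beta, *_ = np.linalg.lstsq(zX, zy, rcond=None)
    return beta

# Illustrative use with synthetic data standing in for evaporative fraction (EF)
# driven by soil moisture, vegetation greenness, and downwelling radiation.
rng = np.random.default_rng(1)
soil, veg, rad = rng.normal(size=(3, 365))
ef = 0.5 * soil + 0.3 * veg + 0.1 * rad + rng.normal(scale=0.2, size=365)
print(standardized_coefficients(ef, np.column_stack([soil, veg, rad])).round(2))
```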

  19. The model for Fundamentals of Endovascular Surgery (FEVS) successfully defines the competent endovascular surgeon.

    PubMed

    Duran, Cassidy; Estrada, Sean; O'Malley, Marcia; Sheahan, Malachi G; Shames, Murray L; Lee, Jason T; Bismuth, Jean

    2015-12-01

    Fundamental skills testing is now required for certification in general surgery. No model for assessing fundamental endovascular skills exists. Our objective was to develop a model that tests the fundamental endovascular skills and differentiates competent from noncompetent performance. The Fundamentals of Endovascular Surgery model was developed in silicone and virtual-reality versions. Twenty individuals (with a range of experience) performed four tasks on each model in three separate sessions. Tasks on the silicone model were performed under fluoroscopic guidance, and electromagnetic tracking captured motion metrics for catheter tip position. Image processing captured tool tip position and motion on the virtual model. Performance was evaluated using a global rating scale, blinded video assessment of error metrics, and catheter tip movement and position. Motion analysis was based on derivations of speed and position that define proficiency of movement (spectral arc length, duration of submovement, and number of submovements). Performance was significantly different between competent and noncompetent interventionalists for the three performance measures of motion metrics, error metrics, and global rating scale. The mean error metric score was 6.83 for noncompetent individuals and 2.51 for the competent group (P < .0001). Median global rating scores were 2.25 for the noncompetent group and 4.75 for the competent users (P < .0001). The Fundamentals of Endovascular Surgery model successfully differentiates competent and noncompetent performance of fundamental endovascular skills based on a series of objective performance measures. This model could serve as a platform for skills testing for all trainees. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  20. Metrics for the Diurnal Cycle of Precipitation: Toward Routine Benchmarks for Climate Models

    DOE PAGES

    Covey, Curt; Gleckler, Peter J.; Doutriaux, Charles; ...

    2016-06-08

    In this paper, metrics are proposed—that is, a few summary statistics that condense large amounts of data from observations or model simulations—encapsulating the diurnal cycle of precipitation. Vector area averaging of Fourier amplitude and phase produces useful information in a reasonably small number of harmonic dial plots, a procedure familiar from atmospheric tide research. The metrics cover most of the globe but down-weight high-latitude wintertime ocean areas where baroclinic waves are most prominent. This enables intercomparison of a large number of climate models with observations and with each other. The diurnal cycle of precipitation has features not encountered in typical climate model intercomparisons, notably the absence of meaningful “average model” results that can be displayed in a single two-dimensional map. Displaying one map per model guides development of the metrics proposed here by making it clear that land and ocean areas must be averaged separately, but interpreting maps from all models becomes problematic as the size of a multimodel ensemble increases. Global diurnal metrics provide quick comparisons with observations and among models, using the most recent version of the Coupled Model Intercomparison Project (CMIP). This includes, for the first time in CMIP, spatial resolutions comparable to global satellite observations. Finally, consistent with earlier studies of resolution versus parameterization of the diurnal cycle, the longstanding tendency of models to produce rainfall too early in the day persists in the high-resolution simulations, as expected if the error is due to subgrid-scale physics.
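
    A minimal reading of "vector area averaging of Fourier amplitude and phase": extract the first diurnal harmonic of a 24-hour mean-day series, treat each grid cell's harmonic as a vector (complex number), and average the vectors with area weights. The sketch below assumes this interpretation and omits the paper's down-weighting of high-latitude wintertime oceans.

```python
import numpy as np

def diurnal_harmonic(hourly_mean_precip):
    # First diurnal harmonic of a 24-element mean-day series.
    # Returns (amplitude, local time of maximum in hours).
    x = np.asarray(hourly_mean_precip, float)
    c1 = np.fft.rfft(x)[1] / x.size
    amplitude = 2.0 * np.abs(c1)
    phase_hours = (-np.angle(c1)) * 24.0 / (2.0 * np.pi) % 24.0
    return amplitude, phase_hours

def vector_area_average(amplitudes, phases_hours, weights):
    # Average harmonics as 2-D vectors (complex numbers) so amplitude and phase
    # are combined consistently, as on a harmonic dial plot.
    z = np.asarray(amplitudes, float) * np.exp(1j * 2.0 * np.pi * np.asarray(phases_hours, float) / 24.0)
    zbar = np.average(z, weights=weights)
    return np.abs(zbar), (np.angle(zbar) * 24.0 / (2.0 * np.pi)) % 24.0

# Example: a mean day peaking at 15:00 local time with amplitude 2 mm/day.
hours = np.arange(24)
series = 5 + 2 * np.cos(2 * np.pi * (hours - 15) / 24)
print(diurnal_harmonic(series))  # -> (about 2.0, about 15.0)
```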

  1. Metrics for the Diurnal Cycle of Precipitation: Toward Routine Benchmarks for Climate Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Covey, Curt; Gleckler, Peter J.; Doutriaux, Charles

    In this paper, metrics are proposed—that is, a few summary statistics that condense large amounts of data from observations or model simulations—encapsulating the diurnal cycle of precipitation. Vector area averaging of Fourier amplitude and phase produces useful information in a reasonably small number of harmonic dial plots, a procedure familiar from atmospheric tide research. The metrics cover most of the globe but down-weight high-latitude wintertime ocean areas where baroclinic waves are most prominent. This enables intercomparison of a large number of climate models with observations and with each other. The diurnal cycle of precipitation has features not encountered in typical climate model intercomparisons, notably the absence of meaningful “average model” results that can be displayed in a single two-dimensional map. Displaying one map per model guides development of the metrics proposed here by making it clear that land and ocean areas must be averaged separately, but interpreting maps from all models becomes problematic as the size of a multimodel ensemble increases. Global diurnal metrics provide quick comparisons with observations and among models, using the most recent version of the Coupled Model Intercomparison Project (CMIP). This includes, for the first time in CMIP, spatial resolutions comparable to global satellite observations. Finally, consistent with earlier studies of resolution versus parameterization of the diurnal cycle, the longstanding tendency of models to produce rainfall too early in the day persists in the high-resolution simulations, as expected if the error is due to subgrid-scale physics.

  2. Cohen's Kappa and classification table metrics 2.0: An ArcView 3.x extension for accuracy assessment of spatially explicit models

    Treesearch

    Jeff Jenness; J. Judson Wynne

    2005-01-01

    In the field of spatially explicit modeling, well-developed accuracy assessment methodologies are often poorly applied. Deriving model accuracy metrics has been possible for decades, but these calculations were made by hand or with the use of a spreadsheet application. Accuracy assessments may be useful for: (1) ascertaining the quality of a model; (2) improving model...
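
    Since the extension computes Cohen's Kappa from a classification table, a minimal stand-alone version of that calculation (not the ArcView extension's own code) looks like this:

```python
import numpy as np

def cohens_kappa(confusion):
    # Cohen's kappa from a square classification (confusion) table:
    # observed agreement corrected for the agreement expected by chance.
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    p_observed = np.trace(confusion) / n
    p_expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n ** 2
    return (p_observed - p_expected) / (1.0 - p_expected)

# Example: presence/absence predictions of a spatial model vs. field observations.
print(round(cohens_kappa([[40, 10], [5, 45]]), 3))  # -> 0.7
```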

  3. Up Periscope! Designing a New Perceptual Metric for Imaging System Performance

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    2016-01-01

    Modern electronic imaging systems include optics, sensors, sampling, noise, processing, compression, transmission and display elements, and are viewed by the human eye. Many of these elements cannot be assessed by traditional imaging system metrics such as the MTF. More complex metrics such as NVTherm do address these elements, but do so largely through parametric adjustment of an MTF-like metric. The parameters are adjusted through subjective testing of human observers identifying specific targets in a set of standard images. We have designed a new metric that is based on a model of human visual pattern classification. In contrast to previous metrics, ours simulates the human observer identifying the standard targets. One application of this metric is to quantify performance of modern electronic periscope systems on submarines.

  4. Directional Bias and Pheromone for Discovery and Coverage on Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fink, Glenn A.; Berenhaut, Kenneth S.; Oehmen, Christopher S.

    2012-09-11

    Natural multi-agent systems often rely on “correlated random walks” (random walks that are biased toward a current heading) to distribute their agents over a space (e.g., for foraging, search, etc.). Our contribution involves creation of a new movement and pheromone model that applies the concept of heading bias in random walks to a multi-agent, digital-ants system designed for cyber-security monitoring. We examine the relative performance effects of both pheromone and heading bias on speed of discovery of a target and search-area coverage in a two-dimensional network layout. We found that heading bias was unexpectedly helpful in reducing search time and that it was more influential than pheromone for improving coverage. We conclude that while pheromone is very important for rapid discovery, heading bias can also greatly improve both performance metrics.
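
    The sketch below illustrates the two ingredients named in the abstract -- a heading-biased ("correlated") random walk plus pheromone deposition -- on a plain 2-D grid; the bias strength, deposit amount, and grid topology are illustrative assumptions rather than the digital-ants system's actual parameters.

```python
import random

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def step(pos, heading, pheromone, heading_bias=3.0, pheromone_weight=1.0):
    # Weight each candidate move by (a) whether it continues the current heading
    # and (b) the pheromone already deposited on the destination cell.
    weights = []
    for move in MOVES:
        w = heading_bias if move == heading else 1.0
        dest = (pos[0] + move[0], pos[1] + move[1])
        w *= 1.0 + pheromone_weight * pheromone.get(dest, 0.0)
        weights.append(w)
    move = random.choices(MOVES, weights=weights)[0]
    new_pos = (pos[0] + move[0], pos[1] + move[1])
    pheromone[new_pos] = pheromone.get(new_pos, 0.0) + 1.0  # deposit pheromone
    return new_pos, move

pos, heading, pheromone = (0, 0), (1, 0), {}
for _ in range(100):
    pos, heading = step(pos, heading, pheromone)
print("final position after 100 biased steps:", pos)
```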

  5. Modeling Air Pollution Exposure Metrics for the Diabetes and Environment Panel Study (DEPS)

    EPA Science Inventory

    Air pollution health studies of fine particulate matter (PM) often use outdoor concentrations as exposure surrogates. To improve exposure assessments, we developed and evaluated an exposure model for individuals (EMI), which predicts five tiers of individual-level exposure metric...

  6. Model-based metrics of human-automation function allocation in complex work environments

    NASA Astrophysics Data System (ADS)

    Kim, So Young

    Function allocation is the design decision which assigns work functions to all agents in a team, both human and automated. Efforts to guide function allocation systematically have been studied in many fields such as engineering, human factors, team and organization design, management science, and cognitive systems engineering. Each field focuses on certain aspects of function allocation, but not all; thus, an independent discussion of each does not address all necessary issues with function allocation. Four distinctive perspectives emerged from a review of these fields: technology-centered, human-centered, team-oriented, and work-oriented. Each perspective focuses on different aspects of function allocation: capabilities and characteristics of agents (automation or human), team structure and processes, and work structure and the work environment. Together, these perspectives identify the following eight issues with function allocation: 1) Workload, 2) Incoherency in function allocations, 3) Mismatches between responsibility and authority, 4) Interruptive automation, 5) Automation boundary conditions, 6) Function allocation preventing human adaptation to context, 7) Function allocation destabilizing the humans' work environment, and 8) Mission Performance. Addressing these issues systematically requires formal models and simulations that include all necessary aspects of human-automation function allocation: the work environment, the dynamics inherent to the work, agents, and relationships among them. Also, addressing these issues requires not only a (static) model, but also a (dynamic) simulation that captures temporal aspects of work such as the timing of actions and their impact on the agent's work. Therefore, with properly modeled work as described by the work environment, the dynamics inherent to the work, agents, and relationships among them, a modeling framework developed by this thesis, which includes static work models and dynamic simulation, can capture the issues with function allocation. Then, based on the eight issues, eight types of metrics are established. The purpose of these metrics is to assess the extent to which each issue exists with a given function allocation. Specifically, the eight types of metrics assess workload, coherency of a function allocation, mismatches between responsibility and authority, interruptive automation, automation boundary conditions, human adaptation to context, stability of the human's work environment, and mission performance. Finally, to validate the modeling framework and the metrics, a case study was conducted modeling four different function allocations between a pilot and flight deck automation during the arrival and approach phases of flight. A range of pilot cognitive control modes and maximum human taskload limits were also included in the model. The metrics were assessed for these four function allocations and analyzed to validate the capability of the metrics to identify important issues in given function allocations. In addition, the design insights provided by the metrics are highlighted. This thesis concludes with a discussion of mechanisms for further validating the modeling framework and function allocation metrics developed here, and highlights where these developments can be applied in research and in the design of function allocations in complex work environments such as aviation operations.

  7. Measures of model performance based on the log accuracy ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.

    Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature and we highlight the benefits and drawbacks of MAPE and proposed alternatives. We then introduce the log accuracy ratio, and derive from it two metrics: the median symmetric accuracy; and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely-used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.
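
    A minimal sketch of the two metrics built on the log accuracy ratio ln Q = ln(prediction/observation), following the definitions usually given in this line of work (median symmetric accuracy and symmetric signed percentage bias); the exact published formulas should be checked against the paper before reuse.

```python
import numpy as np

def median_symmetric_accuracy(pred, obs):
    # Median symmetric accuracy (percent), built on the log accuracy ratio
    # ln(Q) = ln(pred / obs); symmetric under swapping pred and obs.
    log_q = np.log(np.asarray(pred, float) / np.asarray(obs, float))
    return 100.0 * (np.exp(np.median(np.abs(log_q))) - 1.0)

def symmetric_signed_percentage_bias(pred, obs):
    # Symmetric signed percentage bias (percent); the sign indicates
    # systematic over- or under-prediction.
    log_q = np.log(np.asarray(pred, float) / np.asarray(obs, float))
    m = np.median(log_q)
    return 100.0 * np.sign(m) * (np.exp(np.abs(m)) - 1.0)

obs = np.array([1.0, 2.0, 5.0, 10.0])
pred = np.array([1.2, 1.8, 6.0, 12.0])
print(median_symmetric_accuracy(pred, obs), symmetric_signed_percentage_bias(pred, obs))
```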

  8. Measures of model performance based on the log accuracy ratio

    DOE PAGES

    Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.

    2018-01-03

    Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature and we highlight the benefits and drawbacks of MAPE and proposed alternatives. We then introduce the log accuracy ratio, and derive from it two metrics: the median symmetric accuracy; and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely-used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.

  9. Comparison of task-based exposure metrics for an epidemiologic study of isocyanate inhalation exposures among autobody shop workers.

    PubMed

    Woskie, Susan R; Bello, Dhimiter; Gore, Rebecca J; Stowe, Meredith H; Eisen, Ellen A; Liu, Youcheng; Sparer, Judy A; Redlich, Carrie A; Cullen, Mark R

    2008-09-01

    Because many occupational epidemiologic studies use exposure surrogates rather than quantitative exposure metrics, the UMass Lowell and Yale study of autobody shop workers provided an opportunity to evaluate the relative utility of surrogates and quantitative exposure metrics in an exposure response analysis of cross-week change in respiratory function. A task-based exposure assessment was used to develop several metrics of inhalation exposure to isocyanates. The metrics included the surrogates, job title, counts of spray painting events during the day, counts of spray and bystander exposure events, and a quantitative exposure metric that incorporated exposure determinant models based on task sampling and a personal workplace protection factor for respirator use, combined with a daily task checklist. The result of the quantitative exposure algorithm was an estimate of the daily time-weighted average respirator-corrected total NCO exposure (µg/m³). In general, these four metrics were found to be variable in agreement using measures such as weighted kappa and Spearman correlation. A logistic model for 10% drop in FEV1 from Monday morning to Thursday morning was used to evaluate the utility of each exposure metric. The quantitative exposure metric was the most favorable, producing the best model fit, as well as the greatest strength and magnitude of association. This finding supports the reports of others that reducing exposure misclassification can improve risk estimates that otherwise would be biased toward the null. Although detailed and quantitative exposure assessment can be more time consuming and costly, it can improve exposure-disease evaluations and is more useful for risk assessment purposes. The task-based exposure modeling method successfully produced estimates of daily time-weighted average exposures in the complex and changing autobody shop work environment. The ambient TWA exposures of all of the office workers and technicians and 57% of the painters were found to be below the current U.K. Health and Safety Executive occupational exposure limit (OEL) for total NCO of 20 µg/m³. When respirator use was incorporated, all personal daily exposures were below the U.K. OEL.
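
    The daily exposure algorithm is described only verbally; a minimal sketch of a respirator-corrected time-weighted average (TWA) of task exposures is shown below, with assumed concentrations, durations, and an assumed protection factor -- not the study's exposure-determinant models.

```python
def daily_twa_exposure(tasks, workday_minutes=480.0):
    # Daily time-weighted average, respirator-corrected exposure (ug/m3):
    # each task contributes concentration * duration, divided by the workplace
    # protection factor for tasks performed while wearing a respirator.
    total = 0.0
    for conc_ugm3, minutes, protection_factor in tasks:
        total += conc_ugm3 * minutes / protection_factor
    return total / workday_minutes

# Example: two spray tasks with a respirator (assumed protection factor of 25)
# plus an hour of unprotected bystander exposure.
tasks = [(800.0, 30, 25.0), (600.0, 20, 25.0), (15.0, 60, 1.0)]
print(round(daily_twa_exposure(tasks), 1), "ug/m3")
```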

  10. Improving Lidar-based Aboveground Biomass Estimation with Site Productivity for Central Hardwood Forests, USA

    NASA Astrophysics Data System (ADS)

    Shao, G.; Gallion, J.; Fei, S.

    2016-12-01

    Sound forest aboveground biomass estimation is required to monitor diverse forest ecosystems and their impacts on the changing climate. Lidar-based regression models have provided promising biomass estimations in most forest ecosystems. However, considerable uncertainties in biomass estimations have been reported in temperate hardwood and hardwood-dominated mixed forests. Varied site productivities in temperate hardwood forests largely diversify height and diameter growth rates, which significantly reduces the correlation between tree height and diameter at breast height (DBH) in mature and complex forests. It is, therefore, difficult to utilize height-based lidar metrics to predict DBH-based field-measured biomass through a simple regression model regardless of the variation in site productivity. In this study, we established a multi-dimension nonlinear regression model incorporating lidar metrics and site productivity classes derived from soil features. In the regression model, lidar metrics provided horizontal and vertical structural information and productivity classes differentiated good and poor forest sites. The selection and combination of lidar metrics were discussed. Multiple regression models were employed and compared. Uncertainty analysis was applied to the best fit model. The effects of site productivity on the lidar-based biomass model were addressed.

  11. Investigating the association between birth weight and complementary air pollution metrics: a cohort study.

    PubMed

    Laurent, Olivier; Wu, Jun; Li, Lianfa; Chung, Judith; Bartell, Scott

    2013-02-17

    Exposure to air pollution is frequently associated with reductions in birth weight but results of available studies vary widely, possibly in part because of differences in air pollution metrics. Further insight is needed to identify the air pollution metrics most strongly and consistently associated with birth weight. We used a hospital-based obstetric database of more than 70,000 births to study the relationships between air pollution and the risk of low birth weight (LBW, <2,500 g), as well as birth weight as a continuous variable, in term-born infants. Complementary metrics capturing different aspects of air pollution were used (measurements from ambient monitoring stations, predictions from land use regression models and from a Gaussian dispersion model, traffic density, and proximity to roads). Associations between air pollution metrics and birth outcomes were investigated using generalized additive models, adjusting for maternal age, parity, race/ethnicity, insurance status, poverty, gestational age and sex of the infants. Increased risks of LBW were associated with ambient O3 concentrations as measured by monitoring stations, as well as traffic density and proximity to major roadways. LBW was not significantly associated with other air pollution metrics, except that a decreased risk was associated with ambient NO2 concentrations as measured by monitoring stations. When birth weight was analyzed as a continuous variable, small increases in mean birth weight were associated with most air pollution metrics (<40 g per inter-quartile range in air pollution metrics). No such increase was observed for traffic density or proximity to major roadways, and a significant decrease in mean birth weight was associated with ambient O3 concentrations. We found contrasting results according to the different air pollution metrics examined. Unmeasured confounders and/or measurement errors might have produced spurious positive associations between birth weight and some air pollution metrics. Despite this, ambient O3 was associated with a decrement in mean birth weight and significant increases in the risk of LBW were associated with traffic density, proximity to roads and ambient O3. This suggests that in our study population, these air pollution metrics are more likely related to increased risks of LBW than the other metrics we studied. Further studies are necessary to assess the consistency of such patterns across populations.

  12. Investigating the association between birth weight and complementary air pollution metrics: a cohort study

    PubMed Central

    2013-01-01

    Background Exposure to air pollution is frequently associated with reductions in birth weight but results of available studies vary widely, possibly in part because of differences in air pollution metrics. Further insight is needed to identify the air pollution metrics most strongly and consistently associated with birth weight. Methods We used a hospital-based obstetric database of more than 70,000 births to study the relationships between air pollution and the risk of low birth weight (LBW, <2,500 g), as well as birth weight as a continuous variable, in term-born infants. Complementary metrics capturing different aspects of air pollution were used (measurements from ambient monitoring stations, predictions from land use regression models and from a Gaussian dispersion model, traffic density, and proximity to roads). Associations between air pollution metrics and birth outcomes were investigated using generalized additive models, adjusting for maternal age, parity, race/ethnicity, insurance status, poverty, gestational age and sex of the infants. Results Increased risks of LBW were associated with ambient O3 concentrations as measured by monitoring stations, as well as traffic density and proximity to major roadways. LBW was not significantly associated with other air pollution metrics, except that a decreased risk was associated with ambient NO2 concentrations as measured by monitoring stations. When birth weight was analyzed as a continuous variable, small increases in mean birth weight were associated with most air pollution metrics (<40 g per inter-quartile range in air pollution metrics). No such increase was observed for traffic density or proximity to major roadways, and a significant decrease in mean birth weight was associated with ambient O3 concentrations. Conclusions We found contrasting results according to the different air pollution metrics examined. Unmeasured confounders and/or measurement errors might have produced spurious positive associations between birth weight and some air pollution metrics. Despite this, ambient O3 was associated with a decrement in mean birth weight and significant increases in the risk of LBW were associated with traffic density, proximity to roads and ambient O3. This suggests that in our study population, these air pollution metrics are more likely related to increased risks of LBW than the other metrics we studied. Further studies are necessary to assess the consistency of such patterns across populations. PMID:23413962

  13. On the Emergent Constraints of Climate Sensitivity [On proposed emergent constraints of climate sensitivity]

    DOE PAGES

    Qu, Xin; Hall, Alex; DeAngelis, Anthony M.; ...

    2018-01-11

    Differences among climate models in equilibrium climate sensitivity (ECS; the equilibrium surface temperature response to a doubling of atmospheric CO2) remain a significant barrier to the accurate assessment of societally important impacts of climate change. Relationships between ECS and observable metrics of the current climate in model ensembles, so-called emergent constraints, have been used to constrain ECS. Here a statistical method (including a backward selection process) is employed to achieve a better statistical understanding of the connections between four recently proposed emergent constraint metrics and individual feedbacks influencing ECS. The relationship between each metric and ECS is largely attributable to a statistical connection with shortwave low cloud feedback, the leading cause of intermodel ECS spread. This result bolsters confidence in some of the metrics, which had assumed such a connection in the first place. Additional analysis is conducted with a few thousand artificial metrics that are randomly generated but are well correlated with ECS. The relationships between the contrived metrics and ECS can also be linked statistically to shortwave cloud feedback. Thus, any proposed or forthcoming ECS constraint based on the current generation of climate models should be viewed as a potential constraint on shortwave cloud feedback, and physical links with that feedback should be investigated to verify that the constraint is real. Additionally, any proposed ECS constraint should not be taken at face value since other factors influencing ECS besides shortwave cloud feedback could be systematically biased in the models.

  14. A causal examination of the effects of confounding factors on multimetric indices

    USGS Publications Warehouse

    Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William; Mitchell, Brian R.; Guntenspergen, Glenn R.

    2013-01-01

    The development of multimetric indices (MMIs) as a means of providing integrative measures of ecosystem condition is becoming widespread. An increasingly recognized problem for the interpretability of MMIs is controlling for the potentially confounding influences of environmental covariates. Most common approaches to handling covariates are based on simple notions of statistical control, leaving the causal implications of covariates and their adjustment unstated. In this paper, we use graphical models to examine some of the potential impacts of environmental covariates on the observed signals between human disturbance and potential response metrics. Using simulations based on various causal networks, we show how environmental covariates can both obscure and exaggerate the effects of human disturbance on individual metrics. We then examine from a causal interpretation standpoint the common practice of adjusting ecological metrics for environmental influences using only the set of sites deemed to be in reference condition. We present and examine the performance of an alternative approach to metric adjustment that uses the whole set of sites and models both environmental and human disturbance effects simultaneously. The findings from our analyses indicate that failing to model and adjust metrics can result in a systematic bias towards those metrics in which environmental covariates function to artificially strengthen the metric–disturbance relationship resulting in MMIs that do not accurately measure impacts of human disturbance. We also find that a “whole-set modeling approach” requires fewer assumptions and is more efficient with the given information than the more commonly applied “reference-set” approach.
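
    As a minimal illustration of the "whole-set modeling approach" (fitting environmental covariates and human disturbance simultaneously on all sites, then removing only the covariate component from the metric), the linear sketch below stands in for the paper's graphical/structural treatment; variable names and data are synthetic.

```python
import numpy as np

def whole_set_adjust(metric, disturbance, covariates):
    # Fit metric ~ disturbance + covariates jointly on ALL sites (not just
    # reference sites), then subtract only the fitted covariate component.
    X = np.column_stack([np.ones_like(metric), disturbance, covariates])
    coef, *_ = np.linalg.lstsq(X, metric, rcond=None)
    covariate_effect = covariates @ coef[2:]
    return metric - covariate_effect

rng = np.random.default_rng(2)
n = 200
elevation = rng.normal(size=n)                       # environmental covariate
disturbance = rng.uniform(size=n)                    # human disturbance gradient
metric = 10 - 4 * disturbance + 2 * elevation + rng.normal(scale=0.5, size=n)
adjusted = whole_set_adjust(metric, disturbance, elevation[:, None])
print("correlation of adjusted metric with disturbance:",
      round(float(np.corrcoef(adjusted, disturbance)[0, 1]), 2))
```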

  15. On the Emergent Constraints of Climate Sensitivity [On proposed emergent constraints of climate sensitivity]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qu, Xin; Hall, Alex; DeAngelis, Anthony M.

    Differences among climate models in equilibrium climate sensitivity (ECS; the equilibrium surface temperature response to a doubling of atmospheric CO2) remain a significant barrier to the accurate assessment of societally important impacts of climate change. Relationships between ECS and observable metrics of the current climate in model ensembles, so-called emergent constraints, have been used to constrain ECS. Here a statistical method (including a backward selection process) is employed to achieve a better statistical understanding of the connections between four recently proposed emergent constraint metrics and individual feedbacks influencing ECS. The relationship between each metric and ECS is largely attributable to a statistical connection with shortwave low cloud feedback, the leading cause of intermodel ECS spread. This result bolsters confidence in some of the metrics, which had assumed such a connection in the first place. Additional analysis is conducted with a few thousand artificial metrics that are randomly generated but are well correlated with ECS. The relationships between the contrived metrics and ECS can also be linked statistically to shortwave cloud feedback. Thus, any proposed or forthcoming ECS constraint based on the current generation of climate models should be viewed as a potential constraint on shortwave cloud feedback, and physical links with that feedback should be investigated to verify that the constraint is real. Additionally, any proposed ECS constraint should not be taken at face value since other factors influencing ECS besides shortwave cloud feedback could be systematically biased in the models.

  16. Metric versus observable operator representation, higher spin models

    NASA Astrophysics Data System (ADS)

    Fring, Andreas; Frith, Thomas

    2018-02-01

    We elaborate further on the metric representation that is obtained by transferring the time-dependence from a Hermitian Hamiltonian to the metric operator in a related non-Hermitian system. We provide further insight into the procedure on how to employ the time-dependent Dyson relation and the quasi-Hermiticity relation to solve time-dependent Hermitian Hamiltonian systems. By solving both equations separately we argue here that it is in general easier to solve the former. We solve the mutually related time-dependent Schrödinger equation for a Hermitian and non-Hermitian spin 1/2, 1 and 3/2 model with time-independent and time-dependent metric, respectively. In all models the overdetermined coupled system of equations for the Dyson map can be decoupled by algebraic manipulations and reduces to simple linear differential equations and an equation that can be converted into the non-linear Ermakov-Pinney equation.
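
    For reference, the two relations invoked in the abstract are commonly written as follows (a hedged restatement in the notation standard in this line of work; the paper's exact conventions may differ), with h(t) the Hermitian Hamiltonian, H(t) its non-Hermitian counterpart, η(t) the Dyson map and ρ(t) the metric operator:

```latex
% Time-dependent Dyson relation and quasi-Hermiticity relation (assumed notation)
h(t) = \eta(t)\, H(t)\, \eta^{-1}(t) + i\hbar\, \bigl(\partial_t \eta(t)\bigr)\, \eta^{-1}(t),
\qquad
H^{\dagger}(t)\, \rho(t) - \rho(t)\, H(t) = i\hbar\, \partial_t \rho(t),
\qquad \rho(t) = \eta^{\dagger}(t)\, \eta(t).
```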

  17. Automatic extraction and visualization of object-oriented software design metrics

    NASA Astrophysics Data System (ADS)

    Lakshminarayana, Anuradha; Newman, Timothy S.; Li, Wei; Talburt, John

    2000-02-01

    Software visualization is a graphical representation of software characteristics and behavior. Certain modes of software visualization can be useful in isolating problems and identifying unanticipated behavior. In this paper we present a new approach to aid understanding of object-oriented software through 3D visualization of software metrics that can be extracted from the design phase of software development. The focus of the paper is a metric extraction method and a new collection of glyphs for multi-dimensional metric visualization. Our approach utilizes the extensibility interface of a popular CASE tool to access and automatically extract the metrics from Unified Modeling Language class diagrams. Following the extraction of the design metrics, 3D visualizations of these metrics are generated for each class in the design, utilizing intuitively meaningful 3D glyphs that are representative of the ensemble of metrics. Extraction and visualization of design metrics can aid software developers in the early study and understanding of design complexity.

  18. Undiscovered locatable mineral resources in the Bay Resource Management Plan Area, Southwestern Alaska: A probabilistic assessment

    USGS Publications Warehouse

    Schmidt, J.M.; Light, T.D.; Drew, L.J.; Wilson, Frederic H.; Miller, M.L.; Saltus, R.W.

    2007-01-01

    The Bay Resource Management Plan (RMP) area in southwestern Alaska, north and northeast of Bristol Bay contains significant potential for undiscovered locatable mineral resources of base and precious metals, in addition to metallic mineral deposits that are already known. A quantitative probabilistic assessment has identified 24 tracts of land that are permissive for 17 mineral deposit model types likely to be explored for within the next 15 years in this region. Commodities we discuss in this report that have potential to occur in the Bay RMP area are Ag, Au, Cr, Cu, Fe, Hg, Mo, Pb, Sn, W, Zn, and platinum-group elements. Geoscience data for the region are sufficient to make quantitative estimates of the number of undiscovered deposits only for porphyry copper, epithermal vein, copper skarn, iron skarn, hot-spring mercury, placer gold, and placer platinum-deposit models. A description of a group of shallow- to intermediate-level intrusion-related gold deposits is combined with grade and tonnage data from 13 deposits of this type to provide a quantitative estimate of undiscovered deposits of this new type. We estimate that significant resources of Ag, Au, Cu, Fe, Hg, Mo, Pb, and Pt occur in the Bay Resource Management Plan area in these deposit types. At the 10th percentile probability level, the Bay RMP area is estimated to contain 10,067 metric tons silver, 1,485 metric tons gold, 12.66 million metric tons copper, 560 million metric tons iron, 8,100 metric tons mercury, 500,000 metric tons molybdenum, 150 metric tons lead, and 17 metric tons of platinum in undiscovered deposits of the eight quantified deposit types. At the 90th percentile probability level, the Bay RMP area is estimated to contain 89 metric tons silver, 14 metric tons gold, 911,215 metric tons copper, 330,000 metric tons iron, 1 metric ton mercury, 8,600 metric tons molybdenum and 1 metric ton platinum in undiscovered deposits of the eight deposit types. Other commodities, which may occur in the Bay RMP area, include Cr, Sn, W, Zn, and other platinum-group elements such as Ir, Os, and Pd. We define 13 permissive tracts for 9 additional deposit model types. These are: Besshi- and Cyprus, and Kuroko-volcanogenic massive sulfides, hot spring gold, low sulfide gold veins, Mississippi-Valley Pb-Zn, tin greisen, zinc skarn and Alaskan-type zoned ultramafic platinum-group element deposits. Resources in undiscovered deposits of these nine types have not been quantified, and would be in addition to those in known deposits and the undiscovered resources listed above. Additional mineral resources also may occur in the Bay RMP area in deposit types, which were not considered here.

  19. Candidate control design metrics for an agile fighter

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Bailey, Melvin L.; Ostroff, Aaron J.

    1991-01-01

    Success in the fighter combat environment of the future will certainly demand increasing capability from aircraft technology. These advanced capabilities in the form of superagility and supermaneuverability will require special design techniques which translate advanced air combat maneuvering requirements into design criteria. Control design metrics can provide some of these techniques for the control designer. This study presents an overview of control design metrics and investigates metrics for advanced fighter agility. The objectives of various metric users, such as airframe designers and pilots, are differentiated from the objectives of the control designer. Using an advanced fighter model, metric values are documented over a portion of the flight envelope through piloted simulation. These metric values provide a baseline against which future control system improvements can be compared and against which a control design methodology can be developed. Agility is measured for axial, pitch, and roll axes. Axial metrics highlight acceleration and deceleration capabilities under different flight loads and include specific excess power measurements to characterize energy maneuverability. Pitch metrics cover both body-axis and wind-axis pitch rates and accelerations. Included in pitch metrics are nose pointing metrics which highlight displacement capability between the nose and the velocity vector. Roll metrics (or torsion metrics) focus on rotational capability about the wind axis.

  20. Hyperkahler metrics on focus-focus fibrations

    NASA Astrophysics Data System (ADS)

    Zhao, Jie

    In this thesis, we focus on the study of hyperkahler metrics in four dimensions and carry out GMN's construction of a hyperkahler metric on focus-focus fibrations. We explicitly compute the action-angle coordinates on the local model of the focus-focus fibration, and show that its semi-global invariant should be harmonic to admit a compatible holomorphic 2-form. Then we study the canonical semi-flat metric on it. After the instanton correction inspired by physics, we obtain a family of generalized Ooguri-Vafa metrics on focus-focus fibrations, which provides further local examples of explicit hyperkahler metrics in four dimensions. In addition, we also make some exploration of the Ooguri-Vafa metric in the thesis. We study the potential function of the Ooguri-Vafa metric, and prove that its nodal set is a cylinder of bounded radius 1 < R < 1. As a result, we find that only on a finite neighborhood of the singular fibre is the Ooguri-Vafa metric hyperkahler. Finally, we give some estimates for the diameter of the fibration under the Ooguri-Vafa metric, which confirms that the Ooguri-Vafa metric is not complete. The new family of metrics constructed in the thesis, we think, will provide more examples for the further study of Lagrangian fibrations and mirror symmetry in the future.

  1. Anisotropic Bianchi Type-I and Type-II Bulk Viscous String Cosmological Models Coupled with Zero Mass Scalar Field

    NASA Astrophysics Data System (ADS)

    Venkateswarlu, R.; Sreenivas, K.

    2014-06-01

    The LRS Bianchi type-I and type-II string cosmological models are studied when the source for the energy momentum tensor is a bulk viscous stiff fluid containing one-dimensional strings together with a zero-mass scalar field. We have obtained the solutions of the field equations assuming a functional relationship between metric coefficients when the metric is Bianchi type-I, and a constant deceleration parameter in the case of the Bianchi type-II metric. The physical and kinematical properties of the models are discussed in each case. The effects of viscosity on the physical and kinematical properties are also studied.

  2. f(T) gravity and energy distribution in Landau-Lifshitz prescription

    NASA Astrophysics Data System (ADS)

    Ganiou, M. G.; Houndjo, M. J. S.; Tossa, J.

    We investigate in this paper the Landau-Lifshitz energy distribution in the framework of f(T) theory, viewed as a modified version of Teleparallel theory. Building on important Teleparallel results on the localization of energy, our investigation generalizes the Landau-Lifshitz prescription for computing the energy-momentum complex to the framework of f(T) gravity, as has been done in modified versions of General Relativity. We first compute the energy density for three plane-symmetric metrics in vacuum. We find for the second metric that the energy density vanishes independently of the f(T) model. We also find that the Teleparallel Landau-Lifshitz energy-momentum complex formulations for these metrics differ from those obtained in General Relativity for the same metrics. Second, the calculations are performed for the cosmic string spacetime metric. It follows that the energy distribution depends on the mass M and the radius r of the cosmic string and is strongly affected by the parameters of the considered quadratic and cubic f(T) models.

  3. A Linearized Model for Flicker and Contrast Thresholds at Various Retinal Illuminances

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert; Watson, Andrew

    2015-01-01

    We previously proposed a flicker visibility metric for bright displays, based on psychophysical data collected at a high mean luminance. Here we extend the metric to other mean luminances. This extension relies on a linear relation between log sensitivity and critical fusion frequency, and a linear relation between critical fusion frequency and log retinal illuminance. Consistent with our previous metric, the extended flicker visibility metric is measured in just-noticeable differences (JNDs).

  4. Algal bioassessment metrics for wadeable streams and rivers of Maine, USA

    USGS Publications Warehouse

    Danielson, Thomas J.; Loftin, Cynthia S.; Tsomides, Leonidas; DiFranco, Jeanne L.; Connors, Beth

    2011-01-01

    Many state water-quality agencies use biological assessment methods based on lotic fish and macroinvertebrate communities, but relatively few states have incorporated algal multimetric indices into monitoring programs. Algae are good indicators for monitoring water quality because they are sensitive to many environmental stressors. We evaluated benthic algal community attributes along a landuse gradient affecting wadeable streams and rivers in Maine, USA, to identify potential bioassessment metrics. We collected epilithic algal samples from 193 locations across the state. We computed weighted-average optima for common taxa for total P, total N, specific conductance, % impervious cover, and % developed watershed, which included all land use that is no longer forest or wetland. We assigned Maine stream tolerance values and categories (sensitive, intermediate, tolerant) to taxa based on their optima and responses to watershed disturbance. We evaluated performance of algal community metrics used in multimetric indices from other regions and novel metrics based on Maine data. Metrics specific to Maine data, such as the relative richness of species characterized as being sensitive in Maine, were more correlated with % developed watershed than most metrics used in other regions. Few community-structure attributes (e.g., species richness) were useful metrics in Maine. Performance of algal bioassessment models would be improved if metrics were evaluated with attributes of local data before inclusion in multimetric indices or statistical models. © 2011 by The North American Benthological Society.
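
    The weighted-average optima mentioned in the abstract are conventionally computed as the abundance-weighted mean of the environmental values at the sites where a taxon occurs; a minimal sketch of that convention (with made-up abundances and specific-conductance values, not the Maine data) follows.

```python
import numpy as np

def weighted_average_optimum(abundance, env_value):
    # Weighted-average optimum of a taxon along an environmental gradient:
    # the abundance-weighted mean of the site values where the taxon occurs.
    abundance = np.asarray(abundance, float)
    env_value = np.asarray(env_value, float)
    return float((abundance * env_value).sum() / abundance.sum())

# Example: a taxon most abundant at low-conductivity sites.
print(weighted_average_optimum([10, 5, 1, 0], [50, 120, 400, 900]))  # ~94 uS/cm
```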

  5. Attributing uncertainty in streamflow simulations due to variable inputs via the Quantile Flow Deviation metric

    NASA Astrophysics Data System (ADS)

    Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish

    2018-06-01

    Every model to characterise a real world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation or input errors make the prediction of modelled responses more uncertain. By way of a recently developed attribution metric, this study is aimed at developing a method for analysing variability in model inputs together with model structure variability to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments is used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.

  6. Metrics, models and data for assessment of resilience of urban infrastructure systems : final report

    DOT National Transportation Integrated Search

    2016-12-01

    This document is a summary of findings based on this research as presented in several conferences during the : course of the project. The research focused on identifying the basic metrics and models that can be used to : develop representations of pe...

  7. Numerical model validation using experimental data: Application of the area metric on a Francis runner

    NASA Astrophysics Data System (ADS)

    Chatenet, Q.; Tahan, A.; Gagnon, M.; Chamberland-Lauzon, J.

    2016-11-01

    Nowadays, engineers are able to solve complex equations thanks to the increase of computing capacity. Thus, finite element software is widely used, especially in the field of mechanics, to predict part behavior such as strain, stress and natural frequency. However, it can be difficult to determine how a model might be right or wrong, or whether a model is better than another one. Nevertheless, during the design phase, it is very important to estimate how the hydroelectric turbine blades will behave according to the stress to which they are subjected. Indeed, the static and dynamic stress levels will influence the blade's fatigue resistance and thus its lifetime, which is a significant feature. In the industry, engineers generally use either graphic representation, hypothesis tests such as the Student test, or linear regressions in order to compare experimental to estimated data from the numerical model. Due to the variability in personal interpretation (reproducibility), graphical validation is not considered objective. For an objective assessment, it is essential to use a robust validation metric to measure the conformity of predictions against data. We propose to use the area metric in the case of a turbine blade that meets the key points of the ASME Standards and produces a quantitative measure of agreement between simulations and empirical data. This validation metric excludes any subjective belief or acceptance criterion, which increases robustness. The present work is aimed at applying a validation method according to ASME V&V 10 recommendations. Firstly, the area metric is applied to the case of a real Francis runner whose geometry and boundary conditions are complex. Secondly, the area metric will be compared to classical regression methods to evaluate the performance of the method. Finally, we will discuss the use of the area metric as a tool to correct simulations.
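
    The area metric referred to above is, in its common form, the area between the empirical cumulative distribution functions of the simulated and measured quantities; a minimal sketch of that computation (assuming simple one-dimensional samples, not the Francis-runner data) is:

        import numpy as np

        def area_metric(sim, exp):
            """Area between the empirical CDFs of simulated and experimental samples.

            Returned in the units of the data; 0 means the two distributions coincide.
            """
            sim = np.sort(np.asarray(sim, dtype=float))
            exp = np.sort(np.asarray(exp, dtype=float))
            # Evaluate both step CDFs on the pooled support.
            grid = np.sort(np.concatenate([sim, exp]))
            f_sim = np.searchsorted(sim, grid, side="right") / sim.size
            f_exp = np.searchsorted(exp, grid, side="right") / exp.size
            # Integrate |F_sim - F_exp| over the grid (piecewise-constant CDFs).
            return float(np.sum(np.abs(f_sim - f_exp)[:-1] * np.diff(grid)))

        # Example: predicted vs. measured strain at one location (made-up numbers).
        rng = np.random.default_rng(0)
        predicted = rng.normal(100.0, 5.0, size=1000)   # model ensemble
        measured = rng.normal(103.0, 4.0, size=12)      # sparse experimental data
        print(f"area metric = {area_metric(predicted, measured):.2f}")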

  8. Evaluation of Science.

    PubMed

    Usmani, Adnan Mahmmood; Meo, Sultan Ayoub

    2011-01-01

    Scientific achievement by publishing a scientific manuscript in a peer-reviewed biomedical journal is an important ingredient of research, along with career-enhancing advantages and a significant amount of personal satisfaction. The road to evaluating science (research, scientific publications) among scientists often seems complicated. A scientist's career is generally summarized by the number of publications / citations, teaching undergraduate, graduate and post-doctoral students, writing or reviewing grants and papers, preparing for and organizing meetings, participating in collaborations and conferences, advising colleagues, and serving on editorial boards of scientific journals. Scientists have been sizing up their colleagues since science began. Scientometricians have invented a wide variety of algorithms called science metrics to evaluate science. Many of the science metrics are even unknown to the everyday scientist. Unfortunately, there is no all-in-one metric. Each of them has its own strength, limitation and scope. Some of them are mistakenly applied to evaluate individuals, and each is surrounded by a cloud of variants designed to help them apply across different scientific fields or different career stages [1]. A suitable indicator should be chosen by considering the purpose of the evaluation, and how the results will be used. Scientific evaluation assists us in computing research performance, comparing with peers, forecasting growth, identifying excellence in research, citation ranking, finding the influence of research, measuring productivity, making policy decisions, securing funds for research and spotting trends. Key concepts in science metrics are output and impact. Evaluation of science is traditionally expressed in terms of citation counts. Although most science metrics are based on citation counts, the two most commonly used are the impact factor [2] and the h-index [3].
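
    For concreteness, the h-index mentioned above [3] is the largest number h such that h of an author's papers have at least h citations each; a minimal sketch:

        def h_index(citations):
            """h-index: the largest h such that h papers have at least h citations each."""
            cites = sorted(citations, reverse=True)
            h = 0
            for rank, c in enumerate(cites, start=1):
                if c >= rank:
                    h = rank
                else:
                    break
            return h

        # Example: a researcher with papers cited 10, 8, 5, 4 and 3 times has h-index 4.
        print(h_index([10, 8, 5, 4, 3]))  # -> 4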

  9. What then do we do about computer security?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suppona, Roger A.; Mayo, Jackson R.; Davis, Christopher Edward

    This report presents the answers that an informal and unfunded group at SNL provided for questions concerning computer security posed by Jim Gosler, Sandia Fellow (00002). The primary purpose of this report is to record our current answers; hopefully those answers will turn out to be answers indeed. The group was formed in November 2010. In November 2010 Jim Gosler, Sandia Fellow, asked several of us several pointed questions about computer security metrics. Never mind that some of the best minds in the field have been trying to crack this nut without success for decades. Jim asked Campbell to lead an informal and unfunded group to answer the questions. With time Jim invited several more Sandians to join in. We met a number of times both with Jim and without him. At Jim's direction we contacted a number of people outside Sandia who Jim thought could help. For example, we interacted with IBM's T.J. Watson Research Center and held a one-day, videoconference workshop with them on the questions.

  10. Modeling Imperfect Generator Behavior in Power System Operation Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krad, Ibrahim

    A key component in power system operations is the use of computer models to quickly study and analyze different operating conditions and futures in an efficient manner. The outputs of these models are sensitive to the data used in them as well as the assumptions made during their execution. One typical assumption is that generators and load assets perfectly follow operator control signals. While this is a valid simulation assumption, generators may not always accurately follow control signals. This imperfect response of generators could impact cost and reliability metrics. This paper proposes a generator model that captures this imperfect behavior and examines its impact on production costs and reliability metrics using a steady-state power system operations model. Preliminary analysis shows that while costs remain relatively unchanged, there could be significant impacts on reliability metrics.

  11. On the security of consumer wearable devices in the Internet of Things.

    PubMed

    Tahir, Hasan; Tahir, Ruhma; McDonald-Maier, Klaus

    2018-01-01

    Miniaturization of computer hardware and the demand for network capable devices has resulted in the emergence of a new class of technology called wearable computing. Wearable devices have many purposes like lifestyle support, health monitoring, fitness monitoring, entertainment, industrial uses, and gaming. Wearable devices are hurriedly being marketed in an attempt to capture an emerging market. Owing to this, some devices do not adequately address the need for security. To enable virtualization and connectivity wearable devices sense and transmit data, therefore it is essential that the device, its data and the user are protected. In this paper the use of novel Integrated Circuit Metric (ICMetric) technology for the provision of security in wearable devices has been suggested. ICMetric technology uses the features of a device to generate an identification which is then used for the provision of cryptographic services. This paper explores how a device ICMetric can be generated by using the accelerometer and gyroscope sensor. Since wearable devices often operate in a group setting the work also focuses on generating a group identification which is then used to deliver services like authentication, confidentiality, secure admission and symmetric key generation. Experiment and simulation results prove that the scheme offers high levels of security without compromising on resource demands.
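
    The abstract does not give the ICMetric feature-extraction details, so the following is only a hedged illustration of the general idea: derive stable features from accelerometer and gyroscope traces, quantise them, and hash the result into an identification that can seed key generation. The feature choice and quantisation here are placeholders, not the authors' algorithm.

        import hashlib
        import numpy as np

        def device_identification(accel, gyro, precision=2):
            """Illustrative only: derive a bit string from sensor-bias-like features
            (per-axis means and variances), then hash it into key material. The actual
            ICMetric feature extraction and quantisation are more involved.
            """
            feats = []
            for trace in (np.asarray(accel), np.asarray(gyro)):
                feats.extend(np.round(trace.mean(axis=0), precision))
                feats.extend(np.round(trace.var(axis=0), precision))
            feature_string = ",".join(f"{v:+.{precision}f}" for v in feats)
            return hashlib.sha256(feature_string.encode()).hexdigest()

        # Example with synthetic 3-axis traces (rows = samples, columns = x, y, z).
        rng = np.random.default_rng(1)
        accel = rng.normal([0.02, -0.01, 9.81], 0.05, size=(500, 3))
        gyro = rng.normal([0.001, 0.003, -0.002], 0.01, size=(500, 3))
        print(device_identification(accel, gyro)[:16], "...")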

  12. On the security of consumer wearable devices in the Internet of Things

    PubMed Central

    Tahir, Hasan; Tahir, Ruhma; McDonald-Maier, Klaus

    2018-01-01

    Miniaturization of computer hardware and the demand for network capable devices has resulted in the emergence of a new class of technology called wearable computing. Wearable devices have many purposes like lifestyle support, health monitoring, fitness monitoring, entertainment, industrial uses, and gaming. Wearable devices are hurriedly being marketed in an attempt to capture an emerging market. Owing to this, some devices do not adequately address the need for security. To enable virtualization and connectivity wearable devices sense and transmit data, therefore it is essential that the device, its data and the user are protected. In this paper the use of novel Integrated Circuit Metric (ICMetric) technology for the provision of security in wearable devices has been suggested. ICMetric technology uses the features of a device to generate an identification which is then used for the provision of cryptographic services. This paper explores how a device ICMetric can be generated by using the accelerometer and gyroscope sensor. Since wearable devices often operate in a group setting the work also focuses on generating a group identification which is then used to deliver services like authentication, confidentiality, secure admission and symmetric key generation. Experiment and simulation results prove that the scheme offers high levels of security without compromising on resource demands. PMID:29668756

  13. Temporal Delineation and Quantification of Short Term Clustered Mining Seismicity

    NASA Astrophysics Data System (ADS)

    Woodward, Kyle; Wesseloo, Johan; Potvin, Yves

    2017-07-01

    The assessment of the temporal characteristics of seismicity is fundamental to understanding and quantifying the seismic hazard associated with mining, the effectiveness of strategies and tactics used to manage seismic hazard, and the relationship between seismicity and changes to the mining environment. This article aims to improve the accuracy and precision with which the temporal dimension of seismic responses can be quantified and delineated. We present a review and discussion on the occurrence of time-dependent mining seismicity with a specific focus on temporal modelling and the modified Omori law (MOL). This forms the basis for the development of a simple weighted metric that allows for the consistent temporal delineation and quantification of a seismic response. The optimisation of this metric allows for the selection of the most appropriate modelling interval given the temporal attributes of time-dependent mining seismicity. We evaluate the performance of the weighted metric for the modelling of a synthetic seismic dataset. This assessment shows that seismic responses can be quantified and delineated by the MOL, with reasonable accuracy and precision, when the modelling is optimised by evaluating the weighted MLE metric. Furthermore, this assessment highlights that decreased weighted MLE metric performance can be expected if there is a lack of contrast between the temporal characteristics of events associated with different processes.
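
    For readers unfamiliar with the modified Omori law, its rate form and the expected event count over an interval can be sketched as follows (parameter values are illustrative, and this is not the authors' weighted MLE metric itself):

        import numpy as np

        def mol_rate(t, K, c, p):
            """Modified Omori law event rate n(t) = K / (t + c)**p."""
            return K / (t + c) ** p

        def mol_expected_count(t1, t2, K, c, p):
            """Expected number of events in [t1, t2] (analytic integral of the rate)."""
            if np.isclose(p, 1.0):
                return K * (np.log(t2 + c) - np.log(t1 + c))
            return K * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)

        # Example: a decaying response with K=50 events/hour, c=0.1 h, p=1.2,
        # compared over the first hour and the following hour after a blast.
        print(mol_expected_count(0.0, 1.0, K=50, c=0.1, p=1.2))
        print(mol_expected_count(1.0, 2.0, K=50, c=0.1, p=1.2))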

  14. Temporal Variability of Daily Personal Magnetic Field Exposure Metrics in Pregnant Women

    PubMed Central

    Lewis, Ryan C.; Evenson, Kelly R.; Savitz, David A.; Meeker, John D.

    2015-01-01

    Recent epidemiology studies of power-frequency magnetic fields and reproductive health have characterized exposures using data collected from personal exposure monitors over a single day, possibly resulting in exposure misclassification due to temporal variability in daily personal magnetic field exposure metrics, but relevant data in adults are limited. We assessed the temporal variability of daily central tendency (time-weighted average, median) and peak (upper percentiles, maximum) personal magnetic field exposure metrics over seven consecutive days in 100 pregnant women. When exposure was modeled as a continuous variable, central tendency metrics had substantial reliability, whereas peak metrics had fair (maximum) to moderate (upper percentiles) reliability. The predictive ability of a single day metric to accurately classify participants into exposure categories based on a weeklong metric depended on the selected exposure threshold, with sensitivity decreasing with increasing exposure threshold. Consistent with the continuous measures analysis, sensitivity was higher for central tendency metrics than for peak metrics. If there is interest in peak metrics, more than one day of measurement is needed over the window of disease susceptibility to minimize measurement error, but one day may be sufficient for central tendency metrics. PMID:24691007

  15. Fiber Based Optical Amplifier for High Energy Laser Pulses Final Report CRADA No. TC02100.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messerly, M.; Cunningham, P.

    This was a collaborative effort between Lawrence Livermore National Security, LLC (formerly The Regents of the University of California)/Lawrence Livermore National Laboratory (LLNL), and The Boeing Company to develop an optical fiber-based laser amplifier capable of producing and sustaining very high-energy, nanosecond-scale optical pulses. The overall technical objective of this CRADA was to research, design, and develop an optical fiber-based amplifier that would meet specific metrics.

  16. Faded Colors: From the Homeland Security Advisory System (HSAS) to the National Terrorism Advisory System (NTAS)

    DTIC Science & Technology

    2013-03-01

    …have been issued during this time, no metrics are available to measure its effectiveness. The federal government, state and local governments, private…

  17. Energy Dependence: The $1.4 Trillion Addiction Threatening National Security

    DTIC Science & Technology

    2010-02-01

    …particularly when driving range is not a crucial consideration. This comparison is a metric referred to as the gallon of gasoline equivalent (GGE)…Currently, retail hydrogen costs between $2.10 and $9.10 per GGE. As HFCVs penetrate the market, projected retail cost estimates for hydrogen fuel…range from $1.75 to $4.25 per GGE. The HFCV provides size, power, and range capability comparable to the ICE. It also "achieves two times the…

  18. SMART: Security Measurements and Assuring Reliability Through Metrics Technology

    DTIC Science & Technology

    2009-11-01

    …analyzing C/C++ programs. C++ is a terrible language for tool vendors to handle. There are only a handful of people in the world capable of writing an…input sanitizing, etc. These features aid in porting httpd to new platforms. For systems written in C/C++, it appears that the use of preprocessor…

  19. U.S. Navy Bloodhounds: Establishing A New Maritime Security Combatant

    DTIC Science & Technology

    2016-06-01

    …contributing to over 335,000 drug-related deaths in the same 10-year span…This advocacy is important because TCOs evade traditional land-based borders, smuggling metric tons of marijuana and cocaine via maritime routes…

  20. Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.

    PubMed

    Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui

    2018-03-01

    Changing the metric on the data may change the data distribution, hence a good distance metric can promote the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by the multiple kernel representation. By this approach, we project the data into a high dimensional space, where the data can be well represented by linear ML. Then, we reformulate the linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.

  1. Comparison of continuous versus categorical tumor measurement-based metrics to predict overall survival in cancer treatment trials.

    PubMed

    An, Ming-Wen; Mandrekar, Sumithra J; Branda, Megan E; Hillman, Shauna L; Adjei, Alex A; Pitot, Henry C; Goldberg, Richard M; Sargent, Daniel J

    2011-10-15

    The categorical definition of response assessed via the Response Evaluation Criteria in Solid Tumors has documented limitations. We sought to identify alternative metrics for tumor response that improve prediction of overall survival. Individual patient data from three North Central Cancer Treatment Group trials (N0026, n = 117; N9741, n = 1,109; and N9841, n = 332) were used. Continuous metrics of tumor size based on longitudinal tumor measurements were considered in addition to a trichotomized response [TriTR: response (complete or partial) vs. stable disease vs. progression]. Cox proportional hazards models, adjusted for treatment arm and baseline tumor burden, were used to assess the impact of the metrics on subsequent overall survival, using a landmark analysis approach at 12, 16, and 24 weeks postbaseline. Model discrimination was evaluated by the concordance (c) index. The overall best response rates for the three trials were 26%, 45%, and 25%, respectively. Although nearly all metrics were statistically significantly associated with overall survival at the different landmark time points, the concordance indices (c-index) for the traditional response metrics ranged from 0.59 to 0.65; for the continuous metrics from 0.60 to 0.66; and for the TriTR metrics from 0.64 to 0.69. The c-indices for TriTR at 12 weeks were comparable with those at 16 and 24 weeks. Continuous tumor measurement-based metrics provided no predictive improvement over traditional response-based metrics or TriTR; TriTR had better predictive ability than best TriTR or confirmed response. If confirmed, TriTR represents a promising endpoint for future phase II trials. ©2011 AACR.
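
    The concordance (c) index used above measures how often the model ranks pairs of patients correctly; a minimal O(n^2) sketch of Harrell's version for right-censored data (illustrative data only) is:

        import numpy as np

        def concordance_index(time, event, risk):
            """Harrell's c-index (simple O(n^2) version).

            time  : observed follow-up times
            event : 1 if death observed, 0 if censored
            risk  : model risk score (higher = worse predicted survival)
            """
            time, event, risk = map(np.asarray, (time, event, risk))
            concordant, comparable = 0.0, 0
            n = len(time)
            for i in range(n):
                for j in range(n):
                    # A pair is usable if subject i has an observed event before time[j].
                    if event[i] == 1 and time[i] < time[j]:
                        comparable += 1
                        if risk[i] > risk[j]:
                            concordant += 1
                        elif risk[i] == risk[j]:
                            concordant += 0.5
            return concordant / comparable

        # Example: risk scores that track shorter survival give a c-index above 0.5.
        print(concordance_index(time=[5, 8, 12, 20], event=[1, 1, 0, 1],
                                risk=[0.9, 0.7, 0.4, 0.2]))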

  2. Comparison of continuous versus categorical tumor measurement-based metrics to predict overall survival in cancer treatment trials

    PubMed Central

    An, Ming-Wen; Mandrekar, Sumithra J.; Branda, Megan E.; Hillman, Shauna L.; Adjei, Alex A.; Pitot, Henry; Goldberg, Richard M.; Sargent, Daniel J.

    2011-01-01

    Purpose The categorical definition of response assessed via the Response Evaluation Criteria in Solid Tumors has documented limitations. We sought to identify alternative metrics for tumor response that improve prediction of overall survival. Experimental Design Individual patient data from three North Central Cancer Treatment Group trials (N0026, n=117; N9741, n=1109; N9841, n=332) were used. Continuous metrics of tumor size based on longitudinal tumor measurements were considered in addition to a trichotomized response (TriTR: Response vs. Stable vs. Progression). Cox proportional hazards models, adjusted for treatment arm and baseline tumor burden, were used to assess the impact of the metrics on subsequent overall survival, using a landmark analysis approach at 12-, 16- and 24-weeks post baseline. Model discrimination was evaluated using the concordance (c) index. Results The overall best response rates for the three trials were 26%, 45%, and 25% respectively. While nearly all metrics were statistically significantly associated with overall survival at the different landmark time points, the c-indices for the traditional response metrics ranged from 0.59-0.65; for the continuous metrics from 0.60-0.66 and for the TriTR metrics from 0.64-0.69. The c-indices for TriTR at 12-weeks were comparable to those at 16- and 24-weeks. Conclusions Continuous tumor-measurement-based metrics provided no predictive improvement over traditional response based metrics or TriTR; TriTR had better predictive ability than best TriTR or confirmed response. If confirmed, TriTR represents a promising endpoint for future Phase II trials. PMID:21880789

  3. Simultaneously Mitigating Near-Term Climate Change and Improving Human Health and Food Security

    NASA Astrophysics Data System (ADS)

    Shindell, Drew; Kuylenstierna, Johan C. I.; Vignati, Elisabetta; van Dingenen, Rita; Amann, Markus; Klimont, Zbigniew; Anenberg, Susan C.; Muller, Nicholas; Janssens-Maenhout, Greet; Raes, Frank; Schwartz, Joel; Faluvegi, Greg; Pozzoli, Luca; Kupiainen, Kaarle; Höglund-Isaksson, Lena; Emberson, Lisa; Streets, David; Ramanathan, V.; Hicks, Kevin; Oanh, N. T. Kim; Milly, George; Williams, Martin; Demkine, Volodymyr; Fowler, David

    2012-01-01

    Tropospheric ozone and black carbon (BC) contribute to both degraded air quality and global warming. We considered ~400 emission control measures to reduce these pollutants by using current technology and experience. We identified 14 measures targeting methane and BC emissions that reduce projected global mean warming ~0.5°C by 2050. This strategy avoids 0.7 to 4.7 million annual premature deaths from outdoor air pollution and increases annual crop yields by 30 to 135 million metric tons due to ozone reductions in 2030 and beyond. Benefits of methane emissions reductions are valued at $700 to $5000 per metric ton, which is well above typical marginal abatement costs (less than $250). The selected controls target different sources and influence climate on shorter time scales than those of carbon dioxide-reduction measures. Implementing both substantially reduces the risks of crossing the 2°C threshold.

  4. A City and National Metric measuring Isolation from the Global Market for Food Security Assessment

    NASA Technical Reports Server (NTRS)

    Brown, Molly E.; Silver, Kirk Coleman; Rajagopalan, Krishnan

    2013-01-01

    The World Bank has invested in infrastructure in developing countries for decades. This investment aims to reduce the isolation of markets, reducing both seasonality and variability in food availability and food prices. Here we combine city market price data, global distance to port, and country infrastructure data to create a new Isolation Index for countries and cities around the world. Our index quantifies the isolation of a city from the global market. We demonstrate that an index built at the country level can be applied at a sub-national level to quantify city isolation. In doing so, we offer policy makers an alternative metric to assess food insecurity. We compare our isolation index with other indices and economic data found in the literature. We show that our index measures economic isolation regardless of economic stability using correlation analysis.

  5. Yield Mapping for Different Crops in Sudano-Sahelian Smallholder Farming Systems: Results Based on Metric Worldview and Decametric SPOT-5 Take5 Time Series

    NASA Astrophysics Data System (ADS)

    Blaes, X.; Lambert, M.-J.; Chome, G.; Traore, P. S.; de By, R. A.; Defourny, P.

    2016-08-01

    Efficient yield mapping in Sudano-Sahelian Africa, characterized by a very heterogeneous landscape, is crucial to help ensure food security and decrease smallholder farmers' vulnerability. Thanks to an unprecedented in-situ dataset and HR and VHR remote sensing time series collected in the Koutiala district (south-eastern Mali), yields and some key factors of yield estimation were estimated. A crop-specific biomass map was derived with a mean absolute error of 20% using metric WorldView and 25% using decametric SPOT-5 TAKE5 image time series. The very high intra- and inter-field heterogeneity was captured efficiently. The presence of trees in the fields led to a general overestimation of yields, while the mixed pixels at the field borders introduced noise in the biomass predictions.

  6. MODEL SIMULATION STUDIES OF SCALE-DEPENDENT GAIN IN STREAM NUTRIENT ASSIMILATIVE CAPACITY RESULTING FROM IMPROVING NUTRIENT RETENTION METRICS

    EPA Science Inventory

    Considering the difficulty in measuring restoration success for nonpoint source pollutants, nutrient assimilative capacity (NAS) offers an attractive systems-based metric. Here NAS was defined using an impulse-response model of nitrate fate and transport. Eleven parameters were e...

  7. Stress in Harmonic Serialism

    ERIC Educational Resources Information Center

    Pruitt, Kathryn Ringler

    2012-01-01

    This dissertation proposes a model of word stress in a derivational version of Optimality Theory (OT) called Harmonic Serialism (HS; Prince and Smolensky 1993/2004, McCarthy 2000, 2006, 2010a). In this model, the metrical structure of a word is derived through a series of optimizations in which the "best" metrical foot is chosen…

  8. Interpreting lateral dynamic weight shifts using a simple inverted pendulum model.

    PubMed

    Kennedy, Michael W; Bretl, Timothy; Schmiedeler, James P

    2014-01-01

    Seventy-five young, healthy adults completed a lateral weight-shifting activity in which each shifted his/her center of pressure (CoP) to visually displayed target locations with the aid of visual CoP feedback. Each subject's CoP data were modeled using a single-link inverted pendulum system with a spring-damper at the joint. This extends the simple inverted pendulum model of static balance in the sagittal plane to lateral weight-shifting balance. The model controlled pendulum angle using PD control and a ramp setpoint trajectory, and weight-shifting was characterized by both shift speed and a non-minimum phase (NMP) behavior metric. This NMP behavior metric examines the force magnitude at shift initiation and provides weight-shifting balance performance information that parallels the examination of peak ground reaction forces in gait analysis. Control parameters were optimized on a subject-by-subject basis to match balance metrics for modeled results to metric values calculated from experimental data. Overall, the model matches experimental data well (average percent error of 0.35% for shifting speed and 0.05% for NMP behavior). These results suggest that the single-link inverted pendulum model can be used effectively to capture lateral weight-shifting balance, as it has been shown to model static balance. Copyright © 2014 Elsevier B.V. All rights reserved.
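
    A minimal sketch of the kind of model described, a single-link inverted pendulum whose joint torque comes from PD control tracking a ramp setpoint, is given below; the mass, length, gains and time step are illustrative assumptions, not the study's fitted parameters.

        import numpy as np

        def simulate_weight_shift(theta_target=0.1, shift_time=1.0, kp=5000.0, kd=700.0,
                                  m=70.0, l=1.0, dt=0.001, t_end=3.0):
            """Single-link inverted pendulum in the frontal plane with PD control
            tracking a ramp setpoint (a lateral weight shift). Linearised dynamics:
                I * theta_ddot = m*g*l*theta + tau,  tau = -kp*(theta - ref) - kd*theta_dot
            Returns time, lean angle and joint torque histories.
            """
            g, I = 9.81, m * l ** 2
            n = int(t_end / dt)
            t = np.arange(n) * dt
            theta = np.zeros(n)
            theta_dot = 0.0
            tau = np.zeros(n)
            for k in range(1, n):
                ref = theta_target * min(t[k] / shift_time, 1.0)        # ramp setpoint
                tau[k] = -kp * (theta[k - 1] - ref) - kd * theta_dot     # PD control torque
                theta_ddot = (m * g * l * theta[k - 1] + tau[k]) / I     # gravity is destabilising
                theta_dot += theta_ddot * dt
                theta[k] = theta[k - 1] + theta_dot * dt
            return t, theta, tau

        t, theta, tau = simulate_weight_shift()
        print(f"final lean angle = {theta[-1]:.3f} rad, peak joint torque = {abs(tau).max():.0f} N*m")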

  9. Metrics for assessing the performance of morphodynamic models of braided rivers at event and reach scales

    NASA Astrophysics Data System (ADS)

    Williams, Richard; Measures, Richard; Hicks, Murray; Brasington, James

    2017-04-01

    Advances in geomatics technologies have transformed the monitoring of reach-scale (100-101 km) river morphodynamics. Hyperscale Digital Elevation Models (DEMs) can now be acquired at temporal intervals that are commensurate with the frequencies of high-flow events that force morphological change. The low vertical errors associated with such DEMs enable DEMs of Difference (DoDs) to be generated to quantify patterns of erosion and deposition, and derive sediment budgets using the morphological approach. In parallel with reach-scale observational advances, high-resolution, two-dimensional, physics-based numerical morphodynamic models are now computationally feasible for unsteady, reach-scale simulations. In light of this observational and predictive progress, there is a need to identify appropriate metrics that can be extracted from DEMs and DoDs to assess model performance. Nowhere is this more pertinent than in braided river environments, where numerous mobile channels that intertwine around mid-channel bars result in complex patterns of erosion and deposition, thus making model assessment particularly challenging. This paper identifies and evaluates a range of morphological and morphological-change metrics that can be used to assess predictions of braided river morphodynamics at the timescale of single storm events. A depth-averaged, mixed-grainsize Delft3D morphodynamic model was used to simulate morphological change during four discrete high-flow events, ranging from 91 to 403 m3s-1, along a 2.5 x 0.7 km reach of the braided, gravel-bed Rees River, New Zealand. Pre- and post-event topographic surveys, using a fusion of Terrestrial Laser Scanning and optical-empirical bathymetric mapping, were used to produce 0.5 m resolution DEMs and DoDs. The pre- and post-event DEMs for a moderate (227m3s-1) high-flow event were used to calibrate the model. DEMs and DoDs from the other three high-flow events were used for model assessment using two approaches. First, "morphological" metrics were applied to compare observed and predicted post-event DEMs. These metrics include measures of confluence and bifurcation node density, bar shape, braiding intensity, and topographic comparisons using a form of the Brier Skill Score and cumulative frequency distributions of rugosity. Second, "morphological change" metrics were used to compare observed and predicted morphological change. These metrics included the extent of the morphologically active area, pairwise comparisons of morphological change (using kappa and fuzzy kappa statistics), and comparisons between vertical morphological change magnitude and elevation distribution. Results indicate that those metrics that assess characteristic features of braiding, rather than making direct comparisons, are most useful for assessing reach-scale braided river morphodynamic models. Together, the metrics indicate that there was a general affinity between observed and predicted braided river morphodynamics, both during small and large magnitude high-flow events. These results thus demonstrate how high-resolution, reach-scale, natural experiment datasets can be used to assess the efficacy of morphological models in predicting realistic patterns of erosion and deposition. This lays the foundation for the development and assessment of decadal scale morphodynamic models and their use in adaptive river basin management.
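
    The Brier Skill Score mentioned above is commonly computed for morphological predictions as one minus the ratio of the model's mean squared elevation error to that of a no-change (pre-event DEM) baseline; a minimal sketch on synthetic grids (the exact form used in the study may differ):

        import numpy as np

        def brier_skill_score(predicted, observed, baseline):
            """Brier Skill Score for morphological prediction:
            1 - MSE(model vs. observed) / MSE(baseline vs. observed),
            with the pre-event DEM as the baseline (no-change) prediction.
            1 = perfect, 0 = no better than assuming no change, < 0 = worse.
            """
            predicted, observed, baseline = map(np.asarray, (predicted, observed, baseline))
            mse_model = np.nanmean((predicted - observed) ** 2)
            mse_baseline = np.nanmean((baseline - observed) ** 2)
            return 1.0 - mse_model / mse_baseline

        # Example on synthetic 0.5 m DEM grids (elevations in metres).
        rng = np.random.default_rng(2)
        pre_event = rng.normal(100.0, 0.5, size=(200, 300))
        post_event = pre_event + rng.normal(0.0, 0.15, size=pre_event.shape)  # "observed" change
        modelled = pre_event + 0.8 * (post_event - pre_event)                  # model recovers 80% of change
        print(f"BSS = {brier_skill_score(modelled, post_event, pre_event):.2f}")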

  10. Comparison of spirometry and abdominal height as four-dimensional computed tomography metrics in lung

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu Wei; Low, Daniel A.; Parikh, Parag J.

    2005-07-15

    An important consideration in four-dimensional CT scanning is the selection of a breathing metric for sorting the CT data and modeling internal motion. This study compared two noninvasive breathing metrics, spirometry and abdominal height, against internal air content, used as a surrogate for internal motion. Both metrics were shown to be accurate, but the spirometry showed a stronger and more reproducible relationship than the abdominal height in the lung. The abdominal height was known to be affected by sensor placement and patient positioning while the spirometer exhibited signal drift. By combining these two, a normalization of the drift-free metric to tidal volume may be generated and the overall metric precision may be improved.

  11. Predicting streamflow regime metrics for ungauged streamsin Colorado, Washington, and Oregon

    NASA Astrophysics Data System (ADS)

    Sanborn, Stephen C.; Bledsoe, Brian P.

    2006-06-01

    Streamflow prediction in ungauged basins provides essential information for water resources planning and management and ecohydrological studies yet remains a fundamental challenge to the hydrological sciences. A methodology is presented for stratifying streamflow regimes of gauged locations, classifying the regimes of ungauged streams, and developing models for predicting a suite of ecologically pertinent streamflow metrics for these streams. Eighty-four streamflow metrics characterizing various flow regime attributes were computed along with physical and climatic drainage basin characteristics for 150 streams with little or no streamflow modification in Colorado, Washington, and Oregon. The diverse hydroclimatology of the study area necessitates flow regime stratification and geographically independent clusters were identified and used to develop separate predictive models for each flow regime type. Multiple regression models for flow magnitude, timing, and rate of change metrics were quite accurate with many adjusted R2 values exceeding 0.80, while models describing streamflow variability did not perform as well. Separate stratification schemes for high, low, and average flows did not considerably improve models for metrics describing those particular aspects of the regime over a scheme based on the entire flow regime. Models for streams identified as 'snowmelt' type were improved if sites in Colorado and the Pacific Northwest were separated to better stratify the processes driving streamflow in these regions thus revealing limitations of geographically independent streamflow clusters. This study demonstrates that a broad suite of ecologically relevant streamflow characteristics can be accurately modeled across large heterogeneous regions using this framework. Applications of the resulting models include stratifying biomonitoring sites and quantifying linkages between specific aspects of flow regimes and aquatic community structure. In particular, the results bode well for modeling ecological processes related to high-flow magnitude, timing, and rate of change such as the recruitment of fish and riparian vegetation across large regions.

  12. A Classification Scheme for Smart Manufacturing Systems’ Performance Metrics

    PubMed Central

    Lee, Y. Tina; Kumaraguru, Senthilkumaran; Jain, Sanjay; Robinson, Stefanie; Helu, Moneer; Hatim, Qais Y.; Rachuri, Sudarsan; Dornfeld, David; Saldana, Christopher J.; Kumara, Soundar

    2017-01-01

    This paper proposes a classification scheme for performance metrics for smart manufacturing systems. The discussion focuses on three such metrics: agility, asset utilization, and sustainability. For each of these metrics, we discuss classification themes, which we then use to develop a generalized classification scheme. In addition to the themes, we discuss a conceptual model that may form the basis for the information necessary for performance evaluations. Finally, we present future challenges in developing robust, performance-measurement systems for real-time, data-intensive enterprises. PMID:28785744

  13. Word Maturity: A New Metric for Word Knowledge

    ERIC Educational Resources Information Center

    Landauer, Thomas K.; Kireyev, Kirill; Panaccione, Charles

    2011-01-01

    A new metric, Word Maturity, estimates the development by individual students of knowledge of every word in a large corpus. The metric is constructed by Latent Semantic Analysis modeling of word knowledge as a function of the reading that a simulated learner has done and is calibrated by its developing closeness in information content to that of a…

  14. Metrics, The Measure of Your Future: Evaluation Report, 1977.

    ERIC Educational Resources Information Center

    North Carolina State Dept. of Public Instruction, Raleigh. Div. of Development.

    The primary goal of the Metric Education Project was the systematic development of a replicable educational model to facilitate the system-wide conversion to the metric system during the next five to ten years. This document is an evaluation of that project. Three sets of statistical evidence exist to support the fact that the project has been…

  15. Effect of thematic map misclassification on landscape multi-metric assessment.

    PubMed

    Kleindl, William J; Powell, Scott L; Hauer, F Richard

    2015-06-01

    Advancements in remote sensing and computational tools have increased our awareness of large-scale environmental problems, thereby creating a need for monitoring, assessment, and management at these scales. Over the last decade, several watershed and regional multi-metric indices have been developed to assist decision-makers with planning actions of these scales. However, these tools use remote-sensing products that are subject to land-cover misclassification, and these errors are rarely incorporated in the assessment results. Here, we examined the sensitivity of a landscape-scale multi-metric index (MMI) to error from thematic land-cover misclassification and the implications of this uncertainty for resource management decisions. Through a case study, we used a simplified floodplain MMI assessment tool, whose metrics were derived from Landsat thematic maps, to initially provide results that were naive to thematic misclassification error. Using a Monte Carlo simulation model, we then incorporated map misclassification error into our MMI, resulting in four important conclusions: (1) each metric had a different sensitivity to error; (2) within each metric, the bias between the error-naive metric scores and simulated scores that incorporate potential error varied in magnitude and direction depending on the underlying land cover at each assessment site; (3) collectively, when the metrics were combined into a multi-metric index, the effects were attenuated; and (4) the index bias indicated that our naive assessment model may overestimate floodplain condition of sites with limited human impacts and, to a lesser extent, either over- or underestimated floodplain condition of sites with mixed land use.
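
    A minimal sketch of the Monte Carlo idea described, resampling each pixel's "true" class from an assumed confusion matrix and recomputing a simple landscape metric, is shown below; the class scheme, confusion probabilities and metric are placeholders, not the study's MMI.

        import numpy as np

        # Classes: 0 = forest, 1 = agriculture, 2 = developed (illustrative scheme).
        # conf[i, j] = P(true class is j | mapped class is i); values are assumptions.
        conf = np.array([[0.90, 0.07, 0.03],
                         [0.10, 0.85, 0.05],
                         [0.05, 0.10, 0.85]])

        def metric(classes):
            """Example landscape metric: fraction of the assessment area that is forest."""
            return np.mean(classes == 0)

        rng = np.random.default_rng(3)
        mapped = rng.choice(3, size=10_000, p=[0.6, 0.3, 0.1])   # thematic map for one site

        naive = metric(mapped)                                   # error-naive score
        cum = conf[mapped].cumsum(axis=1)                        # per-pixel class probabilities
        sims = []
        for _ in range(1000):                                    # Monte Carlo realisations
            u = rng.random(mapped.size)
            simulated = (u[:, None] < cum).argmax(axis=1)        # sample a "true" class per pixel
            sims.append(metric(simulated))
        sims = np.array(sims)
        print(f"naive = {naive:.3f}, simulated mean = {sims.mean():.3f}, "
              f"95% interval = ({np.percentile(sims, 2.5):.3f}, {np.percentile(sims, 97.5):.3f})")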

  16. Developing an ANSI standard for image quality tools for the testing of active millimeter wave imaging systems

    NASA Astrophysics Data System (ADS)

    Barber, Jeffrey; Greca, Joseph; Yam, Kevin; Weatherall, James C.; Smith, Peter R.; Smith, Barry T.

    2017-05-01

    In 2016, the millimeter wave (MMW) imaging community initiated the formation of a standard for millimeter wave image quality metrics. This new standard, American National Standards Institute (ANSI) N42.59, will apply to active MMW systems for security screening of humans. The Electromagnetic Signatures of Explosives Laboratory at the Transportation Security Laboratory is supporting the ANSI standards process via the creation of initial prototypes for round-robin testing with MMW imaging system manufacturers and experts. Results obtained for these prototypes will be used to inform the community and lead to consensus objective standards amongst stakeholders. Images collected with laboratory systems are presented along with results of preliminary image analysis. Future directions for object design, data collection and image processing are discussed.

  17. The correlation of metrics in complex networks with applications in functional brain networks

    NASA Astrophysics Data System (ADS)

    Li, C.; Wang, H.; de Haan, W.; Stam, C. J.; Van Mieghem, P.

    2011-11-01

    An increasing number of network metrics have been applied in network analysis. If metric relations were known better, we could more effectively characterize networks by a small set of metrics to discover the association between network properties/metrics and network functioning. In this paper, we investigate the linear correlation coefficients between widely studied network metrics in three network models (Barabási-Albert graphs, Erdös-Rényi random graphs and Watts-Strogatz small-world graphs) as well as in functional brain networks of healthy subjects. The metric correlations, which we have observed and theoretically explained, motivate us to propose a small representative set of metrics by including only one metric from each subset of mutually strongly dependent metrics. The following contributions are considered important. (a) A network with a given degree distribution can indeed be characterized by a small representative set of metrics. (b) Unweighted networks, which are obtained from weighted functional brain networks with a fixed threshold, and Erdös-Rényi random graphs follow a similar degree distribution. Moreover, their metric correlations and the resultant representative metrics are similar as well. This verifies the influence of degree distribution on metric correlations. (c) Most metric correlations can be explained analytically. (d) Interestingly, the most studied metrics so far, the average shortest path length and the clustering coefficient, are strongly correlated and, thus, redundant. Spectral metrics, by contrast, though only studied recently in the context of complex networks, seem to be essential in network characterizations. This representative set of metrics tends to both sufficiently and effectively characterize networks with a given degree distribution. In the study of a specific network, however, we have to at least consider the representative set so that important network properties will not be neglected.
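
    The kind of metric-correlation analysis described can be reproduced in miniature with networkx: generate an ensemble of random graphs, compute a few metrics per graph, and correlate them across the ensemble (Erdős-Rényi graphs only, for brevity):

        import numpy as np
        import networkx as nx

        # Correlate a few widely used metrics across an ensemble of Erdos-Renyi graphs.
        rng = np.random.default_rng(4)
        records = []
        while len(records) < 200:
            p = rng.uniform(0.05, 0.15)
            G = nx.erdos_renyi_graph(100, p, seed=int(rng.integers(1_000_000_000)))
            if not nx.is_connected(G):          # path length is only defined for connected graphs
                continue
            records.append((nx.average_shortest_path_length(G),
                            nx.average_clustering(G),
                            np.mean([d for _, d in G.degree()])))

        L, C, k = np.array(records).T
        print(f"corr(path length, clustering)  = {np.corrcoef(L, C)[0, 1]:+.2f}")
        print(f"corr(path length, mean degree) = {np.corrcoef(L, k)[0, 1]:+.2f}")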

  18. Context and meter enhance long-range planning in music performance

    PubMed Central

    Mathias, Brian; Pfordresher, Peter Q.; Palmer, Caroline

    2015-01-01

    Neural responses demonstrate evidence of resonance, or oscillation, during the production of periodic auditory events. Music contains periodic auditory events that give rise to a sense of beat, which in turn generates a sense of meter on the basis of multiple periodicities. Metrical hierarchies may aid memory for music by facilitating similarity-based associations among sequence events at different periodic distances that unfold in longer contexts. A fundamental question is how metrical associations arising from a musical context influence memory during music performance. Longer contexts may facilitate metrical associations at higher hierarchical levels more than shorter contexts, a prediction of the range model, a formal model of planning processes in music performance (Palmer and Pfordresher, 2003; Pfordresher et al., 2007). Serial ordering errors, in which intended sequence events are produced in incorrect sequence positions, were measured as skilled pianists performed musical pieces that contained excerpts embedded in long or short musical contexts. Pitch errors arose from metrically similar positions and further sequential distances more often when the excerpt was embedded in long contexts compared to short contexts. Musicians’ keystroke intensities and error rates also revealed influences of metrical hierarchies, which differed for performances in long and short contexts. The range model accounted for contextual effects and provided better fits to empirical findings when metrical associations between sequence events were included. Longer sequence contexts may facilitate planning during sequence production by increasing conceptual similarity between hierarchically associated events. These findings are consistent with the notion that neural oscillations at multiple periodicities may strengthen metrical associations across sequence events during planning. PMID:25628550

  19. Model Evaluation of Continuous Data Pharmacometric Models: Metrics and Graphics

    PubMed Central

    Nguyen, THT; Mouksassi, M‐S; Holford, N; Al‐Huniti, N; Freedman, I; Hooker, AC; John, J; Karlsson, MO; Mould, DR; Pérez Ruixo, JJ; Plan, EL; Savic, R; van Hasselt, JGC; Weber, B; Zhou, C; Comets, E

    2017-01-01

    This article represents the first in a series of tutorials on model evaluation in nonlinear mixed effect models (NLMEMs), from the International Society of Pharmacometrics (ISoP) Model Evaluation Group. Numerous tools are available for evaluation of NLMEM, with a particular emphasis on visual assessment. This first basic tutorial focuses on presenting graphical evaluation tools of NLMEM for continuous data. It illustrates graphs for correct or misspecified models, discusses their pros and cons, and recalls the definition of metrics used. PMID:27884052

  20. Normal tissue complication probability (NTCP) modelling using spatial dose metrics and machine learning methods for severe acute oral mucositis resulting from head and neck radiotherapy.

    PubMed

    Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L

    2016-07-01

    Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. The dose-volume-based models (standard) performed equally to those incorporating spatial information. Discrimination was similar between models, but the RFC-standard model had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d.=0.09) and 3.9 (s.d.=2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. The RFC-standard model performance is modest-to-good, but should be improved, and requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.
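
    A minimal sketch of the internal-validation step described, cross-validated AUC for a random forest classifier, is shown below using scikit-learn and synthetic stand-ins for the dose-volume covariates and mucositis outcome; calibration-slope assessment is omitted for brevity.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import StratifiedKFold, cross_val_score

        # Hypothetical feature matrix: dose-volume metrics for the oral cavity
        # (e.g., volumes receiving various dose levels) plus a chemotherapy flag.
        rng = np.random.default_rng(5)
        n_patients = 200
        X = rng.normal(size=(n_patients, 7))
        # Synthetic outcome loosely driven by the "intermediate/high dose" columns.
        logit = 1.2 * X[:, 3] + 1.0 * X[:, 4] - 0.5
        y = (rng.random(n_patients) < 1 / (1 + np.exp(-logit))).astype(int)

        model = RandomForestClassifier(n_estimators=500, random_state=0)
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
        print(f"cross-validated AUC = {aucs.mean():.2f} +/- {aucs.std():.2f}")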

  1. Visual salience metrics for image inpainting

    NASA Astrophysics Data System (ADS)

    Ardis, Paul A.; Singhal, Amit

    2009-01-01

    Quantitative metrics for successful image inpainting currently do not exist, with researchers instead relying upon qualitative human comparisons to evaluate their methodologies and techniques. In an attempt to rectify this situation, we propose two new metrics to capture the notions of noticeability and visual intent in order to evaluate inpainting results. The proposed metrics use a quantitative measure of visual salience based upon a computational model of human visual attention. We demonstrate how these two metrics repeatably correlate with qualitative opinion in a human observer study, correctly identify the optimum uses for exemplar-based inpainting (as specified in the original publication), and match qualitative opinion in published examples.

  2. Interaction Metrics for Feedback Control of Sound Radiation from Stiffened Panels

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph H.; Cox, David E.; Gibbs, Gary P.

    2003-01-01

    Interaction metrics developed for the process control industry are used to evaluate decentralized control of sound radiation from bays on an aircraft fuselage. The metrics are applied to experimentally measured frequency response data from a model of an aircraft fuselage. The purpose is to understand how coupling between multiple bays of the fuselage can destabilize or limit the performance of a decentralized active noise control system. The metrics quantitatively verify observations from a previous experiment, in which decentralized controllers performed worse than centralized controllers. The metrics do not appear to be useful for explaining control spillover which was observed in a previous experiment.

  3. Fusion set selection with surrogate metric in multi-atlas based image segmentation

    NASA Astrophysics Data System (ADS)

    Zhao, Tingting; Ruan, Dan

    2016-02-01

    Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation.

  4. Weapons proliferation and organized crime: The Russian military and security force dimension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turbiville, G.H.

    One dimension of international security of the post-Cold War era that has not received enough attention is how organized crime facilitates weapons proliferation worldwide. The former Soviet Union (FSU) has emerged as the world's greatest counterproliferation challenge. It contains the best developed links among organized crime, military and security organizations, and weapons proliferation. Furthermore, Russian military and security forces are the principal source of arms becoming available to organized crime groups, participants in regional conflict, and corrupt state officials engaged in the black, gray, and legal arms markets in their various dimensions. The flourishing illegal trade in conventional weapons is the clearest and most tangible manifestation of the close links between Russian power ministries and criminal organizations. The magnitude of the WMD proliferation problem from the FSU is less clear and less tangible. There have been many open reports of small-scale fissile material smuggling out of the FSU. The situation with regard to the proliferation of chemical weapons usually receives less attention but may be more serious. With an acknowledged stockpile of 40,000 metric tons of chemical agents, the potential for proliferation is enormous.

  5. Distance Metric Learning via Iterated Support Vector Machines.

    PubMed

    Zuo, Wangmeng; Wang, Faqiang; Zhang, David; Lin, Liang; Huang, Yuchi; Meng, Deyu; Zhang, Lei

    2017-07-11

    Distance metric learning aims to learn from the given training data a valid distance metric, with which the similarity between data samples can be more effectively evaluated for classification. Metric learning is often formulated as a convex or nonconvex optimization problem, while most existing methods are based on customized optimizers and become inefficient for large scale problems. In this paper, we formulate metric learning as a kernel classification problem with the positive semi-definite constraint, and solve it by iterated training of support vector machines (SVMs). The new formulation is easy to implement and efficient in training with the off-the-shelf SVM solvers. Two novel metric learning models, namely Positive-semidefinite Constrained Metric Learning (PCML) and Nonnegative-coefficient Constrained Metric Learning (NCML), are developed. Both PCML and NCML can guarantee the global optimality of their solutions. Experiments are conducted on general classification, face verification and person re-identification to evaluate our methods. Compared with the state-of-the-art approaches, our methods can achieve comparable classification accuracy and are efficient in training.

  6. Applying Risk and Resilience Metrics to Energy Investments

    DTIC Science & Technology

    2015-12-01

    …the model as a positive aspect, though the user can easily devalue risk and resiliency while increasing the value of the cost and policy categories to…decision making model. The model developed for this project includes cost metrics and policy mandates that the current model considers and adds the…

  7. Perceived assessment metrics for visible and infrared color fused image quality without reference image

    NASA Astrophysics Data System (ADS)

    Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao

    2015-02-01

    Designing objective quality assessment of color-fused images is a very demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false-color fusion images. The perceived edge metric (PEM) is defined based on a visual perception model and the color image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established by associating multi-scale contrast and a varying contrast sensitivity filter (CSF) with color components. The linear combination of the standard deviation and mean value over the fused image constructs the image colorfulness metric (ICM). The color comfort metric (CCM) is designed from the average saturation and the ratio of pixels with high and low saturation. The qualitative and quantitative experimental results demonstrate that the proposed metrics have a good agreement with subjective perception.
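
    The abstract does not give the ICM coefficients, so the sketch below uses the well-known Hasler and Süsstrunk colorfulness measure, which is likewise a linear combination of the standard deviation and mean of opponent-colour components, as a stand-in illustration:

        import numpy as np

        def colorfulness(image_rgb):
            """Hasler & Suesstrunk-style colorfulness: a linear combination of the
            standard deviation and mean of opponent-colour components. Shown here as
            an illustration of the ICM idea; the paper's exact coefficients may differ.
            """
            r, g, b = (image_rgb[..., i].astype(float) for i in range(3))
            rg = r - g
            yb = 0.5 * (r + g) - b
            std_root = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
            mean_root = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
            return std_root + 0.3 * mean_root

        # Example: a random "fused" image scores higher than a greyscale one (which scores 0).
        rng = np.random.default_rng(6)
        fused = rng.integers(0, 256, size=(240, 320, 3))
        grey = np.repeat(rng.integers(0, 256, size=(240, 320, 1)), 3, axis=-1)
        print(colorfulness(fused), colorfulness(grey))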

  8. Visualizing curved spacetime

    NASA Astrophysics Data System (ADS)

    Jonsson, Rickard M.

    2005-03-01

    I present a way to visualize the concept of curved spacetime. The result is a curved surface with local coordinate systems (Minkowski systems) living on it, giving the local directions of space and time. Relative to these systems, special relativity holds. The method can be used to visualize gravitational time dilation, the horizon of black holes, and cosmological models. The idea underlying the illustrations is first to specify a field of timelike four-velocities uμ. Then, at every point, one performs a coordinate transformation to a local Minkowski system comoving with the given four-velocity. In the local system, the sign of the spatial part of the metric is flipped to create a new metric of Euclidean signature. The new positive definite metric, called the absolute metric, can be covariantly related to the original Lorentzian metric. For the special case of a two-dimensional original metric, the absolute metric may be embedded in three-dimensional Euclidean space as a curved surface.
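
    In covariant form, and assuming a (+,-,-,-) signature with the four-velocity field normalized as u^mu u_mu = 1, the flip of the spatial sign in the comoving frame corresponds to the relation below (a standard construction consistent with the description; the paper's notation may differ):

        \bar{g}_{\mu\nu} \,=\, 2\, u_\mu u_\nu - g_{\mu\nu}, \qquad u^\mu u_\mu = 1 .

    In the local comoving Minkowski frame, where g_{\mu\nu} = \mathrm{diag}(1,-1,-1,-1) and u_\mu = (1,0,0,0), this gives \bar{g}_{\mu\nu} = \mathrm{diag}(1,1,1,1), i.e. a positive definite (Euclidean) absolute metric.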

  9. Regime-based evaluation of cloudiness in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Jin, Daeho; Oreopoulos, Lazaros; Lee, Dongmin

    2017-01-01

    The concept of cloud regimes (CRs) is used to develop a framework for evaluating the cloudiness of 12 fifth Coupled Model Intercomparison Project (CMIP5) models. Reference CRs come from existing global International Satellite Cloud Climatology Project (ISCCP) weather states. The evaluation is made possible by the implementation in several CMIP5 models of the ISCCP simulator generating in each grid cell daily joint histograms of cloud optical thickness and cloud top pressure. Model performance is assessed with several metrics such as CR global cloud fraction (CF), CR relative frequency of occurrence (RFO), their product [long-term average total cloud amount (TCA)], cross-correlations of CR RFO maps, and a metric of resemblance between model and ISCCP CRs. In terms of CR global RFO, arguably the most fundamental metric, the models perform unsatisfactorily overall, except for CRs representing thick storm clouds. Because model CR CF is internally constrained by our method, RFO discrepancies yield also substantial TCA errors. Our results support previous findings that CMIP5 models underestimate cloudiness. The multi-model mean performs well in matching observed RFO maps for many CRs, but is still not the best for this or other metrics. When overall performance across all CRs is assessed, some models, despite shortcomings, apparently outperform Moderate Resolution Imaging Spectroradiometer cloud observations evaluated against ISCCP like another model output. Lastly, contrasting cloud simulation performance against each model's equilibrium climate sensitivity in order to gain insight on whether good cloud simulation pairs with particular values of this parameter, yields no clear conclusions.

  10. Simplified process model discovery based on role-oriented genetic mining.

    PubMed

    Zhao, Weidong; Liu, Xi; Dai, Weihui

    2014-01-01

    Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, the existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts the understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine a simplified process model. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric of process models is designed from role cohesion and coupling, and is applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments, comparing with related studies, to show that the proposed method is more effective for streamlining the process.

  11. A Comparison of Linking and Concurrent Calibration under the Graded Response Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho; Cohen, Allan S.

    Applications of item response theory to practical testing problems including equating, differential item functioning, and computerized adaptive testing, require that item parameter estimates be placed onto a common metric. In this study, two methods for developing a common metric for the graded response model under item response theory were…

  12. A review of surface energy balance models for estimating actual evapotranspiration with remote sensing at high spatiotemporal resolution over large extents

    USGS Publications Warehouse

    McShane, Ryan R.; Driscoll, Katelyn P.; Sando, Roy

    2017-09-27

    Many approaches have been developed for measuring or estimating actual evapotranspiration (ETa), and research over many years has led to the development of remote sensing methods that are reliably reproducible and effective in estimating ETa. Several remote sensing methods can be used to estimate ETa at the high spatial resolution of agricultural fields and the large extent of river basins. More complex remote sensing methods apply an analytical approach to ETa estimation using physically based models of varied complexity that require a combination of ground-based and remote sensing data, and are grounded in the theory behind the surface energy balance model. This report, funded through cooperation with the International Joint Commission, provides an overview of selected remote sensing methods used for estimating water consumed through ETa and focuses on Mapping Evapotranspiration at High Resolution with Internalized Calibration (METRIC) and Operational Simplified Surface Energy Balance (SSEBop), two energy balance models for estimating ETa that are currently applied successfully in the United States. The METRIC model can produce maps of ETa at high spatial resolution (30 meters using Landsat data) for specific areas smaller than several hundred square kilometers in extent, an improvement in practice over methods used more generally at larger scales. Many studies validating METRIC estimates of ETa against measurements from lysimeters have shown model accuracies on daily to seasonal time scales ranging from 85 to 95 percent. The METRIC model is accurate, but the greater complexity of METRIC results in greater data requirements, and the internalized calibration of METRIC leads to greater skill required for implementation. In contrast, SSEBop is a simpler model, having reduced data requirements and greater ease of implementation without a substantial loss of accuracy in estimating ETa. The SSEBop model has been used to produce maps of ETa over very large extents (the conterminous United States) using lower spatial resolution (1 kilometer) Moderate Resolution Imaging Spectroradiometer (MODIS) data. Model accuracies ranging from 80 to 95 percent on daily to annual time scales have been shown in numerous studies that validated ETa estimates from SSEBop against eddy covariance measurements. The METRIC and SSEBop models can incorporate low and high spatial resolution data from MODIS and Landsat, but the high spatiotemporal resolution of ETa estimates using Landsat data over large extents takes immense computing power. Cloud computing is providing an opportunity for processing an increasing amount of geospatial “big data” in a decreasing period of time. For example, Google Earth EngineTM has been used to implement METRIC with automated calibration for regional-scale estimates of ETa using Landsat data. The U.S. Geological Survey also is using Google Earth EngineTM to implement SSEBop for estimating ETa in the United States at a continental scale using Landsat data.

  13. Neutron star merger GW170817 strongly constrains doubly coupled bigravity

    NASA Astrophysics Data System (ADS)

    Akrami, Yashar; Brax, Philippe; Davis, Anne-Christine; Vardanyan, Valeri

    2018-06-01

    We study the implications of the recent detection of gravitational waves emitted by a pair of merging neutron stars and their electromagnetic counterpart, events GW170817 and GRB170817A, on the viability of the doubly coupled bimetric models of cosmic evolution, where the two metrics couple directly to matter through a composite, effective metric. We demonstrate that the bounds on the speed of gravitational waves place strong constraints on the doubly coupled models, forcing either the two metrics to be proportional at the background level or the models to become singly coupled. Proportional backgrounds are particularly interesting as they provide stable cosmological solutions with phenomenologies equivalent to that of ΛCDM at the background level as well as for linear perturbations, while nonlinearities are expected to show deviations from the standard model.

  14. Relating forest attributes with area- and tree-based light detection and ranging metrics for western Oregon

    Treesearch

    Michael E. Goerndt; Vincente J. Monleon; Hailemariam. Temesgen

    2010-01-01

    Three sets of linear models were developed to predict several forest attributes, using stand-level and single-tree remote sensing (STRS) light detection and ranging (LiDAR) metrics as predictor variables. The first used only area-level metrics (ALM) associated with first-return height distribution, percentage of cover, and canopy transparency. The second alternative...

  15. ACME, a GIS tool for Automated Cirque Metric Extraction

    NASA Astrophysics Data System (ADS)

    Spagnolo, Matteo; Pellitero, Ramon; Barr, Iestyn D.; Ely, Jeremy C.; Pellicer, Xavier M.; Rea, Brice R.

    2017-02-01

    Regional scale studies of glacial cirque metrics provide key insights on the (palaeo) environment related to the formation of these erosional landforms. The growing availability of high resolution terrain models means that more glacial cirques can be identified and mapped in the future. However, the extraction of their metrics still largely relies on time-consuming manual techniques or on combinations of more or less obsolete GIS tools. In this paper, a newly coded toolbox is provided for the automated, and comparatively quick, extraction of 16 key glacial cirque metrics, including length, width, circularity, planar and 3D area, elevation, slope, aspect, plan closure and hypsometry. The set of tools, named ACME (Automated Cirque Metric Extraction), is coded in Python, runs in one of the most commonly used GIS packages (ArcGIS) and has a user-friendly interface. A polygon layer of mapped cirques is required for all metrics, while a Digital Terrain Model and a point layer of cirque threshold midpoints are needed to run some of the tools. Results from ACME are comparable to those from other techniques and can be obtained rapidly, allowing large cirque datasets to be analysed and potentially important regional trends to be highlighted.
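
    For illustration, a minimal sketch of one of the listed metrics, circularity, computed from a cirque outline's planar area and perimeter. The exact definition used in ACME is not given in the abstract, so the standard isoperimetric form below (and the example numbers) are assumptions.

    ```python
    import math

    def circularity(area, perimeter):
        """Isoperimetric circularity: 1.0 for a perfect circle, lower for
        elongated or irregular outlines (assumed definition)."""
        return 4.0 * math.pi * area / (perimeter ** 2)

    # Hypothetical cirque polygon: area 0.85 km^2, perimeter 3.9 km
    print(round(circularity(0.85e6, 3.9e3), 2))  # ~0.7
    ```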

  16. Auralization of NASA N+2 Aircraft Concepts from System Noise Predictions

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Burley, Casey L.; Thomas, Russel H.

    2016-01-01

    Auralization of aircraft flyover noise provides an auditory experience that complements integrated metrics obtained from system noise predictions. Recent efforts have focused on auralization methods development, specifically the process by which source noise information obtained from semi-empirical models, computational aeroacoustic analyses, and wind tunnel and flight test data, are used for simulated flyover noise at a receiver on the ground. The primary focus of this work, however, is to develop full vehicle auralizations in order to explore the distinguishing features of NASA's N+2 aircraft vis-à-vis current fleet reference vehicles for single-aisle and large twin-aisle classes. Some features can be seen in metric time histories associated with aircraft noise certification, e.g., tone-corrected perceived noise level used in the calculation of effective perceived noise level. Other features can be observed in sound quality metrics, e.g., loudness, sharpness, roughness, fluctuation strength and tone-to-noise ratio. A psychoacoustic annoyance model is employed to establish the relationship between sound quality metrics and noise certification metrics. Finally, the auralizations will serve as the basis for a separate psychoacoustic study aimed at assessing how well aircraft noise certification metrics predict human annoyance for these advanced vehicle concepts.

  17. Predicting the natural flow regime: Models for assessing hydrological alteration in streams

    USGS Publications Warehouse

    Carlisle, D.M.; Falcone, J.; Wolock, D.M.; Meador, M.R.; Norris, R.H.

    2009-01-01

    Understanding the extent to which natural streamflow characteristics have been altered is an important consideration for ecological assessments of streams. Assessing hydrologic condition requires that we quantify the attributes of the flow regime that would be expected in the absence of anthropogenic modifications. The objective of this study was to evaluate whether selected streamflow characteristics could be predicted at regional and national scales using geospatial data. Long-term, gaged river basins distributed throughout the contiguous US that had streamflow characteristics representing least disturbed or near pristine conditions were identified. Thirteen metrics of the magnitude, frequency, duration, timing and rate of change of streamflow were calculated using a 20-50 year period of record for each site. We used random forests (RF), a robust statistical modelling approach, to develop models that predicted the value for each streamflow metric using natural watershed characteristics. We compared the performance (i.e. bias and precision) of national- and regional-scale predictive models to that of models based on landscape classifications, including major river basins, ecoregions and hydrologic landscape regions (HLR). For all hydrologic metrics, landscape stratification models produced estimates that were less biased and more precise than a null model that accounted for no natural variability. Predictive models at the national and regional scale performed equally well, and substantially improved predictions of all hydrologic metrics relative to landscape stratification models. Prediction error rates ranged from 15 to 40%, but were 25% for most metrics. We selected three gaged, non-reference sites to illustrate how predictive models could be used to assess hydrologic condition. These examples show how the models accurately estimate predisturbance conditions and are sensitive to changes in streamflow variability associated with long-term land-use change. We also demonstrate how the models can be applied to predict expected natural flow characteristics at ungaged sites. © 2009 John Wiley & Sons, Ltd.
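
    A minimal sketch of the kind of random forests workflow described above, predicting a single streamflow metric from natural watershed characteristics. The synthetic data and column meanings are hypothetical, and scikit-learn's RandomForestRegressor stands in for the authors' implementation.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical predictors for reference basins: e.g. drainage area, precipitation,
    # slope, soil permeability, temperature (columns are placeholders)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200)  # one flow metric

    model = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
    model.fit(X, y)

    # Out-of-bag R^2 as a quick precision check; mean residual as a bias check
    preds = model.predict(X)
    print("OOB R^2:", round(model.oob_score_, 3))
    print("mean residual (bias):", round(float(np.mean(preds - y)), 3))
    ```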

  18. Orientation estimation of anatomical structures in medical images for object recognition

    NASA Astrophysics Data System (ADS)

    Bağci, Ulaş; Udupa, Jayaram K.; Chen, Xinjian

    2011-03-01

    Recognition of anatomical structures is an important step in model-based medical image segmentation. It provides pose estimation of objects and information about roughly "where" the objects are in the image, distinguishing them from other object-like entities. In [1], we presented a general method of model-based multi-object recognition to assist in segmentation (delineation) tasks. It exploits the pose relationship that can be encoded, via the concept of ball scale (b-scale), between the binary training objects and their associated grey images. The goal was to place the model, in a single shot, close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. Unlike position and scale parameters, we observe that orientation parameters require more attention when estimating the pose of the model, as even small differences in orientation parameters can lead to inappropriate recognition. Motivated by the non-Euclidean nature of the pose information, we propose in this paper the use of non-Euclidean metrics to estimate the orientation of anatomical structures for more accurate recognition and segmentation. We statistically analyze and evaluate the following metrics for orientation estimation: Euclidean, Log-Euclidean, Root-Euclidean, Procrustes Size-and-Shape, and mean Hermitian metrics. The results show that the mean Hermitian and Cholesky decomposition metrics provide more accurate orientation estimates than the other Euclidean and non-Euclidean metrics.
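
    A small sketch of one of the candidate metrics, the Log-Euclidean distance between two symmetric positive definite matrices (such as orientation or covariance descriptors). This is the standard textbook form, not code from the cited work.

    ```python
    import numpy as np
    from scipy.linalg import logm

    def log_euclidean_distance(A, B):
        """Frobenius norm of the difference of matrix logarithms; A and B are
        assumed symmetric positive definite."""
        return np.linalg.norm(logm(A) - logm(B), ord="fro")

    A = np.array([[2.0, 0.3], [0.3, 1.0]])
    B = np.array([[1.5, 0.1], [0.1, 1.2]])
    print(log_euclidean_distance(A, B))
    ```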

  19. Testing General Relativity with the Reflection Spectrum of the Supermassive Black Hole in 1H0707-495.

    PubMed

    Cao, Zheng; Nampalliwar, Sourabh; Bambi, Cosimo; Dauser, Thomas; García, Javier A

    2018-02-02

    Recently, we have extended the x-ray reflection model relxill to test the spacetime metric in the strong gravitational field of astrophysical black holes. In the present Letter, we employ this extended model to analyze XMM-Newton, NuSTAR, and Swift data of the supermassive black hole in 1H0707-495 and test deviations from a Kerr metric parametrized by the Johannsen deformation parameter α_{13}. Our results are consistent with the hypothesis that the spacetime metric around the black hole in 1H0707-495 is described by the Kerr solution.

  20. Inventory and transport of plastic debris in the Laurentian Great Lakes.

    PubMed

    Hoffman, Matthew J; Hittinger, Eric

    2017-02-15

    Plastic pollution in the world's oceans has received much attention, but there has been increasing concern about the high concentrations of plastic debris in the Laurentian Great Lakes. Using census data and methodologies used to study ocean debris, we derive a first estimate of 9887 metric tonnes per year of plastic debris entering the Great Lakes. These estimates are translated into population-dependent particle inputs, which are advected using currents from a hydrodynamic model to map the spatial distribution of plastic debris in the Great Lakes. Model results compare favorably with previously published sampling data. The samples are used to calibrate the model to derive surface microplastic mass estimates of 0.0211 metric tonnes in Lake Superior, 1.44 metric tonnes in Huron, and 4.41 metric tonnes in Erie. These results have many applications, including informing cleanup efforts, helping target pollution prevention, and understanding the inter-state or international flows of plastic pollution. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. A novel rotational invariants target recognition method for rotating motion blurred images

    NASA Astrophysics Data System (ADS)

    Lan, Jinhui; Gong, Meiling; Dong, Mingwei; Zeng, Yiliang; Zhang, Yuzhen

    2017-11-01

    The image formed by the sensor is blurred by the rotational motion of the carrier, which greatly reduces the target recognition rate. Although the traditional approach of first restoring the image and then identifying the target can improve the recognition rate, recognition takes a long time. To solve this problem, a rotational fuzzy-invariant extraction model was constructed that recognizes targets directly. The model includes three metric layers, built on a gray value statistical algorithm, an improved round projection transformation algorithm, and rotation-convolution moment invariants, whose object description capability ranges from low to high. The metric layer with the lowest description capability serves as the input and gradually eliminates non-target pixels from the degraded image. Experimental results show that the proposed model improves the correct target recognition rate for blurred images and achieves a good trade-off between computational complexity and the descriptive function of each region.

  2. Nonstate Security Threats in Africa: Challenges for U.S. Engagement

    DTIC Science & Technology

    2010-12-01

    PRISM 2, no. 1. ...East Asia has led to a spike in elephant poaching. In June 2002, Singapore seized 6.2 metric tons of ivory (worth... country with no functioning central government since the 1980s. In addition to the impact on the lives and families of sailors who are taken... imports by 2015, and militant attacks impacting West African oil already affect the price of oil on global stock markets. The role played by...

  3. Lakenheath, United Kingdom. Revised Uniform Summary of Surface Weather Observations (RUSSWO). Parts A-F.

    DTIC Science & Technology

    1984-03-01

    N 52 24, E 000 34, elevation 32 ft, station EGUL. Parts A-F; hours summarized 0000Z-2300Z. Period of record: hourly observations, Jun 73 - May 83. Revised Uniform Summary of Surface Weather Observations (RUSSWO), Lakenheath, United Kingdom, final report. Summaries include ceiling versus visibility, sky cover, and (E) psychrometric data, presented as percentage frequency distributions.

  4. Lidar aboveground vegetation biomass estimates in shrublands: Prediction, uncertainties and application to coarser scales

    USGS Publications Warehouse

    Li, Aihua; Dhakal, Shital; Glenn, Nancy F.; Spaete, Luke P.; Shinneman, Douglas; Pilliod, David S.; Arkle, Robert; McIlroy, Susan

    2017-01-01

    Our study objectives were to model the aboveground biomass in a xeric shrub-steppe landscape with airborne light detection and ranging (Lidar) and explore the uncertainty associated with the models we created. We incorporated vegetation vertical structure information obtained from Lidar with ground-measured biomass data, allowing us to scale shrub biomass from small field sites (1 m subplots and 1 ha plots) to a larger landscape. A series of airborne Lidar-derived vegetation metrics were trained and linked with the field-measured biomass in Random Forests (RF) regression models. A Stepwise Multiple Regression (SMR) model was also explored as a comparison. Our results demonstrated that the important predictors from Lidar-derived metrics had a strong correlation with field-measured biomass in the RF regression models with a pseudo R2 of 0.76 and RMSE of 125 g/m2 for shrub biomass and a pseudo R2 of 0.74 and RMSE of 141 g/m2 for total biomass, and a weak correlation with field-measured herbaceous biomass. The SMR results were similar but slightly better than RF, explaining 77–79% of the variance, with RMSE ranging from 120 to 129 g/m2 for shrub and total biomass, respectively. We further explored the computational efficiency and relative accuracies of using point cloud and raster Lidar metrics at different resolutions (1 m to 1 ha). Metrics derived from the Lidar point cloud processing led to improved biomass estimates at nearly all resolutions in comparison to raster-derived Lidar metrics. Only at 1 m were the results from the point cloud and raster products nearly equivalent. The best Lidar prediction models of biomass at the plot-level (1 ha) were achieved when Lidar metrics were derived from an average of fine resolution (1 m) metrics to minimize boundary effects and to smooth variability. Overall, both RF and SMR methods explained more than 74% of the variance in biomass, with the most important Lidar variables being associated with vegetation structure and statistical measures of this structure (e.g., standard deviation of height was a strong predictor of biomass). Using our model results, we developed spatially-explicit Lidar estimates of total and shrub biomass across our study site in the Great Basin, U.S.A., for monitoring and planning in this imperiled ecosystem.
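
    A minimal sketch of the kind of point-cloud height metrics used as Lidar predictors above (canopy cover above a height break, mean, standard deviation, and an upper percentile of return heights). The 0.15 m vegetation break and the particular metric set are assumptions for illustration.

    ```python
    import numpy as np

    def lidar_height_metrics(heights, cover_break=0.15):
        """Summarize normalized return heights (m above ground) within one plot
        or grid cell into simple structure metrics (illustrative set)."""
        h = np.asarray(heights, dtype=float)
        veg = h[h > cover_break]                   # returns above the vegetation break
        cover = veg.size / max(h.size, 1)          # fraction of returns above the break
        if veg.size == 0:
            return {"cover": 0.0, "mean": 0.0, "std": 0.0, "p95": 0.0}
        return {"cover": cover, "mean": float(veg.mean()),
                "std": float(veg.std()), "p95": float(np.percentile(veg, 95))}

    print(lidar_height_metrics([0.0, 0.1, 0.4, 0.9, 1.3, 0.2, 0.0]))
    ```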

  5. SU-D-207B-07: Development of a CT-Radiomics Based Early Response Prediction Model During Delivery of Chemoradiation Therapy for Pancreatic Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klawikowski, S; Christian, J; Schott, D

    Purpose: Pilot study developing a CT-texture based model for early assessment of treatment response during the delivery of chemoradiation therapy (CRT) for pancreatic cancer. Methods: Daily CT data acquired for 24 pancreatic head cancer patients using CT-on-rails, during the routine CT-guided CRT delivery with a radiation dose of 50.4 Gy in 28 fractions, were analyzed. The pancreas head was contoured on each daily CT. Texture analysis was performed within the pancreas head contour using a research tool (IBEX). Over 1300 texture metrics, including grey level co-occurrence, run-length, histogram, neighborhood intensity difference, and geometrical shape features, were calculated for each daily CT. Metric-trend information was established by finding the best fit of either a linear, quadratic, or exponential function for each metric value versus accumulated dose. Thus all the daily CT texture information was consolidated into a best-fit trend type for a given patient and texture metric. Linear correlation was performed between the patient histological response vector (good, medium, poor) and all combinations of 23 patient subgroups (statistical jackknife), determining which metrics were most correlated to response and repeatedly reliable across most patients. Control correlations against CT scanner, reconstruction kernel, and gated/nongated CT images were also calculated. Euclidean distance measure was used to group/sort patient vectors based on the data of these trend-response metrics. Results: We found four specific trend-metrics (Gray Level Coocurence Matrix311-1InverseDiffMomentNorm, Gray Level Coocurence Matrix311-1InverseDiffNorm, Gray Level Coocurence Matrix311-1 Homogeneity2, and Intensity Direct Local StdMean) that were highly correlated with patient response and repeatedly reliable. Our four trend-metric model successfully ordered our pilot response dataset (p=0.00070). We found no significant correlation to our control parameters: gating (p=0.7717), scanner (p=0.9741), and kernel (p=0.8586). Conclusion: We have successfully created a CT-texture based early treatment response prediction model using the CTs acquired during the delivery of chemoradiation therapy for pancreatic cancer. Future testing is required to validate the model with more patient data.
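
    A sketch of the metric-trend step described above: fit linear, quadratic, and exponential curves of one texture metric against accumulated dose and keep the best-fitting family. The data, starting values, and the residual-sum-of-squares criterion are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def best_trend(dose, metric):
        """Return the trend family (linear/quadratic/exponential) with the lowest
        residual sum of squares for one texture metric (illustrative)."""
        dose, metric = np.asarray(dose, float), np.asarray(metric, float)
        fits = {}
        for name, deg in [("linear", 1), ("quadratic", 2)]:
            coeffs = np.polyfit(dose, metric, deg)
            fits[name] = float(np.sum((np.polyval(coeffs, dose) - metric) ** 2))
        try:
            popt, _ = curve_fit(lambda d, a, b: a * np.exp(b * d), dose, metric,
                                p0=(metric[0] or 1.0, 0.01), maxfev=5000)
            fits["exponential"] = float(np.sum((popt[0] * np.exp(popt[1] * dose) - metric) ** 2))
        except RuntimeError:
            pass  # exponential fit failed to converge; fall back to the others
        return min(fits, key=fits.get)

    dose = np.linspace(0, 50.4, 28)                  # 28 fractions to 50.4 Gy
    print(best_trend(dose, 3.0 + 0.02 * dose ** 2))  # expected: "quadratic"
    ```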

  6. Measuring distance “as the horse runs”: Cross-scale comparison of terrain-based metrics

    USGS Publications Warehouse

    Buttenfield, Barbara P.; Ghandehari, M; Leyk, S; Stanislawski, Larry V.; Brantley, M E; Qiang, Yi

    2016-01-01

    Distance metrics play significant roles in spatial modeling tasks, such as flood inundation (Tucker and Hancock 2010), stream extraction (Stanislawski et al. 2015), power line routing (Kiessling et al. 2003) and analysis of surface pollutants such as nitrogen (Harms et al. 2009). Avalanche risk is based on slope, aspect, and curvature, all directly computed from distance metrics (Gutiérrez 2012). Distance metrics anchor variogram analysis, kernel estimation, and spatial interpolation (Cressie 1993). Several approaches are employed to measure distance. Planar metrics measure straight line distance between two points (“as the crow flies”) and are simple and intuitive, but suffer from uncertainties. Planar metrics assume that Digital Elevation Model (DEM) pixels are rigid and flat, as tiny facets of ceramic tile approximating a continuous terrain surface. In truth, terrain can bend, twist and undulate within each pixel. Work with Light Detection and Ranging (lidar) data or High Resolution Topography to achieve precise measurements presents challenges, as filtering can eliminate or distort significant features (Passalacqua et al. 2015). The current availability of lidar data is far from comprehensive in developed nations, and non-existent in many rural and undeveloped regions. Notwithstanding computational advances, distance estimation on DEMs has never been systematically assessed, due to assumptions that improvements are so small that surface adjustment is unwarranted. For individual pixels, inaccuracies may be small, but additive effects can propagate dramatically, especially in regional models (e.g., disaster evacuation) or global models (e.g., sea level rise) where pixels span dozens to hundreds of kilometers (Usery et al. 2003). Such models are increasingly common, lending compelling reasons to understand shortcomings in the use of planar distance metrics. Researchers have studied curvature-based terrain modeling. Jenny et al. (2011) use curvature to generate hierarchical terrain models. Schneider (2001) creates a ‘plausibility’ metric for DEM-extracted structure lines. d'Oleire-Oltmanns et al. (2014) adopt object-based image processing as an alternative to working with DEMs, acknowledging that the pre-processing involved in converting terrain into an object model is computationally intensive, and likely infeasible for some applications. This paper compares planar distance with surface adjusted distance, evolving from distance “as the crow flies” to distance “as the horse runs”. Several methods are compared for DEMs spanning a range of resolutions for the study area and validated against a 3 meter (m) lidar data benchmark. Error magnitudes vary with pixel size and with the method of surface adjustment. The rate of error increase may also vary with landscape type (terrain roughness, precipitation regimes and land settlement patterns). Cross-scale analysis for a single study area is reported here. Additional areas will be presented at the conference.
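
    To make the planar versus surface-adjusted contrast concrete, a small sketch comparing straight-line distance with a distance that accumulates elevation change along a profile of DEM samples. The simple per-segment Pythagorean adjustment is only one of several possible methods and is an assumption here, as are the example coordinates.

    ```python
    import math

    def planar_distance(p, q):
        """Straight-line map distance, ignoring terrain ("as the crow flies")."""
        return math.hypot(q[0] - p[0], q[1] - p[1])

    def surface_adjusted_distance(xy_profile, elevations):
        """Sum per-segment 3-D lengths along a sampled profile ("as the horse runs");
        xy_profile holds (x, y) map coordinates, elevations the DEM values at them."""
        total = 0.0
        for (x0, y0), (x1, y1), z0, z1 in zip(xy_profile, xy_profile[1:],
                                              elevations, elevations[1:]):
            dxy = math.hypot(x1 - x0, y1 - y0)
            total += math.hypot(dxy, z1 - z0)
        return total

    profile = [(0, 0), (30, 0), (60, 0), (90, 0)]    # 30 m pixel centers
    elev = [100.0, 112.0, 105.0, 120.0]
    print(planar_distance(profile[0], profile[-1]))  # 90.0
    print(surface_adjusted_distance(profile, elev))  # > 90.0
    ```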

  7. Continuous theory of active matter systems with metric-free interactions.

    PubMed

    Peshkov, Anton; Ngo, Sandrine; Bertin, Eric; Chaté, Hugues; Ginelli, Francesco

    2012-08-31

    We derive a hydrodynamic description of metric-free active matter: starting from self-propelled particles aligning with neighbors defined by "topological" rules, not metric zones-a situation advocated recently to be relevant for bird flocks, fish schools, and crowds-we use a kinetic approach to obtain well-controlled nonlinear field equations. We show that the density-independent collision rate per particle characteristic of topological interactions suppresses the linear instability of the homogeneous ordered phase and the nonlinear density segregation generically present near threshold in metric models, in agreement with microscopic simulations.

  8. Metrics for Diagnosing Undersampling in Monte Carlo Tally Estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M.; Rearden, Bradley T.

    This study explored the potential of using Markov chain convergence diagnostics to predict the prevalence and magnitude of biases due to undersampling in Monte Carlo eigenvalue and flux tally estimates. Five metrics were applied to two models of pressurized water reactor fuel assemblies and their potential for identifying undersampling biases was evaluated by comparing the calculated test metrics with known biases in the tallies. Three of the five undersampling metrics showed the potential to accurately predict the behavior of undersampling biases in the responses examined in this study.

  9. Modeling Mediterranean forest structure using airborne laser scanning data

    NASA Astrophysics Data System (ADS)

    Bottalico, Francesca; Chirici, Gherardo; Giannini, Raffaello; Mele, Salvatore; Mura, Matteo; Puxeddu, Michele; McRoberts, Ronald E.; Valbuena, Ruben; Travaglini, Davide

    2017-05-01

    The conservation of biological diversity is recognized as a fundamental component of sustainable development, and forests contribute greatly to its preservation. Structural complexity increases the potential biological diversity of a forest by creating multiple niches that can host a wide variety of species. To facilitate greater understanding of the contributions of forest structure to forest biological diversity, we modeled relationships between 14 forest structure variables and airborne laser scanning (ALS) data for two Italian study areas representing two common Mediterranean forests, conifer plantations and coppice oaks subjected to irregular intervals of unplanned and non-standard silvicultural interventions. The objectives were twofold: (i) to compare model prediction accuracies when using two types of ALS metrics, echo-based metrics and canopy height model (CHM)-based metrics, and (ii) to construct inferences in the form of confidence intervals for large area structural complexity parameters. Our results showed that the effects of the two study areas on accuracies were greater than the effects of the two types of ALS metrics. In particular, accuracies were less for the more complex study area in terms of species composition and forest structure. However, accuracies achieved using the echo-based metrics were only slightly greater than when using the CHM-based metrics, thus demonstrating that both options yield reliable and comparable results. Accuracies were greatest for dominant height (Hd) (R2 = 0.91; RMSE% = 8.2%) and mean height weighted by basal area (R2 = 0.83; RMSE% = 10.5%) when using the echo-based metrics, 99th percentile of the echo height distribution and interquantile distance. For the forested area, the generalized regression (GREG) estimate of mean Hd was similar to the simple random sampling (SRS) estimate, 15.5 m for GREG and 16.2 m for SRS. Further, the GREG estimator with standard error of 0.10 m was considerably more precise than the SRS estimator with standard error of 0.69 m.

  10. Scale-dependent complementarity of climatic velocity and environmental diversity for identifying priority areas for conservation under climate change.

    PubMed

    Carroll, Carlos; Roberts, David R; Michalak, Julia L; Lawler, Joshua J; Nielsen, Scott E; Stralberg, Diana; Hamann, Andreas; Mcrae, Brad H; Wang, Tongli

    2017-11-01

    As most regions of the earth transition to altered climatic conditions, new methods are needed to identify refugia and other areas whose conservation would facilitate persistence of biodiversity under climate change. We compared several common approaches to conservation planning focused on climate resilience over a broad range of ecological settings across North America and evaluated how commonalities in the priority areas identified by different methods varied with regional context and spatial scale. Our results indicate that priority areas based on different environmental diversity metrics differed substantially from each other and from priorities based on spatiotemporal metrics such as climatic velocity. Refugia identified by diversity or velocity metrics were not strongly associated with the current protected area system, suggesting the need for additional conservation measures including protection of refugia. Despite the inherent uncertainties in predicting future climate, we found that variation among climatic velocities derived from different general circulation models and emissions pathways was less than the variation among the suite of environmental diversity metrics. To address uncertainty created by this variation, planners can combine priorities identified by alternative metrics at a single resolution and downweight areas of high variation between metrics. Alternately, coarse-resolution velocity metrics can be combined with fine-resolution diversity metrics in order to leverage the respective strengths of the two groups of metrics as tools for identification of potential macro- and microrefugia that in combination maximize both transient and long-term resilience to climate change. Planners should compare and integrate approaches that span a range of model complexity and spatial scale to match the range of ecological and physical processes influencing persistence of biodiversity and identify a conservation network resilient to threats operating at multiple scales. © 2017 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.

  11. Secure Communications in CIoT Networks with a Wireless Energy Harvesting Untrusted Relay

    PubMed Central

    Hu, Hequn; Liao, Xuewen

    2017-01-01

    The Internet of Things (IoT) represents a bright prospect that a variety of common appliances can connect to one another, as well as with the rest of the Internet, to vastly improve our lives. Unique communication and security challenges have been brought out by the limited hardware, low-complexity, and severe energy constraints of IoT devices. In addition, a severe spectrum scarcity problem has also been stimulated by the use of a large number of IoT devices. In this paper, cognitive IoT (CIoT) is considered where an IoT network works as the secondary system using underlay spectrum sharing. A wireless energy harvesting (EH) node is used as a relay to improve the coverage of an IoT device. However, the relay could be a potential eavesdropper to intercept the IoT device’s messages. This paper considers the problem of secure communication between the IoT device (e.g., sensor) and a destination (e.g., controller) via the wireless EH untrusted relay. Since the destination can be equipped with adequate energy supply, secure schemes based on destination-aided jamming are proposed based on power splitting (PS) and time splitting (TS) policies, called intuitive secure schemes based on PS (Int-PS), precoded secure scheme based on PS (Pre-PS), intuitive secure scheme based on TS (Int-TS) and precoded secure scheme based on TS (Pre-TS), respectively. The secure performances of the proposed schemes are evaluated through the metric of probability of successfully secure transmission (PSST), which represents the probability that the interference constraint of the primary user is satisfied and the secrecy rate is positive. PSST is analyzed for the proposed secure schemes, and the closed form expressions of PSST for Pre-PS and Pre-TS are derived and validated through simulation results. Numerical results show that the precoded secure schemes have better PSST than the intuitive secure schemes under similar power consumption. When the secure schemes based on PS and TS policies have similar PSST, the average transmit power consumption of the secure scheme based on TS is lower. The influences of power splitting and time splitting ratios are also discussed through simulations. PMID:28869540
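
    A rough Monte Carlo sketch of the PSST metric as defined in the abstract: the probability that the primary user's interference constraint is met and the secrecy rate is positive. The Rayleigh fading channels, the secrecy-rate expression, and all parameter values are illustrative assumptions, not the authors' system model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 200_000                      # Monte Carlo samples
    P_s = 1.0                        # secondary (IoT device) transmit power, assumed
    I_th = 0.5                       # primary-user interference threshold, assumed

    # Assumed unit-mean Rayleigh fading power gains
    g_sd = rng.exponential(1.0, N)   # IoT device -> destination (legitimate link)
    g_se = rng.exponential(1.0, N)   # IoT device -> untrusted relay (eavesdropper link)
    g_sp = rng.exponential(1.0, N)   # IoT device -> primary user (interference link)

    # Secrecy rate: legitimate-link capacity minus eavesdropper-link capacity
    C_s = np.log2(1.0 + P_s * g_sd) - np.log2(1.0 + P_s * g_se)

    # PSST: interference constraint satisfied AND positive secrecy rate
    psst = np.mean((P_s * g_sp <= I_th) & (C_s > 0.0))
    print("estimated PSST:", round(float(psst), 4))
    ```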

  12. Evaluating Land-Atmosphere Moisture Feedbacks in Earth System Models With Spaceborne Observations

    NASA Astrophysics Data System (ADS)

    Levine, P. A.; Randerson, J. T.; Lawrence, D. M.; Swenson, S. C.

    2016-12-01

    We have developed a set of metrics for measuring the feedback loop between the land surface moisture state and the atmosphere globally on an interannual time scale. These metrics consider both the forcing of terrestrial water storage (TWS) on subsequent atmospheric conditions and the response of TWS to antecedent atmospheric conditions. We designed our metrics to take advantage of more than one decade's worth of satellite observations of TWS from the Gravity Recovery and Climate Experiment (GRACE) along with atmospheric variables from the Atmospheric Infrared Sounder (AIRS), the Global Precipitation Climatology Project (GPCP), and Clouds and the Earth's Radiant Energy System (CERES). Metrics derived from spaceborne observations were used to evaluate the strength of the feedback loop in the Community Earth System Model (CESM) Large Ensemble (LENS) and in several models that contributed simulations to Phase 5 of the Coupled Model Intercomparison Project (CMIP5). We found that both forcing and response limbs of the feedback loop were generally stronger in tropical and temperate regions in CMIP5 models and even more so in LENS compared to satellite observations. Our analysis suggests that models may overestimate the strength of the feedbacks between the land surface and the atmosphere, which is consistent with previous studies conducted across different spatial and temporal scales.

  13. Fusion of range camera and photogrammetry: a systematic procedure for improving 3-D models metric accuracy.

    PubMed

    Guidi, G; Beraldin, J A; Ciofi, S; Atzeni, C

    2003-01-01

    The generation of three-dimensional (3-D) digital models produced by optical technologies in some cases involves metric errors. This happens when small high-resolution 3-D images are assembled together in order to model a large object. In some applications, as for example 3-D modeling of Cultural Heritage, the problem of metric accuracy is a major issue and no methods are currently available for enhancing it. The authors present a procedure by which the metric reliability of the 3-D model, obtained through iterative alignments of many range maps, can be guaranteed to a known acceptable level. The goal is the integration of the 3-D range camera system with a close range digital photogrammetry technique. The basic idea is to generate a global coordinate system determined by the digital photogrammetric procedure, measuring the spatial coordinates of optical targets placed around the object to be modeled. Such coordinates, set as reference points, allow the proper rigid motion of few key range maps, including a portion of the targets, in the global reference system defined by photogrammetry. The other 3-D images are normally aligned around these locked images with usual iterative algorithms. Experimental results on an anthropomorphic test object, comparing the conventional and the proposed alignment method, are finally reported.

  14. Understanding the Reach of Agricultural Impacts from Climate Extremes in the Agricultural Model Intercomparison and Improvement Project (AgMIP)

    NASA Astrophysics Data System (ADS)

    Ruane, A. C.

    2016-12-01

    The Agricultural Model Intercomparison and Improvement Project (AgMIP) has been working since 2010 to build a modeling framework capable of representing the complexities of agriculture, its dependence on climate, and the many elements of society that depend on food systems. AgMIP's 30+ activities explore the interconnected nature of climate, crop, livestock, economics, food security, and nutrition, using common protocols to systematically evaluate the components of agricultural assessment and allow multi-model, multi-scale, and multi-method analysis of intertwining changes in socioeconomic development, environmental change, and technological adaptation. AgMIP is now launching Coordinated Global and Regional Assessments (CGRA) with a particular focus on unforeseen consequences of development strategies, interactions between global and local systems, and the resilience of agricultural systems to extreme climate events. Climate extremes shock the agricultural system through local, direct impacts (e.g., droughts, heat waves, floods, severe storms) and also through teleconnections propagated through international trade. As the climate changes, the nature of climate extremes affecting agriculture is also likely to change, leading to shifting intensity, duration, frequency, and geographic extents of extremes. AgMIP researchers are developing new scenario methodologies to represent near-term extreme droughts in a probabilistic manner, field experiments that impose heat wave conditions on crops, increased resolution to differentiate sub-national drought impacts, new behavioral functions that mimic the response of market actors faced with production shortfalls, analysis of impacts from simultaneous failures of multiple breadbasket regions, and more detailed mapping of food and socioeconomic indicators into food security and nutrition metrics that describe the human impact in diverse populations. Agricultural models illustrate the challenges facing agriculture, allowing resilience planning even as precise prediction of extremes remains difficult. Increased research is necessary to understand hazards, vulnerability, and exposure of populations to characterize the risk of shocks and mechanisms by which unexpected losses drive land-use transitions.

  15. Understanding software faults and their role in software reliability modeling

    NASA Technical Reports Server (NTRS)

    Munson, John C.

    1994-01-01

    This study is a direct result of an on-going project to model the reliability of a large real-time control avionics system. In previous modeling efforts with this system, hardware reliability models were applied in modeling the reliability behavior of this system. In an attempt to enhance the performance of the adapted reliability models, certain software attributes were introduced in these models to control for differences between programs and also sequential executions of the same program. As the basic nature of the software attributes that affect software reliability become better understood in the modeling process, this information begins to have important implications on the software development process. A significant problem arises when raw attribute measures are to be used in statistical models as predictors, for example, of measures of software quality. This is because many of the metrics are highly correlated. Consider the two attributes: lines of code, LOC, and number of program statements, Stmts. In this case, it is quite obvious that a program with a high value of LOC probably will also have a relatively high value of Stmts. In the case of low level languages, such as assembly language programs, there might be a one-to-one relationship between the statement count and the lines of code. When there is a complete absence of linear relationship among the metrics, they are said to be orthogonal or uncorrelated. Usually the lack of orthogonality is not serious enough to affect a statistical analysis. However, for the purposes of some statistical analysis such as multiple regression, the software metrics are so strongly interrelated that the regression results may be ambiguous and possibly even misleading. Typically, it is difficult to estimate the unique effects of individual software metrics in the regression equation. The estimated values of the coefficients are very sensitive to slight changes in the data and to the addition or deletion of variables in the regression equation. Since most of the existing metrics have common elements and are linear combinations of these common elements, it seems reasonable to investigate the structure of the underlying common factors or components that make up the raw metrics. The technique we have chosen to use to explore this structure is a procedure called principal components analysis. Principal components analysis is a decomposition technique that may be used to detect and analyze collinearity in software metrics. When confronted with a large number of metrics measuring a single construct, it may be desirable to represent the set by some smaller number of variables that convey all, or most, of the information in the original set. Principal components are linear transformations of a set of random variables that summarize the information contained in the variables. The transformations are chosen so that the first component accounts for the maximal amount of variation of the measures of any possible linear transform; the second component accounts for the maximal amount of residual variation; and so on. The principal components are constructed so that they represent transformed scores on dimensions that are orthogonal. Through the use of principal components analysis, it is possible to have a set of highly related software attributes mapped into a small number of uncorrelated attribute domains. This definitively solves the problem of multi-collinearity in subsequent regression analysis. 
There are many software metrics in the literature, but principal component analysis reveals that there are few distinct sources of variation, i.e. dimensions, in this set of metrics. It would appear perfectly reasonable to characterize the measurable attributes of a program with a simple function of a small number of orthogonal metrics each of which represents a distinct software attribute domain.
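
    A small sketch of the principal components step described above, mapping correlated raw metrics (for instance LOC and statement count) into orthogonal components before regression. The synthetic data and the two-component choice are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    loc = rng.normal(1000, 300, size=120)               # lines of code
    stmts = 0.8 * loc + rng.normal(0, 30, size=120)     # statement count, strongly collinear
    cyclo = 0.02 * loc + rng.normal(0, 5, size=120)     # cyclomatic complexity
    X = np.column_stack([loc, stmts, cyclo])

    # Standardize, then extract orthogonal components summarizing the raw metrics
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    pca = PCA(n_components=2)
    scores = pca.fit_transform(Xz)

    print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
    # Component scores are uncorrelated, removing the multicollinearity problem
    print("correlation of scores:", round(float(np.corrcoef(scores.T)[0, 1]), 6))
    ```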

  16. Proactive routing mutation against stealthy Distributed Denial of Service attacks: metrics, modeling, and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Qi; Al-Shaer, Ehab; Chatterjee, Samrat

    Infrastructure Distributed Denial of Service (IDDoS) attacks continue to be one of the most devastating challenges facing cyber systems. The new generation of IDDoS attacks exploit inherent weaknesses of cyber infrastructure, including the deterministic nature of routes, the skewed distribution of flows, and Internet ossification, to discover the network critical links and launch highly stealthy flooding attacks that are not observable at the victim end. In this paper, first, we propose a new metric to quantitatively measure the potential susceptibility of any arbitrary target server or domain to stealthy IDDoS attacks, and estimate the impact of such susceptibility on enterprises. Second, we develop a proactive route mutation technique to minimize the susceptibility to these attacks by dynamically changing the flow paths periodically to invalidate the adversary knowledge about the network and avoid targeted critical links. Our proposed approach actively changes these network paths while satisfying security and quality of service requirements. We present an integrated approach of proactive route mutation that combines both infrastructure-based mutation that is based on reconfiguration of switches and routers, and a middle-box approach that uses an overlay of end-point proxies to construct a virtual network path free of critical links to reach a destination. We implemented the proactive path mutation technique on a Software Defined Network using the OpenDaylight controller to demonstrate a feasible deployment of this approach. Our evaluation validates the correctness, effectiveness, and scalability of the proposed approaches.
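
    A toy sketch of the route mutation idea: periodically recompute a path that avoids links flagged as critical, so the adversary's knowledge of the current route goes stale. The graph, the critical-link set, and the random tie-breaking are illustrative assumptions, not the paper's SDN implementation.

    ```python
    import random
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([("src", "a"), ("src", "b"), ("a", "c"), ("b", "c"),
                      ("a", "d"), ("b", "d"), ("c", "dst"), ("d", "dst")])
    critical_links = {("c", "dst")}   # links an adversary is assumed to target

    def mutated_path(graph, src, dst, avoid):
        """Pick a path that avoids the critical links; randomized edge weights keep
        successive mutation periods from reusing the same route."""
        H = graph.copy()
        H.remove_edges_from(avoid)
        for u, v in H.edges():
            H[u][v]["w"] = random.random()
        return nx.shortest_path(H, src, dst, weight="w")

    for period in range(3):           # one new path per mutation period
        print(period, mutated_path(G, "src", "dst", critical_links))
    ```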

  17. Fuel miles and the blend wall: costs and emissions from ethanol distribution in the United States.

    PubMed

    Strogen, Bret; Horvath, Arpad; McKone, Thomas E

    2012-05-15

    From 1991 to 2009, U.S. production of ethanol increased 10-fold, largely due to government programs motivated by climate change, energy security, and economic development goals. As low-level ethanol-gasoline blends have not consistently outperformed ethanol-free gasoline in vehicle performance or tailpipe emissions, national-level economic and environmental goals could be accomplished more efficiently by concentrating consumption of gasoline containing 10% ethanol (i.e., E10) near producers to minimize freight activity. As the domestic transportation of ethanol increased 10-fold in metric ton-kilometers (t-km) from 2000 to 2009, the portion of t-km potentially justified by the E10 blend wall increased from less than 40% to 80%. However, we estimate 10 billion t-km took place annually from 2004 to 2009 for reasons other than the blend wall. This "unnecessary" transportation resulted in more than $240 million in freight costs, 90 million L of diesel consumption, 300,000 metric tons of CO(2)-e emissions, and 440 g of human intake of PM(2.5). By 2009, the marginal savings from enabling Iowa to surpass E10 would have exceeded 2.5 g CO(2)-e/MJ and $0.12/gallon of ethanol, as the next-closest customer was 1600 km away. The use of a national network model enables estimation of marginal transportation impacts from subnational policies, and benefits from policies encouraging concentrated consumption of renewable fuels.

  18. A high-definition fiber tracking report for patients with traumatic brain injury and their doctors.

    PubMed

    Chmura, Jon; Presson, Nora; Benso, Steven; Puccio, Ava M; Fissel, Katherine; Hachey, Rebecca; Braun, Emily; Okonkwo, David O; Schneider, Walter

    2015-03-01

    We have developed a tablet-based application, the High-Definition Fiber Tracking Report App, to enable clinicians and patients in research studies to see and understand damage from Traumatic Brain Injury (TBI) by viewing 2-dimensional and 3-dimensional images of their brain, with a focus on white matter tracts with quantitative metrics. The goal is to visualize white matter fiber tract injury like bone fractures; that is, to make the "invisible wounds of TBI" understandable for patients. Using mobile computing technology (iPad), imaging data for individual patients can be downloaded remotely within hours of a magnetic resonance imaging brain scan. Clinicians and patients can view the data in the form of images of each tract, rotating animations of the tracts, 3-dimensional models, and graphics. A growing number of tracts can be examined for asymmetry, gaps in streamline coverage, reduced arborization (branching), streamline volume, and standard quantitative metrics (e.g., Fractional Anisotropy (FA)). Novice users can learn to effectively navigate and interact with the application (explain the figures and graphs representing normal and injured brain tracts) within 15 minutes of simple orientation with high accuracy (96%). The architecture supports extensive graphics, configurable reports, provides an easy-to-use, attractive interface with a smooth user experience, and allows for securely serving cases from a database. Patients and clinicians have described the application as providing dramatic benefits in understanding their TBI and improving their lives. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.

  19. Machine learning of network metrics in ATLAS Distributed Data Management

    NASA Astrophysics Data System (ADS)

    Lassnig, Mario; Toler, Wesley; Vamosi, Ralf; Bogado, Joaquin; ATLAS Collaboration

    2017-10-01

    The increasing volume of physics data poses a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously, many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from one of our ongoing automation efforts that focuses on network metrics. First, we describe our machine learning framework built atop the ATLAS Analytics Platform. This framework can automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.

  20. Landscape patterns from mathematical morphology on maps with contagion

    Treesearch

    Kurt Riitters; Peter Vogt; Pierre Soille; Christine Estreguil

    2009-01-01

    The perceived realism of simulated maps with contagion (spatial autocorrelation) has led to their use for comparing landscape pattern metrics and as habitat maps for modeling organism movement across landscapes. The objective of this study was to conduct a neutral model analysis of pattern metrics defined by morphological spatial pattern analysis (MSPA) on maps with...

  1. Vacuum solutions around spherically symmetric and static objects in the Starobinsky model

    NASA Astrophysics Data System (ADS)

    Çıkıntoğlu, Sercan

    2018-02-01

    The vacuum solutions around a spherically symmetric and static object in the Starobinsky model are studied with a perturbative approach. The differential equations for the components of the metric and the Ricci scalar are obtained and solved by using the method of matched asymptotic expansions. The presence of higher order terms in this gravity model leads to the formation of a boundary layer near the surface of the star allowing the accommodation of the extra boundary conditions on the Ricci scalar. Accordingly, the metric can be different from the Schwarzschild solution near the star depending on the value of the Ricci scalar at the surface of the star while matching the Schwarzschild metric far from the star.

  2. Partially supervised speaker clustering.

    PubMed

    Tang, Hao; Chu, Stephen Mingyu; Hasegawa-Johnson, Mark; Huang, Thomas S

    2012-05-01

    Content-based multimedia indexing, retrieval, and processing as well as multimedia databases demand the structuring of the media content (image, audio, video, text, etc.), one significant goal being to associate the identity of the content to the individual segments of the signals. In this paper, we specifically address the problem of speaker clustering, the task of assigning every speech utterance in an audio stream to its speaker. We offer a complete treatment to the idea of partially supervised speaker clustering, which refers to the use of our prior knowledge of speakers in general to assist the unsupervised speaker clustering process. By means of an independent training data set, we encode the prior knowledge at the various stages of the speaker clustering pipeline via 1) learning a speaker-discriminative acoustic feature transformation, 2) learning a universal speaker prior model, and 3) learning a discriminative speaker subspace, or equivalently, a speaker-discriminative distance metric. We study the directional scattering property of the Gaussian mixture model (GMM) mean supervector representation of utterances in the high-dimensional space, and advocate exploiting this property by using the cosine distance metric instead of the euclidean distance metric for speaker clustering in the GMM mean supervector space. We propose to perform discriminant analysis based on the cosine distance metric, which leads to a novel distance metric learning algorithm—linear spherical discriminant analysis (LSDA). We show that the proposed LSDA formulation can be systematically solved within the elegant graph embedding general dimensionality reduction framework. Our speaker clustering experiments on the GALE database clearly indicate that 1) our speaker clustering methods based on the GMM mean supervector representation and vector-based distance metrics outperform traditional speaker clustering methods based on the “bag of acoustic features” representation and statistical model-based distance metrics, 2) our advocated use of the cosine distance metric yields consistent increases in the speaker clustering performance as compared to the commonly used euclidean distance metric, 3) our partially supervised speaker clustering concept and strategies significantly improve the speaker clustering performance over the baselines, and 4) our proposed LSDA algorithm further leads to state-of-the-art speaker clustering performance.
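
    For concreteness, a small sketch contrasting the Euclidean and cosine distances on GMM mean supervectors as discussed above; the random "supervectors" are placeholders and the dimensionality is an assumption.

    ```python
    import numpy as np

    def euclidean_distance(x, y):
        return float(np.linalg.norm(x - y))

    def cosine_distance(x, y):
        """1 - cosine similarity; insensitive to supervector magnitude, which is
        the property advocated above."""
        return 1.0 - float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

    rng = np.random.default_rng(0)
    u = rng.normal(size=512)                          # supervector of utterance A (placeholder)
    v = 3.0 * u + rng.normal(scale=0.1, size=512)     # same direction, different magnitude

    print(euclidean_distance(u, v))  # large: dominated by the magnitude difference
    print(cosine_distance(u, v))     # near zero: directions almost coincide
    ```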

  3. A Quantitative Socio-hydrological Characterization of Water Security in Large-Scale Irrigation Systems

    NASA Astrophysics Data System (ADS)

    Siddiqi, A.; Muhammad, A.; Wescoat, J. L., Jr.

    2017-12-01

    Large-scale, legacy canal systems, such as the irrigation infrastructure in the Indus Basin in Punjab, Pakistan, have been primarily conceived, constructed, and operated with a techno-centric approach. The emerging socio-hydrological approaches provide a new lens for studying such systems to potentially identify fresh insights for addressing contemporary challenges of water security. In this work, using the partial definition of water security as "the reliable availability of an acceptable quantity and quality of water", supply reliability is construed as a partial measure of water security in irrigation systems. A set of metrics is used to quantitatively study reliability of surface supply in the canal systems of Punjab, Pakistan using an extensive dataset of 10-daily surface water deliveries over a decade (2007-2016) and of high frequency (10-minute) flow measurements over one year. The reliability quantification is based on comparison of actual deliveries and entitlements, which are a combination of hydrological and social constructs. The socio-hydrological lens highlights critical issues of how flows are measured, monitored, perceived, and experienced from the perspective of operators (government officials) and users (farmers). The analysis reveals varying levels of reliability (and by extension security) of supply when data is examined across multiple temporal and spatial scales. The results shed new light on the evolution of water security (as partially measured by supply reliability) for surface irrigation in the Punjab province of Pakistan and demonstrate that "information security" (defined as reliable availability of sufficiently detailed data) is vital for enabling water security. It is found that forecasting and management (that are social processes) lead to differences between entitlements and actual deliveries, and there is significant potential to positively affect supply reliability through interventions in the social realm.
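
    A minimal sketch of one way the supply reliability discussed above could be quantified from 10-daily deliveries and entitlements, here as the fraction of periods in which the actual delivery reaches a tolerance band around the entitlement. The tolerance and the example data are illustrative assumptions, not the authors' metric.

    ```python
    def supply_reliability(deliveries, entitlements, tolerance=0.10):
        """Fraction of 10-daily periods in which the actual delivery is at least
        (1 - tolerance) of the entitlement (illustrative definition)."""
        ok = sum(1 for d, e in zip(deliveries, entitlements)
                 if e > 0 and d >= (1 - tolerance) * e)
        periods = sum(1 for e in entitlements if e > 0)
        return ok / periods if periods else float("nan")

    # Hypothetical canal command: deliveries vs entitlements over six 10-daily periods
    deliveries = [950, 1020, 600, 880, 990, 700]
    entitlements = [1000, 1000, 1000, 1000, 1000, 1000]
    print(supply_reliability(deliveries, entitlements))  # 0.5
    ```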

  4. Research and development on performance models of thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Wang, Ji-hui; Jin, Wei-qi; Wang, Xia; Cheng, Yi-nan

    2009-07-01

    Traditional ACQUIRE models perform the discrimination tasks of detection, target orientation, recognition, and identification for military targets based upon the minimum resolvable temperature difference (MRTD) and the Johnson criteria for thermal imaging systems (TIS). The Johnson criteria are generally pessimistic for performance prediction of sampled imagers, given the development of focal plane array (FPA) detectors and digital image processing technology. The triangle orientation discrimination threshold (TOD) model, the minimum temperature difference perceived (MTDP)/thermal range model (TRM3), and the target task performance (TTP) metric have been developed to predict the performance of sampled imagers; in particular, the TTP metric provides better accuracy than the Johnson criteria. In this paper, the performance models above are described; channel width metrics are presented to describe overall performance, including modulation transfer function (MTF) channel width for high signal-to-noise ratio (SNR) optoelectronic imaging systems and MRTD channel width for low-SNR TIS; the unresolved questions in performance assessment of TIS are indicated; lastly, directions for the development of performance models for TIS are discussed.

  5. Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    1997-01-01

    Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited, prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.

  6. Application of infrared uncooled cameras in surveillance systems

    NASA Astrophysics Data System (ADS)

    Dulski, R.; Bareła, J.; Trzaskawka, P.; Piątkowski, T.

    2013-10-01

    The recent need to protect military bases, convoys, and patrols has strongly driven the development of multisensor security systems for perimeter protection. One of the most important devices used in such systems is the IR camera. The paper discusses the technical possibilities and limitations of using an uncooled IR camera in a multi-sensor surveillance system for perimeter protection. Effective detection ranges depend on the class of sensor used and on the observed scene itself. Application of an IR camera increases the probability of intruder detection regardless of the time of day or weather conditions, while simultaneously decreasing the false alarm rate produced by the surveillance system. The role of IR cameras in the system is discussed, as well as the technical possibilities for detecting a human being. Commercially available IR cameras capable of achieving the desired ranges are compared. The required spatial resolution for detection, recognition, and identification is calculated. Detection ranges are simulated using a new model for predicting target acquisition performance that uses the Targeting Task Performance (TTP) metric. Like its predecessor, the Johnson criteria, the new model ties range performance to image quality. The scope of the presented analysis is limited to the estimation of detection, recognition, and identification ranges for typical thermal cameras with uncooled microbolometer focal plane arrays. This type of camera is the most widely used in security systems because of its competitive price-to-performance ratio. Detection, recognition, and identification range calculations were made, and the results for devices with selected technical specifications were compared and discussed.
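
    Since the abstract's range calculations rest on the Johnson-criteria relation between resolvable cycles on target and range, a simplified geometric version is sketched below; the pixel pitch, focal length, target dimension, and cycle criteria are assumed illustrative values, and atmospheric transmission and MRTD effects are ignored.

    ```python
    def johnson_range(target_size_m, ifov_rad, n_cycles):
        """Approximate acquisition range (m) at which `n_cycles` resolvable cycles
        fit across a target of critical dimension `target_size_m`, for a sensor
        with instantaneous field of view `ifov_rad` (rad per pixel). One cycle is
        taken as two pixels; this is only a geometric upper bound."""
        return target_size_m / (n_cycles * 2.0 * ifov_rad)

    ifov = 17e-6 / 25e-3      # 17 um pixel pitch, 25 mm lens -> IFOV in rad (assumed)
    human = 0.75              # assumed critical dimension of a standing person (m)
    # Commonly cited cycle criteria: ~1 (detection), ~4 (recognition), ~6.4 (identification).
    for task, n in [("detection", 1.0), ("recognition", 4.0), ("identification", 6.4)]:
        print(task, f"{johnson_range(human, ifov, n):.0f} m")
    ```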

  7. Evaluating ET estimates from the Simplified Surface Energy Balance (SSEB) model using METRIC model output

    NASA Astrophysics Data System (ADS)

    Senay, G. B.; Budde, M. E.; Allen, R. G.; Verdin, J. P.

    2008-12-01

    Evapotranspiration (ET) is an important component of the hydrologic budget because it expresses the exchange of mass and energy between the soil-water-vegetation system and the atmosphere. Since direct measurement of ET is difficult, various modeling methods are used to estimate actual ET (ETa). Generally, the choice of method for ET estimation depends on the objective of the study and is further limited by the availability of data and the desired accuracy of the ET estimate. Operational monitoring of crop performance requires processing large data sets and a quick response time. A Simplified Surface Energy Balance (SSEB) model was developed by the U.S. Geological Survey's Famine Early Warning Systems Network to estimate irrigation water use in remote places of the world. In this study, we evaluated the performance of the SSEB model against the METRIC (Mapping Evapotranspiration at high Resolution and with Internalized Calibration) model, which has been evaluated by several researchers using lysimeter data. The METRIC model has been proven to provide reliable ET estimates in different regions of the world. Reference ET fractions of both models (ETrF of METRIC vs. ETf of SSEB) were generated and compared using individual Landsat thermal images collected from 2000 through 2005 in Idaho, New Mexico, and California. In addition, the models were compared using monthly and seasonal total ETa estimates. The SSEB model reproduced both the spatial and temporal variability exhibited by METRIC on land surfaces, explaining up to 80 percent of the spatial variability. However, the ETa estimates over water bodies were systematically higher in the SSEB output, which could be improved by using a correction coefficient to account for the absorption of solar energy by deeper water layers, which contributes little to the ET process. This study demonstrated the usefulness of the SSEB method for large-scale agro-hydrologic applications for operational monitoring and assessment of crop performance and regional water balance dynamics.
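
    The SSEB-style ET fraction is commonly written as a linear scaling between hot and cold calibration pixels; the sketch below shows that simplified form with made-up temperatures, and is not the USGS operational code.

    ```python
    def sseb_et_fraction(ts, t_hot, t_cold):
        """Simplified SSEB-style ET fraction for a pixel with surface temperature
        `ts`, using hot/cold calibration pixel temperatures (all in K)."""
        return (t_hot - ts) / (t_hot - t_cold)

    def actual_et(ts, t_hot, t_cold, et_ref):
        """Actual ET (mm/day) as the ET fraction scaled by reference ET."""
        return sseb_et_fraction(ts, t_hot, t_cold) * et_ref

    # Illustrative values only.
    print(actual_et(ts=305.0, t_hot=318.0, t_cold=298.0, et_ref=7.0))
    ```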

  8. Modeling nutrient in-stream processes at the watershed scale using Nutrient Spiralling metrics

    NASA Astrophysics Data System (ADS)

    Marcé, R.; Armengol, J.

    2009-01-01

    One of the fundamental problems of using large-scale biogeochemical models is the uncertainty involved in aggregating the components of fine-scale deterministic models in watershed applications, and in extrapolating the results of field-scale measurements to larger spatial scales. Although spatial or temporal lumping may reduce the problem, information obtained during fine-scale research may not apply to lumped categories. Thus, the use of knowledge gained through fine-scale studies to predict coarse-scale phenomena is not straightforward. In this study, we used the nutrient uptake metrics defined in the Nutrient Spiralling concept to formulate the equations governing total phosphorus in-stream fate in a watershed-scale biogeochemical model. The rationale of this approach relies on the fact that the working unit for the nutrient in-stream processes of most watershed-scale models is the reach, the same unit used in field research based on the Nutrient Spiralling concept. Automatic calibration of the model using data from the study watershed confirmed that the Nutrient Spiralling formulation is a convenient simplification of the biogeochemical transformations involved in total phosphorus in-stream fate. Following calibration, the model was used as a heuristic tool in two ways. First, we compared the Nutrient Spiralling metrics obtained during calibration with results obtained during field-based research in the study watershed. The simulated and measured metrics were similar, suggesting that information collected at the reach scale during research based on the Nutrient Spiralling concept can be directly incorporated into models, without the problems associated with upscaling results from fine-scale studies. Second, we used results from our model to examine some patterns observed in several reports on Nutrient Spiralling metrics measured in impaired streams. Although these two exercises involve circular reasoning and, consequently, cannot validate any hypothesis, this is a powerful example of how models can work as heuristic tools to compare hypotheses and stimulate research in ecology.

  9. Wearable Inertial Sensors Allow for Quantitative Assessment of Shoulder and Elbow Kinematics in a Cadaveric Knee Arthroscopy Model.

    PubMed

    Rose, Michael; Curtze, Carolin; O'Sullivan, Joseph; El-Gohary, Mahmoud; Crawford, Dennis; Friess, Darin; Brady, Jacqueline M

    2017-12-01

    To develop a model using wearable inertial sensors to assess the performance of orthopaedic residents while performing a diagnostic knee arthroscopy. Fourteen subjects performed a diagnostic arthroscopy on a cadaveric right knee. Participants were divided into novices (5 postgraduate year 3 residents), intermediates (5 postgraduate year 4 residents), and experts (4 faculty) based on experience. Arm movement data were collected by inertial measurement units (Opal sensors) by securing 2 sensors to each upper extremity (dorsal forearm and lateral arm) and 2 sensors to the trunk (sternum and lumbar spine). Kinematics of the elbow and shoulder joints were calculated from the inertial data by biomechanical modeling based on a sequence of links connected by joints. Range of motion required to complete the procedure was calculated for each group. Histograms were used to compare the distribution of joint positions for an expert, intermediate, and novice. For both the right and left upper extremities, skill level corresponded well with shoulder abduction-adduction and elbow prono-supination. Novices required on average 17.2° more motion in the right shoulder abduction-adduction plane than experts to complete the diagnostic arthroscopy (P = .03). For right elbow prono-supination (probe hand), novices required on average 23.7° more motion than experts to complete the procedure (P = .03). Histogram data showed novices had markedly more variability in shoulder abduction-adduction and elbow prono-supination compared with the other groups. Our data show wearable inertial sensors can measure joint kinematics during diagnostic knee arthroscopy. Range-of-motion data in the shoulder and elbow correlated inversely with arthroscopic experience. Motion pattern-based analysis shows promise as a metric of resident skill acquisition and development in arthroscopy. Wearable inertial sensors show promise as metrics of arthroscopic skill acquisition among residents. Copyright © 2017 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  10. Aboveground Biomass Estimation Using Reconstructed Feature of Airborne Discrete-Return LIDAR by Auto-Encoder Neural Network

    NASA Astrophysics Data System (ADS)

    Li, T.; Wang, Z.; Peng, J.

    2018-04-01

    Aboveground biomass (AGB) estimation is critical for quantifying carbon stocks and essential for evaluating the carbon cycle. In recent years, airborne LiDAR has shown great ability for high-precision AGB estimation. Most studies estimate AGB from feature metrics extracted from the canopy height distribution of the point cloud, which is calculated based on a precise digital terrain model (DTM). However, if forest canopy density is high, the probability of the LiDAR signal penetrating the canopy is lower, leaving too few ground points to establish a DTM. The distribution of forest canopy height is then imprecise, and critical feature metrics that correlate strongly with biomass, such as percentiles, maxima, means, and standard deviations of the canopy point cloud, can hardly be extracted correctly. To address this issue, we propose a strategy of first reconstructing the LiDAR feature metrics with an auto-encoder neural network and then using the reconstructed feature metrics to estimate AGB. To assess the prediction ability of the reconstructed feature metrics, both the original and reconstructed feature metrics were regressed against field-observed AGB using multiple stepwise regression (MS) and partial least squares regression (PLS), respectively. The results showed that the estimation model using reconstructed feature metrics improved R2 by 5.44 % and 18.09 %, decreased RMSE by 10.06 % and 22.13 %, and reduced RMSEcv by 10.00 % and 21.70 % for AGB, respectively. Therefore, reconstructing LiDAR point feature metrics has potential for addressing the AGB estimation challenge in dense-canopy areas.
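
    A minimal sketch of the reconstruction step, using a single-hidden-layer auto-encoder built from scikit-learn's MLPRegressor on synthetic metrics; the network size, metrics, and data are assumptions and do not reproduce the authors' architecture.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 12))      # 12 hypothetical canopy-height metrics per plot

    scaler = StandardScaler()
    Xs = scaler.fit_transform(X)

    # A single-hidden-layer auto-encoder: the network is trained to reproduce its
    # own input through a narrow bottleneck, and the decoded output is used as the
    # "reconstructed" feature set for the downstream AGB regression.
    ae = MLPRegressor(hidden_layer_sizes=(5,), activation="tanh",
                      max_iter=2000, random_state=0)
    ae.fit(Xs, Xs)
    X_reconstructed = scaler.inverse_transform(ae.predict(Xs))
    ```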

  11. Characterization of exposure in epidemiological studies on air pollution from biodegradable wastes: Misclassification and comparison of exposure assessment strategies.

    PubMed

    Cantuaria, Manuella Lech; Suh, Helen; Løfstrøm, Per; Blanes-Vidal, Victoria

    2016-11-01

    The assignment of exposure is one of the main challenges faced by environmental epidemiologists. However, misclassification of exposures has not been explored in population epidemiological studies on air pollution from biodegradable wastes. The objective of this study was to investigate the use of different approaches for assessing exposure to air pollution from biodegradable wastes by analyzing (1) the misclassification of exposure committed by using these surrogates, (2) the existence of differential misclassification, (3) the effects that misclassification may have on health effect estimates and the interpretation of epidemiological results, and (4) the ability of the exposure measures to predict health outcomes using 10-fold cross validation. Four different exposure assessment approaches were studied: ammonia concentrations at the residence (Metric I), distance to the closest source (Metric II), number of sources within certain distances from the residence (Metric IIIa,b) and location in a specific region (Metric IV). Exposure-response models based on Metric I provided the highest predictive ability (72.3%) and goodness-of-fit, followed by IV, III and II. When compared to Metric I, Metric IV yielded the best results for exposure misclassification analysis and interpretation of health effect estimates, followed by Metric IIIb, IIIa and II. The study showed that modelled NH3 concentrations provide more accurate estimations of true exposure than distance-based surrogates, and that distance-based surrogates (especially those based on distance to the closest point source) are imprecise methods to identify exposed populations, although they may be useful for initial studies. Copyright © 2016 Elsevier GmbH. All rights reserved.

  12. Analysis of dynamically stable patterns in a maze-like corridor using the Wasserstein metric.

    PubMed

    Ishiwata, Ryosuke; Kinukawa, Ryota; Sugiyama, Yuki

    2018-04-23

    The two-dimensional optimal velocity (2d-OV) model represents a dissipative system with asymmetric interactions, thus being suitable to reproduce behaviours such as pedestrian dynamics and the collective motion of living organisms. In this study, we found that particles in the 2d-OV model form optimal patterns in a maze-like corridor. Then, we estimated the stability of such patterns using the Wasserstein metric. Furthermore, we mapped these patterns into the Wasserstein metric space and represented them as points in a plane. As a result, we discovered that the stability of the dynamical patterns is strongly affected by the model sensitivity, which controls the motion of each particle. In addition, we verified the existence of two stable macroscopic patterns which were cohesive, stable, and appeared regularly over the time evolution of the model.
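
    For readers unfamiliar with the metric, the snippet below computes the Wasserstein distance between two one-dimensional samples with SciPy; the paper works with two-dimensional particle configurations, so this is only a simplified illustration.

    ```python
    import numpy as np
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(0)
    pattern_a = rng.normal(0.0, 1.0, 1000)   # e.g., particle positions projected on one axis
    pattern_b = rng.normal(0.5, 1.2, 1000)

    # A small distance means the two configurations are close; tracking this value
    # over time gives a rough indicator of how stable a pattern is.
    print(wasserstein_distance(pattern_a, pattern_b))
    ```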

  13. Human eyeball model reconstruction and quantitative analysis.

    PubMed

    Xing, Qi; Wei, Qi

    2014-01-01

    Determining the shape of the eyeball is important for diagnosing eyeball diseases such as myopia. In this paper, we present an automatic approach to precisely reconstruct the three-dimensional geometric shape of the eyeball from MR images. The model development pipeline involved image segmentation, registration, B-Spline surface fitting and subdivision surface fitting, none of which required manual interaction. From the high-resolution resultant models, geometric characteristics of the eyeball can be accurately quantified and analyzed. In addition to the eight metrics commonly used by existing studies, we proposed two novel metrics, Gaussian Curvature Analysis and Sphere Distance Deviation, to quantify the cornea shape and the whole eyeball surface respectively. The experiment results showed that the reconstructed eyeball models accurately represent the complex morphology of the eye. The ten metrics parameterize the eyeball among different subjects, which can potentially be used for eye disease diagnosis.

  14. Regime-Based Evaluation of Cloudiness in CMIP5 Models

    NASA Technical Reports Server (NTRS)

    Jin, Daeho; Oraiopoulos, Lazaros; Lee, Dong Min

    2016-01-01

    The concept of Cloud Regimes (CRs) is used to develop a framework for evaluating the cloudiness of 12 fifth Coupled Model Intercomparison Project (CMIP5) models. Reference CRs come from existing global International Satellite Cloud Climatology Project (ISCCP) weather states. The evaluation is made possible by the implementation in several CMIP5 models of the ISCCP simulator generating for each gridcell daily joint histograms of cloud optical thickness and cloud top pressure. Model performance is assessed with several metrics such as CR global cloud fraction (CF), CR relative frequency of occurrence (RFO), their product (long-term average total cloud amount [TCA]), cross-correlations of CR RFO maps, and a metric of resemblance between model and ISCCP CRs. In terms of CR global RFO, arguably the most fundamental metric, the models perform unsatisfactorily overall, except for CRs representing thick storm clouds. Because model CR CF is internally constrained by our method, RFO discrepancies yield also substantial TCA errors. Our findings support previous studies showing that CMIP5 models underestimate cloudiness. The multi-model mean performs well in matching observed RFO maps for many CRs, but is not the best for this or other metrics. When overall performance across all CRs is assessed, some models, despite their shortcomings, apparently outperform Moderate Resolution Imaging Spectroradiometer (MODIS) cloud observations evaluated against ISCCP as if they were another model output. Lastly, cloud simulation performance is contrasted with each model's equilibrium climate sensitivity (ECS) in order to gain insight on whether good cloud simulation pairs with particular values of this parameter.

  15. Virginia flow-ecology modeling results—An initial assessment of flow reduction effects on aquatic biota

    USGS Publications Warehouse

    Rapp, Jennifer L.; Reilly, Pamela A.

    2017-11-14

    BackgroundThe U.S. Geological Survey (USGS), in cooperation with the Virginia Department of Environmental Quality (DEQ), reviewed a previously compiled set of linear regression models to assess their utility in defining the response of the aquatic biological community to streamflow depletion.As part of the 2012 Virginia Healthy Watersheds Initiative (HWI) study conducted by Tetra Tech, Inc., for the U.S. Environmental Protection Agency (EPA) and Virginia DEQ, a database with computed values of 72 hydrologic metrics, or indicators of hydrologic alteration (IHA), 37 fish metrics, and 64 benthic invertebrate metrics was compiled and quality assured. Hydrologic alteration was represented by simulation of streamflow record for a pre-water-withdrawal condition (baseline) without dams or developed land, compared to the simulated recent-flow condition (2008 withdrawal simulation) including dams and altered landscape to calculate a percent alteration of flow. Biological samples representing the existing populations represent a range of alteration in the biological community today.For this study, all 72 IHA metrics, which included more than 7,272 linear regression models, were considered. This extensive dataset provided the opportunity for hypothesis testing and prioritization of flow-ecology relations that have the potential to explain the effect(s) of hydrologic alteration on biological metrics in Virginia streams.

  16. Toward a perceptual video-quality metric

    NASA Astrophysics Data System (ADS)

    Watson, Andrew B.

    1998-07-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, so that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
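
    A rough sketch of a DCT-domain error measure in the spirit described above: block DCTs of a reference and a test frame are differenced, weighted, and pooled. The flat weighting stands in for a real contrast-sensitivity function, and the routine is not the metric proposed in the paper.

    ```python
    import numpy as np
    from scipy.fft import dctn

    def blockwise_dct_error(reference, test, block=8, weight=None):
        """Pooled, weighted error between block DCT coefficients of two grayscale
        frames. Simplified sketch of a DCT-domain quality measure."""
        h, w = reference.shape
        h, w = h - h % block, w - w % block
        if weight is None:
            weight = np.ones((block, block))   # placeholder for a CSF-based weighting
        errors = []
        for i in range(0, h, block):
            for j in range(0, w, block):
                d_ref = dctn(reference[i:i + block, j:j + block], norm="ortho")
                d_tst = dctn(test[i:i + block, j:j + block], norm="ortho")
                errors.append(np.sum((weight * (d_ref - d_tst)) ** 2))
        return float(np.sqrt(np.mean(errors)))

    rng = np.random.default_rng(0)
    frame = rng.random((64, 64))
    print(blockwise_dct_error(frame, frame + 0.01 * rng.random((64, 64))))
    ```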

  17. A genetic epidemiology approach to cyber-security.

    PubMed

    Gil, Santiago; Kott, Alexander; Barabási, Albert-László

    2014-07-16

    While much attention has been paid to the vulnerability of computer networks to node and link failure, there is limited systematic understanding of the factors that determine the likelihood that a node (computer) is compromised. We therefore collect threat log data in a university network to study the patterns of threat activity for individual hosts. We relate this information to the properties of each host as observed through network-wide scans, establishing associations between the network services a host is running and the kinds of threats to which it is susceptible. We propose a methodology to associate services to threats inspired by the tools used in genetics to identify statistical associations between mutations and diseases. The proposed approach allows us to determine probabilities of infection directly from observation, offering an automated high-throughput strategy to develop comprehensive metrics for cyber-security.
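
    One simple way to test a service-threat association of the kind described is a 2x2 contingency table and Fisher's exact test, sketched below on made-up counts; the paper's methodology is inspired by genetic association studies and may differ from this illustration.

    ```python
    from scipy.stats import fisher_exact

    # 2x2 contingency table for one (service, threat) pair, counted over hosts:
    #                     threat seen   threat not seen
    #  service running         a               b
    #  service absent          c               d
    a, b, c, d = 40, 160, 10, 790          # illustrative counts, not real data
    odds_ratio, p_value = fisher_exact([[a, b], [c, d]], alternative="greater")
    print(odds_ratio, p_value)             # large odds ratio + small p -> association
    ```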

  18. A genetic epidemiology approach to cyber-security

    PubMed Central

    Gil, Santiago; Kott, Alexander; Barabási, Albert-László

    2014-01-01

    While much attention has been paid to the vulnerability of computer networks to node and link failure, there is limited systematic understanding of the factors that determine the likelihood that a node (computer) is compromised. We therefore collect threat log data in a university network to study the patterns of threat activity for individual hosts. We relate this information to the properties of each host as observed through network-wide scans, establishing associations between the network services a host is running and the kinds of threats to which it is susceptible. We propose a methodology to associate services to threats inspired by the tools used in genetics to identify statistical associations between mutations and diseases. The proposed approach allows us to determine probabilities of infection directly from observation, offering an automated high-throughput strategy to develop comprehensive metrics for cyber-security. PMID:25028059

  19. The doctrinal basis for medical stability operations.

    PubMed

    Baker, Jay B

    2010-01-01

    This article describes possible roles for the military in the health sector during stability operations, which exist primarily when security conditions do not permit the free movement of civilian actors. This article reviews the new U.S. Army Field Manuals (FMs) 3-24, Counterinsurgency and FM 3-07, Stability Operations, in the context of the health sector. Essential tasks in medical stability operations are identified for various logical lines of operation including information operations, civil security, civil control, support to governance, support to economic development, and restoration of essential services. Restoring essential services is addressed in detail including coordination, assessment, actions, and metrics in the health sector. Coordination by the military with other actors in the health sector including host nation medical officials, other United States governmental agencies, international governmental organizations (IGOs), and nongovernment organizations (NGOs) is key to success in medical stability operations.

  20. Summer temperature metrics for predicting brook trout (Salvelinus fontinalis) distribution in streams

    USGS Publications Warehouse

    Parrish, Donna; Butryn, Ryan S.; Rizzo, Donna M.

    2012-01-01

    We developed a methodology to predict brook trout (Salvelinus fontinalis) distribution using summer temperature metrics as predictor variables. Our analysis used long-term fish and hourly water temperature data from the Dog River, Vermont (USA). Commonly used metrics (e.g., mean, maximum, maximum 7-day maximum) tend to smooth the data so information on temperature variation is lost. Therefore, we developed a new set of metrics (called event metrics) to capture temperature variation by describing the frequency, area, duration, and magnitude of events that exceeded a user-defined temperature threshold. We used 16, 18, 20, and 22°C. We built linear discriminant models and tested and compared the event metrics against the commonly used metrics. Correct classification of the observations was 66% with event metrics and 87% with commonly used metrics. However, combined event and commonly used metrics correctly classified 92%. Of the four individual temperature thresholds, it was difficult to assess which threshold had the “best” accuracy. The 16°C threshold had slightly fewer misclassifications; however, the 20°C threshold had the fewest extreme misclassifications. Our method leveraged the volumes of existing long-term data and provided a simple, systematic, and adaptable framework for monitoring changes in fish distribution, specifically in the case of irregular, extreme temperature events.
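
    A small sketch of threshold-exceedance "event metrics" (frequency, duration, magnitude, and area) computed from an hourly temperature series; the synthetic record and the exact definitions are illustrative assumptions rather than the authors' code.

    ```python
    import numpy as np

    def event_metrics(temps, threshold):
        """Frequency, total duration (h), mean peak magnitude, and area
        (degree-hours) of events exceeding `threshold` in an hourly series."""
        above = temps > threshold
        edges = np.diff(above.astype(int))
        starts = np.where(edges == 1)[0] + 1
        ends = np.where(edges == -1)[0] + 1
        if above[0]:
            starts = np.r_[0, starts]
        if above[-1]:
            ends = np.r_[ends, above.size]
        excess = temps - threshold
        areas = [excess[s:e].sum() for s, e in zip(starts, ends)]
        peaks = [excess[s:e].max() for s, e in zip(starts, ends)]
        return {"frequency": len(starts),
                "duration_h": int((ends - starts).sum()),
                "mean_magnitude": float(np.mean(peaks)) if peaks else 0.0,
                "area_degree_hours": float(np.sum(areas))}

    hourly = 18 + 4 * np.sin(np.linspace(0, 20 * np.pi, 24 * 90))  # synthetic record
    print(event_metrics(hourly, threshold=20.0))
    ```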

  1. An Abstract Process and Metrics Model for Evaluating Unified Command and Control: A Scenario and Technology Agnostic Approach

    DTIC Science & Technology

    2004-06-01

    Table-of-contents fragments only (no abstract available): EBO cognitive or memetic input type; unanticipated EBO-generated memetic effects-based COA; policy; belief systems or memetic content metrics.

  2. Ruijsenaars-Schneider three-body models with N = 2 supersymmetry

    NASA Astrophysics Data System (ADS)

    Galajinsky, Anton

    2018-04-01

    The Ruijsenaars-Schneider models are conventionally regarded as relativistic generalizations of the Calogero integrable systems. Surprisingly enough, their supersymmetric generalizations escaped attention. In this work, N = 2 supersymmetric extensions of the rational and hyperbolic Ruijsenaars-Schneider three-body models are constructed within the framework of the Hamiltonian formalism. It is also known that the rational model can be described by the geodesic equations associated with a metric connection. We demonstrate that the hyperbolic systems are linked to non-metric connections.

  3. A note on evaluating model tidal currents against observations

    NASA Astrophysics Data System (ADS)

    Cummins, Patrick F.; Thupaki, Pramod

    2018-01-01

    The root-mean-square magnitude of the vector difference between modeled and observed tidal ellipses is a comprehensive metric to evaluate the representation of tidal currents in ocean models. A practical expression for this difference is given in terms of the harmonic constants that are routinely used to specify current ellipses for a given tidal constituent. The resulting metric is sensitive to differences in all four current ellipse parameters, including phase.

  4. Resilience Metrics for the Electric Power System: A Performance-Based Approach.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vugrin, Eric D.; Castillo, Andrea R; Silva-Monroy, Cesar Augusto

    Grid resilience is a concept related to a power system's ability to continue operating and delivering power even in the event that low probability, high-consequence disruptions such as hurricanes, earthquakes, and cyber-attacks occur. Grid resilience objectives focus on managing and, ideally, minimizing potential consequences that occur as a result of these disruptions. Currently, no formal grid resilience definitions, metrics, or analysis methods have been universally accepted. This document describes an effort to develop and describe grid resilience metrics and analysis methods. The metrics and methods described herein extend upon the Resilience Analysis Process (RAP) developed by Watson et al. for the 2015 Quadrennial Energy Review. The extension allows for both outputs from system models and for historical data to serve as the basis for creating grid resilience metrics and informing grid resilience planning and response decision-making. This document describes the grid resilience metrics and analysis methods. Demonstration of the metrics and methods is shown through a set of illustrative use cases.

  5. Optimal SVM parameter selection for non-separable and unbalanced datasets.

    PubMed

    Jiang, Peng; Missoum, Samy; Chen, Zhao

    2014-10-01

    This article presents a study of three validation metrics used for the selection of optimal parameters of a support vector machine (SVM) classifier in the case of non-separable and unbalanced datasets. This situation is often encountered when the data is obtained experimentally or clinically. The three metrics selected in this work are the area under the ROC curve (AUC), accuracy, and balanced accuracy. These validation metrics are tested using computational data only, which enables the creation of fully separable sets of data. This way, non-separable datasets, representative of a real-world problem, can be created by projection onto a lower dimensional sub-space. The knowledge of the separable dataset, unknown in real-world problems, provides a reference to compare the three validation metrics using a quantity referred to as the "weighted likelihood". As an application example, the study investigates a classification model for hip fracture prediction. The data is obtained from a parameterized finite element model of a femur. The performance of the various validation metrics is studied for several levels of separability, ratios of unbalance, and training set sizes.
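
    The three validation metrics can be computed directly with scikit-learn, as sketched below for an SVM on a synthetic unbalanced dataset; the data generation and classifier settings are arbitrary and only illustrate the comparison.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import roc_auc_score, accuracy_score, balanced_accuracy_score

    # Unbalanced, partly overlapping (non-separable) synthetic data.
    X, y = make_classification(n_samples=2000, weights=[0.9, 0.1],
                               class_sep=0.8, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    clf = SVC(C=1.0, gamma="scale", probability=True).fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]
    preds = clf.predict(X_te)

    print("AUC              ", roc_auc_score(y_te, scores))
    print("accuracy         ", accuracy_score(y_te, preds))
    print("balanced accuracy", balanced_accuracy_score(y_te, preds))
    ```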

  6. Turbulence Hazard Metric Based on Peak Accelerations for Jetliner Passengers

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.

    2005-01-01

    Calculations are made of the approximate hazard due to peak normal accelerations of an airplane flying through a simulated vertical wind field associated with a convective frontal system. The calculations are based on a hazard metric developed from a systematic application of a generic math model to 1-cosine discrete gusts of various amplitudes and gust lengths. The math model simulates the three-degree-of-freedom longitudinal rigid-body response to vertical gusts and includes (1) fuselage flexibility, (2) the lag in the downwash from the wing to the tail, (3) gradual lift effects, (4) a simplified autopilot, and (5) motion of an unrestrained passenger in the rear cabin. Airplane and passenger response contours are calculated for a matrix of gust amplitudes and gust lengths. The airplane response contours are used to develop an approximate hazard metric of peak normal accelerations as a function of gust amplitude and gust length. The hazard metric is then applied to a two-dimensional simulated vertical wind field of a convective frontal system. The variations of the hazard metric with gust length and airplane heading are demonstrated.
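
    For reference, a 1-cosine discrete gust profile and a sweep over gust amplitudes and lengths are sketched below; the rigid-body response model and the hazard metric itself are not reproduced.

    ```python
    import numpy as np

    def one_minus_cosine_gust(x, amplitude, gust_length):
        """Vertical gust velocity (m/s) of a 1-cosine discrete gust of the given
        amplitude and length (m), evaluated at penetration distance x (m)."""
        w = np.zeros_like(x, dtype=float)
        inside = (x >= 0) & (x <= gust_length)
        w[inside] = 0.5 * amplitude * (1.0 - np.cos(2.0 * np.pi * x[inside] / gust_length))
        return w

    # Sweep a matrix of amplitudes and lengths, as in the hazard-metric construction.
    x = np.linspace(0.0, 600.0, 601)
    for amp in (3.0, 6.0, 9.0):
        for length in (100.0, 300.0, 600.0):
            gust = one_minus_cosine_gust(x, amp, length)
            print(amp, length, gust.max())
    ```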

  7. MESUR: USAGE-BASED METRICS OF SCHOLARLY IMPACT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BOLLEN, JOHAN; RODRIGUEZ, MARKO A.; VAN DE SOMPEL, HERBERT

    2007-01-30

    The evaluation of scholarly communication items is now largely a matter of expert opinion or metrics derived from citation data. Both approaches can fail to take into account the myriad of factors that shape scholarly impact. Usage data has emerged as a promising complement to existing methods of assessment, but the formal groundwork to reliably and validly apply usage-based metrics of scholarly impact is lacking. The Andrew W. Mellon Foundation-funded MESUR project constitutes a systematic effort to define, validate and cross-validate a range of usage-based metrics of scholarly impact by creating a semantic model of the scholarly communication process. The constructed model will serve as the basis for creating a large-scale semantic network that seamlessly relates citation, bibliographic and usage data from a variety of sources. A subsequent program that uses the established semantic network as a reference data set will determine the characteristics and semantics of a variety of usage-based metrics of scholarly impact. This paper outlines the architecture and methodology adopted by the MESUR project and its future direction.

  8. The use of player physical and technical skill match activity profiles to predict position in the Australian Football League draft.

    PubMed

    Woods, Carl T; Veale, James P; Collier, Neil; Robertson, Sam

    2017-02-01

    This study investigated the extent to which position in the Australian Football League (AFL) national draft is associated with individual game performance metrics. Physical/technical skill performance metrics were collated from all participants in the 2014 national under 18 (U18) championships (18 games) drafted into the AFL (n = 65; 17.8 ± 0.5 y); 232 observations. Players were subdivided into draft position (ranked 1-65) and then draft round (1-4). Here, earlier draft selection (i.e., closer to 1) reflects a more desirable player. Microtechnology and a commercial provider facilitated the quantification of individual game performance metrics (n = 16). Linear mixed models were fitted to the data, modelling the extent to which draft position was associated with these metrics. Draft position in the first/second round was negatively associated with "contested possessions" and "contested marks", respectively. Physical performance metrics were positively associated with draft position in these rounds. Correlations weakened for the third/fourth rounds. Contested possessions/marks were associated with an earlier draft selection, whereas physical performance metrics were associated with a later draft selection. Recruiters change the type of U18 player they draft as the selection pool shrinks: juniors with contested skills appear to be prioritised.

  9. A reservoir morphology database for the conterminous United States

    USGS Publications Warehouse

    Rodgers, Kirk D.

    2017-09-13

    The U.S. Geological Survey, in cooperation with the Reservoir Fisheries Habitat Partnership, combined multiple national databases to create one comprehensive national reservoir database and to calculate new morphological metrics for 3,828 reservoirs. These new metrics include, but are not limited to, shoreline development index, index of basin permanence, development of volume, and other descriptive metrics based on established morphometric formulas. The new database also contains modeled chemical and physical metrics. Because of the nature of the existing databases used to compile the Reservoir Morphology Database and the inherent missing data, some metrics were not populated. One comprehensive database will assist water-resource managers in their understanding of local reservoir morphology and water chemistry characteristics throughout the continental United States.
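
    Two of the named morphology metrics follow standard limnological formulas, sketched below; the exact definitions used in the Reservoir Morphology Database are assumed to match these conventional forms.

    ```python
    import math

    def shoreline_development_index(shoreline_length_m, surface_area_m2):
        """SDI: ratio of the shoreline length to the circumference of a circle
        with the same surface area (SDI = 1 for a perfectly circular reservoir)."""
        return shoreline_length_m / (2.0 * math.sqrt(math.pi * surface_area_m2))

    def development_of_volume(volume_m3, surface_area_m2, max_depth_m):
        """Volume development: observed volume relative to a cone with the same
        surface area and maximum depth (equivalently 3 * mean depth / max depth)."""
        return 3.0 * volume_m3 / (surface_area_m2 * max_depth_m)

    # Illustrative reservoir: 25 km of shoreline, 4 km^2 area, 12 m maximum depth.
    print(shoreline_development_index(25_000.0, 4.0e6))
    print(development_of_volume(1.2e7, 4.0e6, 12.0))
    ```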

  10. Surface metrics: An alternative to patch metrics for the quantification of landscape structure

    Treesearch

    Kevin McGarigal; Sermin Tagil; Samuel A. Cushman

    2009-01-01

    Modern landscape ecology is based on the patch mosaic paradigm, in which landscapes are conceptualized and analyzed as mosaics of discrete patches. While this model has been widely successful, there are many situations where it is more meaningful to model landscape structure based on continuous rather than discrete spatial heterogeneity. The growing field of surface...

  11. Predicting surface fuel models and fuel metrics using lidar and CIR imagery in a dense mixed conifer forest

    Treesearch

    Marek K. Jakubowksi; Qinghua Guo; Brandon Collins; Scott Stephens; Maggi Kelly

    2013-01-01

    We compared the ability of several classification and regression algorithms to predict forest stand structure metrics and standard surface fuel models. Our study area spans a dense, topographically complex Sierra Nevada mixed-conifer forest. We used clustering, regression trees, and support vector machine algorithms to analyze high density (average 9 pulses/m

  12. Systematic study of source mask optimization and verification flows

    NASA Astrophysics Data System (ADS)

    Ben, Yu; Latypov, Azat; Chua, Gek Soon; Zou, Yi

    2012-06-01

    Source mask optimization (SMO) has emerged as a powerful resolution enhancement technique (RET) for advanced technology nodes. However, there is a plethora of flow and verification metrics in the field, confounding the end user of the technique. A systematic study of the different flows, and of their possible unification, has been missing. This contribution is intended to reveal the pros and cons of different SMO approaches and verification metrics, to clarify their commonalities and differences, and to provide a generic guideline for RET selection via SMO. The paper discusses three types of variation that commonly arise in SMO, namely pattern preparation and selection, the availability of a relevant OPC recipe for a freeform source, and the metrics used in source verification. Several pattern selection algorithms are compared and the advantages of systematic pattern selection algorithms are discussed. In the absence of a full resist model for SMO, an alternative SMO flow without a full resist model is reviewed. A preferred verification flow with the quality metrics of DOF and MEEF is examined.

  13. Older driver fitness-to-drive evaluation using naturalistic driving data.

    PubMed

    Guo, Feng; Fang, Youjia; Antin, Jonathan F

    2015-09-01

    As our driving population continues to age, it is becoming increasingly important to find a small set of easily administered fitness metrics that can meaningfully and reliably identify at-risk seniors requiring more in-depth evaluation of their driving skills and weaknesses. Sixty driver assessment metrics related to fitness-to-drive were examined for 20 seniors who were followed for a year using the naturalistic driving paradigm. Principal component analysis and negative binomial regression modeling approaches were used to develop parsimonious models relating the most highly predictive of the driver assessment metrics to the safety-related outcomes observed in the naturalistic driving data. This study provides important confirmation using naturalistic driving methods of the relationship between contrast sensitivity and crash-related events. The results of this study provide crucial information on the continuing journey to identify metrics and protocols that could be applied to determine seniors' fitness to drive. Published by Elsevier Ltd.
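
    A compact sketch of the modelling approach described (principal components of the assessment battery fed into a negative binomial regression), using synthetic data; the variable names, counts, and the three-component choice are illustrative assumptions.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    assessments = rng.normal(size=(20, 60))      # 20 drivers x 60 fitness metrics (synthetic)
    event_counts = rng.poisson(2, size=20)       # safety-related event counts (synthetic)

    # Reduce the correlated assessment battery to a few principal components, then
    # relate them to event counts with a negative binomial GLM.
    components = PCA(n_components=3).fit_transform(assessments)
    X = sm.add_constant(components)
    model = sm.GLM(event_counts, X, family=sm.families.NegativeBinomial()).fit()
    print(model.summary())
    ```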

  14. An end-to-end assessment of extreme weather impacts on food security

    NASA Astrophysics Data System (ADS)

    Chavez, Erik; Conway, Gordon; Ghil, Michael; Sadler, Marc

    2015-11-01

    Both governments and the private sector urgently require better estimates of the likely incidence of extreme weather events, their impacts on food crop production and the potential consequent social and economic losses. Current assessments of climate change impacts on agriculture mostly focus on average crop yield vulnerability to climate and adaptation scenarios. Also, although new-generation climate models have improved and there has been an exponential increase in available data, the uncertainties in their projections over years and decades, and at regional and local scale, have not decreased. We need to understand and quantify the non-stationary, annual and decadal climate impacts using simple and communicable risk metrics that will help public and private stakeholders manage the hazards to food security. Here we present an `end-to-end’ methodological construct based on weather indices and machine learning that integrates current understanding of the various interacting systems of climate, crops and the economy to determine short- to long-term risk estimates of crop production loss, in different climate and adaptation scenarios. For provinces north and south of the Yangtze River in China, we have found that risk profiles for crop yields that translate climate into economic variability follow marked regional patterns, shaped by drivers of continental-scale climate. We conclude that to be cost-effective, region-specific policies have to be tailored to optimally combine different categories of risk management instruments.

  15. Water-Food-Nutrition-Health Nexus: Linking Water to Improving Food, Nutrition and Health in Sub-Saharan Africa

    PubMed Central

    Mabhaudhi, Tafadzwanashe; Chibarabada, Tendai; Modi, Albert

    2016-01-01

    Whereas sub-Saharan Africa’s (SSA) water scarcity, food, nutrition and health challenges are well-documented, efforts to address them have often been disconnected. Given that the region continues to be affected by poverty and food and nutrition insecurity at national and household levels, there is a need for a paradigm shift in order to effectively deliver on the twin challenges of food and nutrition security under conditions of water scarcity. There is a need to link water use in agriculture to achieve food and nutrition security outcomes for improved human health and well-being. Currently, there are no explicit linkages between water, agriculture, nutrition and health owing to uncoordinated efforts between agricultural and nutrition scientists. There is also a need to develop and promote the use of metrics that capture aspects of water, agriculture, food and nutrition. This review identified nutritional water productivity as a suitable index for measuring the impact of a water-food-nutrition-health nexus. Socio-economic factors are also considered as they influence food choices in rural communities. An argument for the need to utilise the region’s agrobiodiversity for addressing dietary quality and diversity was established. It is concluded that a model for improving nutrition and health of poor rural communities based on the water-food-nutrition-health nexus is possible. PMID:26751464

  16. The virial theorem and the dark matter problem in hybrid metric-Palatini gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capozziello, Salvatore; Harko, Tiberiu; Koivisto, Tomi S.

    2013-07-01

    Hybrid metric-Palatini gravity is a recently proposed theory, consisting of the superposition of the metric Einstein-Hilbert Lagrangian with an f(R) term constructed à la Palatini. The theory predicts the existence of a long-range scalar field, which passes the Solar System observational constraints, even if the scalar field is very light, and modifies the cosmological and galactic dynamics. Thus, the theory opens new possibilities to approach, in the same theoretical framework, the problems of both dark energy and dark matter. In this work, we consider the generalized virial theorem in the scalar-tensor representation of the hybrid metric-Palatini gravity. More specifically, taking into account the relativistic collisionless Boltzmann equation, we show that the supplementary geometric terms in the gravitational field equations provide an effective contribution to the gravitational potential energy. We show that the total virial mass is proportional to the effective mass associated with the new terms generated by the effective scalar field, and the baryonic mass. In addition to this, we also consider astrophysical applications of the model and show that the model predicts that the mass associated to the scalar field and its effects extend beyond the virial radius of the clusters of galaxies. In the context of the galaxy cluster velocity dispersion profiles predicted by the hybrid metric-Palatini model, the generalized virial theorem can be an efficient tool in observationally testing the viability of this class of generalized gravity models.

  17. Susceptibility of South Korea to hydrologic extremes affecting the global food system

    NASA Astrophysics Data System (ADS)

    Puma, M. J.; Chon, S. Y.

    2015-12-01

    Food security in South Korea is closely linked to trade in the global food system. The country's production of major grains declined from 5.8 million metric tons (mmt) in 1998 to 4.8 mmt in 2014, which coincided with a shift in grain self sufficiency from 43% down to 24% over this same period. Many factors led to these changes, including reductions in domestic agricultural land, governmental policies supporting industry over agriculture, and a push towards trade liberalization. South Korea's self sufficiency is now one of the lowest among Organisation for Economic Co-operation and Development (OECD) countries, leaving it vulnerable to disruptions in the global food system. Here we explore this vulnerability by assessing how global trade disruptions would affect Korea's food security. We impose historical extreme drought and flood events that would possibly affect today's major food producing regions concurrently. Next we compute food supply deficits in South Korea that might result from these events. Our analyses provide a framework for formulating domestic food policies to enhance South Korea's food security in the increasingly fragile global food system.

  18. Advanced Life Support Research and Technology Development Metric

    NASA Technical Reports Server (NTRS)

    Hanford, A. J.

    2004-01-01

    The Metric is one of several measures employed by NASA to assess the Agency's progress as mandated by the United States Congress and the Office of Management and Budget. Because any measure must have a reference point, whether explicitly defined or implied, the Metric is a comparison between a selected ALS Project life support system and an equivalently detailed life support system using technology from the Environmental Control and Life Support System (ECLSS) for the International Space Station (ISS). This document provides the official calculation of the Advanced Life Support (ALS) Research and Technology Development Metric (the Metric) for Fiscal Year 2004. The values are primarily based on Systems Integration, Modeling, and Analysis (SIMA) Element approved software tools or reviewed and approved reference documents. For Fiscal Year 2004, the Advanced Life Support Research and Technology Development Metric value is 2.03 for an Orbiting Research Facility and 1.62 for an Independent Exploration Mission.

  19. Study of the Ernst metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esteban, E.P.

    In this thesis some properties of the Ernst metric are studied. This metric could provide a model for a Schwarzschild black hole immersed in a magnetic field. In chapter I, some standard properties of the Ernst metric, such as the affine connections and the Riemann, Ricci, and Weyl conformal tensors, are calculated. In chapter II, the geodesics described by test particles in the Ernst space-time are studied. As an application, a formula for the perihelion shift is derived. In the last chapter a null tetrad analysis of the Ernst metric is carried out and the resulting formalism applied to the study of three problems. First, the algebraic classification of the Ernst metric is determined to be of type I in the Petrov scheme. Secondly, an explicit formula for the Gaussian curvature of the event horizon is derived. Finally, the form of the electromagnetic field is evaluated.

  20. Application of Bounded Linear Stability Analysis Method for Metrics-Driven Adaptive Control

    NASA Technical Reports Server (NTRS)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje

    2009-01-01

    This paper presents the application of Bounded Linear Stability Analysis (BLSA) method for metrics-driven adaptive control. The bounded linear stability analysis method is used for analyzing stability of adaptive control models, without linearizing the adaptive laws. Metrics-driven adaptive control introduces a notion that adaptation should be driven by some stability metrics to achieve robustness. By the application of bounded linear stability analysis method the adaptive gain is adjusted during the adaptation in order to meet certain phase margin requirements. Analysis of metrics-driven adaptive control is evaluated for a second order system that represents a pitch attitude control of a generic transport aircraft. The analysis shows that the system with the metrics-conforming variable adaptive gain becomes more robust to unmodeled dynamics or time delay. The effect of analysis time-window for BLSA is also evaluated in order to meet the stability margin criteria.

  1. Assessment of six dissimilarity metrics for climate analogues

    NASA Astrophysics Data System (ADS)

    Grenier, Patrick; Parent, Annie-Claude; Huard, David; Anctil, François; Chaumont, Diane

    2013-04-01

    Spatial analogue techniques consist in identifying locations whose recent-past climate is similar in some aspects to the future climate anticipated at a reference location. When identifying analogues, one key step is the quantification of the dissimilarity between two climates separated in time and space, which involves the choice of a metric. In this communication, spatial analogues and their usefulness are briefly discussed. Next, six metrics are presented (the standardized Euclidean distance, the Kolmogorov-Smirnov statistic, the nearest-neighbor distance, the Zech-Aslan energy statistic, the Friedman-Rafsky runs statistic and the Kullback-Leibler divergence), along with a set of criteria used for their assessment. The related case study involves the use of numerical simulations performed with the Canadian Regional Climate Model (CRCM-v4.2.3), from which three annual indicators (total precipitation, heating degree-days and cooling degree-days) are calculated over 30-year periods (1971-2000 and 2041-2070). Results indicate that the six metrics identify comparable analogue regions at a relatively large scale, but best analogues may differ substantially. For best analogues, it is also shown that the uncertainty stemming from the metric choice does generally not exceed that stemming from the simulation or model choice. A synthesis of the advantages and drawbacks of each metric is finally presented, in which the Zech-Aslan energy statistic stands out as the most recommended metric for analogue studies, whereas the Friedman-Rafsky runs statistic is the least recommended, based on this case study.
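
    Two of the six dissimilarity metrics are easy to illustrate with SciPy: the Kolmogorov-Smirnov statistic between two 30-year samples of an indicator, and a standardized Euclidean distance between indicator means; all values below are synthetic and the remaining four metrics are not shown.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp
    from scipy.spatial.distance import seuclidean

    rng = np.random.default_rng(0)
    # 30 annual values of one indicator (e.g., total precipitation) for the future
    # climate at the reference location and the recent-past climate of a candidate.
    future = rng.normal(1000.0, 120.0, 30)
    candidate = rng.normal(950.0, 100.0, 30)

    ks_stat, _ = ks_2samp(future, candidate)   # Kolmogorov-Smirnov statistic

    # Standardized Euclidean distance between indicator means, with each dimension
    # scaled by its interannual variance (precip, HDD, CDD; illustrative numbers).
    fut_means = np.array([1000.0, 2600.0, 150.0])
    cand_means = np.array([950.0, 2800.0, 120.0])
    variances = np.array([120.0**2, 300.0**2, 40.0**2])
    sed = seuclidean(fut_means, cand_means, variances)
    print(ks_stat, sed)
    ```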

  2. Models of Marine Fish Biodiversity: Assessing Predictors from Three Habitat Classification Schemes.

    PubMed

    Yates, Katherine L; Mellin, Camille; Caley, M Julian; Radford, Ben T; Meeuwig, Jessica J

    2016-01-01

    Prioritising biodiversity conservation requires knowledge of where biodiversity occurs. Such knowledge, however, is often lacking. New technologies for collecting biological and physical data coupled with advances in modelling techniques could help address these gaps and facilitate improved management outcomes. Here we examined the utility of environmental data, obtained using different methods, for developing models of both uni- and multivariate biodiversity metrics. We tested which biodiversity metrics could be predicted best and evaluated the performance of predictor variables generated from three types of habitat data: acoustic multibeam sonar imagery, predicted habitat classification, and direct observer habitat classification. We used boosted regression trees (BRT) to model metrics of fish species richness, abundance and biomass, and multivariate regression trees (MRT) to model biomass and abundance of fish functional groups. We compared model performance using different sets of predictors and estimated the relative influence of individual predictors. Models of total species richness and total abundance performed best; those developed for endemic species performed worst. Abundance models performed substantially better than corresponding biomass models. In general, BRT and MRTs developed using predicted habitat classifications performed less well than those using multibeam data. The most influential individual predictor was the abiotic categorical variable from direct observer habitat classification and models that incorporated predictors from direct observer habitat classification consistently outperformed those that did not. Our results show that while remotely sensed data can offer considerable utility for predictive modelling, the addition of direct observer habitat classification data can substantially improve model performance. Thus it appears that there are aspects of marine habitats that are important for modelling metrics of fish biodiversity that are not fully captured by remotely sensed data. As such, the use of remotely sensed data to model biodiversity represents a compromise between model performance and data availability.
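
    A minimal sketch of the boosted-regression-tree step for a univariate metric such as species richness, using scikit-learn's gradient boosting on synthetic predictors; the hyperparameters and variables are assumptions, not those of the study.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    # Hypothetical predictors per survey site, e.g., depth and rugosity from multibeam
    # sonar plus habitat-class indicators from direct observation (encoded upstream).
    X = rng.normal(size=(300, 6))
    species_richness = rng.poisson(15, size=300)

    # Boosted regression trees relating habitat predictors to a biodiversity metric.
    brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01,
                                    max_depth=3, subsample=0.75, random_state=0)
    brt.fit(X, species_richness)
    print(brt.feature_importances_)   # relative influence of each predictor
    ```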

  3. Models of Marine Fish Biodiversity: Assessing Predictors from Three Habitat Classification Schemes

    PubMed Central

    Yates, Katherine L.; Mellin, Camille; Caley, M. Julian; Radford, Ben T.; Meeuwig, Jessica J.

    2016-01-01

    Prioritising biodiversity conservation requires knowledge of where biodiversity occurs. Such knowledge, however, is often lacking. New technologies for collecting biological and physical data coupled with advances in modelling techniques could help address these gaps and facilitate improved management outcomes. Here we examined the utility of environmental data, obtained using different methods, for developing models of both uni- and multivariate biodiversity metrics. We tested which biodiversity metrics could be predicted best and evaluated the performance of predictor variables generated from three types of habitat data: acoustic multibeam sonar imagery, predicted habitat classification, and direct observer habitat classification. We used boosted regression trees (BRT) to model metrics of fish species richness, abundance and biomass, and multivariate regression trees (MRT) to model biomass and abundance of fish functional groups. We compared model performance using different sets of predictors and estimated the relative influence of individual predictors. Models of total species richness and total abundance performed best; those developed for endemic species performed worst. Abundance models performed substantially better than corresponding biomass models. In general, BRT and MRTs developed using predicted habitat classifications performed less well than those using multibeam data. The most influential individual predictor was the abiotic categorical variable from direct observer habitat classification and models that incorporated predictors from direct observer habitat classification consistently outperformed those that did not. Our results show that while remotely sensed data can offer considerable utility for predictive modelling, the addition of direct observer habitat classification data can substantially improve model performance. Thus it appears that there are aspects of marine habitats that are important for modelling metrics of fish biodiversity that are not fully captured by remotely sensed data. As such, the use of remotely sensed data to model biodiversity represents a compromise between model performance and data availability. PMID:27333202

  4. Climate Change and the Central American Mid-summer Drought - The Importance of Changing Precipitation Patterns for Food and Water Security

    NASA Astrophysics Data System (ADS)

    Maurer, E. P.; Stewart, I. T.; Sundstrom, W.; Bacon, C. M.

    2016-12-01

    In addition to periodic long-term drought, much of Central America experiences a rainy season with two peaks separated by a dry period of weeks to over a month in duration, termed the mid-summer drought (MSD). Food and water security for smallholder farmers in the region hinges on accommodating this phenomenon, anticipating its arrival and estimating its duration. Model output from 1980 through the late 21st century projects changes in precipitation amount, variability, and timing, with the potential to affect regional food production. Using surveys of farmer experiences in conjunction with gridded daily precipitation for a historic period at multiple scales, and with projections through the 21st century, we characterize the MSD across much of Central America using four measures: onset date, duration, intensity, and minimum precipitation, and we test for significant changes. Our findings indicate that the most significant changes are for the duration, which, by the end of the century, is projected to increase by an average of over a week, and the MSD minimum precipitation, which is projected to decline by an average of over 26%, with statistically significant changes for most of Nicaragua, Honduras, El Salvador, and Guatemala (assuming a higher emissions pathway through the 21st century). These changes toward a longer and drier MSD have important implications for food and water security for vulnerable communities throughout the region. We find that for the four metrics the changes in interannual variability are small compared to historical variability and are generally statistically insignificant. New farmer survey results are compared to findings from our climate analysis for the historic period, used to interpret which MSD characteristics are of greatest interest locally, and used to inform the development of adaptation strategies.
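
    As a rough illustration of how MSD measures such as duration and minimum precipitation can be extracted from a daily rainfall series, the sketch below smooths a synthetic bimodal wet season and reports the trough. The 31-day smoothing window, the mid-June to mid-September search window, and the 10%-deficit duration threshold are assumptions, not the authors' definitions.

```python
# Illustrative sketch (not the authors' algorithm): locate a mid-summer
# drought in a smoothed daily rainfall series as the dry spell between the
# two wet-season peaks, then report its date, minimum, and duration.
import numpy as np
import pandas as pd

days = pd.date_range("2000-01-01", "2000-12-31", freq="D")
doy = days.dayofyear
# Synthetic bimodal wet season with peaks near early June and mid September.
rain = 4 + 6 * np.exp(-((doy - 160) / 25) ** 2) + 6 * np.exp(-((doy - 260) / 25) ** 2)
series = pd.Series(rain, index=days)

smooth = series.rolling(31, center=True, min_periods=1).mean()
window = smooth["2000-06-15":"2000-09-15"]        # assumed search window between peaks

msd_minimum = window.min()                         # mm/day at the trough
trough_day = window.idxmin()                       # date of the MSD minimum
threshold = 0.9 * window.mean()                    # assumed 10% deficit threshold
below = window < threshold
duration_days = int(below.groupby((~below).cumsum()).sum().max())

print(f"MSD trough on {trough_day.date()}: {msd_minimum:.1f} mm/day, "
      f"~{duration_days} days below threshold")
```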

  5. Extended DBI massive gravity with generalized fiducial metric

    NASA Astrophysics Data System (ADS)

    Chullaphan, Tossaporn; Tannukij, Lunchakorn; Wongjun, Pitayuth

    2015-06-01

    We consider an extended model of DBI massive gravity by generalizing the fiducial metric to be an induced metric on the brane corresponding to a domain wall moving in five-dimensional Schwarzschild-Anti-de Sitter spacetime. The model admits all FLRW metric solutions, including flat, closed and open geometries, while the original one does not. The background solutions can be divided into two branches, namely a self-accelerating branch and a normal branch. For the self-accelerating branch, the graviton mass plays the role of a cosmological constant that drives the late-time acceleration of the universe. It is found that the number of degrees of freedom of the gravitational sector is not correct, as in the original DBI massive gravity: there are only two propagating degrees of freedom, from the tensor modes. For the normal branch, we restrict our attention to a particular class of solutions that provides an accelerated expansion of the universe. It is found that the number of degrees of freedom in the model is correct. However, at least one of them is a ghost degree of freedom that is always present at small scales, implying that the theory is not stable.

  6. Comparing the appropriate geographic region for assessing built environmental correlates with walking trips using different metrics and model approaches

    PubMed Central

    Tribby, Calvin P.; Miller, Harvey J.; Brown, Barbara B.; Smith, Ken R.; Werner, Carol M.

    2017-01-01

    There is growing international evidence that supportive built environments encourage active travel such as walking. An unsettled question is the role of geographic regions for analyzing the relationship between the built environment and active travel. This paper examines the geographic region question by assessing walking trip models that use two different regions: walking activity spaces and self-defined neighborhoods. We also use two types of built environment metrics, perceived and audit data, and two types of study design, cross-sectional and longitudinal, to assess these regions. We find that the built environment associations with walking are dependent on the type of metric and the type of model. Audit measures summarized within walking activity spaces better explain walking trips compared to audit measures within self-defined neighborhoods. Perceived measures summarized within self-defined neighborhoods have mixed results. Finally, results differ based on study design. This suggests that results may not be comparable among different regions, metrics and designs; researchers need to consider carefully these choices when assessing active travel correlates. PMID:28237743

  7. The use of alternative pollutant metrics in time-series studies of ambient air pollution and respiratory emergency department visits.

    PubMed

    Darrow, Lyndsey A; Klein, Mitchel; Sarnat, Jeremy A; Mulholland, James A; Strickland, Matthew J; Sarnat, Stefanie E; Russell, Armistead G; Tolbert, Paige E

    2011-01-01

    Various temporal metrics of daily pollution levels have been used to examine the relationships between air pollutants and acute health outcomes. However, daily metrics of the same pollutant have rarely been systematically compared within a study. In this analysis, we describe the variability of effect estimates attributable to the use of different temporal metrics of daily pollution levels. We obtained hourly measurements of ambient particulate matter (PM₂.₅), carbon monoxide (CO), nitrogen dioxide (NO₂), and ozone (O₃) from air monitoring networks in the 20-county Atlanta area for the period 1993-2004. For each pollutant, we created (1) a daily 1-h maximum; (2) a 24-h average; (3) a commute average; (4) a daytime average; (5) a nighttime average; and (6) a daily 8-h maximum (only for O₃). Using Poisson generalized linear models, we examined associations between daily counts of respiratory emergency department visits and the previous day's pollutant metrics. Variability was greatest across O₃ metrics, with the 8-h maximum, 1-h maximum, and daytime metrics yielding strong positive associations and the nighttime O₃ metric yielding a negative association (likely reflecting confounding by air pollutants oxidized by O₃). With the exception of the daytime metric, all of the CO and NO₂ metrics were positively associated with respiratory emergency department visits. Differences in observed associations with respiratory emergency department visits among temporal metrics of the same pollutant were influenced by the diurnal patterns of the pollutant, the spatial representativeness of the metrics, and the correlation between each metric and copollutant concentrations. Overall, this analysis supported the use of metrics based on the US National Ambient Air Quality Standards (for example, a daily 8-h maximum O₃ rather than a 24-h average). Comparative analysis of temporal metrics also provided insight into underlying relationships between specific air pollutants and respiratory health.
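
    The temporal metrics and the regression step described above can be sketched as follows. The code derives a few of the daily metrics from synthetic hourly ozone and fits a Poisson GLM of daily visit counts on the previous day's 8-h maximum using statsmodels; the hour windows and the omission of the study's seasonal and meteorological controls are simplifications, not the authors' full specification.

```python
# Illustrative sketch, not the study's full specification: build several daily
# temporal metrics from synthetic hourly ozone and relate respiratory ED visit
# counts to the previous day's 8-h maximum with a Poisson GLM.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
hours = pd.date_range("2000-01-01", periods=24 * 365, freq="h")
o3 = pd.Series(30 + 20 * np.sin(2 * np.pi * (hours.hour - 6) / 24)
               + rng.normal(0, 5, len(hours)), index=hours).clip(lower=0)

daily = pd.DataFrame({
    "max_1h":   o3.resample("D").max(),
    "mean_24h": o3.resample("D").mean(),
    "daytime":  o3.between_time("08:00", "19:00").resample("D").mean(),
    "max_8h":   o3.rolling(8).mean().resample("D").max(),
})
visits = pd.Series(rng.poisson(25, len(daily)), index=daily.index)  # synthetic counts

# Regress today's visit count on yesterday's daily 8-h maximum O3.
X = sm.add_constant(daily["max_8h"].shift(1)).dropna()
y = visits.loc[X.index]
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.summary().tables[1])
```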

  8. Comparing SEBAL and METRIC: Evapotranspiration Models Applied to Paramount Farms Almond Orchards

    NASA Astrophysics Data System (ADS)

    Furey, B. J.; Kefauver, S. C.

    2011-12-01

    Two evapotranspiration models were applied to almond and pistachio orchards in California. The SEBAL model, developed by W.G.M. Bastiaanssen, was programmed in MATLAB for direct comparison to the METRIC model, developed by R.G. Allen and the IDWR. Remote sensing data from the NASA SARP 2011 Airborne Research Program were used to apply both models. An evaluation of the models showed that they followed the same pattern in evapotranspiration (ET) rates for different types of ground cover, although they exhibited slightly different ranges of values and appeared to be non-linearly related. Both models underestimated the actual ET at the CIMIS weather station. However, SEBAL overestimated the ET of the almond orchards by 0.16 mm/hr when applying its crop coefficient to the reference ET, whereas METRIC underestimated the ET of the almond orchards by only 0.10 mm/hr. Other types of ground cover were compared similarly. Temporal variability in ET rates between the morning and afternoon was also observed.

  9. LANDSCAPE METRICS THAT ARE USEFUL FOR EXPLAINING ESTUARINE ECOLOGICAL RESPONSES

    EPA Science Inventory

    We investigated whether land use/cover characteristics of watersheds associated with estuaries exhibit a strong enough signal to make landscape metrics useful for predicting estuarine ecological condition. We used multivariate logistic regression models to discriminate between su...

  10. Shipboard Electrical System Modeling for Early-Stage Design Space Exploration

    DTIC Science & Technology

    2013-04-01

    method is demonstrated in several system studies. I. INTRODUCTION The integrated engineering plant (IEP) of an electric warship can be viewed as a...which it must operate [2], [4]. The desired IEP design should be dependable [5]. The operability metric has previously been defined as a measure of...the performance of an IEP during a specific scenario [2]. Dependability metrics have been derived from the operability metric as measures of the IEP

  11. Can spatial statistical river temperature models be transferred between catchments?

    NASA Astrophysics Data System (ADS)

    Jackson, Faye L.; Fryer, Robert J.; Hannah, David M.; Malcolm, Iain A.

    2017-09-01

    There has been increasing use of spatial statistical models to understand and predict river temperature (Tw) from landscape covariates. However, it is not financially or logistically feasible to monitor all rivers and the transferability of such models has not been explored. This paper uses Tw data from four river catchments collected in August 2015 to assess how well spatial regression models predict the maximum 7-day rolling mean of daily maximum Tw (Twmax) within and between catchments. Models were fitted for each catchment separately using (1) landscape covariates only (LS models) and (2) landscape covariates and an air temperature (Ta) metric (LS_Ta models). All the LS models included upstream catchment area and three included a river network smoother (RNS) that accounted for unexplained spatial structure. The LS models transferred reasonably to other catchments, at least when predicting relative levels of Twmax. However, the predictions were biased when mean Twmax differed between catchments. The RNS was needed to characterise and predict finer-scale spatially correlated variation. Because the RNS was unique to each catchment and thus non-transferable, predictions were better within catchments than between catchments. A single model fitted to all catchments found no interactions between the landscape covariates and catchment, suggesting that the landscape relationships were transferable. The LS_Ta models transferred less well, with particularly poor performance when the relationship with the Ta metric was physically implausible or required extrapolation outside the range of the data. A single model fitted to all catchments found catchment-specific relationships between Twmax and the Ta metric, indicating that the Ta metric was not transferable. These findings improve our understanding of the transferability of spatial statistical river temperature models and provide a foundation for developing new approaches for predicting Tw at unmonitored locations across multiple catchments and larger spatial scales.
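
    The transferability test at the heart of the study can be sketched in a few lines: fit a landscape-only model in one catchment and score its predictions in another. The covariates (log_area, elevation), the linear form, and the synthetic data below are placeholders; the paper's river network smoother is deliberately omitted because it is catchment-specific and non-transferable.

```python
# Illustrative sketch of the transferability test: fit a landscape-only (LS)
# model of Twmax in one catchment and score it on another. Covariates and the
# linear form are placeholders, not the paper's model.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)

def make_catchment(offset, n=60):
    # Hypothetical landscape covariates: upstream area (log km^2), elevation (m).
    area = rng.uniform(1, 6, n)
    elev = rng.uniform(50, 600, n)
    twmax = 20 + offset + 1.2 * area - 0.012 * elev + rng.normal(0, 0.8, n)
    return pd.DataFrame({"log_area": area, "elevation": elev, "twmax": twmax})

cat_a, cat_b = make_catchment(0.0), make_catchment(1.5)  # catchments with differing mean Twmax

ls_model = LinearRegression().fit(cat_a[["log_area", "elevation"]], cat_a["twmax"])
within = r2_score(cat_a["twmax"], ls_model.predict(cat_a[["log_area", "elevation"]]))
between = r2_score(cat_b["twmax"], ls_model.predict(cat_b[["log_area", "elevation"]]))
print(f"within-catchment R^2: {within:.2f}, between-catchment R^2: {between:.2f}")
```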

  12. Assessing Quality of Data Standards: Framework and Illustration Using XBRL GAAP Taxonomy

    NASA Astrophysics Data System (ADS)

    Zhu, Hongwei; Wu, Harris

    The primary purpose of data standards or metadata schemas is to improve the interoperability of data created by multiple standard users. Given the high cost of developing data standards, it is desirable to assess the quality of data standards. We develop a set of metrics and a framework for assessing data standard quality. The metrics include completeness and relevancy. Standard quality can also be indirectly measured by assessing interoperability of data instances. We evaluate the framework using data from the financial sector: the XBRL (eXtensible Business Reporting Language) GAAP (Generally Accepted Accounting Principles) taxonomy and US Securities and Exchange Commission (SEC) filings produced using the taxonomy by approximately 500 companies. The results show that the framework is useful and effective. Our analysis also reveals quality issues of the GAAP taxonomy and provides useful feedback to taxonomy users. The SEC has mandated that all publicly listed companies must submit their filings using XBRL. Our findings are timely and have practical implications that will ultimately help improve the quality of financial data.
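
    Under one plausible reading of the two metrics (not necessarily the paper's exact formulas), completeness asks how many of the concepts filers actually needed are defined by the taxonomy, and relevancy asks how many taxonomy concepts are actually used. A toy sketch with hypothetical concept sets:

```python
# Sketch of two set-based standard-quality metrics under assumed definitions:
# completeness = share of concepts filers needed that the taxonomy defines;
# relevancy = share of taxonomy concepts that filers actually use.
taxonomy = {"Assets", "Liabilities", "Revenues", "NetIncome", "Goodwill"}
used_in_filings = {"Assets", "Liabilities", "Revenues", "NetIncome",
                   "CustomExtensionA"}   # one concept had to be extended

completeness = len(taxonomy & used_in_filings) / len(used_in_filings)
relevancy = len(taxonomy & used_in_filings) / len(taxonomy)
print(f"completeness: {completeness:.2f}, relevancy: {relevancy:.2f}")
```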

  13. Thought Leaders during Crises in Massive Social Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corley, Courtney D.; Farber, Robert M.; Reynolds, William

    The vast amount of social media data that can be gathered from the internet, coupled with workflows that utilize both commodity systems and massively parallel supercomputers such as the Cray XMT, opens new vistas for research to support health, defense, and national security. Computer technology now enables the analysis of graph structures containing more than 4 billion vertices joined by 34 billion edges, along with metrics and massively parallel algorithms that exhibit near-linear scalability with the number of processors. The challenge lies in making this massive data and analysis comprehensible to analysts and end-users who require actionable knowledge to carry out their duties. Simply stated, we have developed language- and content-agnostic techniques to reduce large graphs built from vast media corpora into forms people can understand. Specifically, our tools and metrics act as a survey instrument to identify 'thought leaders' -- those members who lead or reflect the thoughts and opinions of an online community, independent of the source language.
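
    The abstract does not disclose the specific metrics, so the toy sketch below uses an off-the-shelf centrality score on a tiny reply/mention graph purely as a stand-in for a "thought leader" survey metric; the user names and edges are invented.

```python
# Toy sketch: rank "thought leaders" in a small reply/mention graph by a
# centrality score. Centrality here is a stand-in, not the paper's metric,
# and the graph is orders of magnitude smaller than the corpora described.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("alice", "bob"), ("carol", "bob"), ("dave", "bob"),   # many users cite bob
    ("bob", "erin"), ("carol", "erin"), ("erin", "alice"),
])

scores = nx.pagerank(g)   # influence from being cited/replied to
for user, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{user}: {score:.3f}")
```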

  14. Technical Guidance for Constructing a Human Well-Being ...

    EPA Pesticide Factsheets

    The U.S. Environmental Protection Agency (EPA) Office of Research and Development’s Sustainable and Healthy Communities Research Program (EPA 2015) developed the Human Well-being Index (HWBI) as an integrative measure of economic, social, and environmental contributions to well-being. The HWBI is composed of indicators and metrics representing eight domains of well-being: connection to nature, cultural fulfillment, education, health, leisure time, living standards, safety and security, and social cohesion. The domains and indicators in the HWBI were selected to provide a well-being framework that is broadly applicable to many different populations and communities and can be customized using community-specific metrics. A primary purpose of this report is to adapt the HWBI to quantify human well-being for Puerto Rico. Additionally, our adaptation of the HWBI for Puerto Rico provides an example of how the HWBI can be adapted to different communities, along with technical guidance on processing the data and calculating the index using R.
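
    The general roll-up pattern behind such an index (community metrics aggregated into domain scores, domain scores aggregated into a single index) can be sketched with equal weights. The report's actual standardization, weighting, and indicator layer, implemented in R, are not reproduced here; the domain groupings and values below are hypothetical, and Python is used for consistency with the other sketches in this document.

```python
# Sketch of an equal-weight metric -> domain -> index roll-up. The EPA
# report's actual standardization and weighting may differ; all values and
# groupings here are hypothetical.
domains = {
    "health": {"life_expectancy": 0.72, "self_rated_health": 0.64},
    "safety_and_security": {"crime": 0.58, "risk_perception": 0.61},
    "living_standards": {"income": 0.55, "housing": 0.67},
}

def mean(values):
    return sum(values) / len(values)

domain_scores = {d: mean(metrics.values()) for d, metrics in domains.items()}
hwbi = mean(domain_scores.values())

for d, s in domain_scores.items():
    print(f"{d}: {s:.2f}")
print(f"index: {hwbi:.2f}")
```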

  15. New composite distributions for modeling industrial income and wealth per employee

    NASA Astrophysics Data System (ADS)

    Wiegand, Martin; Nadarajah, Saralees

    2018-02-01

    Forbes Magazine offers an annual list of the 2000 largest publicly traded companies, shedding light on four different measurements: Sales, profits, market value and assets held. Soriano-Hernández et al. (2017) modeled these wealth metrics using composite distributions made up of two parts. In this note, we introduce different composite distributions to more accurately describe the spread of these wealth metrics.
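
    A generic two-part composite of the kind the note builds on can be sketched as a lognormal body spliced to a Pareto tail, with the mixing weight chosen so the density is continuous at the breakpoint. The breakpoint and parameter values are arbitrary, and the specific distributions introduced in the note may differ.

```python
# Sketch of a generic two-part composite density: lognormal body below a
# breakpoint theta, Pareto tail above it, glued so the density is continuous
# and integrates to one. Illustrates the construction only.
import numpy as np
from scipy import stats
from scipy.integrate import quad

theta = 50.0                                 # breakpoint (hypothetical)
body = stats.lognorm(s=1.0, scale=20.0)      # body on (0, theta]
tail = stats.pareto(b=2.0, scale=theta)      # tail on (theta, inf)

# Mixture weight that makes the spliced density continuous at theta.
h_body = body.pdf(theta) / body.cdf(theta)
h_tail = tail.pdf(theta) / tail.sf(theta)
w = h_tail / (h_body + h_tail)

def composite_pdf(x):
    x = np.asarray(x, dtype=float)
    below = w * body.pdf(x) / body.cdf(theta)
    above = (1 - w) * tail.pdf(x) / tail.sf(theta)
    return np.where(x <= theta, below, above)

# Sanity checks: integrates to ~1 and is continuous at the breakpoint.
print(quad(composite_pdf, 0, np.inf)[0])
print(composite_pdf(theta - 1e-6), composite_pdf(theta + 1e-6))
```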

  16. Success Factors for e-Learning in a Developing Country: A Case Study of Serbia

    ERIC Educational Resources Information Center

    Raspopovic, Miroslava; Jankulovic, Aleksandar; Runic, Jovana; Lucic, Vanja

    2014-01-01

    In this paper, DeLone and McLean's updated information system model was used to evaluate the success of an e-Learning system and its courses in a transitional country like Serbia. In order to adapt this model to an e-Learning system, suitable success metrics were chosen for each of the evaluation stages. Furthermore, the success metrics for…

  17. Monitoring spatial variations in soil organic carbon using remote sensing and geographic information systems

    NASA Astrophysics Data System (ADS)

    Jaber, Salahuddin M.

    Soil organic carbon (SOC) sequestration is a component of larger strategies to control the accumulation of greenhouse gases that may be causing global warming. To implement this approach, it is necessary to improve the methods of measuring SOC content. Among these methods are indirect remote sensing and geographic information systems (GIS) techniques that are required to provide non-intrusive, low-cost, and spatially continuous information covering large areas on a repetitive basis. The main goal of this study is to evaluate the effects of using Hyperion hyperspectral data on improving the existing remote sensing and GIS-based methodologies for rapidly, efficiently, and accurately measuring SOC content on farmland. The study area is Big Creek Watershed (BCW) in Southern Illinois. The methodology consists of compiling a GIS database (consisting of remote sensing and soil variables) for 303 composite soil samples collected from representative pixels along the Hyperion coverage area of the watershed. Stepwise procedures were used to calibrate and validate linear multiple regression models where SOC was regarded as the response and the other remote sensing and soil variables as the predictors. Two models were selected: the first was the best model using all variables, and the second was the best model using only raster variables. Map algebra was implemented to extrapolate the raster-only model and produce a SOC map for the BCW. This study concluded that Hyperion data marginally improved the predictability of the existing SOC statistical models based on multispectral satellite remote sensing sensors, with a correlation coefficient of 0.37 and a root mean square error of 3.19 metric tons/hectare to a 15-cm depth. The total SOC pool of the study area is about 225,232 metric tons to 15-cm depth. The nonforested wetlands contained the highest SOC density (34.3 metric tons/hectare/15cm) with total SOC content of about 2,003.5 metric tons to 15-cm depth, whereas croplands had the lowest SOC density (21.6 metric tons/hectare/15cm) with total SOC content of about 44,571.2 metric tons to 15-cm depth.
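
    The pool estimates quoted above are simple density-times-area aggregations. As a back-of-envelope check, the areas below are back-calculated from the reported densities and class totals; they are illustrative only and do not appear in the abstract.

```python
# Back-of-envelope check of the reported SOC pools: total SOC for a land-cover
# class is its density (metric tons/ha to 15 cm) times its area. Areas are
# back-calculated from the quoted densities and totals for illustration.
land_cover = {
    # class: (density in t/ha to 15 cm, area in ha)
    "nonforested wetlands": (34.3, 2003.5 / 34.3),
    "croplands": (21.6, 44571.2 / 21.6),
}

for name, (density, area_ha) in land_cover.items():
    print(f"{name}: {density} t/ha x {area_ha:,.0f} ha = {density * area_ha:,.1f} t")
```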

  18. Surveillance metrics sensitivity study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamada, Michael S.; Bierbaum, Rene Lynn; Robertson, Alix A.

    2011-09-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose of the metrics was to develop a more quantitative and/or qualitative metric(s) describing the results of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intending to answer level-of-confidence type questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but instead the adequacy of surveillance. This report gives a short description of four metrics types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.
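
    The two ingredients named in the abstract, tolerance limit calculations and power calculations, can be illustrated with textbook binomial formulas (these are standard results, not the Tri-Lab team's actual metrics): the upper confidence bound on a defect fraction when n samples show no defects, and the probability of seeing at least one defect in n samples when the true defect rate is p. The sample size below is hypothetical.

```python
# Sketch using textbook binomial formulas rather than the actual surveillance
# metrics: (1) one-sided upper confidence bound on a defect fraction after n
# samples with zero defects, and (2) power to see at least one defect in n
# samples when the true defect rate is p.
def upper_bound_zero_defects(n, confidence=0.90):
    # Solve (1 - p)^n = 1 - confidence for p.
    return 1 - (1 - confidence) ** (1 / n)

def power_at_least_one_defect(n, p):
    # Probability that n samples contain at least one defect.
    return 1 - (1 - p) ** n

n = 22   # hypothetical annual sample size
print(f"90% upper bound on defect fraction: {upper_bound_zero_defects(n):.3f}")
print(f"power to detect a 10% defect rate: {power_at_least_one_defect(n, 0.10):.2f}")
```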

  19. Surveillance Metrics Sensitivity Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bierbaum, R; Hamada, M; Robertson, A

    2011-11-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose of the metrics was to develop a more quantitative and/or qualitative metric(s) describing the results of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intending to answer level-of-confidence type questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but instead the adequacy of surveillance. This report gives a short description of four metrics types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.

  20. A management-oriented framework for selecting metrics used to assess habitat- and path-specific quality in spatially structured populations

    USGS Publications Warehouse

    Nicol, Sam; Wiederholt, Ruscena; Diffendorfer, James E.; Mattsson, Brady; Thogmartin, Wayne E.; Semmens, Darius J.; Laura Lopez-Hoffman,; Norris, Ryan

    2016-01-01

    Mobile species with complex spatial dynamics can be difficult to manage because their population distributions vary across space and time, and because the consequences of managing particular habitats are uncertain when evaluated at the level of the entire population. Metrics to assess the importance of habitats and of the pathways connecting habitats in a network are necessary to guide a variety of management decisions. Given the many metrics developed for spatially structured models, it can be challenging to select the most appropriate one for a particular decision. To guide the management of spatially structured populations, we define three classes of metrics describing habitat and pathway quality based on their data requirements (graph-based, occupancy-based, and demographic-based metrics) and synopsize the ecological literature relating to these classes. Applying the first steps of a formal decision-making approach (problem framing, objectives, and management actions), we assess the utility of metrics for particular types of management decisions. Our framework can help managers frame problems, choose metrics of habitat and pathway quality, and elucidate the data needs for a particular metric. Our goal is to help managers narrow the range of suitable metrics for a management project and aid decision-making to make the best use of limited resources.
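
    Of the three classes, graph-based metrics are the least data-hungry, requiring only the network of habitats and pathways. As a sketch, the code below ranks patches of a hypothetical migratory network by betweenness centrality, one common graph-based measure of a node's importance to connectivity; the patch names and edges are invented.

```python
# Sketch of a graph-based habitat metric: rank habitat patches by betweenness
# centrality in a hypothetical migratory network (nodes = habitats,
# edges = pathways). Not drawn from the paper's examples.
import networkx as nx

network = nx.Graph()
network.add_edges_from([
    ("breeding_A", "stopover_1"), ("breeding_B", "stopover_1"),
    ("stopover_1", "stopover_2"), ("stopover_2", "wintering_A"),
    ("stopover_2", "wintering_B"),
])

centrality = nx.betweenness_centrality(network)
for patch, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{patch}: {score:.2f}")
```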
