Sample records for technology development metric

  1. Advanced Life Support Research and Technology Development Metric

    NASA Technical Reports Server (NTRS)

    Hanford, A. J.

    2004-01-01

    The Metric is one of several measures employed by NASA to assess the Agency's progress as mandated by the United States Congress and the Office of Management and Budget. Because any measure must have a reference point, whether explicitly defined or implied, the Metric is a comparison between a selected ALS Project life support system and an equivalently detailed life support system using technology from the Environmental Control and Life Support System (ECLSS) for the International Space Station (ISS). This document provides the official calculation of the Advanced Life Support (ALS) Research and Technology Development Metric (the Metric) for Fiscal Year 2004. The values are primarily based on Systems Integration, Modeling, and Analysis (SIMA) Element approved software tools or reviewed and approved reference documents. For Fiscal Year 2004, the Advanced Life Support Research and Technology Development Metric value is 2.03 for an Orbiting Research Facility and 1.62 for an Independent Exploration Mission.

  2. Advanced Life Support Research and Technology Development Metric: Fiscal Year 2003

    NASA Technical Reports Server (NTRS)

    Hanford, A. J.

    2004-01-01

    This document provides the official calculation of the Advanced Life Support (ALS) Research and Technology Development Metric (the Metric) for Fiscal Year 2003. As such, the values herein are primarily based on Systems Integration, Modeling, and Analysis (SIMA) Element approved software tools or reviewed and approved reference documents. The Metric is one of several measures employed by the National Aeronautics and Space Administration (NASA) to assess the Agency's progress as mandated by the United States Congress and the Office of Management and Budget. Because any measure must have a reference point, whether explicitly defined or implied, the Metric is a comparison between a selected ALS Project life support system and an equivalently detailed life support system using technology from the Environmental Control and Life Support System (ECLSS) for the International Space Station (ISS). More specifically, the Metric is the ratio defined by the equivalent system mass (ESM) of a life support system for a specific mission using the ISS ECLSS technologies divided by the ESM for an equivalent life support system using the best ALS technologies. As defined, the Metric should increase in value as the ALS technologies become lighter, less power-intensive, and less volume-intensive. For Fiscal Year 2003, the Advanced Life Support Research and Technology Development Metric value is 1.47 for an Orbiting Research Facility and 1.36 for an Independent Exploration Mission.
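
    A minimal sketch in Python of the ratio as defined above: the ESM of an ISS-ECLSS-based system divided by the ESM of an equivalent system using the best ALS technologies. The equivalency factors and system values below are hypothetical placeholders, not figures from the report:

      # Sketch of the ALS Metric as an ESM ratio. All numbers are
      # illustrative placeholders, not values from the FY2003 report.

      def equivalent_system_mass(mass_kg, volume_m3, power_kw, cooling_kw,
                                 volume_eq=66.7, power_eq=237.0, cooling_eq=60.0):
          """Collapse mass, volume, power, and cooling into one mass-like
          figure using mission-specific equivalency factors (kg per m^3,
          kg per kW). The factor values here are assumptions."""
          return (mass_kg
                  + volume_m3 * volume_eq
                  + power_kw * power_eq
                  + cooling_kw * cooling_eq)

      esm_iss_eclss = equivalent_system_mass(mass_kg=24000, volume_m3=80, power_kw=12, cooling_kw=12)
      esm_best_als  = equivalent_system_mass(mass_kg=15000, volume_m3=55, power_kw=8,  cooling_kw=8)

      als_metric = esm_iss_eclss / esm_best_als  # values above 1 indicate ALS improvement
      print(f"ALS Metric = {als_metric:.2f}")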

  3. Technology transfer metrics: Measurement and verification of data/reusable launch vehicle business analysis

    NASA Technical Reports Server (NTRS)

    Trivoli, George W.

    1996-01-01

    Congress and the Executive Branch have mandated that all branches of the Federal Government exert a concentrated effort to transfer appropriate government and government contractor-developed technology for industrial use in the U.S. economy. For many years, NASA has had a formal technology transfer program to transmit information about new technologies developed for space applications into the industrial or commercial sector. Marshall Space Flight Center (MSFC) has been in the forefront of the development of U.S. industrial assistance programs using technologies developed at the Center. During 1992-93, MSFC initiated a technology transfer metrics study. The MSFC study was the first of its kind among the various NASA centers. The metrics study is a continuing process, with periodic updates that reflect on-going technology transfer activities.

  4. Methodology to Calculate the ACE and HPQ Metrics Used in the Wave Energy Prize

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Driscoll, Frederick R; Weber, Jochem W; Jenne, Dale S

    The U.S. Department of Energy's Wave Energy Prize Competition encouraged the development of innovative deep-water wave energy conversion technologies that at least doubled device performance above the 2014 state of the art. Because levelized cost of energy (LCOE) metrics are challenging to apply equitably to new technologies where significant uncertainty exists in design and operation, the prize technical team developed a reduced metric as a proxy for LCOE, which provides an equitable comparison of low technology readiness level wave energy converter (WEC) concepts. The metric is called 'ACE', which is short for the ratio of the average climate capture width to the characteristic capital expenditure. The methodology and application of the ACE metric used to evaluate the performance of the technologies that competed in the Wave Energy Prize are explained in this report.
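
    A minimal sketch of the ACE ratio as named above: average climate capture width divided by a characteristic capital expenditure. The units, scaling, and numbers below are hypothetical placeholders, not the prize's official formulation:

      # ACE as the ratio named in the abstract; inputs are placeholders.
      average_climate_capture_width_m = 4.2       # wave-climate-averaged capture width (m)
      characteristic_capital_expenditure = 1.8e6  # structural-cost surrogate (USD, assumed)

      ace = average_climate_capture_width_m / (characteristic_capital_expenditure / 1e6)
      print(f"ACE = {ace:.2f} m per $M of characteristic CapEx")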

  5. A consistent conceptual framework for applying climate metrics in technology life cycle assessment

    NASA Astrophysics Data System (ADS)

    Mallapragada, Dharik; Mignone, Bryan K.

    2017-07-01

    Comparing the potential climate impacts of different technologies is challenging for several reasons, including the fact that any given technology may be associated with emissions of multiple greenhouse gases when evaluated on a life cycle basis. In general, analysts must decide how to aggregate the climatic effects of different technologies, taking into account differences in the properties of the gases (differences in atmospheric lifetimes and instantaneous radiative efficiencies) as well as different technology characteristics (differences in emission factors and technology lifetimes). Available metrics proposed in the literature have incorporated these features in different ways and have arrived at different conclusions. In this paper, we develop a general framework for classifying metrics based on whether they measure: (a) cumulative or end point impacts, (b) impacts over a fixed time horizon or up to a fixed end year, and (c) impacts from a single emissions pulse or from a stream of pulses over multiple years. We then use the comparison between compressed natural gas and gasoline-fueled vehicles to illustrate how the choice of metric can affect conclusions about technologies. Finally, we consider tradeoffs involved in selecting a metric, show how the choice of metric depends on the framework that is assumed for climate change mitigation, and suggest which subset of metrics is likely to be most analytically self-consistent.
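
    The pulse-versus-stream and fixed-horizon distinctions in this framework can be illustrated with a one-box gas whose atmospheric abundance decays exponentially. The sketch below computes a cumulative (AGWP-like) impact for a single pulse and for a stream of equal annual pulses; the radiative efficiency and lifetime are placeholders, not values for any specific gas:

      # Illustrative sketch, not the paper's formulation: cumulative impact
      # of a pulse is the integral of radiative efficiency times decaying
      # abundance; a stream sums time-shifted pulses within the horizon.
      import math

      def cumulative_pulse_impact(re, tau, horizon):
          """Integral of re * exp(-t/tau) from 0 to horizon (AGWP-like)."""
          return re * tau * (1.0 - math.exp(-horizon / tau))

      def cumulative_stream_impact(re, tau, horizon, years_of_emission):
          """Equal annual pulses: a pulse emitted in year k is integrated
          only over the remaining (horizon - k) years."""
          return sum(cumulative_pulse_impact(re, tau, horizon - k)
                     for k in range(years_of_emission) if k < horizon)

      re_gas, tau_gas = 1.3e-13, 12.0   # assumed radiative efficiency and lifetime
      H = 100                           # fixed time horizon, years
      print(cumulative_pulse_impact(re_gas, tau_gas, H))
      print(cumulative_stream_impact(re_gas, tau_gas, H, years_of_emission=30))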

  6. Environmentally Responsible Aviation (ERA) Project - N+2 Advanced Vehicle Concepts Study and Conceptual Design of Subscale Test Vehicle (STV) Final Report

    NASA Technical Reports Server (NTRS)

    Bonet, John T.; Schellenger, Harvey G.; Rawdon, Blaine K.; Elmer, Kevin R.; Wakayama, Sean R.; Brown, Derrell L.; Guo, Yueping

    2011-01-01

    NASA has set demanding goals for technology developments to meet national needs to improve fuel efficiency while also improving the environment to enable air transportation growth. A figure shows NASA's subsonic transport system metrics. The results of the Boeing ERA N+2 Advanced Vehicle Concept Study show that the Blended Wing Body (BWB) vehicle with ultra-high-bypass propulsion systems has the potential to meet the combined NASA ERA N+2 goals. This study had three main activities: 1) the development of advanced vehicle concepts that can meet the NASA system-level metrics; 2) the identification of key enabling technologies and the development of technology roadmaps and maturation plans; and 3) the development of a subscale test vehicle that can demonstrate and mature the key enabling technologies needed to meet the NASA system-level metrics. Technology maturation plans are presented and include key performance parameters and technical performance measures. The plans describe the risks that will be reduced with technology development and the expected progression of technical maturity.

  7. Image quality metrics for volumetric laser displays

    NASA Astrophysics Data System (ADS)

    Williams, Rodney D.; Donohoo, Daniel

    1991-08-01

    This paper addresses the extensions to the image quality metrics and related human factors research that are needed to establish the baseline standards for emerging volume display technologies. The existing and recently developed technologies for multiplanar volume displays are reviewed with an emphasis on basic human visual issues. Human factors image quality metrics and guidelines are needed to firmly establish this technology in the marketplace. The human visual requirements and the display design tradeoffs for these prototype laser-based volume displays are addressed, and several critical image quality issues are identified for further research. The American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS-100) and other international standards (ISO, DIN) can serve as a starting point, but this research base must be extended to provide new image quality metrics for this new technology for volume displays.

  8. Measuring Information Security: Guidelines to Build Metrics

    NASA Astrophysics Data System (ADS)

    von Faber, Eberhard

    Measuring information security is a genuine interest of security managers. With metrics they can develop their security organization's visibility and standing within the enterprise or public authority as a whole. Organizations using information technology need to use security metrics. Despite the clear demands and advantages, security metrics are often poorly developed, or ineffective parameters are collected and analysed. This paper describes best practices for the development of security metrics. First, attention is drawn to motivation, showing both requirements and benefits. The main body of this paper lists things which need to be observed (characteristics of metrics), things which can be measured (how measurements can be conducted), and steps for the development and implementation of metrics (procedures and planning). Analysis and communication are also key when using security metrics. Examples are given in order to develop a better understanding. The author aims to resume, continue, and develop the discussion of a topic which is, or increasingly will be, a critical success factor for security managers in larger organizations.

  9. Conceptual Soundness, Metric Development, Benchmarking, and Targeting for PATH Subprogram Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosey, G.; Doris, E.; Coggeshall, C.

    The objective of this study is to evaluate the conceptual soundness of the U.S. Department of Housing and Urban Development (HUD) Partnership for Advancing Technology in Housing (PATH) program's revised goals and establish and apply a framework to identify and recommend metrics that are the most useful for measuring PATH's progress. This report provides an evaluative review of PATH's revised goals, outlines a structured method for identifying and selecting metrics, proposes metrics and benchmarks for a sampling of individual PATH programs, and discusses other metrics that potentially could be developed that may add value to the evaluation process. The framework and individual program metrics can be used for ongoing management improvement efforts and to inform broader program-level metrics for government reporting requirements.

  10. VHA mental health information system: applying health information technology to monitor and facilitate implementation of VHA Uniform Mental Health Services Handbook requirements.

    PubMed

    Trafton, Jodie A; Greenberg, Greg; Harris, Alex H S; Tavakoli, Sara; Kearney, Lisa; McCarthy, John; Blow, Fredric; Hoff, Rani; Schohn, Mary

    2013-03-01

    To describe the design and deployment of health information technology to support implementation of mental health services policy requirements in the Veterans Health Administration (VHA). Using administrative and self-report survey data, we developed and fielded metrics regarding implementation of the requirements delineated in the VHA Uniform Mental Health Services Handbook. Finalized metrics were incorporated into 2 external facilitation-based quality improvement programs led by VHA Mental Health Operations. To support these programs, tailored site-specific reports were generated. Metric development required close collaboration between program evaluators, policy makers, and clinical leadership, and consideration of policy language and intent. Electronic reports supporting different purposes required distinct formatting and presentation features, despite their having similar general goals and using the same metrics. Health information technology can facilitate mental health policy implementation but must be integrated into a process of consensus building and close collaboration with policy makers, evaluators, and practitioners.

  11. Overview of Mars Technology Program

    NASA Technical Reports Server (NTRS)

    Hayati, Samad A.

    2006-01-01

    This viewgraph presentation reviews the development of a technology program leading to Mars missions. The presentation includes: the goals of the technology program, elements of the technology program, program metrics, major accomplishments, examples, and information about the Mars Technology Program.

  12. Development of Management Metrics for Research and Technology

    NASA Technical Reports Server (NTRS)

    Sheskin, Theodore J.

    2003-01-01

    Professor Ted Sheskin from CSU will be tasked to research and investigate metrics that can be used to determine the technical progress of advanced development and research tasks. These metrics will be implemented in a software environment that hosts engineering design, analysis, and management tools to be used to support power system and component research work at GRC. Professor Sheskin is an industrial engineer who has been involved in issues related to the management of engineering tasks, and he will use his knowledge of this area to extrapolate into the management of research and technology. Over the course of the summer, Professor Sheskin will develop a bibliography of management papers covering current management methods that may be applicable to research management. At the completion of the summer work, we expect him to recommend a metric system to be reviewed prior to implementation in the software environment. This task has been discussed with Professor Sheskin, and some review material has already been given to him.

  13. Test and Evaluation Metrics of Crew Decision-Making And Aircraft Attitude and Energy State Awareness

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Ellis, Kyle K. E.; Stephens, Chad L.

    2013-01-01

    NASA has established a technical challenge, under the Aviation Safety Program, Vehicle Systems Safety Technologies project, to improve crew decision-making and response in complex situations. The specific objective of this challenge is to develop data and technologies which may increase a pilot's (crew's) ability to avoid, detect, and recover from adverse events that could otherwise result in accidents/incidents. Within this technical challenge, a cooperative industry-government research program has been established to develop innovative flight deck-based countermeasures that can improve the crew's ability to avoid, detect, mitigate, and recover from unsafe loss of aircraft state awareness - specifically, the loss of attitude awareness (i.e., spatial disorientation, SD) or the loss of energy-state awareness (LESA). A critical component of this research is to develop specific and quantifiable metrics which identify decision-making and the decision-making influences during simulation and flight testing. This paper reviews existing metrics and methods for SD testing and criteria for establishing visual dominance. The development of Crew State Monitoring technologies - eye tracking and other psychophysiological measures - is also discussed, as well as emerging new metrics for identifying channelized attention and excessive pilot workload, both of which have been shown to contribute to SD/LESA accidents or incidents.

  14. Non-Intrusive Load Monitoring Assessment: Literature Review and Laboratory Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butner, R. Scott; Reid, Douglas J.; Hoffman, Michael G.

    2013-07-01

    To evaluate the accuracy of NILM technologies, a literature review was conducted to identify any test protocols or standardized testing approaches currently in use. The literature review indicated that no consistent conventions were currently in place for measuring the accuracy of these technologies. Consequently, PNNL developed a testing protocol and metrics to provide the basis for quantifying and analyzing the accuracy of commercially available NILM technologies. This report discusses the results of the literature review and the proposed test protocol and metrics in more detail.

  15. Deploying Innovation

    Science.gov Websites

    [Website record; the captured text is navigation-menu residue (Partnerships & Sponsored Work, Regional Economic Development, Technology Opportunities, User Facilities, About Us, Metrics). The only recoverable content: "With an integrated portfolio of R&D work, we leverage partnerships" in support of diverse economic development.]

  16. Development and Implementation of a Design Metric for Systems Containing Long-Term Fluid Loops

    NASA Technical Reports Server (NTRS)

    Steele, John W.

    2016-01-01

    John Steele, a chemist and technical fellow from United Technologies Corporation, provided a water quality module to assist engineers and scientists with a metric tool to evaluate risks associated with the design of space systems with fluid loops. This design metric is a methodical, quantitative, lessons-learned-based means to evaluate the robustness of a long-term fluid loop system design. The tool was developed by engineers from a cross-section of disciplines with decades of experience in problem resolution.

  17. Environmental Decision Support with Consistent Metrics

    EPA Science Inventory

    One of the most effective ways to pursue environmental progress is through the use of consistent metrics within a decision making framework. The US Environmental Protection Agency’s Sustainable Technology Division has developed TRACI, the Tool for the Reduction and Assessment of...

  18. Performance Evaluation Methods for Assistive Robotic Technology

    NASA Astrophysics Data System (ADS)

    Tsui, Katherine M.; Feil-Seifer, David J.; Matarić, Maja J.; Yanco, Holly A.

    Robots have been developed for several assistive technology domains, including intervention for Autism Spectrum Disorders, eldercare, and post-stroke rehabilitation. Assistive robots have also been used to promote independent living through the use of devices such as intelligent wheelchairs, assistive robotic arms, and external limb prostheses. Work in the broad field of assistive robotic technology can be divided into two major research phases: technology development, in which new devices, software, and interfaces are created; and clinical, in which assistive technology is applied to a given end-user population. Moving from technology development towards clinical applications is a significant challenge. Developing performance metrics for assistive robots poses a related set of challenges. In this paper, we survey several areas of assistive robotic technology in order to derive and demonstrate domain-specific means for evaluating the performance of such systems. We also present two case studies of applied performance measures and a discussion regarding the ubiquity of functional performance measures across the sampled domains. Finally, we present guidelines for incorporating human performance metrics into end-user evaluations of assistive robotic technologies.

  19. Performance Metrics Development Analysis for Information and Communications Technology Outsourcing: A Case Study

    ERIC Educational Resources Information Center

    Travis, James L., III

    2014-01-01

    This study investigated how and to what extent the development and use of the OV-5a operational architecture decomposition tree (OADT) from the Department of Defense (DoD) Architecture Framework (DoDAF) affects requirements analysis with respect to complete performance metrics for performance-based services acquisition of ICT under rigid…

  20. Systems Engineering Techniques for ALS Decision Making

    NASA Technical Reports Server (NTRS)

    Rodriquez, Luis F.; Drysdale, Alan E.; Jones, Harry; Levri, Julie A.

    2004-01-01

    The Advanced Life Support (ALS) Metric is the predominant tool for predicting the cost of ALS systems. Metric goals for the ALS Program are daunting, requiring a threefold increase in the ALS Metric by 2010. Confounding the problem, the rate at which new ALS technologies reach the maturity required for consideration in the ALS Metric and the rate at which new configurations are developed are slow, limiting the search space; as a result, the ALS Metric goals may remain elusive. This paper is a sequel to a paper published in the proceedings of the 2003 ICES conference entitled "Managing to the metric: an approach to optimizing life support costs." The conclusions of that paper state that the largest contributors to the ALS Metric should be targeted by ALS researchers and management for maximum metric reductions. Certainly, these areas potentially offer large benefits to future ALS missions; however, the ALS Metric is not the only decision-making tool available to the community. To facilitate decision-making within the ALS community, a combination of metrics should be utilized, such as the Equivalent System Mass (ESM)-based ALS Metric, but also those available through techniques such as life cycle costing and faithful consideration of the sensitivity of the assumed models and data. Often a lack of data is cited as the reason why these techniques are not considered for utilization. An existing database development effort within the ALS community, known as OPIS, may provide the opportunity to collect the necessary information to enable the proposed systems analyses. A review of these additional analysis techniques is provided, focusing on the data necessary to enable them. The discussion is concluded by proposing how the data may be utilized by analysts in the future.

  1. Load Disaggregation Technologies: Real World and Laboratory Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayhorn, Ebony T.; Sullivan, Greg P.; Petersen, Joseph M.

    Low cost interval metering and communication technology improvements over the past ten years have enabled the maturity of load disaggregation (or non-intrusive load monitoring) technologies to better estimate and report energy consumption of individual end-use loads. With the appropriate performance characteristics, these technologies have the potential to enable many utility and customer facing applications such as billing transparency, itemized demand and energy consumption, appliance diagnostics, commissioning, energy efficiency savings verification, load shape research, and demand response measurement. However, there has been much skepticism concerning the ability of load disaggregation products to accurately identify and estimate energy consumption of end-uses, which has hindered wide-spread market adoption. A contributing factor is that common test methods and metrics are not available to evaluate performance without having to perform large scale field demonstrations and pilots, which can be costly when developing such products. Without common and cost-effective methods of evaluation, more developed disaggregation technologies will continue to be slow to market and potential users will remain uncertain about their capabilities. This paper reviews recent field studies and laboratory tests of disaggregation technologies. Several factors are identified that are important to consider in test protocols, so that the results reflect real world performance. Potential metrics are examined to highlight their effectiveness in quantifying disaggregation performance. This analysis is then used to suggest performance metrics that are meaningful and of value to potential users and that will enable researchers/developers to identify beneficial ways to improve their technologies.
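
    The abstract does not list the candidate metrics it examines. For context, one widely used disaggregation metric from the broader NILM literature (not necessarily among those examined in this paper) is total energy estimation accuracy, sketched below with toy data:

      # Energy estimation accuracy for one appliance:
      #   acc = 1 - sum_t |yhat_t - y_t| / (2 * sum_t y_t)
      # A perfect estimate scores 1.0; this is a common NILM metric from
      # the literature, offered here only as an illustration.

      def energy_estimation_accuracy(estimated, actual):
          """estimated/actual: per-interval energy series of equal length."""
          abs_err = sum(abs(e - a) for e, a in zip(estimated, actual))
          return 1.0 - abs_err / (2.0 * sum(actual))

      actual    = [0.0, 1.2, 1.1, 0.0, 0.9]   # kWh per interval (toy data)
      estimated = [0.1, 1.0, 1.3, 0.0, 0.7]
      print(f"accuracy = {energy_estimation_accuracy(estimated, actual):.3f}")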

  2. Key Metrics and Goals for NASA's Advanced Air Transportation Technologies Program

    NASA Technical Reports Server (NTRS)

    Kaplan, Bruce; Lee, David

    1998-01-01

    NASA's Advanced Air Transportation Technologies (AATT) program is developing a set of decision support tools to aid air traffic service providers, pilots, and airline operations centers in improving operations of the National Airspace System (NAS). NASA needs a set of unifying metrics to tie these efforts together, which it can use to track the progress of the AATT program and communicate program objectives and status within NASA and to stakeholders in the NAS. This report documents the results of our efforts and the four unifying metrics we recommend for the AATT program. They are: airport peak capacity, en route sector capacity, block time and fuel, and free flight-enabling.

  3. Advanced Life Support System Value Metric

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Arnold, James O. (Technical Monitor)

    1999-01-01

    The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have reached a consensus. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is then set accordingly. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, [SVM + TRL]/ESM, with appropriate weighting and scaling. The total value is the sum of SVM and TRL. Cost is represented by ESM. The paper provides a detailed description and example application of the suggested System Value Metric.

  4. Advanced Life Support System Value Metric

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Rasky, Daniel J. (Technical Monitor)

    1999-01-01

    The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have led to the following approach. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are considered to be exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is defined after many trade-offs. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, SVM/[ESM + function (TRL)], with appropriate weighting and scaling. The total value is given by SVM. Cost is represented by higher ESM and lower TRL. The paper provides a detailed description and example application of a suggested System Value Metric and an overall ALS system metric.
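
    The two abstracts above suggest slightly different overall metrics: [SVM + TRL]/ESM and SVM/[ESM + function(TRL)]. A sketch contrasting the two forms is below; the weighting, scaling, and penalty function f(TRL) are assumptions, since neither abstract specifies them:

      # Two candidate overall ALS metrics, per the abstracts above.
      # Scalings and the TRL penalty are placeholder choices.

      def overall_metric_additive(svm, trl, esm):
          # First formulation: total value = SVM + TRL, cost = ESM.
          return (svm + trl) / esm

      def overall_metric_penalized(svm, trl, esm, trl_penalty=100.0, trl_max=9):
          # Second formulation: value = SVM; lower TRL inflates the cost term
          # via an assumed linear f(TRL) = trl_penalty * (trl_max - trl).
          return svm / (esm + trl_penalty * (trl_max - trl))

      # Hypothetical candidate system: SVM on a 0-10 scale, TRL 1-9, ESM in kg.
      print(overall_metric_additive(svm=7.5, trl=6, esm=12000.0))
      print(overall_metric_penalized(svm=7.5, trl=6, esm=12000.0))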

  5. Quantitative Analysis of Technological Innovation in Knee Arthroplasty: Using Patent and Publication Metrics to Identify Developments and Trends.

    PubMed

    Dalton, David M; Burke, Thomas P; Kelly, Enda G; Curtin, Paul D

    2016-06-01

    Surgery is in a constant continuum of innovation with refinement of technique and instrumentation. Arthroplasty surgery potentially represents an area of highly innovative processes. This study highlights key areas of innovation in knee arthroplasty over the past 35 years using patent and publication metrics. Growth rates and patterns are analyzed. Patents are correlated to publications as a measure of scientific support. Electronic patent and publication databases were searched over the interval 1980-2014 for "knee arthroplasty" OR "knee replacement." The resulting patent codes were allocated into technology clusters. Citation analysis was performed to identify any important developments missed on initial analysis. The technology clusters identified were further analyzed, individual repeat searches performed, and growth curves plotted. The initial search revealed 3574 patents and 16,552 publications. The largest technology clusters identified were Unicompartmental, Patient-Specific Instrumentation (PSI), Navigation, and Robotic knee arthroplasties. The growth in patent activity correlated strongly with publication activity (Pearson correlation value 0.892, P < .01), but was growing at a faster rate, suggesting a decline in vigilance. PSI, objectively the fastest growing technology in the last 5 years, is currently in a period of exponential growth that began a decade ago. Established technologies in the study have double s-shaped patent curves. Identifying trends in emerging technologies is possible using patent metrics and is useful information for training and regulatory bodies. The decline in the ratio of publications to patents and the uninterrupted growth of PSI are developments that may warrant further investigation.

  6. Development of Metric for Measuring the Impact of RD&D Funding on GTO's Geothermal Exploration Goals (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenne, S.; Young, K. R.; Thorsteinsson, H.

    The Department of Energy's Geothermal Technologies Office (GTO) provides RD&D funding for geothermal exploration technologies with the goal of lowering the risks and costs of geothermal development and exploration. In 2012, NREL was tasked with developing a metric to measure the impacts of this RD&D funding on the cost and time required for exploration activities. The development of this metric included collecting cost and time data for exploration techniques, creating a baseline suite of exploration techniques to which future exploration and cost and time improvements could be compared, and developing an online tool for graphically showing potential project impacts (all available at http://en.openei.org/wiki/Gateway:Geothermal). The conference paper describes the methodology used to define the baseline exploration suite of techniques (baseline), as well as the approach that was used to create the cost and time data set that populates the baseline. The resulting product, an online tool for measuring impact, and the aggregated cost and time data are available on the Open EI website for public access (http://en.openei.org).

  7. Assessment of Navigation Using a Hybrid Cognitive/Metric World Model

    DTIC Science & Technology

    2015-01-01

    One goal of the US Army Research Laboratory's Robotic Collaborative Technology Alliance is to develop a cognitive architecture that would allow a robot to operate on both the semantic and metric levels. As such, both symbolic and metric information would be interpreted within... [The remainder of the captured text is report-documentation and appendix-table residue, e.g., an assessment of vignette 1, path 6b, noting that the robot failed to avoid the stairs of the church.]

  8. Propulsion Technology Lifecycle Operational Analysis

    NASA Technical Reports Server (NTRS)

    Robinson, John W.; Rhodes, Russell E.

    2010-01-01

    The paper presents the results of a focused effort performed by the members of the Space Propulsion Synergy Team (SPST) Functional Requirements Sub-team to develop propulsion data to support the Advanced Technology Lifecycle Analysis System (ATLAS), a spreadsheet application for analyzing the impact of technology decisions at a system-of-systems level. Results are summarized in an Excel workbook called the Technology Tool Box (TTB). The TTB provides data for technology performance, operations, and programmatic parameters in the form of a library of technical information to support analysis tools and/or models. The lifecycle of technologies can be analyzed from this data, which is particularly useful for system operations involving long-running missions. The propulsion technologies in this paper are listed against chemical rocket engines in a Work Breakdown Structure (WBS) format. The overall effort involved establishing four elements: (1) a general-purpose Functional System Breakdown Structure (FSBS); (2) operational requirements for rocket engines; (3) technology metric values associated with operating systems; and (4) a Work Breakdown Structure (WBS) of chemical rocket engines. The list of chemical rocket engines identified in the WBS is by no means complete. It is planned to update the TTB with a more complete list of available United States (US) chemical rocket engines and to add the foreign rocket engines available to NASA and the aerospace industry to the WBS. The operational technology metric values were derived by the SPST Sub-team in the form of the TTB, establishing a database to help users evaluate the technology level of each chemical rocket engine. The technology metric values will serve as a guide to help determine which rocket engine to invest technology money in for future development.

  9. Quantitative metrics for assessment of chemical image quality and spatial resolution

    DOE PAGES

    Kertesz, Vilmos; Cahill, John F.; Van Berkel, Gary J.

    2016-02-28

    Rationale: Currently, objective/quantitative descriptions of the quality and spatial resolution of mass spectrometry derived chemical images are not standardized. Development of these standardized metrics is required to objectively describe chemical imaging capabilities of existing and/or new mass spectrometry imaging technologies. Such metrics would allow unbiased judgment of intra-laboratory advancement and/or inter-laboratory comparison for these technologies if used together with standardized surfaces. Methods: We developed two image metrics, viz., chemical image contrast (ChemIC), based on signal-to-noise related statistical measures on chemical image pixels, and corrected resolving power factor (cRPF), constructed from statistical analysis of mass-to-charge chronograms across features of interest in an image. These metrics, quantifying chemical image quality and spatial resolution, respectively, were used to evaluate chemical images of a model photoresist patterned surface collected using a laser ablation/liquid vortex capture mass spectrometry imaging system under different instrument operational parameters. Results: The calculated ChemIC and cRPF metrics determined in an unbiased fashion the relative ranking of chemical image quality obtained with the laser ablation/liquid vortex capture mass spectrometry imaging system. These rankings were used to show that both chemical image contrast and spatial resolution deteriorated with increasing surface scan speed, increased lane spacing, and decreasing size of surface features. Conclusions: ChemIC and cRPF, respectively, were developed and successfully applied for the objective description of chemical image quality and spatial resolution of chemical images collected from model surfaces using a laser ablation/liquid vortex capture mass spectrometry imaging system.
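
    The abstract describes ChemIC as built from signal-to-noise related statistics on chemical image pixels but gives no formula. A generic contrast-to-noise computation of that general flavor is sketched below; this is an illustration only, not the paper's actual ChemIC definition:

      # Generic contrast-to-noise ratio between feature and background
      # pixel populations; offered as an illustration of SNR-based
      # image contrast, not as the published ChemIC metric.
      import numpy as np

      def contrast_to_noise(image, feature_mask, background_mask):
          feature = image[feature_mask]
          background = image[background_mask]
          noise = np.sqrt(0.5 * (feature.var() + background.var()))
          return abs(feature.mean() - background.mean()) / noise

      rng = np.random.default_rng(0)
      img = rng.normal(100.0, 5.0, (64, 64))   # synthetic background
      img[16:48, 16:48] += 30.0                # synthetic "feature" region
      mask = np.zeros_like(img, dtype=bool)
      mask[16:48, 16:48] = True
      print(contrast_to_noise(img, mask, ~mask))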

  10. Economic Metrics for Commercial Reusable Space Transportation Systems

    NASA Technical Reports Server (NTRS)

    Shaw, Eric J.; Hamaker, Joseph (Technical Monitor)

    2000-01-01

    The success of any effort depends upon the effective initial definition of its purpose, in terms of the needs to be satisfied and the goals to be fulfilled. If the desired product is "A System" that is well-characterized, these high-level need and goal statements can be transformed into system requirements by traditional systems engineering techniques. The satisfaction of well-designed requirements can be tracked by fairly straightforward cost, schedule, and technical performance metrics. Unfortunately, some types of efforts, including those that NASA terms "Programs," tend to resist application of traditional systems engineering practices. In the NASA hierarchy of efforts, a "Program" is often an ongoing effort with broad, high-level goals and objectives. A NASA "project" is a finite effort, in terms of budget and schedule, that usually produces or involves one System. Programs usually contain more than one project and thus more than one System. Special care must be taken in the formulation of NASA Programs and their projects, to ensure that lower-level project requirements are traceable to top-level Program goals, feasible with the given cost and schedule constraints, and measurable against top-level goals. NASA Programs and projects are tasked to identify the advancement of technology as an explicit goal, which introduces more complicating factors. The justification for funding of technology development may be based on the technology's applicability to more than one System, Systems outside that Program or even external to NASA. Application of systems engineering to broad-based technology development, leading to effective measurement of the benefits, can be valid, but it requires that potential beneficiary Systems be organized into a hierarchical structure, creating a "system of Systems." In addition, these Systems evolve with the successful application of the technology, which creates the necessity for evolution of the benefit metrics to reflect the changing baseline. Still, economic metrics for technology development in these Programs and projects remain fairly straightforward, being based on reductions in acquisition and operating costs of the Systems. One of the most challenging requirements that NASA levies on its Programs is to plan for the commercialization of the developed technology. Some NASA Programs are created for the express purpose of developing technology for a particular industrial sector, such as aviation or space transportation, in financial partnership with that sector. With industrial investment, another set of goals, constraints and expectations are levied on the technology program. Economic benefit metrics then expand beyond cost and cost savings to include the marketability, profit, and investment return requirements of the private sector. Commercial investment criteria include low risk, potential for high return, and strategic alignment with existing product lines. These corporate criteria derive from top-level strategic plans and investment goals, which rank high among the most proprietary types of information in any business. As a result, top-level economic goals and objectives that industry partners bring to cooperative programs cannot usually be brought into technical processes, such as systems engineering, that are worked collaboratively between Industry and Government. 
In spite of these handicaps, the top-level economic goals and objectives of a joint technology program can be crafted in such a way that they accurately reflect the fiscal benefits from both Industry and Government perspectives. Valid economic metrics can then be designed that can track progress toward these goals and objectives, while maintaining the confidentiality necessary for the competitive process.

  11. Building TPACK in Preservice Teachers through Explicit Course Design

    ERIC Educational Resources Information Center

    Harvey, Douglas M.; Caro, Ronald

    2017-01-01

    The authors of this study utilized the TPACK (Technological, Pedagogical, and Content Knowledge) framework in developing and assessing TPACK skills within an advanced technology integration course for preservice teachers. The research contributes to the use of TPACK as a metric for measuring technology integration of preservice teachers. Two…

  12. Improving early cycle economic evaluation of diagnostic technologies.

    PubMed

    Steuten, Lotte M G; Ramsey, Scott D

    2014-08-01

    The rapidly increasing range and expense of new diagnostics compels consideration of a different, more proactive approach to health economic evaluation of diagnostic technologies. Early cycle economic evaluation is a decision analytic approach to evaluate technologies in development so as to increase the return on investment as well as patient and societal impact. This paper describes examples of 'early cycle economic evaluations' as applied to diagnostic technologies and highlights challenges in its real-time application. It shows that, especially in the field of diagnostics, with rapid technological developments and a changing regulatory climate, early cycle economic evaluation can have a guiding role to improve the efficiency of the diagnostics innovation process. In the next five years, attention will move beyond the methodological and analytic challenges of early cycle economic evaluation toward the challenge of effectively applying it to improve diagnostic research and development and patient value. Future work in this area should therefore be 'strong on principles and soft on metrics', that is, favor the metrics that resonate most clearly with the various decision makers in this field.

  13. Multidisciplinary life cycle metrics and tools for green buildings.

    PubMed

    Helgeson, Jennifer F; Lippiatt, Barbara C

    2009-07-01

    Building sector stakeholders need compelling metrics, tools, data, and case studies to support major investments in sustainable technologies. Proponents of green building widely claim that buildings integrating sustainable technologies are cost effective, but often these claims are based on incomplete, anecdotal evidence that is difficult to reproduce and defend. The claims suffer from 2 main weaknesses: 1) buildings on which claims are based are not necessarily "green" in a science-based, life cycle assessment (LCA) sense, and 2) measures of cost effectiveness often are not based on standard methods for measuring economic worth. Yet, the building industry demands compelling metrics to justify sustainable building designs. The problem is hard to solve because, until now, neither methods nor robust data supporting defensible business cases were available. The US National Institute of Standards and Technology (NIST) Building and Fire Research Laboratory is beginning to address these needs by developing metrics and tools for assessing the life cycle economic and environmental performance of buildings. Economic performance is measured with the use of standard life cycle costing methods. Environmental performance is measured by LCA methods that assess the "carbon footprint" of buildings, as well as 11 other sustainability metrics, including fossil fuel depletion, smog formation, water use, habitat alteration, indoor air quality, and effects on human health. Carbon efficiency ratios and other eco-efficiency metrics are established to yield science-based measures of the relative worth, or "business cases," for green buildings. Here, the approach is illustrated through a realistic building case study focused on different heating, ventilation, and air conditioning (HVAC) technology energy efficiencies. Additionally, the evolution of the Building for Environmental and Economic Sustainability multidisciplinary team and future plans in this area are described.
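
    As an illustration of the kind of eco-efficiency ratio described, the sketch below combines a standard present-value life-cycle cost with a life-cycle carbon total for two hypothetical HVAC alternatives; all inputs are invented for the example, and the published metrics may be defined differently:

      # Present-value life-cycle cost plus a carbon-per-dollar ratio for
      # comparing two hypothetical HVAC alternatives. Inputs are placeholders.

      def life_cycle_cost(first_cost, annual_cost, years, discount_rate):
          """First cost plus present value of a uniform annual cost stream."""
          pv_factor = (1 - (1 + discount_rate) ** -years) / discount_rate
          return first_cost + annual_cost * pv_factor

      def carbon_per_dollar(lcc, life_cycle_kg_co2e):
          return life_cycle_kg_co2e / lcc   # kg CO2e per dollar; lower is better

      base = life_cycle_cost(first_cost=80_000, annual_cost=9_000, years=20, discount_rate=0.03)
      eff  = life_cycle_cost(first_cost=95_000, annual_cost=6_500, years=20, discount_rate=0.03)
      print(carbon_per_dollar(base, 600_000), carbon_per_dollar(eff, 420_000))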

  14. THE MAQC PROJECT: ESTABLISHING QC METRICS AND THRESHOLDS FOR MICROARRAY QUALITY CONTROL

    EPA Science Inventory

    Microarrays represent a core technology in pharmacogenomics and toxicogenomics; however, before this technology can successfully and reliably be applied in clinical practice and regulatory decision-making, standards and quality measures need to be developed. The Microarray Qualit...

  15. Measures of International Manufacturing and Trade of Clean Energy Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engel-Cox, Jill; Sandor, Debbie; Keyser, David

    The technologies that produce clean energy, such as solar photovoltaic panels and lithium ion batteries for electric vehicles, are globally manufactured and traded. As demand for and deployment of these technologies grow exponentially, the innovation needed to reach significant economies of scale and drive down energy production costs lies less in the technology and more in the manufacturing of the technology. Manufacturing innovations and other manufacturing decisions can reduce costs of labor, materials, equipment, operating costs, and transportation, across all the links in the supply chain. To better understand the manufacturing aspect of the clean energy economy, we have developed key metrics for systematically measuring and benchmarking international manufacturing of clean energy technologies. The metrics are: trade, market size, manufacturing value-added, and manufacturing capacity and production. These metrics were applied to twelve global economies and four representative technologies: wind turbine components, crystalline silicon solar photovoltaic modules, vehicle lithium ion battery cells, and light emitting diode packages for efficient lighting and other consumer products. The results indicated that clean energy technologies are being developed via complex, dynamic, and global supply chains, with individual economies benefiting from different technologies and links in the supply chain, through both domestic manufacturing and global trade.

  16. Leveraging multi-channel x-ray detector technology to improve quality metrics for industrial and security applications

    NASA Astrophysics Data System (ADS)

    Jimenez, Edward S.; Thompson, Kyle R.; Stohn, Adriana; Goodner, Ryan N.

    2017-09-01

    Sandia National Laboratories has recently developed the capability to acquire multi-channel radiographs for multiple research and development applications in industry and security. This capability allows x-ray radiographs or sinogram data to be acquired at up to 300 keV with up to 128 channels per pixel. This work will investigate whether multiple quality metrics for computed tomography can actually benefit from binned projection data compared to traditionally acquired grayscale sinogram data. Features and metrics to be evaluated include the ability to distinguish between two different materials with similar absorption properties, artifact reduction, and signal-to-noise for both raw data and reconstructed volumetric data. The impact of this technology on non-destructive evaluation, national security, and industry is wide-ranging and has the potential to improve upon many inspection methods such as dual-energy methods, material identification, object segmentation, and computer vision on radiographs.

  17. Requirement Metrics for Risk Identification

    NASA Technical Reports Server (NTRS)

    Hammer, Theodore; Huffman, Lenore; Wilson, William; Rosenberg, Linda; Hyatt, Lawrence

    1996-01-01

    The Software Assurance Technology Center (SATC) is part of the Office of Mission Assurance of the Goddard Space Flight Center (GSFC). The SATC's mission is to assist National Aeronautics and Space Administration (NASA) projects to improve the quality of software which they acquire or develop. The SATC's efforts are currently focused on the development and use of metric methodologies and tools that identify and assess risks associated with software performance and scheduled delivery. This starts at the requirements phase, where the SATC, in conjunction with software projects at GSFC and other NASA centers, is working to identify tools and metric methodologies to assist project managers in identifying and mitigating risks. This paper discusses requirement metrics currently being used at NASA in a collaborative effort between the SATC and the Quality Assurance Office at GSFC to utilize the information available through the application of requirements management tools.

  18. Geothermal Life Cycle Calculator

    DOE Data Explorer

    Sullivan, John

    2014-03-11

    This calculator is a handy tool for interested parties to estimate two key life cycle metrics, fossil energy consumption (Etot) and greenhouse gas emission (ghgtot) ratios, for geothermal electric power production. It is based solely on data developed by Argonne National Laboratory for DOE's Geothermal Technologies Office. The calculator permits the user to explore the impact of a range of key geothermal power production parameters, including plant capacity, lifetime, capacity factor, geothermal technology, well numbers and depths, field exploration, and others on the two metrics just mentioned. Estimates of variations in the results are also available to the user.
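
    A sketch of the two ratios the calculator reports, under assumed definitions (fossil energy in per unit of electricity out, and GHG mass per kWh); all plant parameters and life-cycle totals below are hypothetical, not Argonne data:

      # Illustrative Etot and ghgtot ratios for a geothermal plant.
      # All inputs are placeholders, not values from the calculator.

      def lifetime_output_kwh(capacity_mw, capacity_factor, lifetime_yr):
          return capacity_mw * 1000.0 * capacity_factor * lifetime_yr * 8760.0

      plant_kwh = lifetime_output_kwh(capacity_mw=50, capacity_factor=0.90, lifetime_yr=30)

      fossil_energy_mj = 6.0e9   # assumed life-cycle fossil energy (drilling, plant, O&M)
      ghg_kg_co2e = 4.5e8        # assumed life-cycle GHG emissions

      etot = fossil_energy_mj / (plant_kwh * 3.6)   # MJ fossil in per MJ electricity out
      ghgtot = ghg_kg_co2e * 1000.0 / plant_kwh     # g CO2e per kWh
      print(f"Etot = {etot:.3f} MJ/MJ, ghgtot = {ghgtot:.1f} g CO2e/kWh")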

  19. A Multi-scale, Multi-Model, Machine-Learning Solar Forecasting Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamann, Hendrik F.

    The goal of the project was the development and demonstration of a significantly improved solar forecasting technology (short: Watt-sun), which leverages new big data processing technologies and machine-learnt blending between different models and forecast systems. The technology aimed to demonstrate major advances in accuracy as measured by existing and new metrics, which themselves were developed as part of this project. Finally, the team worked with Independent System Operators (ISOs) and utilities to integrate the forecasts into their operations.
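
    The blending method is not detailed in the abstract. A minimal sketch of one generic machine-learnt blend (least-squares stacking of several model forecasts against observations) is below, with synthetic data standing in for real forecasts; this is not the actual Watt-sun algorithm:

      # Learn blend weights for three imperfect forecast models by
      # least squares against observations; synthetic data throughout.
      import numpy as np

      rng = np.random.default_rng(1)
      truth = rng.uniform(0, 1000, 200)                       # observed irradiance, W/m^2
      models = np.column_stack([truth + rng.normal(0, s, 200) # three noisy models
                                for s in (60, 90, 120)])

      weights, *_ = np.linalg.lstsq(models, truth, rcond=None)  # learned blend weights
      blend = models @ weights

      rmse = lambda x: float(np.sqrt(np.mean((x - truth) ** 2)))
      print([rmse(models[:, i]) for i in range(3)], rmse(blend))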

  1. Development of Technology Transfer Economic Growth Metrics

    NASA Technical Reports Server (NTRS)

    Mastrangelo, Christina M.

    1998-01-01

    The primary objective of this project is to determine the feasibility of producing technology transfer metrics that answer the question: Do NASA/MSFC technical assistance activities impact economic growth? The data for this project resides in a 7800-record database maintained by Tec-Masters, Incorporated. The technology assistance data results from survey responses from companies and individuals who have interacted with NASA via a Technology Transfer Agreement, or TTA. The goal of this project was to determine if the existing data could provide indications of increased wealth. This work demonstrates that there is evidence that companies that used NASA technology transfer have a higher job growth rate than the rest of the economy. It also shows that the jobs being supported are jobs in higher wage SIC codes, and this indicates improvements in personal wealth. Finally, this work suggests that with correct data, the wealth issue may be addressed.

  2. Candidate control design metrics for an agile fighter

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Bailey, Melvin L.; Ostroff, Aaron J.

    1991-01-01

    Success in the fighter combat environment of the future will certainly demand increasing capability from aircraft technology. These advanced capabilities in the form of superagility and supermaneuverability will require special design techniques which translate advanced air combat maneuvering requirements into design criteria. Control design metrics can provide some of these techniques for the control designer. This study presents an overview of control design metrics and investigates metrics for advanced fighter agility. The objectives of various metric users, such as airframe designers and pilots, are differentiated from the objectives of the control designer. Using an advanced fighter model, metric values are documented over a portion of the flight envelope through piloted simulation. These metric values provide a baseline against which future control system improvements can be compared and against which a control design methodology can be developed. Agility is measured for axial, pitch, and roll axes. Axial metrics highlight acceleration and deceleration capabilities under different flight loads and include specific excess power measurements to characterize energy maneuverability. Pitch metrics cover both body-axis and wind-axis pitch rates and accelerations. Included in pitch metrics are nose pointing metrics which highlight displacement capability between the nose and the velocity vector. Roll metrics (or torsion metrics) focus on rotational capability about the wind axis.

  3. Development of an Expanded, High Reliability Cost and Performance Database for In Situ Remediation Technologies

    DTIC Science & Technology

    2016-03-01

    [DTIC record; the captured text is PDF extraction residue mixing table fragments with the abstract. Recoverable content: remediation performance data, including permanganate treatment of an emplaced DNAPL source (Thomson et al., 2007); a comparison of permanganate vs. peroxide/Fenton's reagent for chemical oxidation; poorer performance was generally observed when total CVOC was the contaminant metric; treatments discussed include use of a soluble carbon substrate (lactate), chemical oxidation using Fenton's reagent, and chemical oxidation using potassium permanganate.]

  4. Climate Data Analytics Workflow Management

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Lee, S.; Pan, L.; Mattmann, C. A.; Lee, T. J.

    2016-12-01

    In this project we aim to pave a novel path toward a sustainable building block for Earth science big data analytics and knowledge sharing. Closely studying how Earth scientists conduct data analytics research in their daily work, we have developed a provenance model to record their activities and a technology to automatically generate workflows for scientists from the provenance. On top of this, we have built a prototype data-centric provenance repository and established a PDSW (People, Data, Service, Workflow) knowledge network to support workflow recommendation. To ensure the scalability and performance of the expected recommendation system, we have leveraged the Apache OODT system technology. The metrics-based performance evaluation web service will allow a user to select a metric from a list of several community-approved metrics and to evaluate model performance using that metric together with a reference dataset. This service will facilitate the use of reference datasets that are generated in support of model-data intercomparison projects such as Obs4MIPs and Ana4MIPs. The data-centric repository infrastructure will allow us to capture richer provenance to further facilitate knowledge sharing and scientific collaboration in the Earth science community. This project is part of the Apache incubator CMDA project.

  5. NASA(Field Center Based) Technology Commercialization Centers

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Under the direction of the IC(sup 2) Institute, the Johnson Technology Commercialization Center has met or exceeded all planned milestones and metrics during the first two and a half years of the NTCC program. The Center has established itself as an agent for technology transfer and economic development in the Clear Lake community, and is positioned to continue as a stand-alone operation. This report presents data on the experimental JTCC program, including all objective measures tracked over its duration. While the metrics are all positive, the data indicate a shortage of NASA technologies with strong commercial potential, barriers to the identification and transfer of technologies which may have potential, and small financial return to NASA via royalty-bearing licenses. The Center has not yet reached the goal of self-sufficiency based on rental income, and remains dependent on NASA funding. The most important issues raised by the report are the need for broader and deeper community participation in the Center, technology sourcing beyond JSC, and the form of future funding which will be appropriate.

  6. Performance analysis of three-dimensional ridge acquisition from live finger and palm surface scans

    NASA Astrophysics Data System (ADS)

    Fatehpuria, Abhishika; Lau, Daniel L.; Yalla, Veeraganesh; Hassebrook, Laurence G.

    2007-04-01

    Fingerprints are one of the most commonly used and relied-upon biometric technologies. But often the captured fingerprint image is far from ideal, due to imperfect acquisition techniques that can be slow and cumbersome to use without providing complete fingerprint information. Most of the difficulties arise due to the contact of the fingerprint surface with the sensor platen. To overcome these difficulties we have been developing a noncontact scanning system for acquiring a 3-D scan of a finger with sufficiently high resolution, which is then converted into a 2-D rolled-equivalent image. In this paper, we describe certain quantitative measures evaluating scanner performance. Specifically, we use image software components developed by the National Institute of Standards and Technology to derive our performance metrics. Of the eleven identified metrics, three were found to be most suitable for evaluating scanner performance. A comparison is also made between 2D fingerprint images obtained by traditional means and 2D images obtained by unrolling the 3D scans, and the quality of the acquired scans is quantified using the metrics.

  7. Comparing de novo genome assembly: the long and short of it.

    PubMed

    Narzisi, Giuseppe; Mishra, Bud

    2011-04-29

    Recent advances in DNA sequencing technology and their focal role in Genome Wide Association Studies (GWAS) have rekindled a growing interest in the whole-genome sequence assembly (WGSA) problem, thereby inundating the field with a plethora of new formalizations, algorithms, heuristics and implementations. And yet, scant attention has been paid to comparative assessments of these assemblers' quality and accuracy. No commonly accepted and standardized method for comparison exists yet. Even worse, widely used metrics to compare the assembled sequences emphasize only size, poorly capturing contig quality and accuracy. This paper addresses these concerns: it highlights common anomalies in assembly accuracy through a rigorous study of several assemblers, compared under both standard metrics (N50, coverage, contig sizes, etc.) and a more comprehensive metric (Feature-Response Curves, FRC) that is introduced here; FRC transparently captures the trade-off between contig quality and size. For this purpose, most of the publicly available major sequence assemblers, both for low-coverage long (Sanger) and high-coverage short (Illumina) read technologies, are compared. These assemblers are applied to microbial (Escherichia coli, Brucella, Wolbachia, Staphylococcus, Helicobacter) and partial human genome sequences (Chr. Y), using sequence reads of various read-lengths, coverages, accuracies, and with and without mate-pairs. It is hoped that, based on these evaluations, computational biologists will identify innovative sequence assembly paradigms, bioinformaticists will determine promising approaches for developing "next-generation" assemblers, and biotechnologists will formulate more meaningful design desiderata for sequencing technology platforms. A new software tool for computing the FRC metric has been developed and is available through the AMOS open-source consortium.
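
    Among the standard size metrics named above, N50 is simple to compute; a minimal Python sketch (contig lengths are hypothetical):

        # N50: length of the shortest contig in the smallest set of largest
        # contigs that together cover at least half of the total assembly.
        def n50(contig_lengths):
            total = sum(contig_lengths)
            running = 0
            for length in sorted(contig_lengths, reverse=True):
                running += length
                if 2 * running >= total:
                    return length
            return 0

        # Half of 330 bp is 165; 100 + 80 = 180 >= 165, so N50 = 80.
        print(n50([100, 80, 60, 50, 40]))  # -> 80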

  8. Digital Elevation Model from Non-Metric Camera in Uas Compared with LIDAR Technology

    NASA Astrophysics Data System (ADS)

    Dayamit, O. M.; Pedro, M. F.; Ernesto, R. R.; Fernando, B. L.

    2015-08-01

    Digital Elevation Model (DEM) data, as a representation of surface topography, are in high demand for use in spatial analysis and modelling. To meet that demand, many methods for acquiring and processing such data have been developed, from traditional surveying to modern technologies like LIDAR. On the other hand, over the past four years the development of Unmanned Aerial Systems (UAS) for geomatics has made it possible to acquire surface data with an on-board non-metric digital camera in a short time and with good quality for many analyses. UAS data collection has attracted tremendous attention because it supports the determination of volume changes over time, monitoring of breakwaters, and hydrological modelling including flood simulation and drainage networks, among other applications that rely on a DEM for proper analysis. DEM quality is considered a combination of DEM accuracy and DEM suitability, so this paper analyses the quality of a DEM from a non-metric digital camera on a UAS compared with a DEM from LIDAR for the same geographic space, covering 4 km2 in Artemisa province, Cuba. This area is the subject of urban planning that requires knowledge of the topographic characteristics in order to analyse hydrological behaviour and decide the best places for roads, buildings, and other infrastructure. Because LIDAR technology is still the more accurate method, it offers a reference against which to test DEMs from non-metric digital cameras on UAS, which are much more flexible and provide a solution for many applications that need detailed DEMs.

  9. Performance assessment in brain-computer interface-based augmentative and alternative communication

    PubMed Central

    2013-01-01

    A large number of incommensurable metrics are currently used to report the performance of brain-computer interfaces (BCI) used for augmentative and alternative communication (AAC). The lack of standard metrics precludes the comparison of different BCI-based AAC systems, hindering rapid growth and development of this technology. This paper presents a review of the metrics that have been used to report performance of BCIs used for AAC from January 2005 to January 2012. We distinguish between Level 1 metrics used to report performance at the output of the BCI Control Module, which translates brain signals into logical control output, and Level 2 metrics at the Selection Enhancement Module, which translates logical control to semantic control. We recommend that: (1) the commensurate metrics Mutual Information or Information Transfer Rate (ITR) be used to report Level 1 BCI performance, as these metrics represent information throughput, which is of interest in BCIs for AAC; (2) the BCI-Utility metric be used to report Level 2 BCI performance, as it is capable of handling all current methods of improving BCI performance; (3) these metrics be supplemented by information specific to each unique BCI configuration; and (4) studies involving Selection Enhancement Modules report performance at both Level 1 and Level 2 in the BCI system. Following these recommendations will enable efficient comparison between both BCI Control and Selection Enhancement Modules, accelerating research and development of BCI-based AAC systems. PMID:23680020
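
    Recommendation (1) names ITR; a minimal sketch of the widely used Wolpaw formulation (a standard formula in the BCI literature, with hypothetical example numbers):

        import math

        # Wolpaw ITR in bits per selection for an N-target BCI with accuracy P,
        # assuming errors are spread uniformly over the remaining targets.
        def wolpaw_itr(n_targets, p):
            if p >= 1.0:
                return math.log2(n_targets)
            if p <= 0.0:
                return 0.0
            return (math.log2(n_targets) + p * math.log2(p)
                    + (1 - p) * math.log2((1 - p) / (n_targets - 1)))

        # 4 targets at 90% accuracy and 10 selections/minute.
        bits = wolpaw_itr(4, 0.90)
        print(f"{bits:.3f} bits/selection, {10 * bits:.1f} bits/min")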

  10. The State of Simulations: Soft-Skill Simulations Emerge as a Powerful New Form of E-Learning.

    ERIC Educational Resources Information Center

    Aldrich, Clark

    2001-01-01

    Presents responses of leaders from six simulation companies about challenges and opportunities of soft-skills simulations in e-learning. Discussion includes: evaluation metrics; role of subject matter experts in developing simulations; video versus computer graphics; technology needed to run simulations; technology breakthroughs; pricing;…

  11. NASA Aviation Safety Program Systems Analysis/Program Assessment Metrics Review

    NASA Technical Reports Server (NTRS)

    Louis, Garrick E.; Anderson, Katherine; Ahmad, Tisan; Bouabid, Ali; Siriwardana, Maya; Guilbaud, Patrick

    2003-01-01

    The goal of this project is to evaluate the metrics and processes used by NASA's Aviation Safety Program (AvSP) in assessing technologies that contribute to NASA's aviation safety goals. There were three objectives for reaching this goal. First, NASA's main objectives for aviation safety were documented and their consistency was checked against the main objectives of the Aviation Safety Program. Next, the metrics used for technology investment by the Program Assessment function of AvSP were evaluated. Finally, other metrics that could be used by the Program Assessment Team (PAT) were identified and evaluated. This investigation revealed that the objectives are in fact consistent across organizational levels at NASA and with the FAA. Some of the major issues discussed in this study, which should be further investigated, are the removal of the Cost and Return-on-Investment metrics, the lack of metrics to measure the balance of investment and technology, the interdependencies between some of the metric risk-driver categories, and the conflict between 'fatal accident rate' and 'accident rate' in the language of the Aviation Safety goal as stated in different sources.

  12. Human Performance Optimization Metrics: Consensus Findings, Gaps, and Recommendations for Future Research.

    PubMed

    Nindl, Bradley C; Jaffin, Dianna P; Dretsch, Michael N; Cheuvront, Samuel N; Wesensten, Nancy J; Kent, Michael L; Grunberg, Neil E; Pierce, Joseph R; Barry, Erin S; Scott, Jonathan M; Young, Andrew J; OʼConnor, Francis G; Deuster, Patricia A

    2015-11-01

    Human performance optimization (HPO) is defined as "the process of applying knowledge, skills and emerging technologies to improve and preserve the capabilities of military members, and organizations to execute essential tasks." The lack of consensus on operationally relevant and standardized metrics that meet joint military requirements has been identified as the single most important gap for research and application of HPO. In 2013, the Consortium for Health and Military Performance hosted a meeting to develop a toolkit of standardized HPO metrics for use in military and civilian research, and potentially for field applications by commanders, units, and organizations. Performance was considered from a holistic perspective as being influenced by various behaviors and barriers. To accomplish the goal of developing a standardized toolkit, key metrics were identified and evaluated across a spectrum of domains that contribute to HPO: physical performance, nutritional status, psychological status, cognitive performance, environmental challenges, sleep, and pain. These domains were chosen based on relevant data with regard to performance enhancers and degraders. The specific objectives at this meeting were to (a) identify and evaluate current metrics for assessing human performance within selected domains; (b) prioritize metrics within each domain to establish a human performance assessment toolkit; and (c) identify scientific gaps and the research needed to more effectively assess human performance across domains. This article provides a summary of 150 total HPO metrics across multiple domains that can be used as a starting point, the beginning of an HPO toolkit: physical fitness (29 metrics), nutrition (24 metrics), psychological status (36 metrics), cognitive performance (35 metrics), environment (12 metrics), sleep (9 metrics), and pain (5 metrics). These metrics can be particularly valuable as the military emphasizes a renewed interest in Human Dimension efforts and leverages science, resources, programs, and policies to optimize the performance capacities of all Service members.

  13. Landscape metrics for assessment of landscape destruction and rehabilitation.

    PubMed

    Herzog, F; Lausch, A; Müller, E; Thulke, H H; Steinhardt, U; Lehmann, S

    2001-01-01

    This investigation tested the usefulness of geometry-based landscape metrics for monitoring landscapes in a heavily disturbed environment. Research was carried out in a 75 sq km study area in Saxony, eastern Germany, where the landscape has been affected by surface mining and agricultural intensification. Landscape metrics were calculated from digital maps (1912, 1944, 1973, 1989) for the entire study area and for subregions (river valleys, plains), which were defined using the original geology and topography of the region. Correlation and factor analyses were used to select a set of landscape metrics suitable for landscape monitoring. Little land-use change occurred in the first half of the century, but political decisions and technological developments led to considerable change later. Metrics showed a similar pattern, with almost no change between 1912 and 1944 but dramatic changes after 1944. Nonparametric statistical methods were used to test whether metrics differed between river valleys and plains. Significant differences in the metrics for these regions were found in the early maps (1912, 1944), but these differences were not significant in 1973 or 1989. These findings indicate that anthropogenic influences created a more homogeneous landscape.

  14. A Conceptual Methodology for Assessing Acquisition Requirements Robustness against Technology Uncertainties

    NASA Astrophysics Data System (ADS)

    Chou, Shuo-Ju

    2011-12-01

    In recent years the United States has shifted from a threat-based acquisition policy that developed systems for countering specific threats to a capabilities-based strategy that emphasizes the acquisition of systems that provide critical national defense capabilities. This shift in policy, in theory, allows for the creation of an "optimal force" that is robust against current and future threats regardless of the tactics and scenario involved. In broad terms, robustness can be defined as the insensitivity of an outcome to "noise" or non-controlled variables. Within this context, the outcome is the successful achievement of defense strategies and the noise variables are the tactics and scenarios that will be associated with current and future enemies. Unfortunately, a lack of system capability, budget, and schedule robustness against technology performance and development uncertainties has led to major setbacks in recent acquisition programs. This lack of robustness stems from the fact that immature technologies have uncertainties in their expected performance, development cost, and schedule that cause variations in system effectiveness and in program development budget and schedule requirements. Unfortunately, the Technology Readiness Assessment (TRA) process currently used by acquisition program managers and decision-makers to measure technology uncertainty at critical program decision junctions does not adequately capture the impact of technology performance and development uncertainty on program capability and development metrics. The Technology Readiness Level (TRL) metric employed by the TRA to describe the uncertainties of program technology elements can provide only a qualitative and nondescript estimation of those uncertainties. In order to assess program robustness, specifically requirements robustness, against technology performance and development uncertainties, a new process is needed. This process should provide acquisition program managers and decision-makers with the ability to assess or measure the robustness of program requirements against such uncertainties. A literature review of techniques for forecasting technology performance and development uncertainties and their subsequent impacts on capability, budget, and schedule requirements led to the conclusion that an analysis process coupling a probabilistic technique such as Monte Carlo simulation with quantitative, parametric models of technology performance impact and of technology development time and cost requirements would allow the probabilities of meeting specific constraints on these requirements to be established. These probability-of-requirements-success metrics can then be used as a quantitative and probabilistic measure of program requirements robustness against technology uncertainties. Combined with a Multi-Objective Genetic Algorithm optimization process and a computer-based Decision Support System, critical information regarding requirements robustness against technology uncertainties can be captured and quantified for acquisition decision-makers. This results in a more informed and justifiable selection of program technologies during initial program definition, as well as in the formulation of program development and risk management strategies.
To meet the stated research objective, the ENhanced TEchnology Robustness Prediction and RISk Evaluation (ENTERPRISE) methodology was formulated to provide a structured and transparent process for integrating these enabling techniques into a probabilistic and quantitative assessment of acquisition program requirements robustness against technology performance and development uncertainties. In order to demonstrate the capabilities of the ENTERPRISE method and test the research hypotheses, a demonstration application of the method was performed on a notional program for acquiring a Carrier-based Suppression of Enemy Air Defenses (SEAD) capability using Unmanned Combat Aircraft Systems (UCAS) and their enabling technologies. The results of this implementation provided valuable insights regarding the benefits and inner workings of the methodology, as well as limitations that should be addressed in the future to narrow the gap between the current state and the desired state.
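
    As a generic illustration of the coupling described above (not the ENTERPRISE implementation; distributions and numbers are hypothetical), a Monte Carlo estimate of one probability-of-requirements-success metric might look like:

        import random

        # Probability that total sampled development cost meets a budget cap.
        def prob_within_budget(cost_samplers, budget, n_samples=100_000):
            hits = sum(
                1 for _ in range(n_samples)
                if sum(sample() for sample in cost_samplers) <= budget
            )
            return hits / n_samples

        # Two technology elements with triangular cost uncertainty ($M):
        # random.triangular(low, high, mode).
        techs = [lambda: random.triangular(10, 30, 18),
                 lambda: random.triangular(5, 20, 9)]
        print(f"P(cost <= $40M) ~= {prob_within_budget(techs, 40):.3f}")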

  15. A methodology to enable rapid evaluation of aviation environmental impacts and aircraft technologies

    NASA Astrophysics Data System (ADS)

    Becker, Keith Frederick

    Commercial aviation has become an integral part of modern society and enables unprecedented global connectivity for business, cultural, and personal exchange. In the decades following World War II, passenger travel through commercial aviation grew quickly, at a rate of roughly 8% per year globally. The FAA's most recent Terminal Area Forecast predicts growth to continue at a rate of 2.5% domestically, and the market outlooks produced by Airbus and Boeing generally predict growth to continue at a rate of 5% per year globally over the next several decades, which translates into a need for up to 30,000 new aircraft produced by 2025. With such large numbers of new aircraft potentially entering service, any negative consequences of commercial aviation must undergo examination and mitigation by governing bodies so that growth may still be achieved. Options to grow while simultaneously reducing environmental impact include evolution of the commercial fleet through changes in operations, aircraft mix, and technology adoption. Methods to rapidly evaluate fleet environmental metrics are needed to enable decision makers to quickly compare the impact of different scenarios and weigh multiple policy options. As the fleet evolves, interdependencies may emerge in the form of tradeoffs between improvements in different environmental metrics as new technologies are brought into service. In order to include the impacts of these interdependencies on fleet evolution, physics-based modeling is required at the appropriate level of fidelity. Evaluation of environmental metrics in a physics-based manner can be done at the individual aircraft level, but this does not capture aggregate fleet metrics. Conversely, evaluation of environmental metrics at the fleet level is already being done for aircraft in the commercial fleet, but current tools and approaches require enhancement because they capture technology implementation through post-processing, which misses physical interdependencies that may arise at the aircraft level. The goal of the work conducted here was to develop a methodology for surrogate fleet approaches that leverage the capability of physics-based aircraft models, together with connectivity to fleet-level analysis tools, to enable rapid evaluation of fuel burn and emissions metrics. Instead of requiring development of an individual physics-based model for each vehicle in the fleet, the surrogate fleet approaches seek to reduce the number of such models needed while still accurately capturing the performance of the fleet. By reducing the number of models, both development time and execution time to generate fleet-level results may be reduced. The initial steps leading to surrogate fleet formulation were a characterization of the commercial fleet into groups based on capability, followed by the selection of a reference vehicle model and a reference set of operations for each group. Next, three potential surrogate fleet approaches were formulated: the parametric correction factor approach, in which the results of a reference vehicle model are corrected to match the aggregate results of each group; the average replacement approach, in which a new vehicle model is developed to generate the aggregate results of each group; and the best-in-class replacement approach, in which results for a reference vehicle are simply substituted for the entire group.
Once the candidate surrogate fleet approaches were developed, each was applied to and evaluated over the set of reference operations. Then each approach was evaluated for its ability to model variations in operations. Finally, the ability of each surrogate fleet approach to capture the implementation of different technology suites, along with the corresponding interdependencies between fuel burn and emissions, was evaluated using the concept of a virtual fleet to simulate the technology response of multiple aircraft families. The results of experimentation led to a down-selection to the approach best suited to rapidly and accurately characterizing the performance of the commercial fleet, judged against the accepted accuracy of current fleet evaluation methods. The parametric correction factor and average replacement approaches were shown to be successful in capturing reference fleet results as well as fleet performance with variations in operations. The best-in-class replacement approach was shown to be unacceptable as a model for the larger fleet in each of the scenarios tested. Finally, the average replacement approach was the only one that was successful in capturing the impact of technologies on a larger fleet. These results are meaningful because they show that it is possible to calculate the fuel burn and emissions of a larger fleet with a reduced number of physics-based models within acceptable bounds of accuracy. At the same time, the physics-based modeling also provides the ability to evaluate the impact of technologies on fleet-level fuel burn and emissions metrics. The value of such a capability is that multiple future fleet scenarios involving changes in both aircraft operations and technology levels may now be rapidly evaluated to inform and equip policy makers with the implications of such changes for fleet-level metrics.
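
    A minimal sketch of the parametric correction factor idea, the simplest of the three approaches (names and numbers are illustrative; the dissertation's actual models are physics-based):

        # Scale a single reference-vehicle result so that it reproduces the
        # group's aggregate result over the same reference operations.
        def correction_factor(group_aggregate, reference_result):
            return group_aggregate / reference_result

        # Hypothetical: the group burned 120 Mkg of fuel on the reference
        # operations, while the reference model alone predicts 100 Mkg.
        k = correction_factor(120.0, 100.0)        # k = 1.2
        new_ops_model_result = 105.0               # model run on new operations
        print(k * new_ops_model_result)            # corrected fleet estimate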

  16. Technology Licensing for the Benefit of the Developing World: UC Berkeley's Socially Responsible Licensing Program

    ERIC Educational Resources Information Center

    Mimura, Carol

    2007-01-01

    In the years since the passage of the Bayh-Dole Act of 1980, university technology transfer success has been measured primarily by traditional metrics such as numbers of patents filed, revenue obtained from licensed patents and numbers of start-up companies founded to commercialize university intellectual property. Intellectual property (IP)…

  17. Three-dimensional evaluation of postural stability in Parkinson's disease with mobile technology.

    PubMed

    Ozinga, Sarah J; Koop, Mandy Miller; Linder, Susan M; Machado, Andre G; Dey, Tanujit; Alberts, Jay L

    2017-01-01

    Postural instability is a hallmark of Parkinson's disease. Objective metrics to characterize postural stability are necessary for the development of treatment algorithms to aid in the clinical setting. The aim of this project was to validate a mobile device platform and a resultant three-dimensional balance metric that characterizes postural stability. A mobile application was developed in which biomechanical data from inertial sensors within a mobile device were processed to characterize movement of the center of mass in the medial-lateral, anterior-posterior, and trunk rotation directions. Twenty-seven individuals with Parkinson's disease and 27 age-matched controls completed various balance tasks. A postural stability metric quantifying the amplitude (peak-to-peak) of sway acceleration in each movement direction was compared between groups. The peak-to-peak value in each direction for each individual with Parkinson's disease across all trials was expressed as a normalized value of the control data to identify individuals with severe postural instability, termed the Cleveland Clinic-Postural Stability Index. In all conditions, the peak-to-peak balance metric was significantly greater in Parkinson's disease compared to controls (p < 0.01 for all tests). The balance metric, in conjunction with mobile device sensors, provides a rapid and systematic means of quantifying postural stability in Parkinson's disease.
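
    A hedged sketch of a peak-to-peak sway metric of the kind described, normalized against control data (names, window sizes, and values are illustrative, not the clinical algorithm):

        import numpy as np

        # Peak-to-peak amplitude of sway acceleration in one direction.
        def peak_to_peak(acc):
            return float(np.max(acc) - np.min(acc))

        # Express a patient's peak-to-peak value as a z-score of controls.
        def normalized_sway(patient_acc, control_p2p):
            mu, sigma = np.mean(control_p2p), np.std(control_p2p)
            return (peak_to_peak(patient_acc) - mu) / sigma

        rng = np.random.default_rng(0)
        controls = [peak_to_peak(rng.normal(0, 0.05, 1000)) for _ in range(27)]
        patient = rng.normal(0, 0.12, 1000)   # larger sway than controls
        print(f"normalized sway: {normalized_sway(patient, controls):.2f}")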

  18. 15 CFR 273.3 - General policy.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... the fiscal year 1992, use the metric system of measurement in its procurements, grants, and other... the use of the metric system in their procurements, grants and other business-related activities... STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE METRIC CONVERSION POLICY FOR FEDERAL AGENCIES METRIC...

  19. Engineering and Technology in Wheelchair Sport.

    PubMed

    Cooper, Rory A; Tuakli-Wosornu, Yetsa A; Henderson, Geoffrey V; Quinby, Eleanor; Dicianno, Brad E; Tsang, Kalai; Ding, Dan; Cooper, Rosemarie; Crytzer, Theresa M; Koontz, Alicia M; Rice, Ian; Bleakney, Adam W

    2018-05-01

    Technologies capable of projecting injury and performance metrics to athletes and coaches are being developed. Wheelchair athletes must be cognizant of their upper limb health; therefore, systems must be designed to promote efficient transfer of energy to the handrims and evaluated for simultaneous effects on the upper limbs. This article is a brief review of resources that help wheelchair users increase physiologic response to exercise, develop ideas for adaptive workout routines, locate accessible facilities and outdoor areas, and develop wheelchair sports-specific skills. Published by Elsevier Inc.

  20. Research and development on performance models of thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Wang, Ji-hui; Jin, Wei-qi; Wang, Xia; Cheng, Yi-nan

    2009-07-01

    Traditional ACQUIRE models perform the discrimination tasks of detection, orientation, recognition, and identification for military targets based upon the minimum resolvable temperature difference (MRTD) and the Johnson criteria for thermal imaging systems (TIS). The Johnson criteria are generally pessimistic for performance prediction of sampled imagers given the development of focal plane array (FPA) detectors and digital image processing technology. The triangle orientation discrimination threshold (TOD) model, the minimum temperature difference perceived (MTDP)/thermal range model (TRM3), and the target task performance (TTP) metric have been developed to predict the performance of sampled imagers; in particular, the TTP metric can provide better accuracy than the Johnson criteria. In this paper, the performance models above are described; channel-width metrics are presented to describe overall performance, including modulation transfer function (MTF) channel width for high signal-to-noise ratio (SNR) optoelectronic imaging systems and MRTD channel width for low-SNR TIS; unresolved questions in the performance assessment of TIS are indicated; finally, development directions for TIS performance models are discussed.

  1. Development of Supersonic Retro-Propulsion for Future Mars Entry, Descent, and Landing Systems

    NASA Technical Reports Server (NTRS)

    Edquist, Karl T.; Dyakonov, Artem A.; Shidner, Jeremy D.; Studak, Joseph W.; Tiggers, Michael A.; Kipp, Devin M.; Prakash, Ravi; Trumble, Kerry A.; Dupzyk, Ian C.; Korzun, Ashley M.

    2010-01-01

    Recent studies have concluded that Viking-era entry system technologies are reaching their practical limits and must be succeeded by new methods capable of delivering large payloads (greater than 10 metric tons) required for human exploration of Mars. One such technology, termed Supersonic Retro-Propulsion, has been proposed as an enabling deceleration technique. However, in order to be considered for future NASA flight projects, this technology will require significant maturation beyond its current state. This paper proposes a roadmap for advancing the component technologies to a point where Supersonic Retro-Propulsion can be reliably used on future Mars missions to land much larger payloads than are currently possible using Viking-based systems. The development roadmap includes technology gates that are achieved through testing and/or analysis, culminating with subscale flight tests in Earth atmosphere that demonstrate stable and controlled flight. The component technologies requiring advancement include large engines capable of throttling, computational models for entry vehicle aerodynamic/propulsive force and moment interactions, aerothermodynamic environments modeling, entry vehicle stability and control methods, integrated systems engineering and analyses, and high-fidelity six degree-of-freedom trajectory simulations. Quantifiable metrics are also proposed as a means to gauge the technical progress of Supersonic Retro-Propulsion. Finally, an aggressive schedule is proposed for advancing the technology through sub-scale flight tests at Earth by 2016.

  2. Considerations for Solar Energy Technologies to Make Progress Towards Grid Price Parity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodhouse, Michael; Fu, Ran; Chung, Donald

    2015-11-07

    In this seminar, the component costs underlying solar photovoltaic module and system prices will be highlighted. As a basis for comparison with other renewable and traditional energy options, the metric of focus will be total lifecycle cost of energy (LCOE). Several innovations to traditional photovoltaic technologies (including crystalline silicon, CdTe, and CIGS) and to developing technologies (including organics and perovskites) that may close the gaps in LCOE will be discussed.

  3. On Applying the Prognostic Performance Metrics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics performance evaluation has gained significant attention in the past few years. As prognostics technology matures and more sophisticated methods for prognostic uncertainty management are developed, a standardized methodology for performance evaluation becomes extremely important for guiding improvement efforts in a constructive manner. This paper continues previous efforts in which several new evaluation metrics tailored for prognostics were introduced and shown to evaluate various algorithms more effectively than conventional metrics. Specifically, this paper presents a detailed discussion of how these metrics should be interpreted and used. Several shortcomings identified while applying these metrics to a variety of real applications are also summarized, along with discussions that attempt to alleviate these problems. Further, these metrics have been enhanced to incorporate probability distribution information from prognostic algorithms, as opposed to evaluation based on point estimates only. Several methods have been suggested, and guidelines have been provided to help choose one method over another based on probability distribution characteristics. These approaches also offer a convenient and intuitive visualization of algorithm performance with respect to some of these new metrics, such as prognostic horizon and alpha-lambda performance, and quantify the corresponding performance while incorporating the uncertainty information.
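
    For concreteness, the alpha-lambda metric mentioned above checks whether a prediction made partway through a run falls within an accuracy cone around the true remaining useful life (RUL); a minimal sketch (a generic reading of the published metric, with hypothetical numbers):

        # At t_lambda, a fraction lam through the window [t_start, t_eol],
        # test whether the predicted RUL lies within +/- alpha of the truth.
        def alpha_lambda_pass(rul_pred, t_start, t_eol, alpha=0.2, lam=0.5):
            t_lambda = t_start + lam * (t_eol - t_start)
            true_rul = t_eol - t_lambda
            return (1 - alpha) * true_rul <= rul_pred <= (1 + alpha) * true_rul

        # Window 0..100 h, lam = 0.5 -> evaluate at t = 50 h, true RUL = 50 h;
        # a prediction of 44 h passes at alpha = 0.2 (bounds 40..60 h).
        print(alpha_lambda_pass(44.0, 0.0, 100.0))   # True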

  4. "Can you see me now?" An objective metric for predicting intelligibility of compressed American Sign Language video

    NASA Astrophysics Data System (ADS)

    Ciaramello, Francis M.; Hemami, Sheila S.

    2007-02-01

    For members of the Deaf Community in the United States, current communication tools include TTY/TTD services, video relay services, and text-based communication. With the growth of cellular technology, mobile sign language conversations are becoming a possibility. Proper coding techniques must be employed to compress American Sign Language (ASL) video for low-rate transmission while maintaining the quality of the conversation. In order to evaluate these techniques, an appropriate quality metric is needed. This paper demonstrates that traditional video quality metrics, such as PSNR, fail to predict subjective intelligibility scores. By considering the unique structure of ASL video, an appropriate objective metric is developed. Face and hand segmentation is performed using skin-color detection techniques. The distortions in the face and hand regions are optimally weighted and pooled across all frames to create an objective intelligibility score for a distorted sequence. The objective intelligibility metric performs significantly better than PSNR in terms of correlation with subjective responses.
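
    A hedged sketch of the region-weighted pooling idea (weights, shapes, and masks are illustrative placeholders, not the authors' fitted values):

        import numpy as np

        # Pool per-pixel distortion inside the face and hand regions of one
        # frame; lower pooled distortion -> higher predicted intelligibility.
        def frame_score(dist, face_mask, hand_mask, w_face=0.6, w_hand=0.4):
            face_err = dist[face_mask].mean() if face_mask.any() else 0.0
            hand_err = dist[hand_mask].mean() if hand_mask.any() else 0.0
            return w_face * face_err + w_hand * hand_err

        def sequence_score(frames):
            # frames: iterable of (distortion_map, face_mask, hand_mask).
            return float(np.mean([frame_score(*f) for f in frames]))

        rng = np.random.default_rng(1)
        dist = rng.random((48, 64))                    # |original - decoded|
        face = np.zeros((48, 64), bool); face[5:15, 20:40] = True
        hand = np.zeros((48, 64), bool); hand[30:40, 10:25] = True
        print(sequence_score([(dist, face, hand)]))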

  5. Measures and Metrics for Feasibility of Proof-of-Concept Studies With Human Immunodeficiency Virus Rapid Point-of-Care Technologies: The Evidence and the Framework.

    PubMed

    Pant Pai, Nitika; Chiavegatti, Tiago; Vijh, Rohit; Karatzas, Nicolaos; Daher, Jana; Smallwood, Megan; Wong, Tom; Engel, Nora

    2017-12-01

    Pilot (feasibility) studies form a vast majority of diagnostic studies with point-of-care technologies but often lack use of clear measures/metrics and a consistent framework for reporting and evaluation. To fill this gap, we systematically reviewed data to (a) catalog feasibility measures/metrics and (b) propose a framework. For the period January 2000 to March 2014, 2 reviewers searched 4 databases (MEDLINE, EMBASE, CINAHL, Scopus), retrieved 1441 citations, and abstracted data from 81 studies. We observed 2 major categories of measures, that is, implementation centered and patient centered, and 4 subcategories of measures, that is, feasibility, acceptability, preference, and patient experience. We defined and delineated metrics and measures for a feasibility framework. We documented impact measures for a comparison. We observed heterogeneity in reporting of metrics as well as misclassification and misuse of metrics within measures. Although we observed poorly defined measures and metrics for feasibility, preference, and patient experience, in contrast, acceptability measure was the best defined. For example, within feasibility, metrics such as consent, completion, new infection, linkage rates, and turnaround times were misclassified and reported. Similarly, patient experience was variously reported as test convenience, comfort, pain, and/or satisfaction. In contrast, within impact measures, all the metrics were well documented, thus serving as a good baseline comparator. With our framework, we classified, delineated, and defined quantitative measures and metrics for feasibility. Our framework, with its defined measures/metrics, could reduce misclassification and improve the overall quality of reporting for monitoring and evaluation of rapid point-of-care technology strategies and their context-driven optimization.

  6. Measures and Metrics for Feasibility of Proof-of-Concept Studies With Human Immunodeficiency Virus Rapid Point-of-Care Technologies

    PubMed Central

    Pant Pai, Nitika; Chiavegatti, Tiago; Vijh, Rohit; Karatzas, Nicolaos; Daher, Jana; Smallwood, Megan; Wong, Tom; Engel, Nora

    2017-01-01

    Objective Pilot (feasibility) studies form a vast majority of diagnostic studies with point-of-care technologies but often lack use of clear measures/metrics and a consistent framework for reporting and evaluation. To fill this gap, we systematically reviewed data to (a) catalog feasibility measures/metrics and (b) propose a framework. Methods For the period January 2000 to March 2014, 2 reviewers searched 4 databases (MEDLINE, EMBASE, CINAHL, Scopus), retrieved 1441 citations, and abstracted data from 81 studies. We observed 2 major categories of measures, that is, implementation centered and patient centered, and 4 subcategories of measures, that is, feasibility, acceptability, preference, and patient experience. We defined and delineated metrics and measures for a feasibility framework. We documented impact measures for a comparison. Findings We observed heterogeneity in reporting of metrics as well as misclassification and misuse of metrics within measures. Although we observed poorly defined measures and metrics for feasibility, preference, and patient experience, in contrast, acceptability measure was the best defined. For example, within feasibility, metrics such as consent, completion, new infection, linkage rates, and turnaround times were misclassified and reported. Similarly, patient experience was variously reported as test convenience, comfort, pain, and/or satisfaction. In contrast, within impact measures, all the metrics were well documented, thus serving as a good baseline comparator. With our framework, we classified, delineated, and defined quantitative measures and metrics for feasibility. Conclusions Our framework, with its defined measures/metrics, could reduce misclassification and improve the overall quality of reporting for monitoring and evaluation of rapid point-of-care technology strategies and their context-driven optimization. PMID:29333105

  7. Using System Mass (SM), Equivalent Mass (EM), Equivalent System Mass (ESM) or Life Cycle Mass (LCM) in Advanced Life Support (ALS) Reporting

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2003-01-01

    The Advanced Life Support (ALS) Project has used a single number, Equivalent System Mass (ESM), for both reporting progress and technology selection. ESM is the launch mass required to provide a space system; it indicates launch cost. ESM alone is inadequate for technology selection, which should include other metrics such as Technology Readiness Level (TRL) and Life Cycle Cost (LCC) and also consider performance and risk. ESM has proven difficult to implement as a reporting metric, partly because it includes non-mass technology selection factors. Since it will not be used exclusively for technology selection, a new reporting metric can be made easier to compute and explain. Systems design trades off performance, cost, and risk, but a risk-weighted cost/benefit metric would be too complex to report. Since life support has fixed requirements, different systems usually have roughly equal performance. Risk is important since failure can harm the crew, but it is difficult to treat simply. Cost is not easy to estimate, but preliminary space system cost estimates are usually based on mass, which is better estimated than cost. A mass-based cost estimate, similar to ESM, would be a good single reporting metric. The paper defines and compares four mass-based cost estimates: Equivalent Mass (EM), Equivalent System Mass (ESM), Life Cycle Mass (LCM), and System Mass (SM). EM is traditional in life support and includes mass, volume, power, cooling, and logistics. ESM is the specifically defined ALS metric, which adds crew time and possibly other cost factors to EM. LCM is a new metric, a mass-based estimate of LCC measured in mass units. SM includes only the factors of EM that are originally measured in mass: the hardware and logistics mass. All four mass-based metrics usually give similar comparisons. SM is by far the simplest to compute and the easiest to explain.
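
    A minimal sketch of a mass-based estimate with the structure the abstract describes for EM/ESM (the equivalency factors below are placeholders, not ALS-approved values):

        # Convert volume, power, cooling, and crew time into mass-equivalent
        # terms (kg per unit) and add them to hardware/logistics mass.
        def equivalent_system_mass(mass_kg, volume_m3, power_kw, cooling_kw,
                                   crew_time_hr, v_eq=9.0, p_eq=87.0,
                                   c_eq=40.0, ct_eq=1.0):
            return (mass_kg + v_eq * volume_m3 + p_eq * power_kw
                    + c_eq * cooling_kw + ct_eq * crew_time_hr)

        # Hypothetical subsystem: 500 kg, 2 m3, 1.5 kW power and cooling,
        # 100 crew-hours over the mission.
        print(equivalent_system_mass(500, 2.0, 1.5, 1.5, 100))  # 808.5 kg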

  8. Risk Factor Assessment Branch (RFAB)

    Cancer.gov

    The Risk Factor Assessment Branch (RFAB) focuses on the development, evaluation, and dissemination of high-quality risk factor metrics, methods, tools, technologies, and resources for use across the cancer research continuum, and the assessment of cancer-related risk factors in the population.

  9. Smart Grid Status and Metrics Report Appendices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balducci, Patrick J.; Antonopoulos, Chrissi A.; Clements, Samuel L.

    A smart grid uses digital power control and communication technology to improve the reliability, security, flexibility, and efficiency of the electric system, from large generation through the delivery systems to electricity consumers and a growing number of distributed generation and storage resources. To convey progress made in achieving the vision of a smart grid, this report uses a set of six characteristics derived from the National Energy Technology Laboratory Modern Grid Strategy. The Smart Grid Status and Metrics Report defines and examines 21 metrics that collectively provide insight into the grid's capacity to embody these characteristics. This appendix presents papers covering each of the 21 metrics identified in Section 2.1 of the Smart Grid Status and Metrics Report. These metric papers were prepared in advance of the main body of the report and collectively form its informational backbone.

  10. Security Metrics: A Solution in Search of a Problem

    ERIC Educational Resources Information Center

    Rosenblatt, Joel

    2008-01-01

    Computer security is one of the most complicated and challenging fields in technology today. A security metrics program provides a major benefit: looking at the metrics on a regular basis offers early clues to changes in attack patterns or environmental factors that may require changes in security strategy. The term "security metrics"…

  11. MO-AB-BRA-05: [18F]NaF PET/CT Imaging Biomarkers in Metastatic Prostate Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harmon, S; Perk, T; Lin, C

    Purpose: Clinical use of 18F-Sodium Fluoride (NaF) PET/CT in metastatic settings often lacks technology to quantitatively measure full disease dynamics due to high tumor burden. This study assesses radiomics-based extraction of NaF PET/CT measures, including global metrics of overall burden and local metrics of disease heterogeneity, in metastatic prostate cancer for correlation to clinical outcomes. Methods: Fifty-six metastatic Castrate-Resistant Prostate Cancer (mCRPC) patients had NaF PET/CT scans performed at baseline and three cycles into chemotherapy (N=16) or androgen-receptor (AR) inhibitors (N=39). A novel technology, Quantitative Total Bone Imaging (QTBI), was used for analysis. Employing hybrid PET/CT segmentation and articulated skeletal registration, QTBI allows for response assessment of individual lesions. Various SUV metrics were extracted from each lesion (iSUV). Global metrics were extracted from composite lesion-level statistics for each patient (pSUV). The proportion of detected lesions, and of those with significant response (%-increase or %-decrease), was calculated for each patient based on test-retest limits for iSUV metrics. Cox proportional hazard regression analyses were conducted between imaging metrics and progression-free survival (PFS). Results: Functional burden (pSUVtotal) assessed mid-treatment was the strongest univariate predictor of PFS (HR=2.03; p<0.0001). Various global metrics outperformed baseline clinical markers, including fraction of skeletal burden, mean uptake (pSUVmean), and heterogeneity of average lesion uptake (pSUVhetero). Of 43 patients with paired baseline/mid-treatment imaging, 40 showed heterogeneity in lesion-level response, containing populations of lesions with both increasing and decreasing metrics. The proportion of lesions with significantly increasing iSUVmean was highly predictive of clinical PFS (HR=2.0; p=0.0002). Patients exhibiting a higher proportion of lesions with decreasing iSUVtotal saw prolonged radiographic PFS (HR=0.51; p=0.02). Conclusion: The technology presented here provides comprehensive disease quantification on NaF PET/CT imaging, showing strong correlation to clinical outcomes. Total functional burden as well as proportions of similarly responding lesions was predictive of PFS. This supports ongoing development of NaF PET/CT based imaging biomarkers in mCRPC. Prostate Cancer Foundation.
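
    An illustrative sketch (not the QTBI software) of classifying per-lesion changes against a symmetric test-retest limit and reporting the response proportions described above:

        # baseline, followup: per-lesion SUV values in matching order;
        # trt_limit_pct: hypothetical test-retest repeatability limit (%).
        def lesion_response_proportions(baseline, followup, trt_limit_pct=25.0):
            up = down = 0
            for b, f in zip(baseline, followup):
                change_pct = 100.0 * (f - b) / b
                if change_pct > trt_limit_pct:
                    up += 1
                elif change_pct < -trt_limit_pct:
                    down += 1
            n = len(baseline)
            return {"increasing": up / n, "decreasing": down / n,
                    "stable": (n - up - down) / n}

        print(lesion_response_proportions([4.0, 6.0, 5.0, 8.0],
                                          [6.0, 4.0, 5.2, 8.1]))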

  12. Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie

    2009-01-01

    Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.

  13. Evaluation of Investments in Science, Technology and Innovation: Applying Scientific and Technical Human Capital Framework for Assessment of Doctoral Students in Cooperative Research Centers

    ERIC Educational Resources Information Center

    Leonchuk, Olena

    2016-01-01

    This dissertation builds on an alternative framework for evaluation of science, technology and innovation (STI) outcomes--the scientific & technical (S&T) human capital which was developed by Bozeman, Dietz and Gaughan (2001). At its core, this framework looks beyond simple economic and publication metrics and instead focuses on…

  14. Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.

    PubMed

    Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel

    2017-07-28

    New challenges have been brought about by emerging 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), due to its applications in remote surveillance, remote education, etc., based on the flexible selection of direction and viewpoint, has been perceived as the development direction of next-generation video technologies and has drawn the attention of a wide range of researchers. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in a "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. But existing assessment metrics do not render human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using autoregression (AR)-based local image description. It was found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometric distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method compared with prevailing full-, reduced-, and no-reference models.

  15. Use of diagnostic accuracy as a metric for evaluating laboratory proficiency with microarray assays using mixed-tissue RNA reference samples.

    PubMed

    Pine, P S; Boedigheimer, M; Rosenzweig, B A; Turpaz, Y; He, Y D; Delenstarr, G; Ganter, B; Jarnagin, K; Jones, W D; Reid, L H; Thompson, K L

    2008-11-01

    Effective use of microarray technology in clinical and regulatory settings is contingent on the adoption of standard methods for assessing performance. The MicroArray Quality Control project evaluated the repeatability and comparability of microarray data on the major commercial platforms and laid the groundwork for the application of microarray technology to regulatory assessments. However, methods for assessing performance that are commonly applied to diagnostic assays used in laboratory medicine remain to be developed for microarray assays. A reference system for microarray performance evaluation and process improvement was developed that includes reference samples, metrics and reference datasets. The reference material is composed of two mixes of four different rat tissue RNAs that allow defined target ratios to be assayed using a set of tissue-selective analytes that are distributed along the dynamic range of measurement. The diagnostic accuracy of detected changes in expression ratios, measured as the area under the curve from receiver operating characteristic plots, provides a single commutable value for comparing assay specificity and sensitivity. The utility of this system for assessing overall performance was evaluated for relevant applications like multi-laboratory proficiency testing programs and single-laboratory process drift monitoring. The diagnostic accuracy of detection of a 1.5-fold change in signal level was found to be a sensitive metric for comparing overall performance. This test approaches the technical limit for reliable discrimination of differences between two samples using this technology. We describe a reference system that provides a mechanism for internal and external assessment of laboratory proficiency with microarray technology and is translatable to performance assessments on other whole-genome expression arrays used for basic and clinical research.
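
    The study's headline metric, area under the ROC curve, has a convenient rank-based (Mann-Whitney) form; a minimal sketch with hypothetical detected log-ratios:

        # AUC: probability that a randomly chosen true-change analyte scores
        # higher than a randomly chosen null analyte (ties count half).
        def auc(pos_scores, neg_scores):
            wins = ties = 0
            for p in pos_scores:
                for q in neg_scores:
                    if p > q:
                        wins += 1
                    elif p == q:
                        ties += 1
            return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

        # Analytes with a true 1.5-fold change vs. unchanged analytes.
        print(auc([0.58, 0.61, 0.40, 0.70], [0.02, -0.10, 0.45, 0.05]))  # 0.9375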

  16. Fast Response Shape Memory Effect Titanium Nickel (TiNi) Foam Torque Tubes

    NASA Technical Reports Server (NTRS)

    Jardine, Peter

    2014-01-01

    Shape Change Technologies has developed a process to manufacture net-shaped TiNi foam torque tubes that demonstrate the shape memory effect. The torque tubes dramatically reduce response time, by a factor of 10. This Phase II project matured the actuator technology by rigorously characterizing the process to optimize the quality of the TiNi and by developing a set of metrics to provide ISO 9002 quality assurance. A laboratory virtual instrument engineering workbench (LabVIEW(TM))-based real-time control of the torsional actuators was developed. These actuators were developed with The Boeing Company for aerospace applications.

  17. Microwave-Assisted Ignition for Improved Internal Combustion Engine Efficiency

    NASA Astrophysics Data System (ADS)

    DeFilippo, Anthony Cesar

    The ever-present need for reducing greenhouse gas emissions associated with transportation motivates this investigation of a novel ignition technology for internal combustion engine applications. Advanced engines can achieve higher efficiencies and reduced emissions by operating in regimes with diluted fuel-air mixtures and higher compression ratios, but the range of stable engine operation is constrained by combustion initiation and flame propagation when dilution levels are high. An advanced ignition technology that reliably extends the operating range of internal combustion engines will aid practical implementation of the next generation of high-efficiency engines. This dissertation contributes to next-generation ignition technology advancement by experimentally analyzing a prototype technology as well as developing a numerical model for the chemical processes governing microwave-assisted ignition. The microwave-assisted spark plug under development by Imagineering, Inc. of Japan has previously been shown to expand the stable operating range of gasoline-fueled engines through plasma-assisted combustion, but the factors limiting its operation were not well characterized. The present experimental study has two main goals. The first goal is to investigate the capability of the microwave-assisted spark plug towards expanding the stable operating range of wet-ethanol-fueled engines. The stability range is investigated by examining the coefficient of variation of indicated mean effective pressure as a metric for instability, and indicated specific ethanol consumption as a metric for efficiency. The second goal is to examine the factors affecting the extent to which microwaves enhance ignition processes. The factors impacting microwave enhancement of ignition processes are individually examined, using flame development behavior as a key metric in determining microwave effectiveness. Further development of practical combustion applications implementing microwave-assisted spark technology will benefit from predictive models which include the plasma processes governing the observed combustion enhancement. This dissertation documents the development of a chemical kinetic mechanism for the plasma-assisted combustion processes relevant to microwave-assisted spark ignition. The mechanism includes an existing mechanism for gas-phase methane oxidation, supplemented with electron impact reactions, cation and anion chemical reactions, and reactions involving vibrationally-excited and electronically-excited species. Calculations using the presently-developed numerical model explain experimentally-observed trends, highlighting the relative importance of pressure, temperature, and mixture composition in determining the effectiveness of microwave-assisted ignition enhancement.
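
    The stability metric named above, the coefficient of variation of indicated mean effective pressure (IMEP), has a standard textbook definition (not quoted from the abstract):

        \mathrm{CoV}_{\mathrm{IMEP}} = \frac{\sigma_{\mathrm{IMEP}}}{\overline{\mathrm{IMEP}}} \times 100\%

    computed over a window of consecutive engine cycles; lower values indicate more stable combustion, and a common rule of thumb treats roughly 3-5% as the drivability limit.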

  18. Determination of selection criteria for spray drift reduction from atomization data

    USDA-ARS?s Scientific Manuscript database

    When testing and evaluating drift reduction technologies (DRT), there are different metrics that can be used to determine if the technology reduces drift as compared to a reference system. These metrics can include reduction in percent of fine drops, measured spray drift from a field trial, or comp...

  19. Engineering technology for networks

    NASA Technical Reports Server (NTRS)

    Paul, Arthur S.; Benjamin, Norman

    1991-01-01

    Space Network (SN) modeling and evaluation are presented. The following tasks are included: Network Modeling (developing measures and metrics for SN, modeling of the Network Control Center (NCC), using knowledge acquired from the NCC to model the SNC, and modeling the SN); and Space Network Resource scheduling.

  20. Evaluation of structure from motion for soil microtopography measurement

    USDA-ARS?s Scientific Manuscript database

    Recent developments in low cost structure from motion (SFM) technologies offer new opportunities for geoscientists to acquire high resolution soil microtopography data at a fraction of the cost of conventional techniques. However, these new methodologies often lack easily accessible error metrics an...

  1. Research on quality metrics of wireless adaptive video streaming

    NASA Astrophysics Data System (ADS)

    Li, Xuefei

    2018-04-01

    With the development of wireless networks and intelligent terminals, video traffic has increased dramatically. Adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, a good QoS (Quality of Service) of the wireless network does not always guarantee that all customers have a good experience. Thus, new quality metrics have been widely studied recently. Taking this into account, the objective of this paper is to investigate the quality metrics of wireless adaptive video streaming. In this paper, a wireless video streaming simulation platform with a DASH mechanism and a multi-rate video generator is established. Based on this platform, a PSNR model, an SSIM model, and a Quality Level model are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling, and switching frequency, while the PSNR and SSIM models mainly consider the quality of the video. To evaluate the performance of these QoE models, three performance metrics (SROCC, PLCC, and RMSE), which compare subjective and predicted MOS (Mean Opinion Score), are calculated. These performance metrics capture the monotonicity, linearity, and accuracy of the quality metrics.
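
    The three performance metrics named in this abstract are standard correlation and error measures. A minimal sketch of how they are typically computed against subjective scores follows; the MOS values are invented for illustration and are not the paper's data.

        import numpy as np
        from scipy import stats

        # Hypothetical subjective MOS and model-predicted MOS for six test clips.
        mos_subjective = np.array([4.2, 3.1, 2.5, 4.8, 3.6, 1.9])
        mos_predicted = np.array([4.0, 3.3, 2.2, 4.6, 3.9, 2.1])

        srocc, _ = stats.spearmanr(mos_subjective, mos_predicted)  # monotonicity
        plcc, _ = stats.pearsonr(mos_subjective, mos_predicted)    # linearity
        rmse = np.sqrt(np.mean((mos_subjective - mos_predicted) ** 2))  # accuracy

        print(f"SROCC={srocc:.3f}  PLCC={plcc:.3f}  RMSE={rmse:.3f}")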

  2. A software technology evaluation program

    NASA Technical Reports Server (NTRS)

    Novaes-Card, David N.

    1985-01-01

    A set of quantitative approaches is presented for evaluating software development methods and tools. The basic idea is to generate a set of goals that are refined into quantifiable questions, which in turn specify metrics to be collected on the software development and maintenance process and product. These metrics can be used to characterize, evaluate, predict, and motivate. They can be used in an active as well as a passive way: by learning from analyzing the data and improving the methods and tools based upon what is learned from that analysis. Several examples are given representing each of the different approaches to evaluation. The cost of the approaches varied inversely with the level of confidence in the interpretation of the results.

  3. 15 CFR 1170.5 - Recommendations for agency organization.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... to use of the metric system. (b) Designate a senior policy official to be responsible for agency... the metric system. (e) Provide for internal guidelines, training and documentation to assure employee... Trade (Continued) TECHNOLOGY ADMINISTRATION, DEPARTMENT OF COMMERCE METRIC CONVERSION POLICY FOR FEDERAL...

  4. 15 CFR 1170.5 - Recommendations for agency organization.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... to use of the metric system. (b) Designate a senior policy official to be responsible for agency... the metric system. (e) Provide for internal guidelines, training and documentation to assure employee... Trade (Continued) TECHNOLOGY ADMINISTRATION, DEPARTMENT OF COMMERCE METRIC CONVERSION POLICY FOR FEDERAL...

  5. Prioritizing material recovery for end-of-life printed circuit boards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Xue, E-mail: xxw6590@rit.edu; Gaustad, Gabrielle, E-mail: gabrielle.gaustad@rit.edu

    2012-10-15

    Highlights: • Material recovery driven by composition, choice of ranking, and weighting. • Economic potential for new recycling technologies quantified for several metrics. • Indicators developed for materials incurring high eco-toxicity costs. • Methodology useful for a variety of stakeholders, particularly policy-makers. - Abstract: The increasing growth in generation of electronic waste (e-waste) motivates a variety of waste reduction research. Printed circuit boards (PCBs) are an important sub-set of the overall e-waste stream due to the high value of the materials contained within them and their potential toxicity. This work explores several environmental and economic metrics for prioritizing the recovery of materials from end-of-life PCBs. A weighted sum model is used to investigate the trade-offs among economic value, energy saving potentials, and eco-toxicity. Results show that, given equal weights for these three sustainability criteria, gold has the highest recovery priority, followed by copper, palladium, aluminum, tin, lead, platinum, nickel, zinc, and silver. However, recovery priority changes significantly with variation in the composition of PCBs, the choice of ranking metrics, and the weighting factors used when scoring multiple metrics. These results can be used by waste management decision-makers to quantify the value and environmental savings potential for recycling technology development and infrastructure. They can also be extended by policy-makers to inform possible penalties for land-filling PCBs or exporting them to the informal recycling sector. The importance of weighting factors when examining recovery trade-offs, particularly for policies regarding PCB collection and recycling, is explored further.
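
    The weighted sum model described in this abstract can be sketched in a few lines; the material scores and weights below are illustrative placeholders, not the paper's data, and criterion values are assumed to be pre-normalized to [0, 1].

        # Minimal weighted-sum prioritization sketch. Each criterion score is
        # assumed normalized to [0, 1], where 1 means highest recovery priority.
        materials = {
            # material: (economic value, energy savings, eco-toxicity) -- hypothetical
            "gold":   (1.00, 0.60, 0.70),
            "copper": (0.55, 0.80, 0.50),
            "lead":   (0.10, 0.20, 0.90),
        }
        weights = (1 / 3, 1 / 3, 1 / 3)  # equal weights across the three criteria

        def priority(scores, weights):
            return sum(s * w for s, w in zip(scores, weights))

        ranking = sorted(materials, key=lambda m: priority(materials[m], weights),
                         reverse=True)
        print(ranking)  # ['gold', 'copper', 'lead'] under these illustrative inputs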

  6. Impact of artifact removal on ChIP quality metrics in ChIP-seq and ChIP-exo data

    PubMed Central

    Carroll, Thomas S.; Liang, Ziwei; Salama, Rafik; Stark, Rory; de Santiago, Ines

    2014-01-01

    With the advent of ChIP-seq multiplexing technologies and the subsequent increase in ChIP-seq throughput, the development of working standards for the quality assessment of ChIP-seq studies has received significant attention. The ENCODE consortium's large-scale analysis of transcription factor binding and epigenetic marks, as well as concordant work on ChIP-seq by other laboratories, has established a new generation of ChIP-seq quality control measures. The use of these metrics alongside common processing steps has, however, not been evaluated. In this study, we investigate the effects of blacklisting and removal of duplicated reads on established metrics of ChIP-seq quality and show that the interpretation of these metrics is highly dependent on the ChIP-seq preprocessing steps applied. Further to this, we perform the first investigation of the use of these metrics for ChIP-exo data and make recommendations for the adaptation of the NSC statistic to allow for the assessment of ChIP-exo efficiency. PMID:24782889

  7. The voice of the customer--Part 2: Benchmarking battery chargers against the Consumer's Ideal Product.

    PubMed

    Bauer, S M; Lane, J P; Stone, V I; Unnikrishnan, N

    1998-01-01

    The Rehabilitation Engineering Research Center on Technology Evaluation and Transfer is exploring how the end users of assistive technology devices define the ideal device. This work is called the Consumer Ideal Product program. In this work, end users identify and establish the importance of a broad range of product design features, along with the related product support and service provided by manufacturers and vendors. This paper describes a method for systematically transforming end-user defined requirements into a form that is useful and accessible to product designers, manufacturers, and vendors. In particular, product requirements, importance weightings, and metrics are developed from the Consumer Ideal Product battery charger outcomes. Six battery chargers are benchmarked against these product requirements using the metrics developed. The results suggest improvements for each product's design, service, and support. Overall, the six chargers meet roughly 45-75% of the ideal product's requirements. Many of the suggested improvements are low-cost changes that, if adopted, could provide companies a competitive advantage in the marketplace.

  8. N+3 Small Commercial Efficient and Quiet Transportation for Year 2030-2035

    NASA Technical Reports Server (NTRS)

    DAngelo, Martin M.; Gallman, John; Johnson, Vicki; Garcia, Elena; Tai, Jimmy; Young, Russell

    2010-01-01

    This study develops a future scenario that enables convenient point-to-point commercial air travel via a large network of community airports and a new class of small airliners. A network demand and capacity study identifies current and future air travel demands and the capacity of this new network to satisfy these demands. A current-technology small commercial airliner is defined to meet the needs of the new network, as a baseline for evaluating the improvement brought about by advanced technologies. The impact of this new mode of travel on the infrastructure and surrounding communities of the small airports in this new N+3 network is also evaluated. Year 2030-2035 small commercial airliner technologies are identified, and a trade study is conducted to evaluate and select those with the greatest potential for enhancing future air travel and the study metrics. The selected advanced air vehicle concept is assessed against the baseline aircraft, an advanced but conventional aircraft, and the study metrics. The key technologies of the selected advanced air vehicle are identified, their impact is quantified, and risk assessments and roadmaps are defined.

  9. Toward equality of biodiversity knowledge through technology transfer.

    PubMed

    Böhm, Monika; Collen, Ben

    2015-10-01

    To help stem the continuing decline of biodiversity, effective transfer of technology from resource-rich to biodiversity-rich countries is required. Biodiversity technology as defined by the Convention on Biological Diversity (CBD) is a complex term, encompassing a wide variety of activities and interest groups. As yet, there is no robust framework by which to monitor the extent to which technology transfer might benefit biodiversity. We devised a definition of biodiversity technology and a framework for the monitoring of technology transfer between CBD signatories. Biodiversity technology within the scope of the CBD encompasses hard and soft technologies that are relevant to the conservation and sustainable use of biodiversity, or make use of genetic resources, and that relate to all aspects of the CBD, with a particular focus on technology transfer from resource-rich to biodiversity-rich countries. Our proposed framework introduces technology transfer as a response indicator: technology transfer is increased to stem pressures on biodiversity. We suggest an initial approach of tracking technology flow between countries; charting this flow is likely to be a one-to-many relationship (i.e., the flow of a specific technology from one country to multiple countries). Future developments should then focus on integrating biodiversity technology transfer into the current pressure-state-response indicator framework favored by the CBD (i.e., measuring the influence of technology transfer on changes in state and pressure variables). Structured national reporting is important to obtaining metrics relevant to technology and knowledge transfer. Interim measures that can be used to assess biodiversity technology or knowledge status while more in-depth indicators are being developed include the number of species inventories, threatened species lists, or national red lists; databases on publications and project funding may provide measures of international cooperation. Such a pragmatic approach, followed by rigorous testing of specific technology transfer metrics submitted by CBD signatories in a standardized manner, may in turn improve the focus of future targets on technology transfer for biodiversity conservation. © 2015 Society for Conservation Biology.

  10. 76 FR 61714 - Proposed Collection; Comment Request; STAR METRICS (Science and Technology for America's...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-05

    ... publication. Dated: September 27, 2011. Stefano Bertuzzi, Office of the Director, Office of Science Policy Analysis, Office of Science Policy, National Institutes of Health. [FR Doc. 2011-25732 Filed 10-4-11; 8:45... Request; STAR METRICS (Science and Technology for America's Reinvestment: Measuring the EffecTs of...

  11. 15 CFR 273.6 - Reporting requirement.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... implementation plans, with a current timetable for the agency's transition to the metric system, as well as actions planned for the budget year involved to implement fully the metric system, in accordance with this... OF STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE METRIC CONVERSION POLICY FOR FEDERAL AGENCIES...

  12. The Stretched Lens Array SquareRigger (SLASR) for Space Power

    NASA Technical Reports Server (NTRS)

    Piszczor, Michael F.; O'Neill, Mark J.; Eskenazi, Michael I.; Brandhorst, Henry W.

    2006-01-01

    For the past three years, our team has been developing, refining, and maturing a unique solar array technology known as Stretched Lens Array SquareRigger (SLASR). SLASR offers an unprecedented portfolio of state-of-the-art performance metrics, including areal power density, specific power, stowed power density, high-voltage capability, radiation hardness, modularity, scalability, mass-producibility, and cost-effectiveness. SLASR is particularly well suited to high-power space missions, including solar electric propulsion (SEP) space tugs, major exploration missions to the Moon and Mars, and power-intensive military spacecraft. SLASR is also very well suited to high-radiation missions, since the cell shielding mass penalty is 85% less for the SLASR concentrator array than for one-sun planar arrays. The paper describes SLASR technology and presents significant results of developments to date in a number of key areas, from advances in the key components to full-scale array hardware fabrication and evaluation. A summary of SLASR's unprecedented performance metrics, both near-term and longer term, will be presented. Plans for future SLASR developments and near-term space applications will also be outlined.

  13. Correlating objective and subjective evaluation of texture appearance with applications to camera phone imaging

    NASA Astrophysics Data System (ADS)

    Phillips, Jonathan B.; Coppola, Stephen M.; Jin, Elaine W.; Chen, Ying; Clark, James H.; Mauer, Timothy A.

    2009-01-01

    Texture appearance is an important component of photographic image quality as well as object recognition. Noise cleaning algorithms are used to decrease sensor noise of digital images, but can hinder texture elements in the process. The Camera Phone Image Quality (CPIQ) initiative of the International Imaging Industry Association (I3A) is developing metrics to quantify texture appearance. Objective and subjective experimental results of the texture metric development are presented in this paper. Eight levels of noise cleaning were applied to ten photographic scenes that included texture elements such as faces, landscapes, architecture, and foliage. Four companies (Aptina Imaging, LLC, Hewlett-Packard, Eastman Kodak Company, and Vista Point Technologies) have performed psychophysical evaluations of overall image quality using one of two methods of evaluation. Both methods presented paired comparisons of images on thin film transistor liquid crystal displays (TFT-LCD), but the display pixel pitch and viewing distance differed. CPIQ has also been developing objective texture metrics and targets that were used to analyze the same eight levels of noise cleaning. The correlation of the subjective and objective test results indicates that texture perception can be modeled with an objective metric. The two methods of psychophysical evaluation exhibited high correlation despite the differences in methodology.

  14. Quantifying seascape structure: Extending terrestrial spatial pattern metrics to the marine realm

    USGS Publications Warehouse

    Wedding, L.M.; Christopher, L.A.; Pittman, S.J.; Friedlander, A.M.; Jorgensen, S.

    2011-01-01

    Spatial pattern metrics have routinely been applied to characterize and quantify structural features of terrestrial landscapes and have demonstrated great utility in landscape ecology and conservation planning. The important role of spatial structure in ecology and management is now commonly recognized, and recent advances in marine remote sensing technology have facilitated the application of spatial pattern metrics to the marine environment. However, it is not yet clear whether concepts, metrics, and statistical techniques developed for terrestrial ecosystems are relevant for marine species and seascapes. To address this gap in our knowledge, we reviewed, synthesized, and evaluated the utility and application of spatial pattern metrics in the marine science literature over the past 30 yr (1980 to 2010). In total, 23 studies characterized seascape structure, of which 17 quantified spatial patterns using a 2-dimensional patch-mosaic model and 5 used a continuously varying 3-dimensional surface model. Most seascape studies followed terrestrial-based studies in their search for ecological patterns and applied or modified existing metrics. Only 1 truly unique metric was found (hydrodynamic aperture applied to Pacific atolls). While there are still relatively few studies using spatial pattern metrics in the marine environment, they have suffered from similar misuse as reported for terrestrial studies, such as the lack of a priori considerations or the problem of collinearity between metrics. Spatial pattern metrics offer great potential for ecological research and environmental management in marine systems, and future studies should focus on (1) the dynamic boundary between the land and sea; (2) quantifying 3-dimensional spatial patterns; and (3) assessing and monitoring seascape change. © Inter-Research 2011.
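
    As a concrete illustration of the 2-dimensional patch-mosaic model mentioned above, two of the simplest spatial pattern metrics, patch count and mean patch area, can be derived from a classified binary grid. The toy seascape below is invented for illustration and is not data from the review.

        import numpy as np
        from scipy import ndimage

        # Toy binary seascape: 1 = habitat patch (e.g., seagrass), 0 = background.
        seascape = np.array([
            [1, 1, 0, 0, 0],
            [1, 0, 0, 1, 1],
            [0, 0, 0, 1, 1],
            [0, 1, 0, 0, 0],
        ])

        # Label 4-connected patches, then derive simple patch-mosaic metrics.
        labels, n_patches = ndimage.label(seascape)
        patch_areas = ndimage.sum(seascape, labels, index=range(1, n_patches + 1))

        print(f"number of patches: {n_patches}")                     # 3 here
        print(f"mean patch area:   {patch_areas.mean():.2f} cells")  # 2.67 here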

  15. Four applications of a software data collection and analysis methodology

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Selby, Richard W., Jr.

    1985-01-01

    The evaluation of software technologies suffers because of the lack of quantitative assessment of their effect on software development and modification. A seven-step data collection and analysis methodology couples software technology evaluation with software measurement. Four in-depth applications of the methodology are presented. The four studies represent each of the general categories of analyses on the software product and development process: blocked subject-project studies, replicated project studies, multi-project variation studies, and single project strategies. The four applications are in the areas of, respectively, software testing, cleanroom software development, characteristic software metric sets, and software error analysis.

  16. Subsystem Details for the Fiscal Year 2004 Advanced Life Support Research and Technology Development Metric

    NASA Technical Reports Server (NTRS)

    Hanford, Anthony J.

    2004-01-01

    This document provides values at the assembly level for the subsystems described in the Fiscal Year 2004 Advanced Life Support Research and Technology Development Metric (Hanford, 2004). Hanford (2004) summarizes the subordinate computational values for the Advanced Life Support Research and Technology Development (ALS R&TD) Metric at the subsystem level, while this manuscript provides a summary at the assembly level. Hanford (2004) lists mass, volume, power, cooling, and crewtime for each mission examined by the ALS R&TD Metric according to the nominal organization for the Advanced Life Support (ALS) elements. The values in the tables below, Table 2.1 through Table 2.8, list the assemblies, using the organization and names within the Advanced Life Support Sizing Analysis Tool (ALSSAT) for each ALS element. These tables specifically detail mass, volume, power, cooling, and crewtime. Additionally, mass and volume are designated in terms of values associated with initial hardware and resupplied hardware just as they are within ALSSAT. The overall subsystem values are listed on the line following each subsystem entry. These values are consistent with those reported in Hanford (2004) for each listed mission. Any deviations between these values and those in Hanford (2004) arise from differences in when individual numerical values are rounded within each report, and therefore the resulting minor differences should not concern even a careful reader. Hanford (2004) uses the units kW(sub e) and kW(sub th) for power and cooling, respectively, while the nomenclature below uses W(sub e) and W(sub th), which is consistent with the native units within ALSSAT. The assemblies, as specified within ALSSAT, are listed in bold below their respective subsystems. When recognizable assembly components are not listed within ALSSAT, a summary of the assembly is provided on the same line as the entry for the assembly. Assemblies with one or more recognizable components are further described by the indented entries below them. See Yeh, et al. (2002), Yeh, et al. (2003), and Yeh, et al. (2004) for details about ALSSAT organization. Except for the dry food mass listed within the Food Processing, Packaging, and Storage within the Food Subsystem, total values for assemblies would be the sum of their components. The Dry Food Mass, however, is that portion of the food system that was neglected during the computation of the Fiscal Year 2004 ALS R&TD Metric. It is listed here to provide a reference, but it is otherwise ignored in the overall totals. See Hanford (2004) for details of this process and supporting rationale. When applicable, the technology label from ALSSAT is listed in the second column, and the associated abbreviations are listed below in Section 4. For more details of the technologies assumed for each mission, please see Hanford (2004) for descriptions of each subsystem and an overall life support system schematic.

  17. Life cycle design metrics for energy generation technologies: Method, data, and case study

    NASA Astrophysics Data System (ADS)

    Cooper, Joyce; Lee, Seung-Jin; Elter, John; Boussu, Jeff; Boman, Sarah

    A method to assist in the rapid preparation of Life Cycle Assessments of emerging energy generation technologies is presented and applied to distributed proton exchange membrane fuel cell systems. The method develops life cycle environmental design metrics and allows variations in hardware materials, transportation scenarios, assembly energy use, operating performance and consumables, and fuels and fuel production scenarios to be modeled and comparisons to competing systems to be made. Data and results are based on publicly available U.S. Life Cycle Assessment data sources and are formulated to allow the environmental impact weighting scheme to be specified. A case study evaluates improvements in efficiency and in materials recycling and compares distributed proton exchange membrane fuel cell systems to other distributed generation options. The results reveal the importance of sensitivity analysis and system efficiency in interpreting case studies.

  18. Preparing for a Metric America

    ERIC Educational Resources Information Center

    Willenbrock, F. Karl

    1975-01-01

    In this speech before the Sixtieth National Conference on Weights and Measures, the Director of the Institute for Applied Technology, National Bureau of Standards, presented his view of the imminent conversion to the metric system, outlined some issues related to metrication, and announced appointments related to the change-over. (SD)

  19. 15 CFR 1170.6 - Reporting requirement.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... transition to the metric system, as well as actions planned for the budget year involved to implement fully the metric system, in accordance with this policy. Reporting shall cease for an agency in the fiscal...) TECHNOLOGY ADMINISTRATION, DEPARTMENT OF COMMERCE METRIC CONVERSION POLICY FOR FEDERAL AGENCIES § 1170.6...

  20. 15 CFR 1170.6 - Reporting requirement.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... transition to the metric system, as well as actions planned for the budget year involved to implement fully the metric system, in accordance with this policy. Reporting shall cease for an agency in the fiscal...) TECHNOLOGY ADMINISTRATION, DEPARTMENT OF COMMERCE METRIC CONVERSION POLICY FOR FEDERAL AGENCIES § 1170.6...

  1. 15 CFR 1170.6 - Reporting requirement.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... transition to the metric system, as well as actions planned for the budget year involved to implement fully the metric system, in accordance with this policy. Reporting shall cease for an agency in the fiscal...) TECHNOLOGY ADMINISTRATION, DEPARTMENT OF COMMERCE METRIC CONVERSION POLICY FOR FEDERAL AGENCIES § 1170.6...

  2. 15 CFR 1170.6 - Reporting requirement.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... transition to the metric system, as well as actions planned for the budget year involved to implement fully the metric system, in accordance with this policy. Reporting shall cease for an agency in the fiscal...) TECHNOLOGY ADMINISTRATION, DEPARTMENT OF COMMERCE METRIC CONVERSION POLICY FOR FEDERAL AGENCIES § 1170.6...

  3. 15 CFR 273.5 - Recommendations for agency organization.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... policy guidance for the U.S. Government's transition to use of the metric system. (b) Designate a senior... coordinating National transition to the metric system. (e) Provide for internal guidelines, training and... NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE METRIC CONVERSION POLICY FOR FEDERAL...

  4. Risk-Based Prioritization of Research for Aviation Security Using Logic-Evolved Decision Analysis

    NASA Technical Reports Server (NTRS)

    Eisenhawer, S. W.; Bott, T. F.; Sorokach, M. R.; Jones, F. P.; Foggia, J. R.

    2004-01-01

    The National Aeronautics and Space Administration is developing advanced technologies to reduce terrorist risk for the air transportation system. Decision support tools are needed to help allocate assets to the most promising research. An approach to rank ordering technologies (using logic-evolved decision analysis), with risk reduction as the metric, is presented. The development of a spanning set of scenarios using a logic-gate tree is described. Baseline risk for these scenarios is evaluated with an approximate reasoning model. Illustrative risk and risk reduction results are presented.
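
    Rank ordering technologies with risk reduction as the metric, as described above, amounts to comparing probability-weighted consequences across the spanning set of scenarios with and without each technology. The sketch below uses invented scenario probabilities, consequences, and mitigation factors purely for illustration.

        # Baseline risk: probability-weighted consequence summed over scenarios.
        scenarios = [  # (probability per year, consequence in arbitrary loss units)
            (0.010, 100.0),
            (0.002, 500.0),
            (0.050, 10.0),
        ]
        baseline_risk = sum(p * c for p, c in scenarios)

        # Hypothetical per-scenario likelihood reduction factors per technology
        # (1.0 = no effect on that scenario).
        technologies = {
            "hardened flight deck": (0.5, 0.9, 1.0),
            "enhanced screening":   (0.9, 0.3, 0.8),
        }
        for name, factors in technologies.items():
            mitigated = sum(f * p * c for f, (p, c) in zip(factors, scenarios))
            print(f"{name}: risk reduction = {baseline_risk - mitigated:.2f}")
        # Technologies are then rank ordered by their risk reduction values.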

  5. Development of the silane process for the production of low-cost polysilicon

    NASA Technical Reports Server (NTRS)

    Iya, S. K.

    1986-01-01

    It was recognized that the traditional hot rod type deposition process for decomposing silane is energy intensive, and a different approach for converting silane to silicon was chosen. A 1200 metric tons/year capacity commercial plant was constructed in Moses Lake, Washington. A fluidized bed processor was chosen as the most promising technology and several encouraging test runs were conducted. This technology continues to be very promising in producing low cost polysilicon. The Union Carbide silane process and the research development on the fluidized bed silane decomposition are discussed.

  6. ALSSAT Development Status

    NASA Technical Reports Server (NTRS)

    Yeh, H. Y. Jannivine; Brown, Cheryl B.; Jeng, Frank F.; Anderson, Molly; Ewert, Michael K.

    2009-01-01

    The development of the Advanced Life Support (ALS) Sizing Analysis Tool (ALSSAT) using Microsoft® Excel was initiated by the Crew and Thermal Systems Division (CTSD) of Johnson Space Center (JSC) in 1997 to support the ALS and Exploration Offices in Environmental Control and Life Support System (ECLSS) design and studies. It aids the user in performing detailed sizing of the ECLSS for different combinations of the Exploration Life Support (ELS) regenerative system technologies. This analysis tool assists the user in performing ECLSS preliminary design and trade studies as well as system optimization efficiently and economically. The latest ALSSAT-related publication, in ICES 2004, detailed ALSSAT's development status, including the completion of all six ELS subsystems (ELSS), namely the Air Management Subsystem, the Biomass Subsystem, the Food Management Subsystem, the Solid Waste Management Subsystem, the Water Management Subsystem, and the Thermal Control Subsystem, and two external interfaces, the Extravehicular Activity and the Human Accommodations. Since 2004, many more regenerative technologies in the ELSS have been implemented in ALSSAT. ALSSAT has also been used for the ELS Research and Technology Development Metric calculation for FY02 through FY06. It was also used to conduct the Lunar Outpost Metric calculation for FY08 and was integrated as part of a Habitat Model developed at Langley Research Center to support the Constellation program. This paper gives an update on the analysis tool's current development status and presents the analytical results of one of the trade studies that was performed.

  7. Results of SEI Independent Research and Development Projects and Report on Emerging Technologies and Technology Trends

    DTIC Science & Technology

    2005-12-01

    [Only fragments of this report's front matter were captured; the recoverable keywords indicate IR&D projects addressing operability, sustainability, software R&D, metrics for acquisition, acquisition management, and commercial off-the-shelf products.]

  8. Geothermal Exploration Cost and Time

    DOE Data Explorer

    Jenne, Scott

    2013-02-13

    The Department of Energy’s Geothermal Technology Office (GTO) provides RD&D funding for geothermal exploration technologies with the goal of lowering the risks and costs of geothermal development and exploration. The National Renewable Energy Laboratory (NREL) was tasked with developing a metric in 2012 to measure the impacts of this RD&D funding on the cost and time required for exploration activities. The development of this cost and time metric included collecting cost and time data for exploration techniques, creating a baseline suite of exploration techniques to which future exploration cost and time improvements can be compared, and developing an online tool for graphically showing potential project impacts (all available at http://en.openei.org/wiki/Gateway: Geothermal). This paper describes the methodology used to define the baseline exploration suite of techniques (baseline), as well as the approach that was used to create the cost and time data set that populates the baseline. The resulting product, an online tool for measuring impact, and the aggregated cost and time data are available on the Open Energy Information website (OpenEI, http://en.openei.org) for public access. - Published 01/01/2013 by US National Renewable Energy Laboratory NREL.

  9. Quantifying innovation in surgery.

    PubMed

    Hughes-Hallett, Archie; Mayer, Erik K; Marcus, Hani J; Cundy, Thomas P; Pratt, Philip J; Parston, Greg; Vale, Justin A; Darzi, Ara W

    2014-08-01

    The objectives of this study were to assess the applicability of patents and publications as metrics of surgical technology and innovation; evaluate the historical relationship between patents and publications; and develop a methodology that can be used to determine the rate of innovation growth in any given health care technology. The study of health care innovation represents an emerging academic field, yet it is limited by a lack of valid scientific methods for quantitative analysis. This article explores and cross-validates 2 innovation metrics using surgical technology as an exemplar. Electronic patenting databases and the MEDLINE database were searched between 1980 and 2010 for "surgeon" OR "surgical" OR "surgery." Resulting patent codes were grouped into technology clusters. Growth curves were plotted for these technology clusters to establish the rate and characteristics of growth. The initial search retrieved 52,046 patents and 1,801,075 publications. The top performing technology cluster of the last 30 years was minimally invasive surgery. Robotic surgery, surgical staplers, and image guidance were the most emergent technology clusters. The growth curves for these clusters were found to follow an S-shaped pattern, with the emergent technologies lying on the exponential phases of their respective growth curves. In addition, publication and patent counts were closely correlated in areas of technology expansion. This article demonstrates the utility of publicly available patent and publication data to quantify innovations within surgical technology and proposes a novel methodology for assessing and forecasting areas of technological innovation.
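
    The S-shaped growth pattern reported here is commonly modeled with a logistic function fit to cumulative counts. A minimal curve-fitting sketch follows; the synthetic patent counts and parameter values are invented for illustration and do not reproduce the study's data.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, k, r, t0):
            """Logistic (S-shaped) growth: ceiling k, growth rate r, midpoint t0."""
            return k / (1.0 + np.exp(-r * (t - t0)))

        # Synthetic cumulative patent counts for a hypothetical technology cluster.
        years = np.arange(1980, 2011, dtype=float)
        rng = np.random.default_rng(0)
        counts = logistic(years, 5000.0, 0.35, 1998.0) + rng.normal(0.0, 50.0, years.size)

        (k, r, t0), _ = curve_fit(logistic, years, counts, p0=(counts.max(), 0.1, 1995.0))
        print(f"ceiling={k:.0f}, rate={r:.3f}, midpoint year={t0:.1f}")
        # An emergent cluster still on its exponential phase has a midpoint that
        # lies in the future relative to the observation window.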

  10. Analysis of Alternatives in System Capability Satisficing for Effective Acquisition

    DTIC Science & Technology

    2011-04-30

    One can view this metric, ITRLfk(i), as a "subsystem" measurement of how well a given technology integrates within the system. This new metric should be a function of the different ITRLs of each technology, or in a mathematical representation: SRL_Cfk = f(ITRLfk(1), ..., ITRLfk(n)).

  11. Using Multi-Core Systems for Rover Autonomy

    NASA Technical Reports Server (NTRS)

    Clement, Brad; Estlin, Tara; Bornstein, Benjamin; Springer, Paul; Anderson, Robert C.

    2010-01-01

    Task objectives are: (1) Develop and demonstrate key capabilities for rover long-range science operations using multi-core computing: (a) adapt three rover technologies to execute on a SOA multi-core processor; (b) illustrate performance improvements achieved; (c) demonstrate adapted capabilities with rover hardware. (2) Target three high-level autonomy technologies: (a) two for onboard data analysis; (b) one for onboard command sequencing/planning. (3) Technologies identified as enabling for future missions. (4) Benefits will be measured along several metrics: (a) execution time / power requirements; (b) number of data products processed per unit time; (c) solution quality.

  12. Technology Transfer

    NASA Technical Reports Server (NTRS)

    Smith, Nanette R.

    1995-01-01

    The objective of this summer's work was to enhance the Technology Application Group's (TAG) ability to measure the outcomes of its efforts to transfer NASA technology. By reviewing existing literature, by explaining the economic principles involved in evaluating the economic impact of technology transfer, and by investigating the LaRC processes, our William & Mary team has been able to lead this important discussion. In reviewing the existing literature, we identified many of the metrics currently being used in the area of technology transfer. Learning about the LaRC technology transfer processes and the metrics currently used to track the transfer process enabled us to compare other R&D facilities to LaRC. We discuss and diagram the impacts of technology transfer in the short run and the long run. Significantly, this serves as the basis for analysis and provides guidance in thinking about what the measurement objectives ought to be. By focusing on the SBIR Program, valuable information regarding the strengths and weaknesses of this LaRC program can be gained. A survey was developed to ask probing questions regarding SBIR contractors' experience with the program. Specifically, we are interested in finding out whether the SBIR Program is accomplishing its mission, whether the SBIR companies are providing the needed innovations specified by NASA, and to what extent those innovations have led to commercial success. We also developed a survey to ask COTRs, NASA employees acting as technical advisors to the SBIR contractors, the same type of questions, evaluating the successes and problems with the SBIR Program as they see them. This survey was developed to be implemented interactively on computer. It is our hope that the statistical and econometric studies that can be done on the data collected from all of these sources will provide insight regarding the direction to take in developing systematic evaluations of programs like the SBIR Program so that they can reach their maximum effectiveness.

  13. Parametric Cost Analysis: A Design Function

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1989-01-01

    Parametric cost analysis uses equations to map measurable system attributes into cost. The measures of the system attributes are called metrics. The equations are called cost estimating relationships (CERs), and are obtained by the analysis of cost and technical metric data of products analogous to those to be estimated. Examples of system metrics include mass, power, failure_rate, mean_time_to_repair, energy_consumed, payload_to_orbit, pointing_accuracy, manufacturing_complexity, number_of_fasteners, and percent_of_electronics_weight. The basic assumption is that a measurable relationship exists between system attributes and the cost of the system. If such a function exists, the attributes are cost drivers. Candidates for metrics include system requirement metrics and engineering process metrics. Requirements are constraints on the engineering process. From optimization theory we know that any active constraint generates cost by not permitting full optimization of the objective. Thus, requirements are cost drivers. Engineering processes reflect a projection of the requirements onto the corporate culture, engineering technology, and system technology. Engineering processes are an indirect measure of the requirements and, hence, are cost drivers.
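
    A common functional form for a CER is a power law, cost = a * metric^b, fit by least squares in log-log space over data from analogous products. The sketch below uses invented mass/cost pairs; the power-law form is standard practice in parametric costing rather than something prescribed by this paper.

        import numpy as np

        # Hypothetical analogous-product data: mass (kg) versus cost ($M).
        mass = np.array([100.0, 250.0, 400.0, 800.0, 1500.0])
        cost = np.array([12.0, 24.0, 35.0, 60.0, 100.0])

        # Fit cost = a * mass^b  <=>  log(cost) = log(a) + b * log(mass).
        b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
        a = np.exp(log_a)
        print(f"CER: cost = {a:.3f} * mass^{b:.3f}")

        # Apply the fitted CER to estimate the cost of a new 600 kg system.
        print(f"estimated cost at 600 kg: ${a * 600.0 ** b:.1f}M")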

  14. Accuracy optimization with wavelength tunability in overlay imaging technology

    NASA Astrophysics Data System (ADS)

    Lee, Honggoo; Kang, Yoonshik; Han, Sangjoon; Shim, Kyuchan; Hong, Minhyung; Kim, Seungyoung; Lee, Jieun; Lee, Dongyoung; Oh, Eungryong; Choi, Ahlin; Kim, Youngsik; Marciano, Tal; Klein, Dana; Hajaj, Eitan M.; Aharon, Sharon; Ben-Dov, Guy; Lilach, Saltoun; Serero, Dan; Golotsvan, Anna

    2018-03-01

    As semiconductor manufacturing technology progresses and the dimensions of integrated circuit elements shrink, the overlay budget is accordingly being reduced. The overlay budget closely approaches the scale of measurement inaccuracies due to both optical imperfections of the measurement system and the interaction of light with geometrical asymmetries of the measured targets. Measurement inaccuracies can no longer be ignored due to their significant effect on the resulting device yield. In this paper we investigate a new approach for imaging based overlay (IBO) measurements by optimizing accuracy rather than contrast precision, including its effect on total target performance, using wavelength tunable overlay imaging metrology. We present new accuracy metrics based on theoretical development and assess their quality in identifying the measurement accuracy when compared to CD-SEM overlay measurements. The paper presents the theoretical considerations and simulation work, as well as measurement data, for which tunability combined with the new accuracy metrics is shown to improve accuracy performance.

  15. Warp Field Mechanics 101

    NASA Technical Reports Server (NTRS)

    White, Harold

    2011-01-01

    This paper begins with a short review of the Alcubierre warp drive metric and describes how the phenomenon might work based on the original paper. The canonical form of the metric was developed and published in [6], which provided key insight into the field potential and boost for the field and remedied a critical paradox in the original Alcubierre concept of operations. A modified concept of operations based on the canonical form of the metric that remedies the paradox is presented and discussed. The idea of a warp drive in higher dimensional space-time (manifold) is then briefly considered by comparing the null-like geodesics of the Alcubierre metric to those of the Chung-Freese metric to illustrate the mathematical role of hyperspace coordinates. The net effect of using a warp drive technology coupled with conventional propulsion systems on an exploration mission is discussed using the nomenclature of early mission planning. Finally, an overview of the warp field interferometer test bed being implemented in the Advanced Propulsion Physics Laboratory: Eagleworks (APPL:E) at the Johnson Space Center is detailed. While warp field mechanics has not had a Chicago Pile moment, the tools necessary to detect a modest instance of the phenomenon are near at hand.
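
    For reference, the Alcubierre metric reviewed in this paper is conventionally written (in units with c = 1, with v_s(t) the velocity of the warp bubble center x_s(t) and f(r_s) the shaping function that confines the distortion to the bubble wall) as:

        \[
          ds^{2} = -dt^{2} + \bigl(dx - v_{s}\, f(r_{s})\, dt\bigr)^{2} + dy^{2} + dz^{2},
          \qquad v_{s}(t) = \frac{dx_{s}(t)}{dt}
        \]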

  16. A range/depth modulation transfer function (RMTF) framework for characterizing 3D imaging LADAR performance

    NASA Astrophysics Data System (ADS)

    Staple, Bevan; Earhart, R. P.; Slaymaker, Philip A.; Drouillard, Thomas F., II; Mahony, Thomas

    2005-05-01

    3D imaging LADARs have emerged as the key technology for producing high-resolution imagery of targets in 3-dimensions (X and Y spatial, and Z in the range/depth dimension). Ball Aerospace & Technologies Corp. continues to make significant investments in this technology to enable critical NASA, Department of Defense, and national security missions. As a consequence of rapid technology developments, two issues have emerged that need resolution. First, the terminology used to rate LADAR performance (e.g., range resolution) is inconsistently defined, is improperly used, and thus has become misleading. Second, the terminology does not include a metric of the system's ability to resolve the 3D depth features of targets. These two issues create confusion when translating customer requirements into hardware. This paper presents a candidate framework for addressing these issues. To address the consistency issue, the framework utilizes only those terminologies proposed and tested by leading LADAR research and standards institutions. We also provide suggestions for strengthening these definitions by linking them to the well-known Rayleigh criterion extended into the range dimension. To address the inadequate 3D image quality metrics, the framework introduces the concept of a Range/Depth Modulation Transfer Function (RMTF). The RMTF measures the impact of the spatial frequencies of a 3D target on its measured modulation in range/depth. It is determined using a new, Range-Based, Slanted Knife-Edge test. We present simulated results for two LADAR pulse detection techniques and compare them to a baseline centroid technique. Consistency in terminology plus a 3D image quality metric enable improved system standardization.

  17. Development of the McGill simulator for endoscopic sinus surgery: a new high-fidelity virtual reality simulator for endoscopic sinus surgery.

    PubMed

    Varshney, Rickul; Frenkiel, Saul; Nguyen, Lily H P; Young, Meredith; Del Maestro, Rolando; Zeitouni, Anthony; Tewfik, Marc A

    2014-01-01

    The technical challenges of endoscopic sinus surgery (ESS) and the high risk of complications support the development of alternative modalities to train residents in these procedures. Virtual reality simulation is becoming a useful tool for training the skills necessary for minimally invasive surgery; however, there are currently no ESS virtual reality simulators available with valid evidence supporting their use in resident education. Our aim was to develop a new rhinology simulator, as well as to define potential performance metrics for trainee assessment. The McGill simulator for endoscopic sinus surgery (MSESS), a new sinus surgery virtual reality simulator with haptic feedback, was developed (a collaboration between the McGill University Department of Otolaryngology-Head and Neck Surgery, the Montreal Neurologic Institute Simulation Lab, and the National Research Council of Canada). A panel of experts in education, performance assessment, rhinology, and skull base surgery convened to identify core technical abilities that would need to be taught by the simulator, as well as performance metrics to be developed and captured. The MSESS allows the user to perform basic sinus surgery skills, such as an ethmoidectomy and sphenoidotomy, through the use of endoscopic tools in a virtual nasal model. The performance metrics were developed by an expert panel and include measurements of safety, quality, and efficiency of the procedure. The MSESS incorporates novel technological advancements to create a realistic platform for trainees. To our knowledge, this is the first simulator to combine novel tools such as the endonasal wash and elaborate anatomic deformity with advanced performance metrics for ESS.

  18. Development of Metrics for Trust in Automation

    DTIC Science & Technology

    2010-06-01

    [Only fragments were captured: reference-list entries (Ajzen & Fishbein, 1980; Moray, Inagaki & Itoh, 2000, on adaptive automation, trust, and self-confidence) and a passage noting that in the ... Assurance Technical Framework document (2000) the term "trust" is used 352 times, ranging from reference to the trustworthiness of technology, to ...]

  19. Technology readiness levels for the new millennium program

    NASA Technical Reports Server (NTRS)

    Moynihan, P. I.; Minning, C. P.; Stocky, J. F.

    2003-01-01

    NASA's New Millennium Program (NMP) seeks to advance space exploration by providing an in-space validating mechanism to verify the maturity of promising advanced technologies that cannot be adequately validated with Earth-based testing alone. In meeting this objective, NMP uses NASA Technology Readiness Levels (TRL) as key indicators of technology advancement and assesses development progress against this generalized metric. By providing an opportunity for in-space validation, NMP can mature a suitable advanced technology from TRL 4 (component and/or breadboard validation in laboratory environment) to a TRL 7 (system prototype demonstrated in an Earth-based space environment). Spaceflight technology comprises a myriad of categories, types, and functions, and as each individual technology emerges, a consistent interpretation of its specific state of technological advancement relative to other technologies is problematic.

  20. Adaptive Acquisitions: Maintaining Military Dominance By Managing Innovation

    DTIC Science & Technology

    2014-04-01

    ... for the relatively unknown disruptive technologies, even for the technical experts. For example, in the early years of rocket research Jerome Hunsaker ... improve along existing performance metrics. Since disruptive technologies generally underperform along these old value metrics, customers tend to ... since the actual value of the innovation is difficult, if not impossible, to determine a priori. In fact, most of the claimed potential disruptive ...

  1. Structured Innovation of High-Performance Wave Energy Converter Technology: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weber, Jochem W.; Laird, Daniel

    Wave energy converter (WEC) technology development has not yet delivered the desired commercial maturity nor, more importantly, the desired techno-economic performance. The reasons for this have been recognized, and fundamental requirements for successful WEC technology development have been identified. This paper describes a multi-year project pursued in collaboration by the National Renewable Energy Laboratory and Sandia National Laboratories to innovate and develop new WEC technology. It specifies the project strategy, shows how this differs from the state-of-the-art approach, and presents some early project results. Based on the specification of fundamental functional requirements of WEC technology, structured innovation and systemic problem-solving methodologies are applied to invent and identify new WEC technology concepts. Using Technology Performance Levels (TPL) as an assessment metric of techno-economic performance potential, high-performance technology concepts are identified and selected for further development. System performance is numerically modelled and optimized, and key performance aspects are empirically validated. The project deliverables are WEC technology specifications of high techno-economic performance technologies of TPL 7 or higher at TRL 3, with some key technology challenges investigated at higher TRL. These wave energy converter technology specifications will be made available to industry for further, full development and commercialisation (TRL 4 - TRL 9).

  2. Advanced UVOIR Mirror Technology Development (AMTD) for Very Large Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Smith, W. Scott; Mosier, Gary; Abplanalp, Laura; Arnold, William

    2014-01-01

    The ASTRO2010 Decadal Survey stated that an advanced large-aperture ultraviolet, optical, near-infrared (UVOIR) telescope is required to enable the next generation of compelling astrophysics and exoplanet science, and that present technology is not mature enough to affordably build and launch any potential UVOIR mission concept. AMTD builds on the state of the art (SOA) defined by over 30 years of monolithic & segmented ground & space-telescope mirror technology to mature six key technologies. AMTD is deliberately pursuing multiple design paths to provide the science community with options to enable either large aperture monolithic or segmented mirrors with clear engineering metrics traceable to science requirements.

  3. Measuring Cognitive Load and Cognition: Metrics for Technology-Enhanced Learning

    ERIC Educational Resources Information Center

    Martin, Stewart

    2014-01-01

    This critical and reflective literature review examines international research published over the last decade to summarise the different kinds of measures that have been used to explore cognitive load and critiques the strengths and limitations of those focussed on the development of direct empirical approaches. Over the last 40 years, cognitive…

  4. Earned Value Management Considering Technical Readiness Level and Its Application to New Space Launcher Program

    NASA Astrophysics Data System (ADS)

    Choi, Young-In; Ahn, Jaemyung

    2018-04-01

    Earned value management (EVM) is a methodology for monitoring and controlling the performance of a project based on a comparison between planned and actual cost and schedule. This study proposes a concept of hybrid earned value management (H-EVM) that integrates the traditional EVM metrics with information on the technology readiness level. The proposed concept reflects the progress of a project in a sensitive way and provides a short-term perspective complementary to the traditional EVM metrics. A two-dimensional visualization of the cost/schedule status of a project, reflecting both the traditional EVM (long-term perspective) and the proposed H-EVM (short-term perspective) indices, is introduced. A case study on the management of a new space launch vehicle development program is conducted to demonstrate the effectiveness of the proposed H-EVM concept, associated metrics, and the visualization technique.
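
    For context, the traditional EVM metrics that H-EVM builds on reduce to a handful of standard variances and indices computed from planned value, earned value, and actual cost. A minimal sketch follows; the program snapshot numbers are invented, and the TRL weighting that defines H-EVM itself is the paper's contribution and is not reproduced here.

        def evm_indices(pv, ev, ac):
            """Traditional EVM indices from planned value (pv),
            earned value (ev), and actual cost (ac)."""
            return {
                "SV": ev - pv,    # schedule variance
                "CV": ev - ac,    # cost variance
                "SPI": ev / pv,   # schedule performance index (<1 = behind schedule)
                "CPI": ev / ac,   # cost performance index (<1 = over budget)
            }

        # Hypothetical development-program snapshot ($M).
        print(evm_indices(pv=120.0, ev=100.0, ac=130.0))
        # SPI ~ 0.83 (behind schedule), CPI ~ 0.77 (over budget)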

  5. Crew and Thermal Systems Division Strategic Communications Initiatives in Support of NASA's Strategic Goals: Fiscal Year 2012 Summary and Initial Fiscal Year 2013 Metrics

    NASA Technical Reports Server (NTRS)

    Paul, Heather L.

    2013-01-01

    The NASA strategic plan includes overarching strategies to inspire students through interactions with NASA people and projects, and to expand partnerships with industry and academia around the world. The NASA Johnson Space Center Crew and Thermal Systems Division (CTSD) actively supports these NASA initiatives. At the end of fiscal year 2011, CTSD created a strategic communications team to communicate CTSD capabilities, technologies, and personnel to internal NASA and external technical audiences for collaborative and business development initiatives, and to students, educators, and the general public for education and public outreach efforts. The strategic communications initiatives implemented in fiscal year 2012 resulted in 707 in-reach, outreach, and commercialization events with 39,731 participant interactions. This paper summarizes the CTSD Strategic Communications metrics for fiscal year 2012 and provides metrics for the first nine months of fiscal year 2013.

  6. Evaluation of a Tactical Surface Metering Tool for Charlotte Douglas International Airport via Human-in-the-Loop Simulation

    NASA Technical Reports Server (NTRS)

    Verma, Savita; Lee, Hanbong; Dulchinos, Victoria L.; Martin, Lynne; Stevens, Lindsay; Jung, Yoon; Chevalley, Eric; Jobe, Kim; Parke, Bonny

    2017-01-01

    NASA has been working with the FAA and aviation industry partners to develop and demonstrate new concepts and technologies that integrate arrival, departure, and surface traffic management capabilities. In March 2017, NASA conducted a human-in-the-loop (HITL) simulation for integrated surface and airspace operations, modeling Charlotte Douglas International Airport, to evaluate the operational procedures and information requirements for the tactical surface metering tool, and data exchange elements between the airline controlled ramp and ATC Tower. In this paper, we focus on the calibration of the tactical surface metering tool using various metrics measured from the HITL simulation results. Key performance metrics include gate hold times from pushback advisories, taxi-in-out times, runway throughput, and departure queue size. Subjective metrics presented in this paper include workload, situational awareness, and acceptability of the metering tool and its calibration.

  7. Evaluation of a Tactical Surface Metering Tool for Charlotte Douglas International Airport Via Human-in-the-Loop Simulation

    NASA Technical Reports Server (NTRS)

    Verma, Savita; Lee, Hanbong; Martin, Lynne; Stevens, Lindsay; Jung, Yoon; Dulchinos, Victoria; Chevalley, Eric; Jobe, Kim; Parke, Bonny

    2017-01-01

    NASA has been working with the FAA and aviation industry partners to develop and demonstrate new concepts and technologies that integrate arrival, departure, and surface traffic management capabilities. In March 2017, NASA conducted a human-in-the-loop (HITL) simulation for integrated surface and airspace operations, modeling Charlotte Douglas International Airport, to evaluate the operational procedures and information requirements for the tactical surface metering tool, and data exchange elements between the airline controlled ramp and ATC Tower. In this paper, we focus on the calibration of the tactical surface metering tool using various metrics measured from the HITL simulation results. Key performance metrics include gate hold times from pushback advisories, taxi-in/out times, runway throughput, and departure queue size. Subjective metrics presented in this paper include workload, situational awareness, and acceptability of the metering tool and its calibration.

  8. CONTACT: An Air Force technical report on military satellite control technology

    NASA Astrophysics Data System (ADS)

    Weakley, Christopher K.

    1993-07-01

    This technical report focuses on Military Satellite Control Technologies and their application to the Air Force Satellite Control Network (AFSCN). This report is a compilation of articles that provide an overview of the AFSCN and the Advanced Technology Program, and discusses relevant technical issues and developments applicable to the AFSCN. Among the topics covered are articles on Future Technology Projections; Future AFSCN Topologies; Modeling of the AFSCN; Wide Area Communications Technology Evolution; Automating AFSCN Resource Scheduling; Health & Status Monitoring at Remote Tracking Stations; Software Metrics and Tools for Measuring AFSCN Software Performance; Human-Computer Interface Working Group; Trusted Systems Workshop; and the University Technical Interaction Program. In addition, Key Technology Area points of contact are listed in the report.

  9. An Abstract Process and Metrics Model for Evaluating Unified Command and Control: A Scenario and Technology Agnostic Approach

    DTIC Science & Technology

    2004-06-01

    [No abstract available; the indexed text consists of table-of-contents fragments from the report, covering EBO cognitive and memetic input types, memetic effects-based courses of action (COA), policy, and belief-system (memetic) content metrics.]

  10. Multi-Metric Sustainability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cowlin, Shannon; Heimiller, Donna; Macknick, Jordan

    2014-12-01

    A readily accessible framework that allows for evaluating impacts and comparing tradeoffs among factors in energy policy, expansion planning, and investment decision making is lacking. Recognizing this, the Joint Institute for Strategic Energy Analysis (JISEA) funded an exploration of multi-metric sustainability analysis (MMSA) to provide energy decision makers with a means to make more comprehensive comparisons of energy technologies. The resulting MMSA tool lets decision makers simultaneously compare technologies and potential deployment locations.

  11. Can Technology Improve the Quality of Colonoscopy?

    PubMed

    Thirumurthi, Selvi; Ross, William A; Raju, Gottumukkala S

    2016-07-01

    In order for screening colonoscopy to be an effective tool in reducing colon cancer incidence, exams must be performed in a high-quality manner. Quality metrics have been presented by gastroenterology societies and now include higher adenoma detection rate targets than in the past. The quality of colonoscopy can often be improved with simple low-cost interventions such as improved procedure technique, implementing split-dose bowel prep, and monitoring individual performance. Emerging technology has expanded our field of view and image quality during colonoscopy. We critically review several technological advances in the context of quality metrics and discuss whether technology can really improve the quality of colonoscopy.
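
    For context, the adenoma detection rate (ADR) behind the targets mentioned above is the share of screening colonoscopies that find at least one adenoma. A minimal sketch with hypothetical counts:

    ```python
    # Adenoma detection rate (ADR): the share of screening colonoscopies in
    # which at least one adenoma is found. Counts below are hypothetical.
    exams_with_adenoma = 142
    screening_exams = 450

    adr = exams_with_adenoma / screening_exams
    print(f"ADR = {adr:.1%}")  # 31.6%; society benchmarks are on the order of 25%
    ```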

  12. Saturn Apollo Program

    NASA Image and Video Library

    1971-01-01

    This 1968 cutaway drawing illustrates the Saturn IB launch vehicle with its two booster stages, the S-IB (first stage) and S-IVB (second stage), and provides the vital statistics in metric units. Developed by the Marshall Space Flight Center (MSFC) as an interim vehicle in MSFC's "building block" approach to the Saturn rocket development, the Saturn IB utilized Saturn I technology to further develop and refine the larger boosters and the Apollo spacecraft capabilities required for the manned lunar missions.

  13. A Framework for Assessment of Aviation Safety Technology Portfolios

    NASA Technical Reports Server (NTRS)

    Jones, Sharon M.; Reveley, Mary S.

    2014-01-01

    The programs within NASA's Aeronautics Research Mission Directorate (ARMD) conduct research and development to improve the national air transportation system so that Americans can travel as safely as possible. NASA aviation safety systems analysis personnel support various levels of ARMD management in their fulfillment of system analysis and technology prioritization as defined in the agency's program and project requirements. This paper provides a framework for the assessment of aviation safety research and technology portfolios that includes metrics such as projected impact on current and future safety, technical development risk and implementation risk. The paper also contains methods for presenting portfolio analysis and aviation safety Bayesian Belief Network (BBN) output results to management using bubble charts and quantitative decision analysis techniques.

  14. Reliability and Productivity Modeling for the Optimization of Separated Spacecraft Interferometers

    NASA Technical Reports Server (NTRS)

    Kenny, Sean (Technical Monitor); Wertz, Julie

    2002-01-01

    As technological systems grow in capability, they also grow in complexity. Due to this complexity, it is no longer possible for a designer to use engineering judgement to identify the components that have the largest impact on system life cycle metrics, such as reliability, productivity, cost, and cost effectiveness. One way of identifying these key components is to build quantitative models and analysis tools that can be used to aid the designer in making high level architecture decisions. Once these key components have been identified, two main approaches to improving a system using these components exist: add redundancy or improve the reliability of the component. In reality, the most effective approach to almost any system will be some combination of these two approaches, in varying orders of magnitude for each component. Therefore, this research tries to answer the question of how to divide funds, between adding redundancy and improving the reliability of components, to most cost effectively improve the life cycle metrics of a system. While this question is relevant to any complex system, this research focuses on one type of system in particular: Separated Spacecraft Interferometers (SSI). Quantitative models are developed to analyze the key life cycle metrics of different SSI system architectures. Next, tools are developed to compare a given set of architectures in terms of total performance, by coupling different life cycle metrics together into one performance metric. Optimization tools, such as simulated annealing and genetic algorithms, are then used to search the entire design space to find the "optimal" architecture design. Sensitivity analysis tools have been developed to determine how sensitive the results of these analyses are to uncertain user defined parameters. Finally, several possibilities for future work in this area of research are presented.
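
    The redundancy-versus-reliability tradeoff posed above can be made concrete with the standard parallel-redundancy formula. A minimal sketch with hypothetical numbers (the thesis couples such calculations to cost and productivity models):

    ```python
    # Minimal sketch of the redundancy-vs-reliability tradeoff described above.
    # All numbers are hypothetical; a real analysis would tie each option to
    # its cost and to system-level life cycle metrics.

    def parallel_reliability(r: float, n: int) -> float:
        """Reliability of n identical components in parallel (any one suffices)."""
        return 1.0 - (1.0 - r) ** n

    r = 0.90  # baseline component reliability (assumed)

    # Option A: add one redundant unit at the same reliability.
    option_a = parallel_reliability(r, 2)      # 0.99

    # Option B: spend the same funds improving the single component to r = 0.97.
    option_b = parallel_reliability(0.97, 1)   # 0.97

    print(f"redundancy: {option_a:.3f}, improved component: {option_b:.3f}")
    ```

    Which option wins depends on what each path costs, which is exactly the fund-allocation question the thesis poses.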

  15. Coal upgrading program for Usti nad Labem, Czech Republic: Task 8.3. Topical report, October 1994--August 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, B.C.; Musich, M.A.

    1995-10-01

    Coal has been a major energy source in the Czech Republic given its large coal reserves, especially brown coal and lignite (almost 4000 million metric tons) and smaller reserves of hard, mainly bituminous, coal (over 800 million tons). Political changes since 1989 have led to the reassessment of the role of coal in the future economy as increasing environmental regulations affect the use of the high-sulfur and high-ash brown coal and lignite as well as the high-ash hard coal. Already, the production of brown coal has declined from 87 million metric tons per year in 1989 to 67 million metric tons in 1993 and is projected to decrease further to 50 million metric tons per year of brown coal by the year 2000. As a means of effectively utilizing its indigenous coal resources, the Czech Republic is upgrading various technologies, and these are available at different stages of development, demonstration, and commercialization. The purpose of this review is to provide a database of information on applicable technologies that reduce the impact of gaseous (SO2, NOx, volatile organic compounds) and particulate emissions from the combustion of coal in district and residential heating systems.

  16. USE OF LANDSCAPE SCIENCE FOR ENVIRONMENTAL ASSESSMENT PILOT STUDY

    EPA Science Inventory

    Landscape metrics or indicators are calculated by combining various scientific databases using technologies from geographic information systems. These metrics facilitate the understanding that events that might occur in one ecosystem or resource can affect the conditions of many ...

  17. Accelerating Ground-Test Cycle Time: The Six-Minute Model Change and Other Visions for the 21st Century

    NASA Technical Reports Server (NTRS)

    Kegelman, Jerome T.

    1998-01-01

    The advantage of managing organizations to minimize product development cycle time has been well established. This paper provides an overview of the wind tunnel testing cycle time reduction activities at Langley Research Center (LaRC) and gives the status of several improvements in the wind tunnel productivity and cost reductions that have resulted from these activities. Processes have been examined and optimized. Metric data from monitoring processes provides guidance for investments in advanced technologies. The most promising technologies under implementation today include the use of formally designed experiments, a diverse array of quick disconnect technology and the judicious use of advanced electronic and information technologies.

  18. A Psychoacoustic Evaluation of Noise Signatures from Advanced Civil Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Christian, Andrew

    2016-01-01

    The NASA Environmentally Responsible Aviation project has been successful in developing and demonstrating technologies for integrated aircraft systems that can simultaneously meet aggressive goals for fuel burn, noise and emissions. Some of the resulting systems substantially differ from the familiar tube and wing designs constituting the current civil transport fleet. This study attempts to explore whether or not the effective perceived noise level metric used in the NASA noise goal accurately reflects human subject response across the range of vehicles considered. Further, it seeks to determine, in a quantitative manner, if the sounds associated with the advanced aircraft are more or less preferable to the reference vehicles beyond any differences revealed by the metric. These explorations are made through psychoacoustic tests in a controlled laboratory environment using simulated stimuli developed from auralizations of selected vehicles based on systems noise assessments.

  19. Geothermal Resource Reporting Metric (GRRM) Developed for the U.S. Department of Energy's Geothermal Technologies Office

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, Katherine R.; Wall, Anna M.; Dobson, Patrick F.

    This paper reviews a methodology being developed for reporting geothermal resources and project progress. The goal is to provide the U.S. Department of Energy's (DOE) Geothermal Technologies Office (GTO) with a consistent and comprehensible means of evaluating the impacts of its funding programs. This framework will allow the GTO to assess the effectiveness of research, development, and deployment (RD&D) funding, prioritize funding requests, and demonstrate the value of RD&D programs to the U.S. Congress and the public. Standards and reporting codes used in other countries and energy sectors provide guidance to develop the relevant geothermal methodology, but industry feedback and our analysis suggest that the existing models have drawbacks that should be addressed. In order to formulate a comprehensive metric for use by the GTO, we analyzed existing resource assessments and reporting methodologies for the geothermal, mining, and oil and gas industries, and sought input from industry, investors, academia, national labs, and other government agencies. Using this background research as a guide, we describe a methodology for evaluating and reporting on GTO funding according to resource grade (geological, technical and socio-economic) and project progress. This methodology would allow GTO to target funding, measure impact by monitoring the progression of projects, or assess geological potential of targeted areas for development.

  20. Testing, Requirements, and Metrics

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda; Hyatt, Larry; Hammer, Theodore F.; Huffman, Lenore; Wilson, William

    1998-01-01

    The criticality of correct, complete, testable requirements is a fundamental tenet of software engineering. Also critical is complete requirements based testing of the final product. Modern tools for managing requirements allow new metrics to be used in support of both of these critical processes. Using these tools, potential problems with the quality of the requirements and the test plan can be identified early in the life cycle. Some of these quality factors include: ambiguous or incomplete requirements, poorly designed requirements databases, excessive or insufficient test cases, and incomplete linkage of tests to requirements. This paper discusses how metrics can be used to evaluate the quality of the requirements and tests to avoid problems later. Requirements management and requirements based testing have always been critical in the implementation of high quality software systems. Recently, automated tools have become available to support requirements management. At NASA's Goddard Space Flight Center (GSFC), automated requirements management tools are being used on several large projects. The use of these tools opens the door to innovative uses of metrics in characterizing test plan quality and assessing overall testing risks. In support of these projects, the Software Assurance Technology Center (SATC) is working to develop and apply a metrics program that utilizes the information now available through the application of requirements management tools. Metrics based on this information provide real-time insight into the testing of requirements, and these metrics assist the Project Quality Office in its testing oversight role. This paper discusses three facets of the SATC's efforts to evaluate the quality of the requirements and test plan early in the life cycle, thus preventing costly errors and time delays later.
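
    As an illustration of the linkage metrics described above, a toy traceability check might look like the following (the IDs are hypothetical; this is not the SATC's actual tooling):

    ```python
    # Toy illustration of a requirements-to-test linkage metric. Inputs are
    # hypothetical requirement IDs and a mapping from each test case to the
    # requirements it exercises.

    tests_to_reqs = {
        "T-01": ["R-1", "R-2"],
        "T-02": ["R-2"],
        "T-03": [],          # traces to no requirement: possible excess test
    }
    requirements = ["R-1", "R-2", "R-3"]

    covered = {r for reqs in tests_to_reqs.values() for r in reqs}
    untested = [r for r in requirements if r not in covered]
    unlinked_tests = [t for t, reqs in tests_to_reqs.items() if not reqs]

    print(f"coverage: {len(covered)}/{len(requirements)}")
    print("untested requirements:", untested)   # candidates for missing tests
    print("unlinked tests:", unlinked_tests)    # candidates for excess tests
    ```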

  1. Organized DFM

    NASA Astrophysics Data System (ADS)

    Sato, Takashi; Honma, Michio; Itoh, Hiroyuki; Iriki, Nobuyuki; Kobayashi, Sachiko; Miyazaki, Norihiko; Onodera, Toshio; Suzuki, Hiroyuki; Yoshioka, Nobuyuki; Arima, Sumika; Kadota, Kazuya

    2009-04-01

    The category and objectives of DFM production management are described. DFM is not limited to activity within a particular unit process in design or manufacturing; a new framework for DFM is required in which DFM is a total solution to the problems common to all processes, each linked organically to the others. After passing through every process on the manufacturing platform, the quality of final products is guaranteed and products are shipped to the market. The information platform is layered with DFM, APC, and AEC. Advanced DFM is not DFM for partial optimization of the lithography process and the design; it should be Organized DFM, managed with high-level organizational IQ. The interim quality between each step of the flow should be visualized. DFM becomes quality engineering when it is Organized DFM and common quality metrics are provided, through effective implementation of common industrial metrics and standardized technology. DFM is differentiating technology, but can leverage standards for efficient development.

  2. Large Mass, Entry, Descent and Landing Sensitivity Results for Environmental, Performance, and Design Parameters

    NASA Technical Reports Server (NTRS)

    Shidner, Jeremy D.; Davis, Jody L.; Cianciolo, Alicia D.; Samareh, Jamshid A.; Powell, RIchard W.

    2010-01-01

    Landing on Mars has been a challenging task. Past NASA missions have shown resilience to increases in spacecraft mass by scaling back requirements such as landing site altitude, landing site location, and arrival time. Mission designers must understand the sensitivities (partials) relating requirements to mass so that a project can retain margin throughout the design process. For new missions that will land 1.5 metric tons or more, the current level of technology is insufficient, and new technologies will need to be developed. Understanding the sensitivity of these new technologies to requirements is the purpose of this paper.

  3. Integrated Tools for Future Distributed Engine Control Technologies

    NASA Technical Reports Server (NTRS)

    Culley, Dennis; Thomas, Randy; Saus, Joseph

    2013-01-01

    Turbine engines are highly complex mechanical systems that are becoming increasingly dependent on control technologies to achieve system performance and safety metrics. However, the contribution of controls to these measurable system objectives is difficult to quantify due to a lack of tools capable of informing the decision makers. This shortcoming hinders technology insertion in the engine design process. NASA Glenn Research Center is developing a Hardware-in-the-Loop (HIL) platform and analysis tool set that will serve as a focal point for new control technologies, especially those related to the hardware development and integration of distributed engine control. The HIL platform is intended to enable rapid and detailed evaluation of new engine control applications, from conceptual design through hardware development, in order to quantify their impact on engine systems. This paper discusses the complex interactions of the control system, within the context of the larger engine system, and how new control technologies are changing that paradigm. The conceptual design of the new HIL platform is then described as a primary tool to address those interactions and how it will help feed the insertion of new technologies into future engine systems.

  4. 15 CFR 273.2 - Definition.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 15 Commerce and Foreign Trade 1 2014-01-01 2014-01-01 false Definition. 273.2 Section 273.2 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE METRIC CONVERSION POLICY FOR FEDERAL AGENCIES METRIC...

  5. Touch Interaction with 3D Geographical Visualization on Web: Selected Technological and User Issues

    NASA Astrophysics Data System (ADS)

    Herman, L.; Stachoň, Z.; Stuchlík, R.; Hladík, J.; Kubíček, P.

    2016-10-01

    The use of both 3D visualization and devices with touch displays is increasing. In this paper, we focus on Web technologies for 3D visualization of spatial data and interaction with it via touch-screen gestures. In the first stage, we compared the support for touch interaction in selected JavaScript libraries on different hardware (desktop PCs with touch screens, tablets, and smartphones) and software platforms. Afterward, we conducted a simple empirical test (within-subject design, 6 participants, 2 simple tasks, an Acer LCD touch monitor, and digital terrain models as stimuli) focusing on the ability of users to solve simple spatial tasks via touch screens. An in-house web testing tool was developed based on JavaScript, PHP, and X3DOM, together with the Hammer.js library. The correctness of answers, the speed of users' performance, the gestures used, and a simple gesture metric were recorded and analysed. Preliminary results revealed that the pan gesture is the one most frequently used by test participants and is also supported by the majority of 3D libraries. Possible gesture metrics and future developments, including interpersonal differences, are discussed in the conclusion.

  6. An exploratory survey of methods used to develop measures of performance

    NASA Astrophysics Data System (ADS)

    Hamner, Kenneth L.; Lafleur, Charles A.

    1993-09-01

    Nonmanufacturing organizations are being challenged to provide high-quality products and services to their customers, with an emphasis on continuous process improvement. Measures of performance, referred to as metrics, can be used to foster process improvement. The application of performance measurement to nonmanufacturing processes can be very difficult. This research explored methods used to develop metrics in nonmanufacturing organizations. Several methods were formally defined in the literature, and the researchers used a two-step screening process to determine that the OMB Generic Method was the most likely to produce high-quality metrics. The OMB Generic Method was then used to develop metrics. A few other metric development methods were found in use at nonmanufacturing organizations. The researchers interviewed participants in metric development efforts to determine their satisfaction and to have them identify the strengths and weaknesses of, and recommended improvements to, the metric development methods used. Analysis of participants' responses allowed the researchers to identify the key components of a sound metric development method. Those components were incorporated into a proposed metric development method that was based on the OMB Generic Method and should be more likely to produce high-quality metrics that result in continuous process improvement.

  7. Climate impacts of energy technologies depend on emissions timing

    NASA Astrophysics Data System (ADS)

    Edwards, Morgan R.; Trancik, Jessika E.

    2014-05-01

    Energy technologies emit greenhouse gases with differing radiative efficiencies and atmospheric lifetimes. Standard practice for evaluating technologies, which uses the global warming potential (GWP) to compare the integrated radiative forcing of emitted gases over a fixed time horizon, does not acknowledge the importance of a changing background climate relative to climate change mitigation targets. Here we demonstrate that the GWP misvalues the impact of CH4-emitting technologies as mid-century approaches, and we propose a new class of metrics to evaluate technologies based on their time of use. The instantaneous climate impact (ICI) compares gases in an expected radiative forcing stabilization year, and the cumulative climate impact (CCI) compares their time-integrated radiative forcing up to a stabilization year. Using these dynamic metrics, we quantify the climate impacts of technologies and show that high-CH4-emitting energy sources become less advantageous over time. The impact of natural gas for transportation, with CH4 leakage, exceeds that of gasoline within 1-2 decades for a commonly cited 3 W m^-2 stabilization target. The impact of algae biodiesel overtakes that of corn ethanol within 2-3 decades, where algae co-products are used to produce biogas and corn co-products are used for animal feed. The proposed metrics capture the changing importance of CH4 emissions as a climate threshold is approached, thereby addressing a major shortcoming of the GWP for technology evaluation.
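
    A plausible formalization of the two proposed metrics, consistent with the description above (the paper's exact expressions may differ): for an emission of gas x in year t, evaluated against a radiative-forcing stabilization year t*,

    ```latex
    % Hedged formalization of the metrics described above. RF_x(u) is the
    % radiative forcing u years after a unit pulse emission of gas x, and
    % t^{*} is the radiative-forcing stabilization year.
    \mathrm{ICI}_x(t) = \frac{RF_x(t^{*}-t)}{RF_{\mathrm{CO_2}}(t^{*}-t)},
    \qquad
    \mathrm{CCI}_x(t) = \frac{\int_{0}^{t^{*}-t} RF_x(u)\,du}
                             {\int_{0}^{t^{*}-t} RF_{\mathrm{CO_2}}(u)\,du}.
    ```

    Both depend on the emission year t, which is what lets these metrics capture the growing weight of short-lived gases such as CH4 as the stabilization year approaches.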

  8. DCS: A Case Study of Identification of Knowledge and Disposition Gaps Using Principles of Continuous Risk Management

    NASA Technical Reports Server (NTRS)

    Norcross, Jason; Steinberg, Susan; Kundrot, Craig; Charles, John

    2011-01-01

    The Human Research Program (HRP) is formulated around the program architecture of Evidence-Risk-Gap-Task-Deliverable. Review of accumulated evidence forms the basis for identification of high priority risks to human health and performance in space exploration. Gaps in knowledge or disposition are identified for each risk, and a portfolio of research tasks is developed to fill them. Deliverables from the tasks inform the evidence base with the ultimate goal of defining the level of risk and reducing it to an acceptable level. A comprehensive framework for gap identification, focus, and metrics has been developed based on principles of continuous risk management and clinical care. Research towards knowledge gaps improves understanding of the likelihood, consequence or timeframe of the risk. Disposition gaps include development of standards or requirements for risk acceptance, development of countermeasures or technology to mitigate the risk, and yearly technology assessment related to watching developments related to the risk. Standard concepts from clinical care: prevention, diagnosis, treatment, monitoring, rehabilitation, and surveillance, can be used to focus gaps dealing with risk mitigation. The research plan for the new HRP Risk of Decompression Sickness (DCS) used the framework to identify one disposition gap related to establishment of a DCS standard for acceptable risk, two knowledge gaps related to DCS phenomenon and mission attributes, and three mitigation gaps focused on prediction, prevention, and new technology watch. These gaps were organized in this manner primarily based on target for closure and ease of organizing interim metrics so that gap status could be quantified. Additional considerations for the knowledge gaps were that one was highly design reference mission specific and the other gap was focused on DCS phenomenon.

  9. Measuring Impact of U.S. DOE Geothermal Technologies Office Funding: Considerations for Development of a Geothermal Resource Reporting Metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, Katherine R.; Wall, Anna M.; Dobson, Patrick F.

    This paper reviews existing methodologies and reporting codes used to describe extracted energy resources such as coal and oil and describes a comparable proposed methodology to describe geothermal resources. The goal is to provide the U.S. Department of Energy's (DOE) Geothermal Technologies Office (GTO) with a consistent and comprehensible means of assessing the impacts of its funding programs. This framework will allow for GTO to assess the effectiveness of research, development, and deployment (RD&D) funding, prioritize funding requests, and demonstrate the value of RD&D programs to the U.S. Congress. Standards and reporting codes used in other countries and energy sectors provide guidance to inform development of a geothermal methodology, but industry feedback and our analysis suggest that the existing models have drawbacks that should be addressed. In order to formulate a comprehensive metric for use by GTO, we analyzed existing resource assessments and reporting methodologies for the geothermal, mining, and oil and gas industries, and we sought input from industry, investors, academia, national labs, and other government agencies. Using this background research as a guide, we describe a methodology for assessing and reporting on GTO funding according to resource knowledge and resource grade (or quality). This methodology would allow GTO to target funding or measure impact by progression of projects or geological potential for development.

  10. 15 CFR 1170.2 - Definition.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 Commerce and Foreign Trade 3 2011-01-01 2011-01-01 false Definition. 1170.2 Section 1170.2 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) TECHNOLOGY ADMINISTRATION, DEPARTMENT OF COMMERCE METRIC CONVERSION POLICY FOR FEDERAL AGENCIES § 1170.2 Definition. Metric...

  11. Does external walking environment affect gait patterns?

    PubMed

    Patterson, Matthew R; Whelan, Darragh; Reginatto, Brenda; Caprani, Niamh; Walsh, Lorcan; Smeaton, Alan F; Inomata, Akihiro; Caulfield, Brian

    2014-01-01

    The objective of this work is to develop an understanding of the relationship between mobility metrics obtained outside of the clinic or laboratory and the context of the external environment. Ten subjects walked with an inertial sensor on each shank and a wearable camera around their neck. They were taken on a thirty-minute walk in which they mobilized over the following conditions: a normal path, a busy hallway, rough ground, blindfolded, and on a hill. Stride time, stride time variability, stance time, and peak shank rotation rate during swing were calculated using previously published algorithms. Stride time was significantly different between several of the conditions. Technological advances mean that gait variables can now be captured as patients go about their daily lives. The results of this study show that the external environment has a significant impact on the quality of gait metrics. Thus, the context of the external walking environment is an important consideration when analyzing ambulatory gait metrics from the unsupervised home and community setting.
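
    For illustration, the basic temporal metrics named above can be computed from successive same-foot strike times. A sketch with made-up timestamps (published algorithms extract these events from shank-mounted inertial sensors):

    ```python
    # Sketch of basic temporal gait metrics from foot-strike timestamps
    # (seconds). The timestamps below are hypothetical.
    import statistics

    strikes = [0.00, 1.04, 2.10, 3.12, 4.20, 5.26]  # successive same-foot strikes

    stride_times = [b - a for a, b in zip(strikes, strikes[1:])]
    mean_stride = statistics.mean(stride_times)
    sd_stride = statistics.stdev(stride_times)
    cv_percent = 100 * sd_stride / mean_stride  # stride time variability

    print(f"mean stride time: {mean_stride:.3f} s, variability (CV): {cv_percent:.1f}%")
    ```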

  12. A factory concept for processing and manufacturing with lunar material

    NASA Technical Reports Server (NTRS)

    Driggers, G. W.

    1977-01-01

    A conceptual design for an orbital factory sized to process 1.5 million metric tons per year of raw lunar fines into 0.3 million metric tons of manufacturing materials is presented. A conservative approach involving application of present earth-based technology leads to a design devoid of new inventions. Earth based counterparts to the factory machinery were used to generate subsystem masses and lumped parameters for volume and mass estimates. The results are considered to be conservative since technologies more advanced than those assumed are presently available in many areas. Some attributes of potential space processing technologies applied to material refinement and component manufacture are discussed.

  13. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Technical Performance Assessment

    PubMed Central

    2017-01-01

    Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers (QIBs) to measure changes in these features. Critical to the performance of a QIB in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in design, analysis method, and the metrics used to assess a QIB for clinical use. It is therefore difficult or impossible to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America (RSNA) and the Quantitative Imaging Biomarker Alliance (QIBA), with technical, radiological, and statistical experts, developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of QIB performance studies so that results from multiple studies can be compared, contrasted, or combined. PMID:24919831

  14. Evaluating Soil Health Using Remotely Sensed Evapotranspiration on the Benchmark Barnes Soils of North Dakota

    NASA Astrophysics Data System (ADS)

    Bohn, Meyer; Hopkins, David; Steele, Dean; Tuscherer, Sheldon

    2017-04-01

    The benchmark Barnes soil series is an extensive upland Hapludoll of the northern Great Plains that is both economically and ecologically vital to the region. Effects of tillage erosion coupled with wind and water erosion have degraded Barnes soil quality, but with unknown extent, distribution, or severity. Evidence of soil degradation documented for a half century warrants that the assumption of productivity be tested. Soil resilience is linked to several dynamic soil properties, and National Cooperative Soil Survey initiatives are now focused on identifying those properties for benchmark soils. Quantification of soil degradation depends on a reliable method for broad-scale evaluation. The soil survey community is currently developing rapid and widespread soil property assessment technologies. Improvements in satellite-based remote sensing and image analysis software have stimulated the application of broad-scale resource assessment. Furthermore, these technologies have fostered refinement of land-based surface energy balance algorithms, such as the Mapping Evapotranspiration at High Resolution with Internalized Calibration (METRIC) algorithm for evapotranspiration (ET) mapping. The hypothesis of this study is that ET mapping technology can differentiate soil function on extensive landscapes and identify degraded areas. A recent soil change study in eastern North Dakota resampled legacy Barnes pedons sampled prior to 1960 and found significant decreases in organic carbon. An ancillary study showed that METRIC ET estimates decreased with Barnes erosion class severity. An ET raster map has been developed for three eastern North Dakota counties using METRIC and Landsat 5 imagery. ET pixel candidates on major Barnes soil map units were stratified into tertiles and classified as ranked ET subdivisions. A sampling population of randomly selected points, stratified by ET class and county proportion, was established. Morphologic and chemical data will be recorded at each sampling site to test whether soil properties correlate with ET, thus serving as a non-biased proxy for soil health.

  15. A novel approach for evaluating the performance of real time quantitative loop-mediated isothermal amplification-based methods.

    PubMed

    Nixon, Gavin J; Svenstrup, Helle F; Donald, Carol E; Carder, Caroline; Stephenson, Judith M; Morris-Jones, Stephen; Huggett, Jim F; Foy, Carole A

    2014-12-01

    Molecular diagnostic measurements are currently underpinned by the polymerase chain reaction (PCR). There are also a number of alternative nucleic acid amplification technologies which, unlike PCR, work at a single temperature. These 'isothermal' methods reportedly offer potential advantages over PCR, such as simplicity, speed, and resistance to inhibitors, and could also be used for quantitative molecular analysis. However, there are currently limited mechanisms to evaluate their quantitative performance, which would assist assay development and study comparisons. This study uses a sexually transmitted infection diagnostic model in combination with an adapted metric termed isothermal doubling time (IDT), akin to PCR efficiency, to compare quantitative PCR and quantitative loop-mediated isothermal amplification (qLAMP) assays, and to quantify the impact of matrix interference. The performance metric described here facilitates the comparison of qLAMP assays, which could assist assay development and validation activities.
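
    The abstract defines the IDT only by analogy to PCR efficiency. Under the natural reading that the signal grows as F(t) = F0 * 2^(t/IDT) during the exponential phase, the IDT can be estimated from the slope of log2(signal) versus time. A sketch under that assumption, with made-up fluorescence values:

    ```python
    # Sketch estimate of an isothermal doubling time (IDT) from amplification
    # data, assuming exponential-phase growth F(t) = F0 * 2**(t / IDT).
    # The definition is inferred from the abstract's analogy to PCR
    # efficiency; the readings below are made up.
    import math

    times = [4.0, 5.0, 6.0, 7.0, 8.0]               # minutes, exponential phase
    signal = [110.0, 245.0, 520.0, 1100.0, 2300.0]  # fluorescence units

    log2f = [math.log2(s) for s in signal]
    n = len(times)
    mean_t, mean_y = sum(times) / n, sum(log2f) / n
    slope = sum((t - mean_t) * (y - mean_y) for t, y in zip(times, log2f)) \
            / sum((t - mean_t) ** 2 for t in times)

    idt = 1.0 / slope  # minutes per signal doubling
    print(f"estimated IDT: {idt:.2f} min")
    ```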

  16. Space Transportation Operations: Assessment of Methodologies and Models

    NASA Technical Reports Server (NTRS)

    Joglekar, Prafulla

    2001-01-01

    The systems design process for future space transportation involves understanding multiple variables and their effect on lifecycle metrics. Variables such as technology readiness or potential environmental impact are qualitative, while variables such as reliability, operations costs or flight rates are quantitative. In deciding what new design concepts to fund, NASA needs a methodology that would assess the sum total of all relevant qualitative and quantitative lifecycle metrics resulting from each proposed concept. The objective of this research was to review the state of operations assessment methodologies and models used to evaluate proposed space transportation systems and to develop recommendations for improving them. It was found that, compared to the models available from other sources, the operations assessment methodology recently developed at Kennedy Space Center has the potential to produce a decision support tool that will serve as the industry standard. Towards that goal, a number of areas of improvement in the Kennedy Space Center's methodology are identified.

  17. Space Transportation Operations: Assessment of Methodologies and Models

    NASA Technical Reports Server (NTRS)

    Joglekar, Prafulla

    2002-01-01

    The systems design process for future space transportation involves understanding multiple variables and their effect on lifecycle metrics. Variables such as technology readiness or potential environmental impact are qualitative, while variables such as reliability, operations costs or flight rates are quantitative. In deciding what new design concepts to fund, NASA needs a methodology that would assess the sum total of all relevant qualitative and quantitative lifecycle metrics resulting from each proposed concept. The objective of this research was to review the state of operations assessment methodologies and models used to evaluate proposed space transportation systems and to develop recommendations for improving them. It was found that, compared to the models available from other sources, the operations assessment methodology recently developed at Kennedy Space Center has the potential to produce a decision support tool that will serve as the industry standard. Towards that goal, a number of areas of improvement in the Kennedy Space Center's methodology are identified.

  18. Characterization of relief printing

    NASA Astrophysics Data System (ADS)

    Liu, Xing; Chen, Lin; Ortiz-Segovia, Maria-Valezzka; Ferwerda, James; Allebach, Jan

    2014-03-01

    Relief printing technology developed by Océ allows the superposition of several layers of colorant on different types of media, which creates a variation of the surface height defined by the input to the printer. Evaluating the reproduction accuracy of distinct surface characteristics is of great importance to the application of the relief printing system. Therefore, it is necessary to develop quality metrics to evaluate the relief process. In this paper, we focus on the third dimension of relief printing, i.e., height information. To achieve this goal, we define metrics and develop models that aim to evaluate relief prints in two aspects: overall fidelity and surface finish. To characterize the overall fidelity, three metrics are calculated: Modulation Transfer Function (MTF), difference and root-mean-squared error (RMSE) between the input height map and scanned height map, and print surface angle accuracy. For the surface finish property, we measure the surface roughness, generate surface normal maps, and develop a light reflection model that serves as a simulation of the differences between ideal prints and real prints that may be perceived by human observers. Three sets of test targets are designed and printed by the Océ relief printer prototypes for the calculation of the above metrics: (i) twisted target, (ii) sinusoidal wave target, and (iii) ramp target. The results provide quantitative evaluations of the printing quality in the third dimension, and demonstrate that the height of relief prints is reproduced accurately with respect to the input design. The factors that affect the printing quality include printing direction, the frequency and amplitude of the input signal, and the shape of the relief prints. Besides the above factors, there are two additional aspects that influence the viewing experience of relief prints: lighting condition and viewing angle.
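
    One of the fidelity metrics named above, RMSE between the input and scanned height maps, is straightforward to compute; a minimal sketch with hypothetical 2-D height fields:

    ```python
    # Sketch of one fidelity metric from the paper: RMSE between the input
    # height map and the scanned height map of a relief print. The arrays
    # below are hypothetical height fields in millimetres.
    import numpy as np

    input_height = np.array([[0.0, 0.5], [1.0, 1.5]])       # design intent
    scanned_height = np.array([[0.1, 0.45], [0.95, 1.6]])   # measured print

    rmse = np.sqrt(np.mean((input_height - scanned_height) ** 2))
    print(f"height RMSE: {rmse:.3f} mm")
    ```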

  19. Tools for monitoring system suitability in LC MS/MS centric proteomic experiments.

    PubMed

    Bereman, Michael S

    2015-03-01

    With advances in liquid chromatography coupled to tandem mass spectrometry technologies combined with the continued goals of biomarker discovery, clinical applications of established biomarkers, and integrating large multiomic datasets (i.e. "big data"), there remains an urgent need for robust tools to assess instrument performance (i.e. system suitability) in proteomic workflows. To this end, several freely available tools have been introduced that monitor a number of peptide identification (ID) and/or peptide ID free metrics. Peptide ID metrics include numbers of proteins, peptides, or peptide spectral matches identified from a complex mixture. Peptide ID free metrics include retention time reproducibility, full width half maximum, ion injection times, and integrated peptide intensities. The main driving force in the development of these tools is to monitor both intra- and interexperiment performance variability and to identify sources of variation. The purpose of this review is to summarize and evaluate these tools based on versatility, automation, vendor neutrality, metrics monitored, and visualization capabilities. In addition, the implementation of a robust system suitability workflow is discussed in terms of metrics, type of standard, and frequency of evaluation along with the obstacles to overcome prior to incorporating a more proactive approach to overall quality control in liquid chromatography coupled to tandem mass spectrometry based proteomic workflows. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
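
    As one concrete example of the peptide-ID-free metrics listed above, retention time reproducibility is commonly summarized as a coefficient of variation across repeated QC runs. A sketch with hypothetical values (the 1% alert threshold is likewise illustrative):

    ```python
    # Sketch of one peptide-ID-free system suitability metric: retention time
    # (RT) reproducibility for QC peptides across repeated runs, summarized
    # as a coefficient of variation. Values and threshold are hypothetical.
    import statistics

    rt_minutes = {"QC_peptide_1": [22.31, 22.40, 22.28, 22.55],
                  "QC_peptide_2": [35.10, 35.02, 35.95, 35.08]}

    for peptide, rts in rt_minutes.items():
        cv = 100 * statistics.stdev(rts) / statistics.mean(rts)
        flag = "  <-- investigate drift" if cv > 1.0 else ""
        print(f"{peptide}: RT CV = {cv:.2f}%{flag}")
    ```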

  20. Environmental, Economic, and Scalability Considerations and Trends of Selected Fuel Economy-Enhancing Biomass-Derived Blendstocks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunn, Jennifer B.; Biddy, Mary; Jones, Susanne

    Twenty-four biomass-derived compounds and mixtures, identified based on their physical properties, which could be blended into fuels to improve spark ignition engine fuel economy, were assessed for their economic, technology readiness, and environmental viability. These bio-blendstocks were modeled to be produced biochemically, thermochemically, or through hybrid processes. To carry out the assessment, 17 metrics were developed for which each bio-blendstock was determined to be favorable, neutral, or unfavorable. Cellulosic ethanol was included as a reference case. Overall economic and, to some extent, environmental viability is driven by projected yields for each of these processes. The metrics used in this analysis methodology highlight the near-term potential to achieve these targeted yield estimates when considering data quality and current technical readiness for these conversion strategies. Key knowledge gaps included the degree of purity needed for use as a bio-blendstock. Less stringent purification requirements for fuels could cut processing costs and environmental impacts. Additionally, more information is needed on the blending behavior of many of these bio-blendstocks with gasoline to support the technology readiness evaluation. Altogether, the technology to produce many of these blendstocks from biomass is emerging, and as it matures, these assessments must be revisited. Importantly, considering economic, environmental, and technology readiness factors, in addition to physical properties of blendstocks that could be used to boost engine efficiency and fuel economy, in the early stages of project research and development can help spotlight those most likely to be viable in the near term.

  1. Environmental, Economic, and Scalability Considerations and Trends of Selected Fuel Economy-Enhancing Biomass-Derived Blendstocks

    DOE PAGES

    Dunn, Jennifer B.; Biddy, Mary; Jones, Susanne; ...

    2017-10-30

    Twenty-four biomass-derived compounds and mixtures, identified based on their physical properties, which could be blended into fuels to improve spark ignition engine fuel economy, were assessed for their economic, technology readiness, and environmental viability. These bio-blendstocks were modeled to be produced biochemically, thermochemically, or through hybrid processes. To carry out the assessment, 17 metrics were developed for which each bio-blendstock was determined to be favorable, neutral, or unfavorable. Cellulosic ethanol was included as a reference case. Overall economic and, to some extent, environmental viability is driven by projected yields for each of these processes. The metrics used in this analysis methodology highlight the near-term potential to achieve these targeted yield estimates when considering data quality and current technical readiness for these conversion strategies. Key knowledge gaps included the degree of purity needed for use as a bio-blendstock. Less stringent purification requirements for fuels could cut processing costs and environmental impacts. Additionally, more information is needed on the blending behavior of many of these bio-blendstocks with gasoline to support the technology readiness evaluation. Altogether, the technology to produce many of these blendstocks from biomass is emerging, and as it matures, these assessments must be revisited. Importantly, considering economic, environmental, and technology readiness factors, in addition to physical properties of blendstocks that could be used to boost engine efficiency and fuel economy, in the early stages of project research and development can help spotlight those most likely to be viable in the near term.

  2. As Technologies for Nucleotide Therapeutics Mature, Products Emerge.

    PubMed

    Beierlein, Jennifer M; McNamee, Laura M; Ledley, Fred D

    2017-12-15

    The long path from initial research on oligonucleotide therapies to approval of antisense products is not unfamiliar. This lag resembles those encountered with monoclonal antibodies, gene therapies, and many biological targets and is consistent with studies of innovation showing that technology maturation is a critical determinant of product success. We previously described an analytical model for the maturation of biomedical research, demonstrating that the efficiency of targeted and biological development is connected to metrics of technology growth. The present work applies this model to characterize the advance of oligonucleotide therapeutics. We show that recent oligonucleotide product approvals incorporate technologies and targets that are past the established point of technology growth, as do most of the oligonucleotide products currently in phase 3. Less mature oligonucleotide technologies, such as miRNAs and some novel gene targets, have not passed the established point and have not yielded products. This analysis shows that oligonucleotide product development has followed largely predictable patterns of innovation. While technology maturation alone does not ensure success, these data show that many oligonucleotide technologies are sufficiently mature to be considered part of the arsenal for therapeutic development. These results demonstrate the importance of technology assessment in strategic management of biomedical technologies. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Use of Business Intelligence Tools in the DSN

    NASA Technical Reports Server (NTRS)

    Statman, Joseph I.; Zendejas, Silvino C.

    2010-01-01

    JPL has operated the Deep Space Network (DSN) on behalf of NASA since the 1960s. Over the last two decades, the DSN budget has generally declined in real-year dollars while the aging assets required more attention and the missions became more complex. As a result, the DSN budget has been increasingly consumed by Operations and Maintenance (O&M), significantly reducing the funding wedge available for technology investment and for enhancing the DSN capability and capacity. Responding to this budget squeeze, the DSN launched an effort to improve the cost-efficiency of the O&M. In this paper we elaborate on the methodology adopted to understand "where the time and money are used"; surprisingly, most of the data required for metrics development was readily available in existing databases, and we have used commercial Business Intelligence (BI) tools to mine the databases and automatically extract the metrics (including trends) and distribute them weekly to interested parties. We describe the DSN-specific effort to convert the intuitive understanding of "where the time is spent" into meaningful and actionable metrics that quantify the use of resources, highlight candidate areas of improvement, and establish trends. We also discuss the use of the BI-derived metrics; one of the most fascinating observations was the dramatic improvement in some areas of operations when the metrics were shared with the operators: the visibility of the metrics, and a self-induced competition, caused almost immediate improvement. While the near-term use of the metrics is to quantify the processes and track the improvement, these techniques will be just as useful in monitoring the process, e.g., as an input to a lean six sigma process.

  4. Optimal Sensor Selection for Health Monitoring Systems

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.

    2005-01-01

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.
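
    To illustrate the general pattern of merit-driven suite selection (this is not the S4 algorithm itself; its merit metric and optimization are specific to the cited work), a greedy sketch with hypothetical scores:

    ```python
    # Generic greedy sketch of merit-driven sensor-suite selection. NOT the
    # S4 algorithm; it only illustrates scoring candidate suites and growing
    # the best one under a budget. All scores are hypothetical.

    def merit(suite: frozenset) -> float:
        """Placeholder for a diagnostic-performance / risk-reduction score."""
        scores = {"s1": 0.40, "s2": 0.35, "s3": 0.30}    # hypothetical
        overlap_penalty = 0.05 * max(0, len(suite) - 1)  # crude redundancy term
        return sum(scores[s] for s in suite) - overlap_penalty

    sensors, budget = {"s1", "s2", "s3"}, 2
    suite: frozenset = frozenset()
    while len(suite) < budget:
        best = max(sensors - suite, key=lambda s: merit(suite | {s}))
        suite = suite | {best}

    print(sorted(suite), f"merit = {merit(suite):.2f}")
    ```

    A greedy search like this is only a heuristic; exhaustive or model-based searches over the full suite space are what make approaches such as S4 valuable.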

  5. Transport impacts on atmosphere and climate: Metrics

    NASA Astrophysics Data System (ADS)

    Fuglestvedt, J. S.; Shine, K. P.; Berntsen, T.; Cook, J.; Lee, D. S.; Stenke, A.; Skeie, R. B.; Velders, G. J. M.; Waitz, I. A.

    2010-12-01

    The transport sector emits a wide variety of gases and aerosols, with distinctly different characteristics which influence climate directly and indirectly via chemical and physical processes. Tools that allow these emissions to be placed on some kind of common scale in terms of their impact on climate have a number of possible uses such as: in agreements and emission trading schemes; when considering potential trade-offs between changes in emissions resulting from technological or operational developments; and/or for comparing the impact of different environmental impacts of transport activities. Many of the non-CO2 emissions from the transport sector are short-lived substances, not currently covered by the Kyoto Protocol. There are formidable difficulties in developing metrics and these are particularly acute for such short-lived species. One difficulty concerns the choice of an appropriate structure for the metric (which may depend on, for example, the design of any climate policy it is intended to serve) and the associated value judgements on the appropriate time periods to consider; these choices affect the perception of the relative importance of short- and long-lived species. A second difficulty is the quantification of input parameters (due to underlying uncertainty in atmospheric processes). In addition, for some transport-related emissions, the values of metrics (unlike the gases included in the Kyoto Protocol) depend on where and when the emissions are introduced into the atmosphere - both the regional distribution and, for aircraft, the distribution as a function of altitude, are important. In this assessment of such metrics, we present Global Warming Potentials (GWPs) as these have traditionally been used in the implementation of climate policy. We also present Global Temperature Change Potentials (GTPs) as an alternative metric, as this, or a similar metric may be more appropriate for use in some circumstances. We use radiative forcings and lifetimes from the literature to derive GWPs and GTPs for the main transport-related emissions, and discuss the uncertainties in these estimates. We find large variations in metric (GWP and GTP) values for NOx, mainly due to the dependence on location of emissions but also because of inter-model differences and differences in experimental design. For aerosols we give only global-mean values due to an inconsistent picture amongst available studies regarding regional dependence. The uncertainty in the presented metric values reflects the current state of understanding; the ranking of the various components with respect to our confidence in the given metric values is also given. While the focus is mostly on metrics for comparing the climate impact of emissions, many of the issues are equally relevant for stratospheric ozone depletion metrics, which are also discussed.
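
    For reference, the two metric families discussed above have standard definitions; in the usual notation, for species x and time horizon H:

    ```latex
    % Standard definitions of the two emission metrics discussed above.
    % RF_x(t) is the radiative forcing t years after a unit pulse emission of
    % species x, and \Delta T_x(H) the global-mean temperature response at H.
    \mathrm{GWP}_H(x) = \frac{\int_0^H RF_x(t)\,dt}{\int_0^H RF_{\mathrm{CO_2}}(t)\,dt},
    \qquad
    \mathrm{GTP}_H(x) = \frac{\Delta T_x(H)}{\Delta T_{\mathrm{CO_2}}(H)}.
    % For a single-lifetime gas, RF_x(t) = A_x e^{-t/\tau_x}, with radiative
    % efficiency A_x and lifetime \tau_x; CO2 requires a multi-exponential
    % impulse response function.
    ```

    The GWP integrates forcing over the horizon, while the GTP evaluates the temperature response at the end point, which is why the two can rank short-lived species such as CH4 very differently.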

  6. Robotics Laboratory to Enhance the STEM Research Experience

    DTIC Science & Technology

    2015-04-30

    [No abstract available; the indexed text consists of fragments of a standard DoD progress-report form. Recoverable details: a student in the Chemistry Program is working on the design and development of a Stirling engine, and "Student Metrics" fields count graduating undergraduates funded by a DoD Center of Excellence grant for Education, Research and Engineering who pursue science, engineering, or technology fields.]

  7. Youth, Technology and HIV: Recent Advances and Future Directions

    PubMed Central

    Hightow-Weidman, Lisa B.; Muessig, Kathryn E.; Bauermeister, Jose; Zhang, Chen; LeGrand, Sara

    2015-01-01

    Technology, including mobile technologies and social media, offers powerful tools to reach, engage, and retain youth and young adults in HIV prevention and care interventions both in the United States and globally. In this report we focus on HIV, technology, and youth, presenting a synthesis of recently published (Jan 2014-May 2015) observational and experimental studies relevant for understanding and intervening on HIV risk, prevention and care. We present findings from a selection of the 66 relevant citations identified, highlighting studies that demonstrate a novel approach to technology interventions among youth in regard to content, delivery, target population or public health impact. We discuss current trends globally and in the US in how youth are using technology, as well as emergent research issues in this field – including the need for new theories for developing technology-based HIV interventions and new metrics of engagement, exposure, and evaluation. PMID:26385582

  8. Sensitivity Analysis and Optimization of the Nuclear Fuel Cycle: A Systematic Approach

    NASA Astrophysics Data System (ADS)

    Passerini, Stefano

    For decades, nuclear energy development was based on the expectation that recycling of the fissionable materials in the used fuel from today's light water reactors into advanced (fast) reactors would be implemented as soon as technically feasible in order to extend the nuclear fuel resources. More recently, arguments have been made for deployment of fast reactors in order to reduce the amount of higher actinides, hence the longevity of radioactivity, in the materials destined to a geologic repository. The cost of the fast reactors, together with concerns about the proliferation of the technology of extraction of plutonium from used LWR fuel as well as the large investments in construction of reprocessing facilities have been the basis for arguments to defer the introduction of recycling technologies in many countries including the US. In this thesis, the impacts of alternative reactor technologies on the fuel cycle are assessed. Additionally, metrics to characterize the fuel cycles and systematic approaches to using them to optimize the fuel cycle are presented. The fuel cycle options of the 2010 MIT fuel cycle study are re-examined in light of the expected slower rate of growth in nuclear energy today, using the CAFCA (Code for Advanced Fuel Cycle Analysis). The Once Through Cycle (OTC) is considered as the base-line case, while advanced technologies with fuel recycling characterize the alternative fuel cycle options available in the future. The options include limited recycling in LWRs and full recycling in fast reactors and in high conversion LWRs. Fast reactor technologies studied include both oxide and metal fueled reactors. Additional fuel cycle scenarios presented for the first time in this work assume the deployment of innovative recycling reactor technologies such as the Reduced Moderation Boiling Water Reactors and Uranium-235 initiated Fast Reactors. A sensitivity study focused on system and technology parameters of interest has been conducted to test the robustness of the conclusions presented in the MIT Fuel Cycle Study. These conclusions are found to still hold, even when considering alternative technologies and different sets of simulation assumptions. Additionally, a first of a kind optimization scheme for the nuclear fuel cycle analysis is proposed and the applications of such an optimization are discussed. Optimization metrics of interest for different stakeholders in the fuel cycle (economics, fuel resource utilization, high level waste, transuranics/proliferation management, and environmental impact) are utilized for two different optimization techniques: a linear one and a stochastic one. Stakeholder elicitation provided sets of relative weights for the identified metrics appropriate to each stakeholder group, which were then successfully used to arrive at optimum fuel cycle configurations for recycling technologies. The stochastic optimization tool, based on a genetic algorithm, was used to identify non-inferior solutions according to Pareto's dominance approach to optimization. The main tradeoff for fuel cycle optimization was found to be between economics and most of the other identified metrics. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)
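
    The Pareto-dominance screening mentioned above has a simple generic form (this sketch is not the thesis code; the metric values are hypothetical, with lower taken as better for all):

    ```python
    # Sketch of the Pareto-dominance test underlying "non-inferior solution"
    # screening. Each candidate fuel cycle is a tuple of metric values,
    # where lower is better for every metric. Values are hypothetical.

    def dominates(a, b):
        """True if a is at least as good as b everywhere and better somewhere."""
        return all(x <= y for x, y in zip(a, b)) and \
               any(x < y for x, y in zip(a, b))

    candidates = {  # (cost, waste, proliferation-risk) scores, made up
        "OTC": (1.0, 1.0, 0.3),
        "LWR recycle": (1.2, 0.7, 0.5),
        "FR recycle": (1.5, 0.4, 0.6),
        "dominated mix": (1.6, 0.9, 0.7),
    }

    pareto = [n for n, v in candidates.items()
              if not any(dominates(w, v) for m, w in candidates.items() if m != n)]
    print("non-inferior set:", pareto)  # drops only "dominated mix"
    ```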

  9. New Developments in the Technology Readiness Assessment Process in US DOE-EM - 13247

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krahn, Steven; Sutter, Herbert; Johnson, Hoyt

    2013-07-01

    A Technology Readiness Assessment (TRA) is a systematic, metric-based process and accompanying report that evaluates the maturity of the technologies used in systems; it is designed to measure technology maturity using the Technology Readiness Level (TRL) scale pioneered by the National Aeronautics and Space Administration (NASA) in the 1980s. More recently, DoD has adopted and provided systematic guidance for performing TRAs and determining TRLs. In 2007 the GAO recommended that the DOE adopt the NASA/DoD methodology for evaluating technology maturity. Earlier, in 2006-2007, DOE-EM had conducted pilot TRAs on a number of projects at Hanford and Savannah River. In March 2008, DOE-EM issued a process guide, which established TRAs as an integral part of DOE-EM's Project Management Critical Decision Process. Since the development of its detailed TRA guidance in 2008, DOE-EM has continued to accumulate experience in the conduct of TRAs and the process for evaluating technology maturity, and DOE has developed guidance on TRAs applicable department-wide. Described here are DOE-EM's experience with the TRA process, the evaluations that led to the recently proposed revisions to the DOE-EM TRA/TMP Guide, and the content of the proposed changes that incorporate the above lessons learned and insights. (authors)

  10. Scientific Wealth in Middle East and North Africa: Productivity, Indigeneity, and Specialty in 1981-2013.

    PubMed

    Siddiqi, Afreen; Stoppani, Jonathan; Anadon, Laura Diaz; Narayanamurti, Venkatesh

    2016-01-01

    Several developing countries seek to build knowledge-based economies by attempting to expand scientific research capabilities. Characterizing the state and direction of progress in this arena is challenging but important. Here, we employ three metrics: a classical metric of productivity (publications per person); an adapted metric, which we denote Revealed Scientific Advantage (developed from work used to compare publications in scientific fields among countries), to characterize disciplinary specialty; and a new metric, scientific indigeneity (defined as the ratio of publications with domestic corresponding authors), to characterize the locus of scientific activity, which also serves as a partial proxy for local absorptive capacity. These metrics-using population and publications data that are available for most countries-allow the characterization of some key features of a national scientific enterprise. The trends in productivity and indigeneity, when compared across other countries and regions, can serve as indicators of strength or fragility in national research ecosystems, and the trends in specialty can allow regional policy makers to assess the extent to which the areas of research focus align (or fail to align) with regional priorities. We apply the metrics to study the Middle East and North Africa (MENA)-a region where science and technology capacity will play a key role in national economic diversification. We analyze 9.8 million publication records from 1981-2013 in 17 countries of MENA, from Morocco to Iraq, and compare them to selected countries throughout the world. The results show that international collaborators increasingly drove the scientific activity in MENA. The median indigeneity reached 52% in 2013 (indicating that almost half of the corresponding authors were located in foreign countries). Additionally, the regional disciplinary focus in chemical and petroleum engineering is waning, with modest growth in the life sciences. We find repeated patterns of stagnation and contraction of scientific activity for several MENA countries, contributing to a widening productivity gap on an international comparative yardstick. The results prompt questions about the strength of the developing scientific enterprise and highlight the need for consistent long-term policy to effectively address regional challenges with domestic research.
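
    A minimal sketch of two of the metrics as defined in this record: indigeneity as the share of a country's publications with a domestic corresponding author, and Revealed Scientific Advantage as a field's share of national output relative to its share of world output. All counts below are invented for illustration.

```python
# Publication metrics sketch; record counts, fields, and labels are placeholders.

def indigeneity(domestic_corresponding, total_pubs):
    """Share of a country's publications whose corresponding author is domestic."""
    return domestic_corresponding / total_pubs

def revealed_scientific_advantage(country_field, country_total, world_field, world_total):
    """RSA > 1 suggests the country publishes disproportionately in the field,
    by analogy with revealed comparative advantage in trade."""
    return (country_field / country_total) / (world_field / world_total)

# Hypothetical: 2,600 of 5,000 papers have a domestic corresponding author.
print(f"indigeneity = {indigeneity(2600, 5000):.2f}")  # 0.52

# Hypothetical: chemistry is 12% of national output vs. 6% of world output.
print(f"RSA(chemistry) = {revealed_scientific_advantage(600, 5000, 180_000, 3_000_000):.2f}")  # 2.00
```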

  11. A Simple Composite Metric for the Assessment of Glycemic Status from Continuous Glucose Monitoring Data: Implications for Clinical Practice and the Artificial Pancreas.

    PubMed

    Hirsch, Irl B; Balo, Andrew K; Sayer, Kevin; Garcia, Arturo; Buckingham, Bruce A; Peyser, Thomas A

    2017-06-01

    The potential clinical benefits of continuous glucose monitoring (CGM) have been recognized for many years, but CGM is used by a small fraction of patients with diabetes. One obstacle to greater use of the technology is the lack of simplified tools for assessing glycemic control from CGM data without complicated visual displays of data. We developed a simple new metric, the personal glycemic state (PGS), to assess glycemic control solely from continuous glucose monitoring data. PGS is a composite index that assesses four domains of glycemic control: mean glucose, glycemic variability, time in range, and frequency and severity of hypoglycemia. The metric was applied to data from six clinical studies of the G4 Platinum continuous glucose monitoring system (Dexcom, San Diego, CA). The PGS was also applied to data from an artificial pancreas study comparing open-loop and closed-loop results in adolescents and adults. The new metric was able to characterize the quality of glycemic control in a wide range of study subjects: those with varying mean glucose; with minimal, moderate, or excessive glycemic variability; and under open-loop versus closed-loop control. A new composite metric for the assessment of glycemic control based on CGM data has thus been defined for use in clinical practice and research settings. The new metric may help rapidly identify problems in glycemic control and may assist with optimizing diabetes therapy during time-constrained physician office visits.
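
    The abstract names the four domains of the PGS but not its scoring function, so the sketch below only computes the four underlying domains from a CGM trace. The thresholds used (70-180 mg/dL for time in range, <54 mg/dL for hypoglycemia) are common CGM conventions and may differ from those of the published metric.

```python
# Sketch of the four glycemic domains a PGS-style composite draws on.
# Thresholds are common CGM conventions, not necessarily those of the paper.
import statistics

def glycemic_domains(glucose_mg_dl):
    mean_g = statistics.mean(glucose_mg_dl)
    cv = statistics.stdev(glucose_mg_dl) / mean_g * 100          # variability, %
    in_range = sum(70 <= g <= 180 for g in glucose_mg_dl) / len(glucose_mg_dl)
    hypo = sum(g < 54 for g in glucose_mg_dl) / len(glucose_mg_dl)
    return {"mean": mean_g, "cv_pct": cv, "time_in_range": in_range, "hypo_frac": hypo}

# Hypothetical 5-minute CGM samples (mg/dL).
trace = [110, 132, 158, 181, 149, 97, 66, 52, 88, 124]
print(glycemic_domains(trace))
```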

  12. A review of training research and virtual reality simulators for the da Vinci surgical system.

    PubMed

    Liu, May; Curet, Myriam

    2015-01-01

    PHENOMENON: Virtual reality simulators are the subject of several recent studies of skills training for robot-assisted surgery. Yet no consensus exists regarding what a core skill set comprises or how to measure skill performance. Defining a core skill set and relevant metrics would help surgical educators evaluate different simulators. This review draws from published research to propose a core technical skill set for using the da Vinci surgeon console. Publications on three commercial simulators were used to evaluate the simulators' content addressing these skills and associated metrics. An analysis of published research suggests that a core technical skill set for operating the surgeon console includes bimanual wristed manipulation, camera control, master clutching to manage hand position, use of the third instrument arm, activating energy sources, appropriate depth perception, and awareness of forces applied by instruments. Validity studies of three commercial virtual reality simulators for robot-assisted surgery suggest that all three have comparable content and metrics. However, none has comprehensive content and metrics for all core skills. INSIGHTS: Virtual reality simulation remains a promising tool to support skill training for robot-assisted surgery, yet existing commercial simulator content is inadequate for performing and assessing a comprehensive basic skill set. The results of this evaluation help identify opportunities and challenges that exist for future developments in virtual reality simulation for robot-assisted surgery. Specifically, the inclusion of educational experts in the development cycle alongside clinical and technological experts is recommended.

  13. Stakeholder requirements for commercially successful wave energy converter farms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babarit, Aurélien; Bull, Diana; Dykes, Katherine

    2017-12-01

    In this study, systems engineering techniques are applied to wave energy to identify and specify stakeholders' requirements for a commercially successful wave energy farm. The focus is on the continental-scale utility market. Lifecycle stages and stakeholders are identified, and stakeholders' needs across the whole lifecycle of the wave energy farm are analyzed. A list of 33 stakeholder requirements is identified and specified. This list of requirements should serve as components of a technology performance level metric that could be used by investors and funding agencies to make informed decisions when allocating resources. It is hoped that the technology performance level metric will accelerate wave energy conversion technology convergence.

  14. An On-Line Technology Information System (OTIS) for Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Levri, Julie A.; Boulanger, Richard; Hogan, John A.; Rodriguez, Luis

    2003-01-01

    An On-line Technology Information System (OTIS) is currently being developed for the Advanced Life Support (ALS) Program. This paper describes the preliminary development of OTIS, which is a system designed to provide centralized collection and organization of technology information. The lack of thorough, reliable and easily understood technology information is a major obstacle in effective assessment of technology development progress, trade studies, metric calculations, and technology selection for integrated testing. OTIS will provide a formalized, well-organized protocol to communicate thorough, accurate, current and relevant technology information between the hands-on technology developer and the ALS Community. The need for this type of information transfer system within the Solid Waste Management (SWM) element was recently identified and addressed. A SWM Technology Information Form (TIF) was developed specifically for collecting detailed technology information in the area of SWM. In the TIF, information is requested from SWM technology developers, based upon the Technology Readiness Level (TRL). Basic information is requested for low-TRL technologies, and more detailed information is requested as the TRL of the technology increases. A comparable form is also being developed for the wastewater processing element. In the future, similar forms will also be developed for the ALS elements of air revitalization, food processing, biomass production and thermal control. These ALS element-specific forms will be implemented in OTIS via a web-accessible interface, with the data stored in an object-oriented relational database (created in MySQL(TM)) located on a secure server at NASA Ames Research Center. With OTIS, ALS element leads and managers will be able to carry out informed research and development investment, thereby promoting technology through the TRL scale. OTIS will also allow analysts to make accurate evaluations of technology options. Additionally, the range and specificity of information solicited will help educate technology developers about programmatic needs.

  15. Understanding Acceptance of Software Metrics--A Developer Perspective

    ERIC Educational Resources Information Center

    Umarji, Medha

    2009-01-01

    Software metrics are measures of software products and processes. Metrics are widely used by software organizations to help manage projects, improve product quality and increase efficiency of the software development process. However, metrics programs tend to have a high failure rate in organizations, and developer pushback is one of the sources…

  16. Towards a physics on fractals: Differential vector calculus in three-dimensional continuum with fractal metric

    NASA Astrophysics Data System (ADS)

    Balankin, Alexander S.; Bory-Reyes, Juan; Shapiro, Michael

    2016-02-01

    One way to deal with physical problems on nowhere differentiable fractals is to map these problems into corresponding problems for a continuum with a proper fractal metric. To this end, different definitions of the fractal metric have been suggested to account for the essential fractal features. In this work we develop the metric differential vector calculus in a three-dimensional continuum with a non-Euclidean metric. The metric differential forms and Laplacian are introduced, fundamental identities for metric differential operators are established, and integral theorems are proved by employing the metric version of the quaternionic analysis for the Moisil-Teodoresco operator, which has been introduced and partially developed in this paper. The relations between the metric and conventional operators are revealed. It should be emphasized that the metric vector calculus developed in this work provides a comprehensive mathematical formalism for a continuum with any suitable definition of the fractal metric. This offers a novel tool to study physics on fractals.

  17. Routing and Scheduling Algorithms for WirelessHART Networks: A Survey

    PubMed Central

    Nobre, Marcelo; Silva, Ivanovitch; Guedes, Luiz Affonso

    2015-01-01

    Wireless communication is a trend nowadays in the industrial environment. A number of different technologies have emerged as solutions satisfying strict industrial requirements (e.g., WirelessHART, ISA100.11a, WIA-PA). As the industrial environment presents a vast range of applications, adopting an adequate solution for each case is vital to obtain good system performance. In this context, the routing and scheduling schemes associated with these technologies have a direct impact on important features, like latency and energy consumption. This situation has led to the development of a vast number of routing and scheduling schemes. In the present paper, we focus on the WirelessHART technology, emphasizing its most important routing and scheduling aspects in order to guide both end users and the developers of new algorithms. Furthermore, we provide a detailed literature review of the newest routing and scheduling techniques for WirelessHART, discussing each of their features. The routing algorithms have been evaluated in terms of their objectives, metrics, usage of the WirelessHART structures, and validation method. The scheduling algorithms were likewise evaluated by metrics, validation, and objectives, as well as by multiple superframe support and the redundancy method used. Moreover, this paper briefly presents some insights into the main WirelessHART simulation modules available, in order to provide viable test platforms for the routing and scheduling algorithms. Finally, some open issues in WirelessHART routing and scheduling algorithms are discussed. PMID:25919371

  18. The Applicability of Proposed Object-Oriented Metrics to Developer Feedback in Time to Impact Development

    NASA Technical Reports Server (NTRS)

    Neal, Ralph D.

    1996-01-01

    This paper looks closely at each of the software metrics generated by the McCabe Object-Oriented Tool(TM) and at each metric's ability to convey timely information to developers. The metrics are examined for meaningfulness in terms of the scale assignable to the metric by the rules of measurement theory and the software dimension being measured. Recommendations are made as to the proper use of each metric and its ability to influence development at an early stage. The metrics of the McCabe Object-Oriented Tool(TM) set were selected because of the tool's use in a couple of NASA IV&V projects.

  19. Establishing benchmarks and metrics for disruptive technologies, inappropriate and obsolete tests in the clinical laboratory.

    PubMed

    Kiechle, Frederick L; Arcenas, Rodney C; Rogers, Linda C

    2014-01-01

    Benchmarks and metrics related to laboratory test utilization are based on evidence-based medical literature that may suffer from a positive publication bias. Guidelines are only as good as the data reviewed to create them. Disruptive technologies require time for appropriate use to be established before utilization review will be meaningful. Metrics include monitoring the use of obsolete tests and the inappropriate use of lab tests. Test utilization by clients in a hospital outreach program can be used to monitor the impact of new clients on lab workload. A multi-disciplinary laboratory utilization committee is the most effective tool for modifying bad habits, and reviewing and approving new tests for the lab formulary or by sending them out to a reference lab. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. NASA metric transition plan

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA science publications have used the metric system of measurement since 1970. Although NASA has maintained a metric use policy since 1979, practical constraints have restricted actual use of metric units. In 1988, an amendment to the Metric Conversion Act of 1975 required the Federal Government to adopt the metric system except where impractical. In response to Public Law 100-418 and Executive Order 12770, NASA revised its metric use policy and developed this Metric Transition Plan. NASA's goal is to use the metric system for program development and functional support activities to the greatest practical extent by the end of 1995. The introduction of the metric system into new flight programs will determine the pace of the metric transition. Transition of institutional capabilities and support functions will be phased to enable use of the metric system in flight program development and operations. Externally oriented elements of this plan will introduce and actively support use of the metric system in education, public information, and small business programs. The plan also establishes a procedure for evaluating and approving waivers and exceptions to the required use of the metric system for new programs. Coordination with other Federal agencies and departments (through the Interagency Council on Metric Policy) and industry (directly and through professional societies and interest groups) will identify sources of external support and minimize duplication of effort.

  1. Office of Space Science: Integrated technology strategy

    NASA Technical Reports Server (NTRS)

    Huntress, Wesley T., Jr.; Reck, Gregory M.

    1994-01-01

    This document outlines the strategy by which the Office of Space Science, in collaboration with the Office of Advanced Concepts and Technology and the Office of Space Communications, will meet the challenge of the national technology thrust. The document: highlights the legislative framework within which OSS must operate; evaluates the relationship between OSS and its principal stakeholders; outlines a vision of a successful OSS integrated technology strategy; establishes four goals in support of this vision; provides an assessment of how OSS is currently positioned to respond to the goals; formulates strategic objectives to meet the goals; introduces policies for implementing the strategy; and identifies metrics for measuring success. The OSS Integrated Technology Strategy establishes the framework through which OSS will satisfy stakeholder expectations by teaming with partners in NASA and industry to develop the critical technologies required to: enhance space exploration, expand our knowledge of the universe, and ensure continued national scientific, technical and economic leadership.

  2. Technology Alignment and Portfolio Prioritization (TAPP): Advanced Methods in Strategic Analysis, Technology Forecasting and Long Term Planning for Human Exploration and Operations, Advanced Exploration Systems and Advanced Concepts

    NASA Technical Reports Server (NTRS)

    Funaro, Gregory V.; Alexander, Reginald A.

    2015-01-01

    The Advanced Concepts Office (ACO) at NASA's Marshall Space Flight Center is expanding its current technology assessment methodologies. ACO is developing a framework called TAPP that uses a variety of methods, such as association mining and rule learning from data mining, structure development using a Technological Innovation System (TIS), and social network modeling to measure structural relationships. The role of ACO is to 1) produce a broad spectrum of ideas and alternatives for a variety of NASA's missions, 2) determine mission architecture feasibility and appropriateness to NASA's strategic plans, and 3) define a project in enough detail to establish an initial baseline capable of meeting mission objectives. ACO's role supports the decision-making process associated with the maturation of concepts for traveling through, living in, and understanding space. ACO performs concept studies and technology assessments to determine the degree of alignment between mission objectives and new technologies. The first step in technology assessment is to identify the current technology maturity in terms of a technology readiness level (TRL). The second step is to determine the difficulty associated with advancing a technology from one state to the next. NASA has used TRLs since 1970 and formalized them in 1995; the DoD, ESA, the oil and gas industry, and DoE have since adopted TRLs as a means to assess technology maturity. However, "with the emergence of more complex systems and system of systems, it has been increasingly recognized that TRL assessments have limitations, especially when considering [the] integration of complex systems." When performing the second step in a technology assessment, NASA requires that an Advancement Degree of Difficulty (AD2) method be utilized. NASA has developed or used a variety of methods to perform this step: Expert Opinion or Delphi Approach, Value Engineering or Value Stream, Analytical Hierarchy Process (AHP), Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), and other multi-criteria decision-making methods. These methods can be labor-intensive, often contain cognitive or parochial bias, and do not consider the competing prioritization between mission architectures. Strategic Decision-Making (SDM) processes cannot be properly understood unless the context of the technology is understood. This makes assessing technological change particularly challenging, due to the relationships "between incumbent technology and the incumbent (innovation) system in relation to the emerging technology and the emerging innovation system." The central idea in technology dynamics is to consider all activities that contribute to the development, diffusion, and use of innovations as system functions. Bergek defines system functions within a TIS to address what is actually happening and has a direct influence on the ultimate performance of the system and technology development. ACO uses similar metrics and is expanding them to account for the structure and context of the technology. At NASA, technology and strategy are strongly interrelated. NASA's Strategic Space Technology Investment Plan (SSTIP) prioritizes those technologies essential to the pursuit of NASA's missions and national interests. The SSTIP is strongly coupled with NASA's Technology Roadmaps to provide investment guidance during the next four years, within a twenty-year horizon.
This paper discusses the methods ACO is currently developing to better perform technology assessments while taking into consideration Strategic Alignment, Technology Forecasting, and Long Term Planning.
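
    Since TOPSIS is among the multi-criteria methods named above, a minimal sketch of the technique follows; the alternatives, criteria, weights, and benefit directions are hypothetical and not drawn from ACO's assessments.

```python
# Minimal TOPSIS sketch: vector-normalize, weight, find ideal best/worst,
# then score each alternative by closeness to the ideal solution.
import math

def topsis(matrix, weights, benefit):
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(len(weights))]
    v = [[w * row[j] / n for j, (w, n) in enumerate(zip(weights, norms))] for row in matrix]
    # Ideal best/worst per criterion, depending on whether it is maximized.
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best, d_worst = math.dist(row, best), math.dist(row, worst)
        scores.append(d_worst / (d_best + d_worst))  # closeness coefficient in [0, 1]
    return scores

# Three hypothetical technologies scored on (maturity, cost, mission alignment).
matrix = [[6, 40, 0.8], [4, 25, 0.9], [8, 60, 0.6]]
print(topsis(matrix, weights=[0.4, 0.3, 0.3], benefit=[True, False, True]))
```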

  3. Stretched Lens Array Squarerigger (SLASR) Technology Maturation

    NASA Technical Reports Server (NTRS)

    O'Neill, Mark; McDanal, A.J.; Howell, Joe; Lollar, Louis; Carrington, Connie; Hoppe, David; Piszczor, Michael; Suzuki, Nantel; Eskenazi, Michael; Aiken, Dan

    2007-01-01

    Since April 2005, our team has been underway on a competitively awarded program sponsored by NASA's Exploration Systems Mission Directorate to develop, refine, and mature the unique solar array technology known as Stretched Lens Array SquareRigger (SLASR). SLASR offers an unprecedented portfolio of performance metrics, including the following: areal power density of 300 W/m2 (2005), with a 400 W/m2 target for 2008; specific power of 300 W/kg (2005), with a 500 W/kg target for 2008, for a full 100 kW solar array; stowed power of 80 kW/m3 (2005), with a 120 kW/m3 target for 2008, for a full 100 kW solar array; scalable array capacity from 100s of watts to 100s of kilowatts; a super-insulated small cell circuit enabling high-voltage (300-600 V) operation at low mass penalty; a super-shielded small cell circuit providing excellent radiation hardness at low mass penalty; and 85% cell area savings, yielding 75% lower array cost per watt than a one-sun array. The design is modular, scalable, and mass-producible at MWs per year using existing processes and capacities.

  4. An Examination of Selected Software Testing Tools: 1992

    DTIC Science & Technology

    1992-12-01

    Indexed excerpts (full abstract unavailable): ...historical test database; the test management and problem reporting tools were examined using the sample test database provided by each supplier; ...track the impact of new methods, organizational structures, and technologies; Metrics Manager is supported by an industry database that allows...

  5. Xenon Acquisition Strategies for High-Power Electric Propulsion NASA Missions

    NASA Technical Reports Server (NTRS)

    Herman, Daniel A.; Unfried, Kenneth G.

    2015-01-01

    The benefits of high-power solar electric propulsion (SEP) for both NASA's human and science exploration missions, combined with the technology investment from the Space Technology Mission Directorate, have enabled the development of a 50 kW-class SEP mission. NASA mission concepts developed, including the Asteroid Redirect Robotic Mission, and those proposed by contracted efforts for the 30 kW-class demonstration have a range of xenon propellant loads from 100s of kg up to 10,000 kg. A xenon propellant load of 10 metric tons represents greater than 10% of the global annual production rate of xenon. A single procurement of this size with short-term delivery can disrupt the xenon market, driving up pricing and making the propellant costs for the mission prohibitive. This paper examines the status of the xenon industry worldwide, including historical xenon supply and pricing. The paper discusses approaches for acquiring on the order of 10 MT of xenon propellant, considering realistic programmatic constraints, to support potential near-term NASA missions. Finally, the paper discusses acquisition strategies for mission campaigns utilizing multiple high-power solar electric propulsion vehicles that would require 100s of metric tons of xenon over an extended period, where a longer-term acquisition approach could be implemented.

  6. A Survey of Titan Balloon Concepts and Technology Status

    NASA Technical Reports Server (NTRS)

    Hall, Jeffery L.

    2011-01-01

    This paper surveys the options for, and technology status of, balloon vehicles to explore Saturn's moon Titan. A significant amount of Titan balloon concept thinking and technology development has been performed in recent years, particularly following the spectacular results from the descent and landing of the Huygens probe and remote sensing observations by the Cassini spacecraft. There is widespread recognition that a balloon vehicle on the next Titan mission could provide an outstanding and unmatched capability for in situ exploration on a global scale. The rich variety of revealed science targets has combined with a highly favorable Titan flight environment to yield a wide diversity of proposed balloon concepts. The paper presents a conceptual framework for thinking about balloon vehicle design choices and uses it to analyze various Titan options. The result is a list of recommended Titan balloon vehicle concepts that could perform a variety of science missions, along with their projected performance metrics. Recent technology developments for these balloon concepts are discussed to provide context for an assessment of outstanding risk areas and technological maturity. The paper concludes with suggestions for technology investments needed to achieve flight readiness.

  7. Free-living monitoring of Parkinson's disease: Lessons from the field.

    PubMed

    Del Din, Silvia; Godfrey, Alan; Mazzà, Claudia; Lord, Sue; Rochester, Lynn

    2016-09-01

    Wearable technology comprises miniaturized sensors (eg, accelerometers) worn on the body and/or paired with mobile devices (eg, smart phones) allowing continuous patient monitoring in unsupervised, habitual environments (termed free-living). Wearable technologies are revolutionizing approaches to health care as a result of their utility, accessibility, and affordability. They are positioned to transform Parkinson's disease (PD) management through the provision of individualized, comprehensive, and representative data. This is particularly relevant in PD, where symptoms are often triggered by task and free-living environmental challenges that cannot be replicated with sufficient veracity elsewhere. This review concerns use of wearable technology in free-living environments for people with PD. It outlines the potential advantages of wearable technologies and evidence for these to accurately detect and measure clinically relevant features including motor symptoms, falls risk, freezing of gait, gait, functional mobility, and physical activity. Technological limitations and challenges are highlighted, and advances concerning broader aspects are discussed. Recommendations to overcome key challenges are made. To date there is no fully validated system to monitor clinical features or activities in free-living environments. Robust accuracy and validity metrics for some features have been reported, and wearable technology may be used in these cases with a degree of confidence. Utility and acceptability appear reasonable, although testing has largely been informal. Key recommendations include adopting a multidisciplinary approach for standardizing definitions, protocols, and outcomes. Robust validation of developed algorithms and sensor-based metrics is required along with testing of utility. These advances are required before widespread clinical adoption of wearable technology can be realized. © 2016 International Parkinson and Movement Disorder Society.

  8. Flightdeck Automation Problems (FLAP) Model for Safety Technology Portfolio Assessment

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    NASA's Aviation Safety Program (AvSP) develops and advances methodologies and technologies to improve air transportation safety. The Safety Analysis and Integration Team (SAIT) conducts a safety technology portfolio assessment (PA) to analyze the program content, to examine the benefits and risks of products with respect to program goals, and to support programmatic decision making. The PA process includes systematic identification of current and future safety risks as well as tracking several quantitative and qualitative metrics to ensure the program goals are addressing prominent safety risks accurately and effectively. One of the metrics within the PA process involves using quantitative aviation safety models to gauge the impact of the safety products. This paper demonstrates the role of aviation safety modeling by providing model outputs and evaluating a sample of portfolio elements using the Flightdeck Automation Problems (FLAP) model. The model enables not only ranking of the quantitative relative risk reduction impact of all portfolio elements, but also highlighting the areas with high potential impact via sensitivity and gap analyses in support of the program office. Although the model outputs are preliminary and products are notional, the process shown in this paper is essential to a comprehensive PA of NASA's safety products in the current program and future programs/projects.

  9. Models of Marine Fish Biodiversity: Assessing Predictors from Three Habitat Classification Schemes.

    PubMed

    Yates, Katherine L; Mellin, Camille; Caley, M Julian; Radford, Ben T; Meeuwig, Jessica J

    2016-01-01

    Prioritising biodiversity conservation requires knowledge of where biodiversity occurs. Such knowledge, however, is often lacking. New technologies for collecting biological and physical data coupled with advances in modelling techniques could help address these gaps and facilitate improved management outcomes. Here we examined the utility of environmental data, obtained using different methods, for developing models of both uni- and multivariate biodiversity metrics. We tested which biodiversity metrics could be predicted best and evaluated the performance of predictor variables generated from three types of habitat data: acoustic multibeam sonar imagery, predicted habitat classification, and direct observer habitat classification. We used boosted regression trees (BRT) to model metrics of fish species richness, abundance and biomass, and multivariate regression trees (MRT) to model biomass and abundance of fish functional groups. We compared model performance using different sets of predictors and estimated the relative influence of individual predictors. Models of total species richness and total abundance performed best; those developed for endemic species performed worst. Abundance models performed substantially better than corresponding biomass models. In general, BRT and MRTs developed using predicted habitat classifications performed less well than those using multibeam data. The most influential individual predictor was the abiotic categorical variable from direct observer habitat classification and models that incorporated predictors from direct observer habitat classification consistently outperformed those that did not. Our results show that while remotely sensed data can offer considerable utility for predictive modelling, the addition of direct observer habitat classification data can substantially improve model performance. Thus it appears that there are aspects of marine habitats that are important for modelling metrics of fish biodiversity that are not fully captured by remotely sensed data. As such, the use of remotely sensed data to model biodiversity represents a compromise between model performance and data availability.
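
    As a rough illustration of the boosted regression tree step, the sketch below (assuming scikit-learn and NumPy are available) fits a gradient-boosted tree model to synthetic "species richness" data and reads off relative predictor influence. The predictors and data are fabricated stand-ins for the study's multibeam and habitat-classification variables.

```python
# BRT sketch on fabricated data; predictor names are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(5, 60, n),    # depth (m)
    rng.uniform(0, 1, n),     # rugosity derived from multibeam sonar
    rng.integers(0, 3, n),    # observer habitat class (integer-coded)
])
# Synthetic response: richness driven mostly by rugosity and one habitat class.
richness = 5 + 12 * X[:, 1] + 4 * (X[:, 2] == 1) + rng.normal(0, 2, n)

brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
brt.fit(X, richness)
print("relative influence:", brt.feature_importances_.round(2))
```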

  10. Models of Marine Fish Biodiversity: Assessing Predictors from Three Habitat Classification Schemes

    PubMed Central

    Yates, Katherine L.; Mellin, Camille; Caley, M. Julian; Radford, Ben T.; Meeuwig, Jessica J.

    2016-01-01

    Prioritising biodiversity conservation requires knowledge of where biodiversity occurs. Such knowledge, however, is often lacking. New technologies for collecting biological and physical data coupled with advances in modelling techniques could help address these gaps and facilitate improved management outcomes. Here we examined the utility of environmental data, obtained using different methods, for developing models of both uni- and multivariate biodiversity metrics. We tested which biodiversity metrics could be predicted best and evaluated the performance of predictor variables generated from three types of habitat data: acoustic multibeam sonar imagery, predicted habitat classification, and direct observer habitat classification. We used boosted regression trees (BRT) to model metrics of fish species richness, abundance and biomass, and multivariate regression trees (MRT) to model biomass and abundance of fish functional groups. We compared model performance using different sets of predictors and estimated the relative influence of individual predictors. Models of total species richness and total abundance performed best; those developed for endemic species performed worst. Abundance models performed substantially better than corresponding biomass models. In general, BRT and MRTs developed using predicted habitat classifications performed less well than those using multibeam data. The most influential individual predictor was the abiotic categorical variable from direct observer habitat classification and models that incorporated predictors from direct observer habitat classification consistently outperformed those that did not. Our results show that while remotely sensed data can offer considerable utility for predictive modelling, the addition of direct observer habitat classification data can substantially improve model performance. Thus it appears that there are aspects of marine habitats that are important for modelling metrics of fish biodiversity that are not fully captured by remotely sensed data. As such, the use of remotely sensed data to model biodiversity represents a compromise between model performance and data availability. PMID:27333202

  11. The Bird Community Resilience Index: a novel remote sensing-based biodiversity variable for quantifying ecological integrity

    NASA Astrophysics Data System (ADS)

    Michel, N. L.; Wilsey, C.; Burkhalter, C.; Trusty, B.; Langham, G.

    2017-12-01

    Scalable indicators of biodiversity change are critical to reporting overall progress towards national and global targets for biodiversity conservation (e.g., Aichi Targets) and sustainable development (SDGs). These essential biodiversity variables capitalize on new remote sensing technologies and growth of community science participation. Here we present a novel biodiversity metric quantifying resilience of bird communities and, by extension, of their associated ecological communities. This metric adds breadth to the community composition class of essential biodiversity variables that track trends in condition and vulnerability of ecological communities. We developed this index for use with North American grassland birds, a guild that has experienced stronger population declines than any other avian guild, in order to evaluate gains from the implementation of best management practices on private lands. The Bird Community Resilience Index was designed to incorporate the full suite of species-specific responses to management actions and to be flexible enough to work across broad climatic, land cover, and bird community gradients (i.e., grasslands from northern Mexico through Canada). The Bird Community Resilience Index consists of four components: density estimates of grassland and arid land birds; weighting based on conservation need; a functional diversity metric to incorporate resiliency of bird communities and their ecosystems; and a standardized scoring system to control for interannual variation caused by extrinsic factors (e.g., climate). We present an analysis of bird community resilience across ranches in the Northern Great Plains region of the United States. As predicted, bird community resilience was higher in lands implementing best management practices than elsewhere. While developed for grassland birds, this metric holds great potential for use as an Essential Biodiversity Variable for community composition in a variety of habitats.
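
    A hedged sketch of a composite of this general shape: conservation-weighted densities plus a functional diversity term, z-scored across sites to standardize the final score. The species, weights, and values are hypothetical, and the published index's exact construction is not reproduced here.

```python
# Illustrative composite in the spirit of the four components listed above.
import statistics

def resilience_scores(site_data, need_weight, fd):
    """site_data: {site: {species: density}}; need_weight: {species: weight};
    fd: {site: functional diversity}. Returns a z-scored composite per site."""
    raw = {
        site: sum(need_weight[sp] * d for sp, d in dens.items()) + fd[site]
        for site, dens in site_data.items()
    }
    mu, sd = statistics.mean(raw.values()), statistics.stdev(raw.values())
    return {site: (v - mu) / sd for site, v in raw.items()}

# Hypothetical densities (birds/ha), conservation weights, and diversity values.
sites = {"ranch_A": {"sparrow": 3.2, "longspur": 1.1},
         "ranch_B": {"sparrow": 1.0, "longspur": 0.4},
         "ranch_C": {"sparrow": 2.1, "longspur": 2.5}}
weights = {"sparrow": 1.0, "longspur": 2.0}  # higher = greater conservation need
fd = {"ranch_A": 0.8, "ranch_B": 0.5, "ranch_C": 0.9}
print(resilience_scores(sites, weights, fd))
```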

  12. Generality in nanotechnologies and its relationship to economic performance

    NASA Astrophysics Data System (ADS)

    Gomez Baquero, Fernando

    In the history of economic analysis there is perhaps no more important question than that of how economic development is achieved. For more than a century, economists have explored the role of technology in economic growth, but there is still much to be learned about the effect that technologies, in particular emerging ones, have on economic growth and productivity. The objective of this research is to understand the relationship between nanotechnologies and economic growth and productivity, using the theory of General Purpose Technology (GPT)-driven economic growth. To do so, the Generality Index (calculated from patent data) was used to understand the relative pervasiveness of nanotechnologies. The analysis of trends and patterns in the Generality Index, using the largest group of patents since the publication of the NBER Patent Database, indicates that nanotechnologies possess a higher average Generality than other technological groups. Next, the relationship between the Generality Index and Total Factor Productivity (TFP) was studied using econometric analysis. Model estimates indicate that the variation in Generality for the group of nanotechnologies can explain a large proportion of the variation in TFP. However, the explanatory power of the entire set of patents (not just nanotechnologies) is larger and corresponds better to the expected theoretical models. Additionally, there is a negative short-run relationship between Generality and TFP, conflicting with some of the theoretical GPT models. Finally, the relationship between the Generality of nanotechnologies and policy-driven investment events, such as R&D investments and grant awards, was studied using econometric methods. The statistical evidence suggests that NSF awards are related to technologies with higher Generality, while NIH awards and NNI investments are related to a lower average Generality. Overall, the results of this research indicate that the introduction of pervasive technologies into an economic system sets in motion an interesting series of events that can both increase and decrease productivity and therefore economic growth. The metrics and methods developed in this work emphasize the importance of developing and using new metrics for strategic decision making, both in the private sector and in the public sector.
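
    The Generality Index referenced in this record is conventionally computed, following the Hall-Jaffe-Trajtenberg patent literature, as one minus the Herfindahl concentration of a patent's forward citations across technology classes; a minimal sketch with invented citation counts follows.

```python
# Generality Index sketch: 1 minus the Herfindahl index of forward-citation
# shares across patent classes. Citation counts below are invented.

def generality(citations_by_class):
    """citations_by_class: forward-citation counts per patent class.
    Near 1 = cited widely across classes (general purpose);
    near 0 = cited within a single class (narrow)."""
    total = sum(citations_by_class)
    if total == 0:
        return 0.0
    return 1.0 - sum((c / total) ** 2 for c in citations_by_class)

print(generality([10, 10, 10, 10]))  # 0.75: citations spread evenly over 4 classes
print(generality([38, 1, 1]))        # ~0.10: concentrated in one class
```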

  13. Xenon Acquisition Strategies for High-Power Electric Propulsion NASA Missions

    NASA Technical Reports Server (NTRS)

    Herman, Daniel A.; Unfried, Kenneth G.

    2015-01-01

    Solar electric propulsion (SEP) has been used for station-keeping of geostationary communications satellites since the 1980s. Solar electric propulsion has also benefitted from success on NASA science missions such as Deep Space One and Dawn. The xenon propellant loads for these applications have been in the 100s of kilograms range. Recent studies performed for NASA's Human Exploration and Operations Mission Directorate (HEOMD) have demonstrated that SEP is critically enabling for both near-term and future exploration architectures. The high payoff for both human and science exploration missions and the technology investment from NASA's Space Technology Mission Directorate (STMD) are providing the necessary convergence and impetus for a 30-kilowatt-class SEP mission. Multiple 30-50-kilowatt Solar Electric Propulsion Technology Demonstration Mission (SEP TDM) concepts have been developed based on the maturing electric propulsion and solar array technologies by STMD, with recent efforts focusing on an Asteroid Redirect Robotic Mission (ARRM). Xenon is the optimal propellant for the existing state-of-the-art electric propulsion systems considering efficiency, storability, and contamination potential. NASA mission concepts developed, and those proposed by contracted efforts for the 30-kilowatt-class demonstration, have a range of xenon propellant loads from 100s of kilograms up to 10,000 kilograms. This paper examines the status of the xenon industry worldwide, including historical xenon supply and pricing, and provides updated information on the xenon market relative to previous papers that discussed xenon production relative to NASA mission needs. The paper discusses the various approaches for acquiring on the order of 10 metric tons of xenon propellant to support potential near-term NASA missions. Finally, acquisition strategies for larger NASA missions requiring 100s of metric tons of xenon are discussed.

  14. Metrication study for large space telescope

    NASA Technical Reports Server (NTRS)

    Creswick, F. A.; Weller, A. E.

    1973-01-01

    Various approaches which could be taken in developing a metric-system design for the Large Space Telescope were investigated, considering potential penalties on development cost and time, commonality with other satellite programs, and contribution to national goals for conversion to the metric system of units. Information on the problems, potential approaches, and impacts of metrication was collected from published reports on previous aerospace-industry metrication-impact studies and through numerous telephone interviews. The recommended approach to LST metrication formulated in this study calls for new components and subsystems to be designed in metric-module dimensions, but U.S. customary practice is allowed where U.S. metric standards and metric components are not available or would be unsuitable. Electrical/electronic-system design, which is presently largely metric, is considered exempt from further metrication. An important guideline is that metric design and fabrication should in no way compromise the effectiveness of the LST equipment.

  15. Quantitative imaging biomarkers: a review of statistical methods for technical performance assessment.

    PubMed

    Raunig, David L; McShane, Lisa M; Pennello, Gene; Gatsonis, Constantine; Carson, Paul L; Voyvodic, James T; Wahl, Richard L; Kurland, Brenda F; Schwarz, Adam J; Gönen, Mithat; Zahlmann, Gudrun; Kondratovich, Marina V; O'Donnell, Kevin; Petrick, Nicholas; Cole, Patricia E; Garra, Brian; Sullivan, Daniel C

    2015-02-01

    Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers to measure changes in these features. Critical to the performance of a quantitative imaging biomarker in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in design, analysis methods, and the metrics used to assess a quantitative imaging biomarker for clinical use. It is therefore difficult or impossible to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America and the Quantitative Imaging Biomarker Alliance, with technical, radiological, and statistical experts, developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of quantitative imaging biomarker performance studies so that results from multiple studies can be compared, contrasted, or combined. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
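
    As one example of the kind of technical performance metric this framework standardizes, the sketch below computes a repeatability coefficient (RC = 2.77 times the within-subject standard deviation) from test-retest pairs. It illustrates a standard metrological formula; the measurements are invented and the code is not taken from the paper.

```python
# Repeatability coefficient sketch: 95% of test-retest differences are
# expected to fall within RC = 1.96 * sqrt(2) * wSD = 2.77 * wSD.
import math

def repeatability_coefficient(pairs):
    """pairs: list of (test, retest) measurements on the same subjects."""
    # With two replicates, wSD^2 is estimated by mean squared difference / 2.
    wsd_sq = sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))
    return 2.77 * math.sqrt(wsd_sq)

# Hypothetical paired measurements (same units, e.g., mL).
tumor_volumes = [(10.2, 10.8), (15.1, 14.6), (8.9, 9.3), (12.4, 12.1)]
print(f"RC = {repeatability_coefficient(tumor_volumes):.2f} (same units as input)")
```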

  16. Impact of pretreatment and downstream processing technologies on economics and energy in cellulosic ethanol production.

    PubMed

    Kumar, Deepak; Murthy, Ganti S

    2011-09-05

    While advantages of biofuel have been widely reported, studies also highlight the challenges in large-scale production of biofuel. Cost of ethanol and process energy use in cellulosic ethanol plants are dependent on the technologies used for conversion of feedstock. Process modeling can aid in identifying techno-economic bottlenecks in a production process. A comprehensive techno-economic analysis was performed for conversion of cellulosic feedstock to ethanol using some of the common pretreatment technologies: dilute acid, dilute alkali, hot water and steam explosion. Detailed process models incorporating feedstock handling, pretreatment, simultaneous saccharification and co-fermentation, ethanol recovery and downstream processing were developed using SuperPro Designer. Tall Fescue (Festuca arundinacea Schreb) was used as a model feedstock. Projected ethanol yields were 252.62, 255.80, 255.27 and 230.23 L/dry metric ton biomass for conversion processes using dilute acid, dilute alkali, hot water and steam explosion pretreatment technologies respectively. Prices of feedstock and cellulose enzymes were assumed to be $50/metric ton and $0.517/kg broth (10% protein in broth, 600 FPU/g protein) respectively. Capital cost of ethanol plants processing 250,000 metric tons of feedstock/year was $1.92, $1.73, $1.72 and $1.70/L ethanol for processes using dilute acid, dilute alkali, hot water and steam explosion pretreatment respectively. Ethanol production costs of $0.83, $0.88, $0.81 and $0.85/L ethanol were estimated for production processes using dilute acid, dilute alkali, hot water and steam explosion pretreatment respectively. Water use in the production process using dilute acid, dilute alkali, hot water and steam explosion pretreatment was estimated at 5.96, 6.07, 5.84 and 4.36 kg/L ethanol respectively. Ethanol price and energy use were highly dependent on process conditions used in the ethanol production plant. Potential for significant ethanol cost reductions exists in increasing pentose fermentation efficiency and reducing biomass and enzyme costs. The results demonstrated the importance of addressing the tradeoffs in capital costs, pretreatment and downstream processing technologies.

  17. Impact of pretreatment and downstream processing technologies on economics and energy in cellulosic ethanol production

    PubMed Central

    2011-01-01

    Background While advantages of biofuel have been widely reported, studies also highlight the challenges in large-scale production of biofuel. Cost of ethanol and process energy use in cellulosic ethanol plants are dependent on the technologies used for conversion of feedstock. Process modeling can aid in identifying techno-economic bottlenecks in a production process. A comprehensive techno-economic analysis was performed for conversion of cellulosic feedstock to ethanol using some of the common pretreatment technologies: dilute acid, dilute alkali, hot water and steam explosion. Detailed process models incorporating feedstock handling, pretreatment, simultaneous saccharification and co-fermentation, ethanol recovery and downstream processing were developed using SuperPro Designer. Tall Fescue (Festuca arundinacea Schreb) was used as a model feedstock. Results Projected ethanol yields were 252.62, 255.80, 255.27 and 230.23 L/dry metric ton biomass for conversion processes using dilute acid, dilute alkali, hot water and steam explosion pretreatment technologies respectively. Prices of feedstock and cellulose enzymes were assumed to be $50/metric ton and $0.517/kg broth (10% protein in broth, 600 FPU/g protein) respectively. Capital cost of ethanol plants processing 250,000 metric tons of feedstock/year was $1.92, $1.73, $1.72 and $1.70/L ethanol for processes using dilute acid, dilute alkali, hot water and steam explosion pretreatment respectively. Ethanol production costs of $0.83, $0.88, $0.81 and $0.85/L ethanol were estimated for production processes using dilute acid, dilute alkali, hot water and steam explosion pretreatment respectively. Water use in the production process using dilute acid, dilute alkali, hot water and steam explosion pretreatment was estimated at 5.96, 6.07, 5.84 and 4.36 kg/L ethanol respectively. Conclusions Ethanol price and energy use were highly dependent on process conditions used in the ethanol production plant. Potential for significant ethanol cost reductions exists in increasing pentose fermentation efficiency and reducing biomass and enzyme costs. The results demonstrated the importance of addressing the tradeoffs in capital costs, pretreatment and downstream processing technologies. PMID:21892958

  18. Metrication report to the Congress. 1991 activities and 1992 plans

    NASA Technical Reports Server (NTRS)

    1991-01-01

    During 1991, NASA approved a revised metric use policy and developed a NASA Metric Transition Plan. This Plan targets the end of 1995 for completion of NASA's metric initiatives. This Plan also identifies future programs that NASA anticipates will use the metric system of measurement. Field installations began metric transition studies in 1991 and will complete them in 1992. Half of NASA's Space Shuttle payloads for 1991, and almost all such payloads for 1992, have some metric-based elements. In 1992, NASA will begin assessing requirements for space-quality piece parts fabricated to U.S. metric standards, leading to development and qualification of high priority parts.

  19. Integrating automated support for a software management cycle into the TAME system

    NASA Technical Reports Server (NTRS)

    Sunazuka, Toshihiko; Basili, Victor R.

    1989-01-01

    Software managers are interested in the quantitative management of software quality, cost and progress. An integrated software management methodology, which can be applied throughout the software life cycle for any number of purposes, is required. The TAME (Tailoring A Measurement Environment) methodology is based on the improvement paradigm and the goal/question/metric (GQM) paradigm. This methodology helps generate a software engineering process and measurement environment based on the project characteristics. The SQMAR (software quality measurement and assurance technology) is a software quality metric system and methodology applied to the development processes. It is based on the feed-forward control principle. Quality target setting is carried out before the plan-do-check-action activities are performed. These methodologies are integrated to realize goal-oriented measurement, process control and visual management. A metric setting procedure based on the GQM paradigm, a management system called the software management cycle (SMC), and its application to a case study based on NASA/SEL data are discussed. The expected effects of SMC are quality improvement, managerial cost reduction, accumulation and reuse of experience, and a highly visual management reporting system.
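
    A minimal sketch of the goal/question/metric hierarchy that the GQM paradigm prescribes; the goal, questions, and metrics shown are hypothetical examples, not content from the NASA/SEL case study.

```python
# GQM hierarchy sketch: a goal is refined into questions, each answered
# by one or more metrics. All example content below is hypothetical.
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    metrics: list[str] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str    # e.g., "improve"
    issue: str      # e.g., "defect density"
    obj: str        # the object of study, e.g., a process
    viewpoint: str  # whose perspective, e.g., project manager
    questions: list[Question] = field(default_factory=list)

goal = Goal(
    purpose="improve", issue="defect density", obj="release process",
    viewpoint="project manager",
    questions=[
        Question("What is the current defect density?", ["defects/KLOC"]),
        Question("Is inspection effort paying off?", ["defects found per inspection hour"]),
    ],
)
for q in goal.questions:
    print(q.text, "->", q.metrics)
```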

  20. Probabilistic Causal Analysis for System Safety Risk Assessments in Commercial Air Transport

    NASA Technical Reports Server (NTRS)

    Luxhoj, James T.

    2003-01-01

    Aviation is one of the critical modes of our national transportation system. As such, it is essential that new technologies be continually developed to ensure that a safe mode of transportation becomes even safer in the future. The NASA Aviation Safety Program (AvSP) is managing the development of new technologies and interventions aimed at reducing the fatal aviation accident rate by a factor of 5 by year 2007 and by a factor of 10 by year 2022. A portfolio assessment is currently being conducted to determine the projected impact that the new technologies and/or interventions may have on reducing aviation safety system risk. This paper reports on advanced risk analytics that combine the use of a human error taxonomy, probabilistic Bayesian Belief Networks, and case-based scenarios to assess a relative risk intensity metric. A sample case is used for illustrative purposes.

  1. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to include also systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: Imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.

  2. Economics of polysilicon processes

    NASA Technical Reports Server (NTRS)

    Yaws, C. L.; Li, K. Y.; Chou, S. M.

    1986-01-01

    Techniques are being developed to provide lower cost polysilicon material for solar cells. Existing technology which normally provides semiconductor industry polysilicon material is undergoing changes and also being used to provide polysilicon material for solar cells. Economics of new and existing technologies are presented for producing polysilicon. The economics are primarily based on the preliminary process design of a plant producing 1,000 metric tons/year of silicon. The polysilicon processes include: Siemens process (hydrogen reduction of trichlorosilane); Union Carbide process (silane decomposition); and Hemlock Semiconductor process (hydrogen reduction of dichlorosilane). The economics include cost estimates of capital investment and product cost to produce polysilicon via the technology. Sensitivity analysis results are also presented to disclose the effect of major parameters such as utilities, labor, raw materials and capital investment.
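
    A minimal sketch of the sensitivity-analysis step, assuming a simple additive product-cost model; all component costs are hypothetical, not the paper's estimates.

```python
# Vary each cost component +/-20% around a hypothetical base case and report
# the swing in product cost; a stand-in for the sensitivity analysis described.
base = {"raw_materials": 8.0, "utilities": 6.0, "labor": 4.0,
        "capital_recovery": 7.0}          # $/kg silicon, hypothetical

def product_cost(components):
    return sum(components.values())

print(f"base cost: ${product_cost(base):.2f}/kg")

for item in base:
    for factor in (0.8, 1.2):
        varied = dict(base, **{item: base[item] * factor})
        print(f"{item} x{factor:.1f}: ${product_cost(varied):.2f}/kg")
```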

  3. Future manned systems advanced avionics study

    NASA Technical Reports Server (NTRS)

    Sawamura, Bob; Radke, Kathie

    1992-01-01

    COTS+ was defined in this study as commercial off-the-shelf (COTS) products, ruggedized and militarized components, and COTS technology. This study cites the benefits of integrating COTS+ in space, postulates a COTS+ integration methodology, and develops requirements and an architecture to achieve integration. Developmental needs and concerns were identified throughout the study; these needs, concerns, and recommendations relative to their abatement are subsequently presented for further action and study. The COTS+ concept appears workable in part or in totality. No COTS+ technology gaps were identified; however, radiation tolerance was cited as a concern, and the deferred maintenance issue resurfaced. Further study is recommended to explore COTS+ cost-effectiveness, maintenance philosophy, needs, concerns, and utility metrics. The generation of a development plan to further investigate and integrate COTS+ technology is recommended. A COTS+ transitional integration program is recommended. Sponsoring and establishing technology maturation programs and COTS+ engineering and standards committees are deemed necessary and are recommended for furthering COTS+ integration in space.

  4. A Synthetic Vision Preliminary Integrated Safety Analysis

    NASA Technical Reports Server (NTRS)

    Hemm, Robert; Houser, Scott

    2001-01-01

    This report documents efforts to analyze a sample of aviation safety programs, using the LMI-developed integrated safety analysis tool to determine the change in system risk resulting from Aviation Safety Program (AvSP) technology implementation. Specifically, we have worked to modify existing system safety tools to address the safety impact of synthetic vision (SV) technology. Safety metrics include reliability, availability, and resultant hazard. This analysis of SV technology is intended to be part of a larger effort to develop a model that is capable of "providing further support to the product design and development team as additional information becomes available". The reliability analysis portion of the effort is complete and is fully documented in this report. The simulation analysis is still underway; it will be documented in a subsequent report. The specific goal of this effort is to apply the integrated safety analysis to SV technology. This report also contains a brief discussion of data necessary to expand the human performance capability of the model, as well as a discussion of human behavior and its implications for system risk assessment in this modeling environment.

  5. Development of Thermal Protection Materials for Future Mars Entry, Descent and Landing Systems

    NASA Technical Reports Server (NTRS)

    Cassell, Alan M.; Beck, Robin A. S.; Arnold, James O.; Hwang, Helen; Wright, Michael J.; Szalai, Christine E.; Blosser, Max; Poteet, Carl C.

    2010-01-01

    Entry Systems will play a crucial role as NASA develops the technologies required for Human Mars Exploration. The Exploration Technology Development Program Office established the Entry, Descent and Landing (EDL) Technology Development Project to develop Thermal Protection System (TPS) materials for insertion into future Mars Entry Systems. An assessment of current entry system technologies identified significant opportunity to improve the current state of the art in thermal protection materials in order to enable landing of heavy mass (40 mT) payloads. To accomplish this goal, the EDL Project has outlined a framework to define, develop and model the thermal protection system material concepts required to allow for the human exploration of Mars via aerocapture followed by entry. Two primary classes of ablative materials are being developed: rigid and flexible. The rigid ablatives will be applied to the acreage of a 10x30 m rigid mid-L/D aeroshell to endure the dual-pulse heating (peak approx. 500 W/sq cm). Likewise, flexible ablative materials are being developed for 20-30 m diameter deployable aerodynamic decelerator entry systems that could endure dual-pulse heating (peak approx. 120 W/sq cm). A technology roadmap is presented that will be used to facilitate the maturation of both the rigid and flexible ablative materials through the application of decision metrics (requirements, key performance parameters, TRL definitions, and evaluation criteria) used to assess and advance the various candidate TPS material technologies.

  6. Metrication report to the Congress

    NASA Technical Reports Server (NTRS)

    1991-01-01

    NASA's principal metrication accomplishments for FY 1990 were establishment of metrication policy for major programs, development of an implementing instruction for overall metric policy and initiation of metrication planning for the major program offices. In FY 1991, development of an overall NASA plan and individual program office plans will be completed, requirement assessments will be performed for all support areas, and detailed assessment and transition planning will be undertaken at the institutional level. Metric feasibility decisions on a number of major programs are expected over the next 18 months.

  7. Metrication report to the Congress

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The principal NASA metrication activities for FY 1989 were a revision of NASA metric policy and evaluation of the impact of using the metric system of measurement for the design and construction of the Space Station Freedom. Additional studies provided a basis for focusing follow-on activity. In FY 1990, emphasis will shift to implementation of metric policy and development of a long-range metrication plan. The report which follows addresses Policy Development, Planning and Program Evaluation, and Supporting Activities for the past and coming year.

  8. An approach for quantitative image quality analysis for CT

    NASA Astrophysics Data System (ADS)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assess image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis tool kit to analyze CT-generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that generates PCA components with sparse loadings, used in conjunction with the Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.
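
    The modified SPCA method is specific to the authors' tool kit; as a hedged stand-in, the sketch below pairs ordinary PCA (via SVD) with a Hotelling T-squared score on synthetic metric data to flag out-of-family scans.

```python
# Ordinary PCA plus Hotelling T^2 as a simplified stand-in for the paper's
# modified SPCA fault detection. Data are synthetic: 200 scans x 10 metrics,
# with one deliberately out-of-family scan.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # 200 scans x 10 quality metrics
X[-1] += 4.0                              # inject one faulty system

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                     # retained components
scores = Xc @ Vt[:k].T
var = (s[:k] ** 2) / (len(X) - 1)         # variance of each retained component

t2 = np.sum(scores**2 / var, axis=1)      # Hotelling T^2 per scan
print("suspect scans:", np.where(t2 > np.percentile(t2, 99))[0])
```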

  9. Maintenance Metrics for Jovial (J73) Software

    DTIC Science & Technology

    1988-12-01

    "pacing technology in advanced fighters, just as it has in most other weapon systems and information systems" (Canan, 1986:49). Another reason for... "the magnitude of the software inside an aircraft may represent only a fraction of that aircraft's total software requirement" (Canan, 1986:49). One more... "art than a science" marks program development as a largely labor-intensive, human endeavor (Canan, 1986:50). Individual effort and creativity therefore

  10. Early Warning Look Ahead Metrics: The Percent Milestone Backlog Metric

    NASA Technical Reports Server (NTRS)

    Shinn, Stephen A.; Anderson, Timothy P.

    2017-01-01

    All complex development projects experience delays and corresponding backlogs of their project control milestones during their acquisition lifecycles. NASA Goddard Space Flight Center (GSFC) Flight Projects Directorate (FPD) teamed with The Aerospace Corporation (Aerospace) to develop a collection of Early Warning Look Ahead metrics that would provide GSFC leadership with some independent indication of the programmatic health of GSFC flight projects. As part of the collection of Early Warning Look Ahead metrics, the Percent Milestone Backlog metric is particularly revealing, and has utility as a stand-alone execution performance monitoring tool. This paper describes the purpose, development methodology, and utility of the Percent Milestone Backlog metric. The other four Early Warning Look Ahead metrics are also briefly discussed. Finally, an example of the use of the Percent Milestone Backlog metric in providing actionable insight is described, along with examples of its potential use in other commodities.
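
    The paper defines the metric in detail; a plausible minimal reading is the share of control milestones planned to date that remain uncompleted, as in this hypothetical calculation.

```python
# One plausible reading of a percent milestone backlog; the counts are
# hypothetical, and the actual GSFC/Aerospace definition may differ.
milestones_planned_to_date = 40
milestones_completed = 31

backlog_pct = 100 * (milestones_planned_to_date - milestones_completed) \
    / milestones_planned_to_date
print(f"percent milestone backlog: {backlog_pct:.1f}%")
```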

  11. Rapid Response Risk Assessment in New Project Development

    NASA Technical Reports Server (NTRS)

    Graber, Robert R.

    2010-01-01

    A capability for rapidly performing quantitative risk assessments has been developed by JSC Safety and Mission Assurance for use on project design trade studies early in the project life cycle, i.e., concept development through preliminary design phases. A risk assessment tool set has been developed consisting of interactive and integrated software modules that allow a user/project designer to assess the impact of alternative design or programmatic options on the probability of mission success or other risk metrics. The risk and design trade space includes interactive options for selecting parameters and/or metrics for numerous design characteristics including component reliability characteristics, functional redundancy levels, item or system technology readiness levels, and mission event characteristics. This capability is intended for use on any project or system development with a defined mission, and an example project is used for demonstration and descriptive purposes, e.g., landing a robot on the moon. The effects of various alternative design considerations and their impact on mission success (or failure) can be measured in real time on a personal computer. This capability provides a high degree of efficiency for quickly providing information in NASA's evolving risk-based decision environment.

  12. Metrication in a global environment

    NASA Technical Reports Server (NTRS)

    Aberg, J.

    1994-01-01

    A brief history about the development of the metric system of measurement is given. The need for the U.S. to implement the 'SI' metric system in the international markets, especially in the aerospace and general trade, is discussed. Development of metric implementation and experiences locally, nationally, and internationally are included.

  13. Performance specification methodology: introduction and application to displays

    NASA Astrophysics Data System (ADS)

    Hopper, Darrel G.

    1998-09-01

    Acquisition reform is based on the notion that DoD must rely on the commercial marketplace insofar as possible rather than solely looking inward to a military marketplace to meet its needs. This reform forces a fundamental change in the way DoD conducts business, including a heavy reliance on private sector models of change. The key to more reliance on the commercial marketplace is the performance specification (PS). This paper introduces some PS concepts and a PS classification principle to help bring some structure to the analysis of risk (cost, schedule, capability) in weapons system development and the management of opportunities for affordable ownership (maintain/increase capability via technology insertion, reduce cost) in this new paradigm. The DoD shift toward commercial components is nowhere better exemplified than in displays. Displays are the quintessential dual-use technology and are used herein to exemplify these PS concepts and principle. The advent of flat panel displays as a successful technology is setting off an epochal shift in cockpits and other military applications. Displays are installed in every DoD weapon system, and are thus representative of a range of technologies where issues and concerns throughout industry and government have been raised regarding the increased DoD reliance on the commercial marketplace. Performance specifications require metrics: the overall metrics of 'information-thrust' with units of Mb/s and 'specific info-thrust' with units of Mb/s/kg are introduced to analyze the value of a display to the warfighter and its affordability to the taxpayer.
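
    The paper's formal definition is not reproduced in the abstract; one plausible reading of information-thrust is display pixel throughput in Mb/s, normalized by mass for the specific form, as in this hypothetical calculation.

```python
# A plausible-reading sketch of the two metrics; the definition is an
# assumption and all figures below are hypothetical.
width, height = 1280, 1024        # pixels
bit_depth = 24                    # bits per pixel
frame_rate = 60                   # Hz
mass_kg = 5.0                     # display mass

info_thrust = width * height * bit_depth * frame_rate / 1e6   # Mb/s
specific = info_thrust / mass_kg                              # Mb/s/kg
print(f"information-thrust: {info_thrust:.0f} Mb/s")
print(f"specific info-thrust: {specific:.0f} Mb/s/kg")
```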

  14. Technology Acceptance among Pre-Service Teachers: Does Gender Matter?

    ERIC Educational Resources Information Center

    Teo, Timothy; Fan, Xitao; Du, Jianxia

    2015-01-01

    This study examined possible gender differences in pre-service teachers' perceived acceptance of technology in their professional work under the framework of the technology acceptance model (TAM). Based on a sample of pre-service teachers, a series of progressively more stringent measurement invariance tests (configural, metric, and scalar…

  15. The MERMAID project

    NASA Technical Reports Server (NTRS)

    Cowderoy, A. J. C.; Jenkins, John O.; Poulymenakou, A.

    1992-01-01

    The tendency for software development projects to be completed over schedule and over budget has been documented extensively. Additionally, many projects are completed within budgetary and schedule targets only as a result of the customer agreeing to accept reduced functionality. In his classic book, The Mythical Man-Month, Fred Brooks exposes the fallacy that effort and schedule are freely interchangeable. All current cost models are produced on the assumption that there is very limited scope for schedule compression unless there is a corresponding reduction in delivered functionality. The Metrication and Resources Modeling Aid (MERMAID) project, partially financed by the Commission of the European Communities (CEC) as Project 2046, began in Oct. 1988, and its goals were as follows: (1) improvement of understanding of the relationships between software development productivity and product and process metrics; (2) to facilitate widespread technology transfer from the Consortium to the European software industry; and (3) to facilitate the widespread uptake of cost estimation techniques by the provision of prototype cost estimation tools. MERMAID developed a family of methods for cost estimation, many of which have had tools implemented in prototypes. These prototypes are best considered as toolkits or workbenches.

  16. Compensation of chief executive officers at nonprofit US hospitals.

    PubMed

    Joynt, Karen E; Le, Sidney T; Orav, E John; Jha, Ashish K

    2014-01-01

    Hospital chief executive officers (CEOs) can shape the priorities and performance of their organizations. The degree to which their compensation is based on their hospitals' quality performance is not well known. To characterize CEO compensation and examine its relation with quality metrics. Retrospective observational study. Participants included 1877 CEOs at 2681 private, nonprofit US hospitals. We used linear regression to identify hospital structural characteristics associated with CEO pay. We then determined the degree to which a hospital's performance on financial metrics, technologic metrics, quality metrics, and community benefit in 2008 was associated with CEO pay in 2009. The CEOs in our sample had a mean compensation of $595,781 (median, $404,938) in 2009. In multivariate analyses, CEO pay was associated with the number of hospital beds overseen ($550 for each additional bed; 95% CI, 429-671; P < .001), teaching status ($425,078 more at major teaching vs nonteaching hospitals; 95% CI, 315,238-534,918; P < .001), and urban location. Hospitals with high levels of advanced technologic capabilities compensated their CEOs $135,862 more (95% CI, 80,744-190,990; P < .001) than did hospitals with low levels of technology. Hospitals with high performance on patient satisfaction compensated their CEOs $51,706 more than did those with low performance on patient satisfaction (95% CI, 15,166-88,247; P = .006). We found no association between CEO pay and hospitals' margins, liquidity, capitalization, occupancy rates, process quality performance, mortality rates, readmission rates, or measures of community benefit. Compensation of CEOs at nonprofit hospitals was highly variable across the country. Compensation was associated with technology and patient satisfaction but not with processes of care, patient outcomes, or community benefit.
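
    A hedged sketch of the regression design described, fit to synthetic data whose generating coefficients echo the reported effect sizes; it is not the study's actual model or data.

```python
# Ordinary least squares on synthetic CEO-pay data; generating coefficients
# loosely mimic the reported associations ($550/bed, ~$425K teaching,
# ~$136K high-tech), but everything here is illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 500
beds = rng.integers(50, 900, n)
teaching = rng.integers(0, 2, n)
high_tech = rng.integers(0, 2, n)
pay = 200_000 + 550 * beds + 425_000 * teaching + 136_000 * high_tech \
      + rng.normal(0, 50_000, n)

X = np.column_stack([np.ones(n), beds, teaching, high_tech])
coef, *_ = np.linalg.lstsq(X, pay, rcond=None)
for name, b in zip(["intercept", "per_bed", "teaching", "high_tech"], coef):
    print(f"{name:>10}: ${b:,.0f}")
```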

  17. MO-A-16A-01: QA Procedures and Metrics: In Search of QA Usability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sathiaseelan, V; Thomadsen, B

    Radiation therapy has undergone considerable changes in the past two decades, with a surge of new technology and treatment delivery methods. The complexity of radiation therapy treatments has increased, and there has been increased awareness and publicity about the associated risks. In response, there has been a proliferation of guidelines for medical physicists to adopt to ensure that treatments are delivered safely. Task Group recommendations are copious, and clinical physicists' hours are longer, stretched to various degrees between site planning and management, IT support, physics QA, and treatment planning responsibilities. Radiation oncology has many quality control practices in place to ensure the delivery of high-quality, safe treatments. Incident reporting systems have been developed to collect statistics about near-miss events at many radiation oncology centers. However, tools are lacking to assess the impact of these various control measures. A recent effort to address this shortcoming is the work of Ford et al (2012), who published a methodology enumerating quality control quantification for measuring the effectiveness of safety barriers. Over 4000 near-miss incidents reported from 2 academic radiation oncology clinics were analyzed using quality control quantification, and a profile of the most effective quality control measures (metrics) was identified. There is a critical need to identify a QA metric to help busy clinical physicists focus their limited time and resources most effectively in order to minimize or eliminate errors in the radiation treatment delivery processes. In this symposium the usefulness of workflows and QA metrics to assure safe and high quality patient care will be explored. Two presentations will be given: Quality Metrics and Risk Management with High Risk Radiation Oncology Procedures, and Strategies and Metrics for Quality Management in the TG-100 Era. Learning objectives: provide an overview of, and the need for, QA usability metrics, including different cultures/practices affecting the effectiveness of methods and metrics; show examples of quality assurance workflows, such as statistical process control, that monitor the treatment planning and delivery process to identify errors; learn to identify and prioritize risks and QA procedures in radiation oncology; and consider whether a quality assurance program aided by quality assurance metrics can help minimize errors and ensure safe treatment delivery, and whether such metrics should be institution specific.

  18. Comparison and statistical analysis of four write stability metrics in bulk CMOS static random access memory cells

    NASA Astrophysics Data System (ADS)

    Qiu, Hao; Mizutani, Tomoko; Saraya, Takuya; Hiramoto, Toshiro

    2015-04-01

    The four commonly used metrics for write stability were measured and compared on the same set of 2048 (2k) six-transistor (6T) static random access memory (SRAM) cells fabricated in a 65 nm bulk technology. The preferred metric should be effective for yield estimation and help predict the edge of stability. The results demonstrate that all metrics share the same worst SRAM cell. On the other hand, compared to the butterfly curve, which exhibits non-normality, and the write N-curve, where no cell state flip happens, the bit-line and word-line margins have good normality as well as almost perfect mutual correlation. As a result, both the bit-line and word-line methods prove to be preferred write stability metrics.
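
    A small sketch of the statistical comparison described: normality tests and the margin-to-margin correlation, run on synthetic stand-ins for measured bit-line and word-line write margins.

```python
# Normality and correlation checks of the kind described, on synthetic data
# standing in for 2048 measured bit-line/word-line margins (volts).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
common = rng.normal(0.35, 0.03, 2048)          # shared cell-level variation
blm = common + rng.normal(0, 0.002, 2048)      # bit-line margin
wlm = common + rng.normal(0, 0.002, 2048)      # word-line margin

print("BLM normality p:", stats.shapiro(blm).pvalue)
print("WLM normality p:", stats.shapiro(wlm).pvalue)
r, _ = stats.pearsonr(blm, wlm)
print("BLM-WLM correlation:", round(r, 3))
```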

  19. Systems engineering and integration: Cost estimation and benefits analysis

    NASA Technical Reports Server (NTRS)

    Dean, Ed; Fridge, Ernie; Hamaker, Joe

    1990-01-01

    Space Transportation Avionics hardware and software cost has traditionally been estimated in Phase A and B using cost techniques which predict cost as a function of various cost predictive variables such as weight, lines of code, functions to be performed, quantities of test hardware, quantities of flight hardware, design and development heritage, complexity, etc. The output of such analyses has been life cycle costs, economic benefits and related data. The major objectives of Cost Estimation and Benefits analysis are twofold: (1) to play a role in the evaluation of potential new space transportation avionics technologies, and (2) to benefit from emerging technological innovations. Both aspects of cost estimation and technology are discussed here. The role of cost analysis in the evaluation of potential technologies should be one of offering additional quantitative and qualitative information to aid decision-making. The cost analyses process needs to be fully integrated into the design process in such a way that cost trades, optimizations and sensitivities are understood. Current hardware cost models tend to primarily use weights, functional specifications, quantities, design heritage and complexity as metrics to predict cost. Software models mostly use functionality, volume of code, heritage and complexity as cost descriptive variables. Basic research needs to be initiated to develop metrics more responsive to the trades which are required for future launch vehicle avionics systems. These would include cost estimating capabilities that are sensitive to technological innovations such as improved materials and fabrication processes, computer aided design and manufacturing, self checkout and many others. In addition to basic cost estimating improvements, the process must be sensitive to the fact that no cost estimate can be quoted without also quoting a confidence associated with the estimate. In order to achieve this, better cost risk evaluation techniques are needed as well as improved usage of risk data by decision-makers. More and better ways to display and communicate cost and cost risk to management are required.
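
    As one concrete example of the weight-driven cost models mentioned, a parametric cost estimating relationship (CER) is often a power law in weight; the coefficients below are hypothetical, not from any NASA model.

```python
# A generic power-law CER sketch: cost = a * weight^b. Coefficients are
# hypothetical placeholders, chosen only to show the shape of such models.
def cer_cost(weight_kg, a=150.0, b=0.85):
    """First-unit hardware cost in $K as a power law of weight."""
    return a * weight_kg ** b

for w in (10, 50, 200):
    print(f"{w:>4} kg -> ${cer_cost(w):,.0f}K")
```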

  20. Advancing Measurement Science to Assess Monitoring, Diagnostics, and Prognostics for Manufacturing Robotics

    PubMed Central

    Qiao, Guixiu; Weiss, Brian A.

    2016-01-01

    Unexpected equipment downtime is a ‘pain point’ for manufacturers, especially in that this event usually translates to financial losses. To minimize this pain point, manufacturers are developing new health monitoring, diagnostic, prognostic, and maintenance (collectively known as prognostics and health management (PHM)) techniques to advance the state-of-the-art in their maintenance strategies. The manufacturing community has a wide-range of needs with respect to the advancement and integration of PHM technologies to enhance manufacturing robotic system capabilities. Numerous researchers, including personnel from the National Institute of Standards and Technology (NIST), have identified a broad landscape of barriers and challenges to advancing PHM technologies. One such challenge is the verification and validation of PHM technology through the development of performance metrics, test methods, reference datasets, and supporting tools. Besides documenting and presenting the research landscape, NIST personnel are actively researching PHM for robotics to promote the development of innovative sensing technology and prognostic decision algorithms and to produce a positional accuracy test method that emphasizes the identification of static and dynamic positional accuracy. The test method development will provide manufacturers with a methodology that will allow them to quickly assess the positional health of their robot systems along with supporting the verification and validation of PHM techniques for the robot system. PMID:28058172
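
    A minimal sketch of a positional-accuracy check in the spirit described, reducing full pose accuracy to 3-D point error statistics; the poses and noise level are synthetic.

```python
# Compare commanded and measured tool positions and report error statistics;
# a reduced, illustrative version of a positional-accuracy test method.
import numpy as np

commanded = np.array([[0.0, 0.0, 0.5], [0.3, 0.1, 0.5], [0.3, 0.4, 0.2]])
measured = commanded + np.random.default_rng(4).normal(0, 0.002, commanded.shape)

errors = np.linalg.norm(measured - commanded, axis=1)   # metres
print(f"mean positional error: {errors.mean()*1000:.2f} mm")
print(f"max  positional error: {errors.max()*1000:.2f} mm")
```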

  2. NASA Environmentally Responsible Aviation's Highly-Loaded Front Block Compressor Demonstration

    NASA Technical Reports Server (NTRS)

    Celestina, Mark

    2017-01-01

    The ERA project was created in 2009 as part of NASA's Aeronautics Research Mission Directorate (ARMD) Integrated Aviation Systems Program (IASP). The purpose of the ERA project was to explore and document the feasibility, benefit, and technical risk of vehicle concepts and enabling technologies to reduce aviation's impact on the environment. The metrics for this technology are given in Figure 1, with the N+2 metrics highlighted in green. It is anticipated that the United States air transportation system will continue to expand significantly over the next few decades, adversely impacting the environment unless new technology is incorporated to simultaneously reduce nitrogen oxides (NOx), noise, and fuel consumption. In order to achieve the overall goals and meet the technology insertion challenges, these goals were divided into technical challenges to be achieved during the execution of the ERA project. Technical challenges were accomplished through test campaigns conducted by Integrated Technology Demonstrations (ITDs). ERA's technical performance period ended in 2015.

  3. NASA Northeast Regional Technology Transfer Center

    NASA Technical Reports Server (NTRS)

    Dunn, James P.

    2001-01-01

    This report is a summary of the primary activities and metrics for the NASA Northeast Regional Technology Transfer Center, operated by the Center for Technology Commercialization, Inc. (CTC). This report covers the contract period January 1, 2000 - March 31, 2001. This report includes a summary of the overall CTC metrics, a summary of the major outreach events, an overview of the NASA Business Outreach Program, a summary of the activities and results of the 'Technology into the Zone' program, and a summary of the major activities and initiatives performed by CTC in supporting this contract. Between January 1, 2000 and March 31, 2001, CTC facilitated 10 license agreements, established 35 partnerships, provided assistance 517 times to companies, and performed 593 outreach activities, including participation in 57 outreach events. CTC also assisted Goddard in executing a successful 'Technology into the Zone' program. CTC is pleased to have performed this contract and looks forward to continuing to provide its specialized services in support of the new 5-year RTTC contract for the Northeast region.

  4. Physicists Get INSPIREd: INSPIRE Project and Grid Applications

    NASA Astrophysics Data System (ADS)

    Klem, Jukka; Iwaszkiewicz, Jan

    2011-12-01

    INSPIRE is the new high-energy physics scientific information system developed by CERN, DESY, Fermilab and SLAC. INSPIRE combines the curated and trusted contents of SPIRES database with Invenio digital library technology. INSPIRE contains the entire HEP literature with about one million records and in addition to becoming the reference HEP scientific information platform, it aims to provide new kinds of data mining services and metrics to assess the impact of articles and authors. Grid and cloud computing provide new opportunities to offer better services in areas that require large CPU and storage resources including document Optical Character Recognition (OCR) processing, full-text indexing of articles and improved metrics. D4Science-II is a European project that develops and operates an e-Infrastructure supporting Virtual Research Environments (VREs). It develops an enabling technology (gCube) which implements a mechanism for facilitating the interoperation of its e-Infrastructure with other autonomously running data e-Infrastructures. As a result, this creates the core of an e-Infrastructure ecosystem. INSPIRE is one of the e-Infrastructures participating in D4Science-II project. In the context of the D4Science-II project, the INSPIRE e-Infrastructure makes available some of its resources and services to other members of the resulting ecosystem. Moreover, it benefits from the ecosystem via a dedicated Virtual Organization giving access to an array of resources ranging from computing and storage resources of grid infrastructures to data and services.

  5. High Speed Research Noise Prediction Code (HSRNOISE) User's and Theoretical Manual

    NASA Technical Reports Server (NTRS)

    Golub, Robert (Technical Monitor); Rawls, John W., Jr.; Yeager, Jessie C.

    2004-01-01

    This report describes a computer program, HSRNOISE, that predicts noise levels for a supersonic aircraft powered by mixed flow turbofan engines with rectangular mixer-ejector nozzles. It fully documents the noise prediction algorithms, provides instructions for executing the HSRNOISE code, and provides predicted noise levels for the High Speed Research (HSR) program Technology Concept (TC) aircraft. The component source noise prediction algorithms were developed jointly by Boeing, General Electric Aircraft Engines (GEAE), NASA and Pratt & Whitney during the course of the NASA HSR program. Modern Technologies Corporation developed an alternative mixer ejector jet noise prediction method under contract to GEAE that has also been incorporated into the HSRNOISE prediction code. Algorithms for determining propagation effects and calculating noise metrics were taken from the NASA Aircraft Noise Prediction Program.

  6. Model-based metrics of human-automation function allocation in complex work environments

    NASA Astrophysics Data System (ADS)

    Kim, So Young

    Function allocation is the design decision which assigns work functions to all agents in a team, both human and automated. Efforts to guide function allocation systematically have been studied in many fields such as engineering, human factors, team and organization design, management science, and cognitive systems engineering. Each field focuses on certain aspects of function allocation, but not all; thus, an independent discussion of each does not address all necessary issues with function allocation. Four distinctive perspectives emerged from a review of these fields: technology-centered, human-centered, team-oriented, and work-oriented. Each perspective focuses on different aspects of function allocation: capabilities and characteristics of agents (automation or human), team structure and processes, and work structure and the work environment. Together, these perspectives identify the following eight issues with function allocation: 1) Workload, 2) Incoherency in function allocations, 3) Mismatches between responsibility and authority, 4) Interruptive automation, 5) Automation boundary conditions, 6) Function allocation preventing human adaptation to context, 7) Function allocation destabilizing the humans' work environment, and 8) Mission performance. Addressing these issues systematically requires formal models and simulations that include all necessary aspects of human-automation function allocation: the work environment, the dynamics inherent to the work, agents, and relationships among them. Also, addressing these issues requires not only a (static) model, but also a (dynamic) simulation that captures temporal aspects of work such as the timing of actions and their impact on the agent's work. Therefore, with properly modeled work as described by the work environment, the dynamics inherent to the work, agents, and relationships among them, a modeling framework developed by this thesis, which includes static work models and dynamic simulation, can capture the issues with function allocation. Then, based on the eight issues, eight types of metrics are established. The purpose of these metrics is to assess the extent to which each issue exists with a given function allocation. Specifically, the eight types of metrics assess workload, coherency of a function allocation, mismatches between responsibility and authority, interruptive automation, automation boundary conditions, human adaptation to context, stability of the human's work environment, and mission performance. Finally, to validate the modeling framework and the metrics, a case study was conducted modeling four different function allocations between a pilot and flight deck automation during the arrival and approach phases of flight. A range of pilot cognitive control modes and maximum human taskload limits were also included in the model. The metrics were assessed for these four function allocations and analyzed to validate the capability of the metrics to identify important issues in given function allocations. In addition, the design insights provided by the metrics are highlighted. This thesis concludes with a discussion of mechanisms for further validating the modeling framework and function allocation metrics developed here, and highlights where these developments can be applied in research and in the design of function allocations in complex work environments such as aviation operations.
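
    As a toy version of the first metric type (workload), the sketch below scans a simulated taskload timeline for intervals exceeding a stated maximum human taskload; the timeline and limit are invented, not from the thesis's simulation.

```python
# Flag simulation steps where the human agent's concurrent taskload exceeds
# a maximum limit; data and threshold are hypothetical.
timeline = [1, 2, 2, 3, 4, 4, 3, 2, 1, 1]   # concurrent human tasks per step
max_taskload = 3

overload_steps = [i for i, n in enumerate(timeline) if n > max_taskload]
print(f"overloaded {len(overload_steps)}/{len(timeline)} steps: {overload_steps}")
```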

  7. Highway carbon footprinting.

    DOT National Transportation Integrated Search

    2011-09-01

    An effort was undertaken to engage an under-used technology, the TRANSIMS and the MOVES traffic models, in order to advance the utilization of this technology within the state of New York and to examine the value of carbon-footprinting as a metric fo...

  8. Intelligent vehicle control: Opportunities for terrestrial-space system integration

    NASA Technical Reports Server (NTRS)

    Shoemaker, Charles

    1994-01-01

    For 11 years the Department of Defense has cooperated with a diverse array of other Federal agencies, including the National Institute of Standards and Technology, the Jet Propulsion Laboratory, and the Department of Energy, to develop robotics technology for unmanned ground systems. These activities have addressed control system architectures supporting the sharing of tasks between the system operator and various automated subsystems, man-machine interfaces to intelligent vehicle systems, video compression supporting vehicle driving in low data rate digital communication environments, multiple simultaneous vehicle control by a single operator, path planning and retrace, and automated obstacle detection and avoidance subsystems. Performance metrics and test facilities for robotic vehicles were developed, permitting objective performance assessment of a variety of operator-automated vehicle control regimes. Progress in these areas will be described in the context of robotic vehicle testbeds specifically developed for automated vehicle research. These initiatives, particularly as regards the data compression, task sharing, and automated mobility topics, also have relevance in the space environment. The intersection of technology development interests between these two communities will be discussed in this paper.

  9. Developing a Security Metrics Scorecard for Healthcare Organizations.

    PubMed

    Elrefaey, Heba; Borycki, Elizabeth; Kushniruk, Andrea

    2015-01-01

    In healthcare, information security is a key aspect of protecting a patient's privacy and ensuring systems availability to support patient care. Security managers need to measure the performance of security systems and this can be achieved by using evidence-based metrics. In this paper, we describe the development of an evidence-based security metrics scorecard specific to healthcare organizations. Study participants were asked to comment on the usability and usefulness of a prototype of a security metrics scorecard that was developed based on current research in the area of general security metrics. Study findings revealed that scorecards need to be customized for the healthcare setting in order for the security information to be useful and usable in healthcare organizations. The study findings resulted in the development of a security metrics scorecard that matches the healthcare security experts' information requirements.

  10. An Instrumented Glove to Assess Manual Dexterity in Simulation-Based Neurosurgical Education

    PubMed Central

    Lemos, Juan Diego; Hernandez, Alher Mauricio; Soto-Romero, Georges

    2017-01-01

    The traditional neurosurgical apprenticeship scheme includes the assessment of a trainee's manual skills carried out by experienced surgeons. However, the introduction of surgical simulation technology presents a new paradigm where residents can refine surgical techniques on a simulator before putting them into practice in real patients. Unfortunately, in this new scheme, an experienced surgeon will not always be available to evaluate a trainee's performance. For this reason, it is necessary to develop automatic mechanisms to estimate metrics for assessing manual dexterity in a quantitative way. Authors have proposed some hardware-software approaches to evaluate manual dexterity on surgical simulators. This paper presents IGlove, a wearable device that uses inertial sensors embedded in an elastic glove to capture hand movements. Metrics to assess manual dexterity are estimated from the sensor signals using data processing and information analysis algorithms. It has been designed to be used with a neurosurgical simulator called Daubara NS Trainer, but can be easily adapted to other benchtop- and manikin-based medical simulators. The system was tested with a sample of 14 volunteers who performed a test that was designed to simultaneously evaluate their fine motor skills and the IGlove's functionalities. Metrics obtained by each of the participants are presented as results in this work; it is also shown how these metrics are used to automatically evaluate the level of manual dexterity of each volunteer. PMID:28468268
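
    The paper's metric set is not reproduced in the abstract; one common dexterity proxy computable from inertial data is movement smoothness via jerk, sketched below on synthetic accelerometer samples (lower mean squared jerk indicates smoother motion).

```python
# A jerk-based smoothness score from accelerometer data; this is a generic
# motion-analysis technique, not necessarily IGlove's metric. Data are synthetic.
import numpy as np

fs = 100.0                                        # sample rate, Hz
t = np.arange(0, 5, 1 / fs)
accel = np.sin(2 * np.pi * 1.5 * t) \
    + np.random.default_rng(5).normal(0, 0.05, t.size)   # m/s^2, synthetic

jerk = np.gradient(accel, 1 / fs)                 # d(accel)/dt, m/s^3
print(f"mean squared jerk: {np.mean(jerk**2):.1f} (m/s^3)^2")
```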

  11. Bayesian performance metrics and small system integration in recent homeland security and defense applications

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz; Kostrzewski, Andrew; Patton, Edward; Pradhan, Ranjit; Shih, Min-Yi; Walter, Kevin; Savant, Gajendra; Shie, Rick; Forrester, Thomas

    2010-04-01

    In this paper, Bayesian inference is applied to performance metrics definition of the important class of recent Homeland Security and defense systems called binary sensors, including both (internal) system performance and (external) CONOPS. The medical analogy is used to define the PPV (Positive Predictive Value), the basic Bayesian metrics parameter of the binary sensors. Also, Small System Integration (SSI) is discussed in the context of recent Homeland Security and defense applications, emphasizing a highly multi-technological approach, within the broad range of clusters ("nexus") of electronics, optics, X-ray physics, γ-ray physics, and other disciplines.
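
    The PPV of a binary sensor follows directly from Bayes' rule; the sketch below uses invented sensitivity, specificity, and prevalence values to show the familiar low-prevalence effect on alarm reliability.

```python
# PPV via Bayes' rule for a binary sensor; all operating-point numbers
# below are hypothetical.
def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

print(f"PPV at 1% prevalence:  {ppv(0.95, 0.95, 0.01):.3f}")  # most alarms false
print(f"PPV at 30% prevalence: {ppv(0.95, 0.95, 0.30):.3f}")
```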

  12. Workshop Outline and Training Materials for Adult Learners Developed During the Metric Leader Training Project for Maine.

    ERIC Educational Resources Information Center

    Butzow, John W.; Yvon, Bernard R.

    Outlines of five sessions of the Metric Leader Training Project for Maine are included in this publication. Topics are: (1) Introduction - Length - Temperature; (2) Volume/Capacity - Mass/Weight; (3) Metric Advocacy - Length - Area - Temperature; (4) Metric Education Resources; and (5) Metric Education Planning - Metric Olympics - Final…

  13. Progress on Fuel Efficiency and Market Adoption - SuperTruck Factsheet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2016-06-30

    The Department of Energy (DOE) launched the SuperTruck initiative in 2009 with the goal of developing and demonstrating a 50 percent improvement in overall freight efficiency (expressed in a ton-mile per gallon metric) for a heavy-duty Class 8 tractor-trailer. To date, the industry teams participating in the initiative have successfully met or are on track to exceed this goal, leveraging suites of technologies that hold significant potential for market success.
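
    The freight-efficiency metric is straightforward arithmetic; the baseline figures below are illustrative, not DOE data, and simply show what a 50 percent gain means.

```python
# Ton-mile-per-gallon freight efficiency; payload, distance, and fuel burn
# are hypothetical round numbers for a Class 8 tractor-trailer.
def freight_efficiency(payload_tons, miles, gallons):
    return payload_tons * miles / gallons

base = freight_efficiency(19, 500, 500 / 6.0)   # ~6 mpg baseline assumption
improved = base * 1.5                            # SuperTruck goal: +50%
print(f"baseline: {base:.0f} ton-mi/gal")
print(f"goal:     {improved:.0f} ton-mi/gal")
```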

  14. Quantifying and Mapping Global Data Poverty.

    PubMed

    Leidig, Mathias; Teeuw, Richard M

    2015-01-01

    Digital information technologies, such as the Internet, mobile phones and social media, provide vast amounts of data for decision-making and resource management. However, access to these technologies, as well as their associated software and training materials, is not evenly distributed: since the 1990s there has been concern about a "Digital Divide" between the data-rich and the data-poor. We present an innovative metric for evaluating international variations in access to digital data: the Data Poverty Index (DPI). The DPI is based on Internet speeds, numbers of computer owners and Internet users, mobile phone ownership and network coverage, as well as provision of higher education. The datasets used to produce the DPI are provided annually for almost all the countries of the world and can be freely downloaded. The index that we present in this 'proof of concept' study is the first to quantify and visualise the problem of global data poverty, using the most recent datasets, for 2013. The effects of severe data poverty, particularly limited access to geoinformatic data, free software and online training materials, are discussed in the context of sustainable development and disaster risk reduction. The DPI highlights countries where support is needed for improving access to the Internet and for the provision of training in geoinformatics. We conclude that the DPI is of value as a potential metric for monitoring the Sustainable Development Goals of the Sendai Framework for Disaster Risk Reduction.
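
    The published DPI weighting is not reproduced here; the sketch shows the generic construction of such a composite index: min-max normalize each indicator and average, with all indicator values and ranges hypothetical.

```python
# A generic composite-index construction in the spirit of the DPI; the
# indicator set, values, and ranges are invented, and the real DPI's
# weighting may differ.
indicators = {                      # country X, raw values (hypothetical)
    "internet_speed_mbps": 4.0,
    "internet_users_pct": 35.0,
    "mobile_coverage_pct": 70.0,
    "tertiary_enrolment_pct": 12.0,
}
ranges = {                          # assumed global min/max per indicator
    "internet_speed_mbps": (0.5, 60.0),
    "internet_users_pct": (1.0, 98.0),
    "mobile_coverage_pct": (10.0, 100.0),
    "tertiary_enrolment_pct": (1.0, 90.0),
}

scores = []
for key, value in indicators.items():
    lo, hi = ranges[key]
    scores.append((value - lo) / (hi - lo))     # min-max normalize to [0, 1]

print(f"DPI-style score: {sum(scores) / len(scores):.2f}")  # lower = poorer
```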

  15. First NASA Aviation Safety Program Weather Accident Prevention Project Annual Review

    NASA Technical Reports Server (NTRS)

    Colantonio, Ron

    2000-01-01

    The goal of this Annual Review was to present NASA plans and accomplishments that will impact the national aviation safety goal. NASA's WxAP Project focuses on developing the following products: (1) Aviation Weather Information (AWIN) technologies (displays, sensors, pilot decision tools, communication links, etc.); (2) Electronic Pilot Reporting (E-PIREPS) technologies; (3) Enhanced weather products with associated hazard metrics; (4) Forward looking turbulence sensor technologies (radar, lidar, etc.); (5) Turbulence mitigation control system designs; Attendees included personnel from various NASA Centers, FAA, National Weather Service, DoD, airlines, aircraft and pilot associations, industry, aircraft manufacturers and academia. Attendees participated in discussion sessions aimed at collecting aviation user community feedback on NASA plans and R&D activities. This CD is a compilation of most of the presentations presented at this Review.

  16. Measurement System for Energetic Materials Decomposition

    DTIC Science & Technology

    2015-01-05


  17. Electrochemical Positioning of Ordered Nanostructures

    DTIC Science & Technology

    2016-04-26


  18. High throughput imaging and analysis for biological interpretation of agricultural plants and environmental interaction

    NASA Astrophysics Data System (ADS)

    Hong, Hyundae; Benac, Jasenka; Riggsbee, Daniel; Koutsky, Keith

    2014-03-01

    High throughput (HT) phenotyping of crops is essential to increase yield in environments deteriorated by climate change. The controlled environment of a greenhouse offers an ideal platform to study the genotype-to-phenotype linkages for crop screening. Advanced imaging technologies are used to study plants' responses to resource limitations such as water and nutrient deficiency. Advanced imaging technologies coupled with automation make HT phenotyping in the greenhouse not only feasible, but practical. Monsanto has a state-of-the-art automated greenhouse (AGH) facility. Handling of the soil, pots, water and nutrients is completely automated. Images of the plants are acquired by multiple hyperspectral and broadband cameras. The hyperspectral cameras cover wavelengths from visible light through short wave infrared (SWIR). In-house developed software analyzes the images to measure plant morphological and biochemical properties. We measure phenotypic metrics like plant area, height, and width as well as biomass. Hyperspectral imaging allows us to measure biochemical metrics such as chlorophyll, anthocyanin, and foliar water content. The last 4 years of AGH operations on crops like corn, soybean, and cotton have demonstrated successful application of imaging and analysis technologies for high throughput plant phenotyping. Using HT phenotyping, scientists have been showing strong correlations to environmental conditions, such as water and nutrient deficits, as well as the ability to tease apart distinct differences in the genetic backgrounds of crops.
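
    The abstract does not give the index formulas; a representative example of a hyperspectral band-ratio metric is NDVI, computed below from two invented canopy reflectance values (the paper's chlorophyll and water metrics may use different bands).

```python
# NDVI as a representative band-ratio vegetation metric; band choices and
# reflectance values are illustrative, not from the AGH pipeline.
red = 0.08          # mean canopy reflectance near 670 nm (hypothetical)
nir = 0.55          # mean canopy reflectance near 800 nm (hypothetical)

ndvi = (nir - red) / (nir + red)
print(f"NDVI: {ndvi:.2f}")     # higher values indicate denser green biomass
```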

  19. Scientific Wealth in Middle East and North Africa: Productivity, Indigeneity, and Specialty in 1981–2013

    PubMed Central

    Stoppani, Jonathan; Anadon, Laura Diaz; Narayanamurti, Venkatesh

    2016-01-01

    Several developing countries seek to build knowledge-based economies by attempting to expand scientific research capabilities. Characterizing the state and direction of progress in this arena is challenging but important. Here, we employ three metrics: a classical metric of productivity (publications per person), an adapted metric which we denote as Revealed Scientific Advantage (developed from work used to compare publications in scientific fields among countries) to characterize disciplinary specialty, and a new metric, scientific indigeneity (defined as the ratio of publications with domestic corresponding authors), to characterize the locus of scientific activity, which also serves as a partial proxy for local absorptive capacity. These metrics, which use population and publication data available for most countries, allow the characterization of some key features of national scientific enterprise. The trends in productivity and indigeneity, when compared across other countries and regions, can serve as indicators of strength or fragility in national research ecosystems, and the trends in specialty can allow regional policy makers to assess the extent to which the areas of research focus align (or do not align) with regional priorities. We apply the metrics to study the Middle East and North Africa (MENA), a region where science and technology capacity will play a key role in national economic diversification. We analyze 9.8 million publication records from 1981 to 2013 in 17 countries of MENA, from Morocco to Iraq, and compare them to selected countries throughout the world. The results show that international collaborators increasingly drove the scientific activity in MENA. The median indigeneity reached 52% in 2013 (indicating that almost half of the corresponding authors were located in foreign countries). Additionally, the regional disciplinary focus in chemical and petroleum engineering is waning, with modest growth in the life sciences. We find repeated patterns of stagnation and contraction of scientific activity for several MENA countries, contributing to a widening productivity gap on an international comparative yardstick. The results prompt questions about the strength of the developing scientific enterprise and highlight the need for consistent long-term policy for effectively addressing regional challenges with domestic research. PMID:27820831
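
    The indigeneity metric as defined above is a simple ratio; the publication counts below are hypothetical, chosen to land near the reported 2013 median.

```python
# Scientific indigeneity = share of publications whose corresponding author
# is domestic; counts are hypothetical.
domestic_corresponding = 520
total_publications = 1000

indigeneity = domestic_corresponding / total_publications
print(f"indigeneity: {indigeneity:.0%}")   # ~52%, the reported MENA median
```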

  20. Evaluation of solid particle number and black carbon for very low particulate matter emissions standards in light-duty vehicles.

    PubMed

    Chang, M-C Oliver; Shields, J Erin

    2017-06-01

    To reliably measure at the low particulate matter (PM) levels needed to meet California's Low Emission Vehicle (LEV III) 3- and 1-mg/mile particulate matter (PM) standards, various approaches other than gravimetric measurement have been suggested for testing purposes. In this work, a feasibility study of solid particle number (SPN, d50 = 23 nm) and black carbon (BC) as alternatives to gravimetric PM mass was conducted, based on the relationship of these two metrics to gravimetric PM mass, as well as the variability of each of these metrics. More than 150 Federal Test Procedure (FTP-75) or Supplemental Federal Test Procedure (US06) tests were conducted on 46 light-duty vehicles, including port-fuel-injected and direct-injected gasoline vehicles, as well as several light-duty diesel vehicles equipped with diesel particle filters (LDD/DPF). For FTP tests, emission variability of gravimetric PM mass was found to be slightly less than that of either SPN or BC, whereas the opposite was observed for US06 tests. Emission variability of PM mass for LDD/DPF was higher than that of both SPN and BC, primarily because of higher PM mass measurement uncertainties (background and precision) near or below 0.1 mg/mile. While strong correlations were observed from both SPN and BC to PM mass, the slopes are dependent on engine technologies and driving cycles, and the proportionality between the metrics can vary over the course of the test. Replacement of the LEV III PM mass emission standard with one other measurement metric may imperil the effectiveness of emission reduction, as a correlation-based relationship may evolve over future technologies for meeting stringent greenhouse standards. Solid particle number and black carbon were suggested in place of PM mass for the California LEV III 1-mg/mile FTP standard. Their equivalence, proportionality, and emission variability in comparison to PM mass, based on a large light-duty vehicle fleet examined, are dependent on engine technologies and driving cycles. Such empirical derived correlations exhibit the limitation of using these metrics for enforcement and certification standards as vehicle combustion and after-treatment technologies advance.
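
    A sketch of the correlation analysis described: fit PM mass against solid particle number separately for two engine technologies, showing a technology-dependent slope; the data and slopes are synthetic, not the fleet measurements.

```python
# Per-technology linear fits of PM mass vs. SPN; the generating slopes and
# emission ranges are invented, only the analysis pattern is illustrated.
import numpy as np

rng = np.random.default_rng(3)
spn_gdi = rng.uniform(1e11, 6e11, 30)                   # particles/mile, GDI
pm_gdi = 2.0e-12 * spn_gdi + rng.normal(0, 0.05, 30)    # mg/mile
spn_pfi = rng.uniform(1e10, 1e11, 30)                   # particles/mile, PFI
pm_pfi = 3.5e-12 * spn_pfi + rng.normal(0, 0.02, 30)    # mg/mile

for label, x, y in [("GDI", spn_gdi, pm_gdi), ("PFI", spn_pfi, pm_pfi)]:
    slope, intercept = np.polyfit(x, y, 1)
    print(f"{label}: PM = {slope:.2e} * SPN + {intercept:.3f} mg/mile")
```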

  1. Analyzing critical material demand: A revised approach.

    PubMed

    Nguyen, Ruby Thuy; Fishman, Tomer; Zhao, Fu; Imholte, D D; Graedel, T E

    2018-07-15

    Apparent consumption has been widely used as a metric to estimate material demand. However, with technology advancement and complexity of material use, this metric has become less useful in tracking material flows, estimating recycling feedstocks, and conducting life cycle assessment of critical materials. We call for future research efforts to focus on building a multi-tiered consumption database for the global trade network of critical materials. This approach will help track how raw materials are processed into major components (e.g., motor assemblies) and eventually incorporated into complete pieces of equipment (e.g., wind turbines). Foreseeable challenges would involve: 1) difficulty in obtaining a comprehensive picture of trade partners due to business sensitive information, 2) complexity of materials going into components of a machine, and 3) difficulty maintaining such a database. We propose ways to address these challenges such as making use of digital design, learning from the experience of building similar databases, and developing a strategy for financial sustainability. We recommend that, with the advancement of information technology, small steps toward building such a database will contribute significantly to our understanding of material flows in society and the associated human impacts on the environment.

  2. Towards a Framework for Evaluating and Comparing Diagnosis Algorithms

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Kuhn, Lukas; de Kleer, Johan; van Gemund, Arjan; Feldman, Alexander

    2009-01-01

    Diagnostic inference involves the detection of anomalous system behavior and the identification of its cause, possibly down to a failed unit or to a parameter of a failed unit. Traditional approaches to solving this problem include expert/rule-based, model-based, and data-driven methods. Each approach (and various techniques within each approach) use different representations of the knowledge required to perform the diagnosis. The sensor data is expected to be combined with these internal representations to produce the diagnosis result. In spite of the availability of various diagnosis technologies, there have been only minimal efforts to develop a standardized software framework to run, evaluate, and compare different diagnosis technologies on the same system. This paper presents a framework that defines a standardized representation of the system knowledge, the sensor data, and the form of the diagnosis results and provides a run-time architecture that can execute diagnosis algorithms, send sensor data to the algorithms at appropriate time steps from a variety of sources (including the actual physical system), and collect resulting diagnoses. We also define a set of metrics that can be used to evaluate and compare the performance of the algorithms, and provide software to calculate the metrics.
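
    A hedged sketch of the kind of scoring such a framework could apply: compare an algorithm's detection timeline with the ground-truth fault injection time to get detection latency and false-alarm counts. The actual framework's metric definitions may differ; the data are illustrative.

```python
# Detection latency and false-alarm count from a detection timeline;
# times are hypothetical seconds into a test scenario.
fault_injected_at = 120.0                     # ground truth
detections = [43.0, 121.5, 300.0]             # times the algorithm flagged a fault

true_detections = [t for t in detections if t >= fault_injected_at]
false_alarms = [t for t in detections if t < fault_injected_at]

latency = min(true_detections) - fault_injected_at
print(f"detection latency: {latency:.1f} s")
print(f"false alarms before fault: {len(false_alarms)}")
```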

  3. Space Launch System Advanced Development Office, FY 2013 Annual Report

    NASA Technical Reports Server (NTRS)

    Crumbly, C. M.; Bickley, F. P.; Hueter, U.

    2013-01-01

    The Advanced Development Office (ADO), part of the Space Launch System (SLS) program, provides SLS with the advanced development needed to evolve the vehicle from an initial Block 1 payload capability of 70 metric tons (t) to an eventual Block 2 capability of 130 t, with intermediate evolution options possible. ADO takes existing technologies and matures them to the point that their insertion into the mainline program carries minimal risk. The ADO portfolio of tasks covers a broad range of technical development activities, supporting advanced boosters, upper stages, and other advanced development work benefiting the SLS program. A total of 34 separate tasks were funded by ADO in FY 2013.

  4. On Learning: Metrics Based Systems for Countering Asymmetric Threats

    DTIC Science & Technology

    2006-05-25

    …of self-education. While this alone cannot guarantee organizational learning, no organization can learn without a spirit of individual learning…held views on globalization and the impact of Information Age technology. The emerging environment conceptually links to the learning model as part…On Learning: Metrics Based Systems for Countering Asymmetric Threats. A Monograph by MAJ Rafael Lopez, U.S. Army School of Advanced Military

  5. CUQI: cardiac ultrasound video quality index

    PubMed Central

    Razaak, Manzoor; Martini, Maria G.

    2016-01-01

    Medical images and videos are now increasingly part of modern telecommunication applications, including telemedicine applications, favored by advancements in video compression and communication technologies. Medical video quality evaluation is essential for modern applications, since compression and transmission processes often compromise video quality. Several state-of-the-art video quality metrics assess the perceptual quality of the video. For a medical video, however, assessing quality in terms of “diagnostic” value rather than “perceptual” quality is more important. We present a diagnostic-quality-oriented video quality metric for the quality evaluation of cardiac ultrasound videos. Cardiac ultrasound videos are characterized by rapid, repetitive cardiac motions and distinct structural information characteristics that are explored by the proposed metric. The proposed metric, the cardiac ultrasound video quality index (CUQI), is a full-reference metric that uses the motion and edge information of the cardiac ultrasound video to evaluate video quality. The metric was evaluated for its performance in approximating the quality of cardiac ultrasound videos by testing its correlation with the subjective scores of medical experts. The results of our tests showed that the metric has high correlation with medical expert opinions and in several cases outperforms the state-of-the-art video quality metrics considered in our tests. PMID:27014715

  6. Development of Thin Solar Cells for Space Applications at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Dickman, John E.; Hepp, Aloysius; Banger, Kulbinder K.; Harris, Jerry D.; Jin, Michael H.

    2003-01-01

    The NASA GRC Thin Film Solar Cell program is developing solar cell technologies for space applications that address two critical metrics: higher specific power (power per unit mass) and lower stowed volume at launch. To be considered for space applications, an array using thin film solar cells must offer significantly higher specific power while reducing stowed volume compared to the present technologies being flown on space missions, namely crystalline solar cells. The NASA GRC program is developing single-source precursors and the requisite deposition hardware to grow high-efficiency, thin-film solar cells on polymer substrates at low deposition temperatures. Using low deposition temperatures enables the thin-film solar cells to be grown on a variety of polymer substrates, many of which would not survive the high-temperature processing currently used to fabricate thin-film solar cells. The talk will present the latest results of this research program.

  7. Computer Modeling to Evaluate the Impact of Technology Changes on Resident Procedural Volume.

    PubMed

    Grenda, Tyler R; Ballard, Tiffany N S; Obi, Andrea T; Pozehl, William; Seagull, F Jacob; Chen, Ryan; Cohn, Amy M; Daskin, Mark S; Reddy, Rishindra M

    2016-12-01

    As resident "index" procedures change in volume due to advances in technology or reliance on simulation, it may be difficult to ensure trainees meet case requirements. Training programs are in need of metrics to determine how many residents their institutional volume can support. As a case study of how such metrics can be applied, we evaluated a case distribution simulation model to examine program-level mediastinoscopy and endobronchial ultrasound (EBUS) volumes needed to train thoracic surgery residents. A computer model was created to simulate case distribution based on annual case volume, number of trainees, and rotation length. Single institutional case volume data (2011-2013) were applied, and 10 000 simulation years were run to predict the likelihood (95% confidence interval) of all residents (4 trainees) achieving board requirements for operative volume during a 2-year program. The mean annual mediastinoscopy volume was 43. In a simulation of pre-2012 board requirements (thoracic pathway, 25; cardiac pathway, 10), there was a 6% probability of all 4 residents meeting requirements. Under post-2012 requirements (thoracic, 15; cardiac, 10), however, the likelihood increased to 88%. When EBUS volume (mean 19 cases per year) was concurrently evaluated in the post-2012 era (thoracic, 10; cardiac, 0), the likelihood of all 4 residents meeting case requirements was only 23%. This model provides a metric to predict the probability of residents meeting case requirements in an era of changing volume by accounting for unpredictable and inequitable case distribution. It could be applied across operations, procedures, or disease diagnoses and may be particularly useful in developing resident curricula and schedules.
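
    The abstract describes a simulation that repeatedly distributes an institution's annual cases among trainees and tallies how often every trainee reaches the required count. A minimal Monte Carlo rendering of that idea is sketched below; the uniform random assignment of cases and the default numbers are simplifying assumptions, not the authors' model.

        import random

        def prob_all_meet(annual_cases=43, residents=4, requirement=15,
                          years=2, trials=10_000):
            """Estimate P(every resident logs >= `requirement` cases).

            Assumes a constant annual institutional volume and uniformly
            random case assignment; both are simplifications.
            """
            successes = 0
            for _ in range(trials):
                counts = [0] * residents
                for _ in range(annual_cases * years):
                    counts[random.randrange(residents)] += 1  # inequitable by chance
                successes += all(c >= requirement for c in counts)
            return successes / trials

        print(prob_all_meet())  # mediastinoscopy-like numbers from the abstract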

  8. Electric Propulsion Requirements and Mission Analysis Under NASA's In-Space Propulsion Technology Project

    NASA Technical Reports Server (NTRS)

    Dudzinski, Leonard A.; Pencil, Eric J.; Dankanich, John W.

    2007-01-01

    The In-Space Propulsion Technology Project (ISPT) is currently NASA's sole investment in electric propulsion technologies. This project is managed at NASA Glenn Research Center (GRC) for the NASA Headquarters Science Mission Directorate (SMD). The objective of the electric propulsion project area is to develop near-term and midterm electric propulsion technologies to enhance or enable future NASA science missions while minimizing risk and cost to the end user. Systems analysis activities sponsored by ISPT seek to identify future mission applications in order to quantify mission requirements, as well as to develop analytical capability in order to facilitate greater understanding and application of electric propulsion and other propulsion technologies in the ISPT portfolio. These analyses guide technology investments by informing decisions and defining metrics for technology development to meet identified mission requirements. This paper discusses the missions currently being studied for electric propulsion by the ISPT project and presents the results of recent electric propulsion (EP) mission trades. Recent ISPT systems analysis activities include: an initiative to standardize life qualification methods for various electric propulsion systems in order to retire perceived risk to proposed EP missions; mission analysis to identify EP requirements from the Discovery, New Frontiers, and Flagship classes of missions; and an evaluation of system requirements for radioisotope-powered electric propulsion. Progress and early results of these activities are discussed where available.

  9. Metrics Handbook (Air Force Systems Command)

    NASA Astrophysics Data System (ADS)

    1991-08-01

    The handbook is designed to help one develop and use good metrics. It is intended to provide sufficient information to begin developing metrics for objectives, processes, and tasks, and to steer one toward appropriate actions based on the data one collects. It should be viewed as a road map to assist one in arriving at meaningful metrics and to assist in continuous process improvement.

  10. Value-based metrics and Internet-based enterprises

    NASA Astrophysics Data System (ADS)

    Gupta, Krishan M.

    2001-10-01

    Within the last few years, a host of value-based metrics like EVA, MVA, TBR, CFROI, and TSR have evolved. This paper attempts to analyze the validity and applicability of EVA and the Balanced Scorecard for Internet-based organizations. Despite the collapse of the dot-com model, firms engaged in e-commerce continue to struggle to find new ways to account for customer base, technology, employees, knowledge, etc., as part of the value of the firm. While some metrics, like the Balanced Scorecard, are geared toward internal use, others, like EVA, are for external use. Value-based metrics are used for performing internal audits as well as comparing firms against one another, and can also be effectively utilized by individuals outside the firm looking to determine whether the firm is creating value for its stakeholders.
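
    Of the metrics named, EVA has a simple closed form: net operating profit after taxes minus a charge for the capital employed. A minimal sketch with made-up figures:

        def economic_value_added(nopat, invested_capital, wacc):
            """EVA = NOPAT - (WACC x invested capital): profit above the cost of capital."""
            return nopat - wacc * invested_capital

        # Illustrative figures (in $ millions): a firm earning 12 on 100 of capital
        # that costs 9% per year creates only 3 of economic value.
        print(economic_value_added(nopat=12.0, invested_capital=100.0, wacc=0.09))  # 3.0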

  11. Leveraging geospatial data, technology, and methods for improving the health of communities: priorities and strategies from an expert panel convened by the CDC.

    PubMed

    Elmore, Kim; Flanagan, Barry; Jones, Nicholas F; Heitgerd, Janet L

    2010-04-01

    In 2008, CDC convened an expert panel to gather input on the use of geospatial science in surveillance, research, and program activities focused on CDC's Healthy Communities Goal. The panel suggested six priorities: spatially enable and strengthen public health surveillance infrastructure; develop metrics for geospatial categorization of community health and health inequity; evaluate the feasibility and validity of standard metrics of community health and health inequities; support and develop GIScience and geospatial analysis; provide geospatial capacity building, training, and education; and engage non-traditional partners. Following the meeting, the strategies and action items suggested by the expert panel were reviewed by a CDC subcommittee to determine priorities relative to ongoing CDC geospatial activities, recognizing that many activities may need to occur in parallel or recur across phases. Phase A of the action items centers on developing leadership support. Phase B focuses on developing internal and external capacity in both physical (e.g., software and hardware) and intellectual infrastructure. Phase C concerns the development and integration of geospatial methods. In summary, the panel members provided critical input to the development of CDC's strategic thinking on integrating geospatial methods and research issues across program efforts in support of its Healthy Communities Goal.

  12. A Low-Cost, Passive Navigation Training System for Image-Guided Spinal Intervention.

    PubMed

    Lorias-Espinoza, Daniel; Carranza, Vicente González; de León, Fernando Chico-Ponce; Escamirosa, Fernando Pérez; Martinez, Arturo Minor

    2016-11-01

    Navigation technology is used for training in various medical specialties, not least image-guided spinal interventions. Navigation practice is an important educational component that allows residents to understand how surgical instruments interact with complex anatomy and to learn basic surgical skills such as the tridimensional mental interpretation of bidimensional data. Inexpensive surgical simulators for spinal surgery, however, are lacking. We therefore designed a low-cost spinal surgery simulator (Spine MovDigSys 01) to allow 3-dimensional navigation via 2-dimensional images without altering or limiting the surgeon's natural movement. A training system was developed with an anatomical lumbar model and 2 webcams to passively digitize surgical instruments under MATLAB software control. A proof-of-concept recognition task (vertebral body cannulation) and a pilot test of the system with 12 neuro- and orthopedic surgeons were performed to obtain feedback on the system. Position, orientation, and kinematic variables were determined and the lateral, posteroanterior, and anteroposterior views obtained. The system was tested with a proof-of-concept experimental task. Operator metrics including time of execution (t), intracorporeal length (d), insertion angle (α), average speed (v̄), and acceleration (a) were obtained accurately. These metrics were converted into assessment metrics such as smoothness of operation and linearity of insertion. Results from initial testing are shown and the system's advantages and disadvantages described. This low-cost spinal surgery training system digitized the position and orientation of the instruments and allowed image-guided navigation, the generation of metrics, and graphic recording of the instrumental route. Spine MovDigSys 01 is useful for development of basic, non-innate skills and allows the novice apprentice to quickly and economically move beyond the basics. Copyright © 2016 Elsevier Inc. All rights reserved.
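
    Given a time series of digitized instrument-tip positions, the operator metrics listed (execution time, intracorporeal length, average speed, acceleration) reduce to elementary kinematics, and a linearity-of-insertion score can be derived from them. The sketch below shows one plausible computation from (t, x, y, z) samples; the sampling format and numbers are assumptions, not the system's actual output.

        import numpy as np

        # Hypothetical tracked samples: time (s) and tip position (mm).
        t = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
        pos = np.array([[0, 0, 0], [1, 0, 2], [2, 1, 4], [3, 1, 6], [4, 2, 8]], float)

        steps = np.diff(pos, axis=0)                  # displacement per sample
        seg_len = np.linalg.norm(steps, axis=1)       # mm per step
        path_length = seg_len.sum()                   # intracorporeal length d
        exec_time = t[-1] - t[0]                      # time of execution
        speed = seg_len / np.diff(t)                  # instantaneous speed (mm/s)
        avg_speed = path_length / exec_time
        accel = np.diff(speed) / np.diff(t)[1:]       # acceleration (mm/s^2)
        # Linearity proxy: a straight insertion has a path length close to the
        # straight-line distance between entry and end points (ratio -> 1).
        linearity = np.linalg.norm(pos[-1] - pos[0]) / path_length
        print(exec_time, path_length, avg_speed, linearity)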

  13. Theory and Metrics of Community Resilience: A Systematic Literature Review Based on Public Health Guidelines.

    PubMed

    Zamboni, Lucila M

    2017-12-01

    A systematic literature review on quantitative methods to assess community resilience was conducted following Institute of Medicine and Patient-Centered Outcomes Research Institute standards. Community resilience is the ability of a community to bounce back or return to normal after a disaster strikes, yet there is no agreement on what this actually means. All studies reviewed addressed natural disasters, but the methodological approaches can be applied to technological disasters, epidemics, and terrorist attacks. Theoretical frameworks consider the association between vulnerability, resilience, and preparedness, yet these associations vary across frameworks. Because of this complexity, indexes based on composite indicators are the predominant methodological tool used to assess community resilience. Indexes identify similar dimensions but observe resilience at both the individual and geographical levels, reflecting a lack of agreement on what constitutes a community. A consistent, cross-disciplinary metric for community resilience would allow for identifying areas needing short-term versus long-term interventions. A comparable metric for assessing geographic units at multiple levels and dimensions is an opportunity to identify regional strengths and weaknesses, develop timely targeted policy interventions, improve policy-evaluation instruments, and inform the design of grant-allocation formulas. (Disaster Med Public Health Preparedness. 2017;11:756-763).
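
    Composite-indicator indexes of the kind this review found predominant are typically built by normalizing each indicator to a common scale and combining the results with weights. The sketch below is a generic min-max version for hypothetical geographic units; the indicators and weights are placeholders, not any published resilience index.

        import numpy as np

        # Rows: geographic units; columns: indicators (e.g., median income,
        # hospital beds per capita, fraction insured). Values are invented.
        X = np.array([[42.0, 2.1, 0.88],
                      [55.0, 3.4, 0.91],
                      [38.0, 1.2, 0.79]])
        weights = np.array([0.4, 0.3, 0.3])   # assumed dimension weights

        # Min-max normalize each indicator to [0, 1], then combine.
        X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
        index = X_norm @ weights              # weighted composite score per unit
        print(index)                          # higher = more resilient unit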

  14. Accurate evaluation and analysis of functional genomics data and methods

    PubMed Central

    Greene, Casey S.; Troyanskaya, Olga G.

    2016-01-01

    The development of technology capable of inexpensively performing large-scale measurements of biological systems has generated a wealth of data. Integrative analysis of these data holds the promise of uncovering gene function, regulation, and, in the longer run, understanding complex disease. However, their analysis has proved very challenging, as it is difficult to quickly and effectively assess the relevance and accuracy of these data for individual biological questions. Here, we identify biases that present challenges for the assessment of functional genomics data and methods. We then discuss evaluation methods that, taken together, begin to address these issues. We also argue that the funding of systematic data-driven experiments and of high-quality curation efforts will further improve evaluation metrics so that they more-accurately assess functional genomics data and methods. Such metrics will allow researchers in the field of functional genomics to continue to answer important biological questions in a data-driven manner. PMID:22268703

  15. Multidimensional Interactive Radiology Report and Analysis: standardization of workflow and reporting for renal mass tracking and quantification

    NASA Astrophysics Data System (ADS)

    Hwang, Darryl H.; Ma, Kevin; Yepes, Fernando; Nadamuni, Mridula; Nayyar, Megha; Liu, Brent; Duddalwar, Vinay; Lepore, Natasha

    2015-12-01

    A conventional radiology report primarily consists of a large amount of unstructured text, and lacks clear, concise, consistent, and content-rich information. Hence, an area of unmet clinical need consists of developing better ways to communicate radiology findings and information specific to each patient. Here, we design a new workflow and reporting system that combines and integrates advances in engineering technology with those from the medical sciences, the Multidimensional Interactive Radiology Report and Analysis (MIRRA). Until recently, clinical standards have primarily relied on 2D images for the purpose of measurement, but with the advent of 3D processing, many of the manually measured metrics can be automated, leading to better reproducibility and less subjective measurement placement. Hence, we make use of this newly available 3D processing in our workflow. Our pipeline is used here to standardize the labeling, tracking, and quantifying of metrics for renal masses.

  16. Enhancing User Customization through Novel Software Architecture for Utility Scale Solar Siting Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brant Peery; Sam Alessi; Randy Lee

    2014-06-01

    There is a need for a spatial decision support application that allows users to create customized metrics for comparing proposed locations of a new solar installation. This document discusses how PVMapper was designed to overcome the customization problem through the development of loosely coupled spatial and decision components in a JavaScript plugin architecture. This allows the user to easily add functionality and data to the system. The paper also explains how PVMapper provides the user with a dynamic and customizable decision tool that enables them to visually modify the formulas used in the decision algorithms that convert data to comparable metrics. The technologies that make up the presentation and calculation software stack are outlined. This document also explains the architecture that allows the tool to grow through custom plugins created by the software users. Some discussion is given of the difficulties encountered while designing the system.
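
    PVMapper's plugin architecture is JavaScript; as a language-neutral illustration of the loose coupling described, the sketch below registers user-supplied metric plugins that each convert site data into a comparable score. The interface is invented for this example and is not PVMapper's actual API.

        METRIC_PLUGINS = {}

        def metric_plugin(name):
            """Decorator: register a user-defined site metric under a name."""
            def register(func):
                METRIC_PLUGINS[name] = func
                return func
            return register

        @metric_plugin("solar_resource")
        def solar_score(site):
            return site["irradiance_kwh_m2_day"] / 8.0        # normalize to ~[0, 1]

        @metric_plugin("grid_proximity")
        def grid_score(site):
            return 1.0 / (1.0 + site["km_to_transmission"])   # closer is better

        site = {"irradiance_kwh_m2_day": 6.2, "km_to_transmission": 3.0}
        scores = {name: f(site) for name, f in METRIC_PLUGINS.items()}
        print(scores)   # each plugin yields a comparable score for the site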

  17. The LUGPA Alternative Payment Model for Initial Therapy of Newly Diagnosed Patients With Organ-confined Prostate Cancer: Rationale and Development.

    PubMed

    Kapoor, Deepak A; Shore, Neal D; Kirsh, Gary M; Henderson, Jonathan; Cohen, Todd D; Latino, Kathleen

    2017-01-01

    Over the past several decades, rapid expansion in healthcare expenditures has exposed the utilization incentives inherent in fee-for-service payment models. The passage of Medicare Access and CHIP Reauthorization Act of 2015 heralded a transition toward value-based care, creating incentives for practitioners to accept bidirectional risk linked to outcome and utilization metrics. At present, the limited availability of these vehicles excludes all but a handful of providers from participation in alternative payment models (APMs). The LUGPA APM supports the goals of the triple aim in improving the patient experience, enhancing population health and reducing expenditures. By requiring utilization of certified electronic health record technologies, tying payment to quality metrics, and requiring practices to bear more than nominal risk, the LUGPA APM qualifies as an advanced APM, thereby easing the reporting burden and creating opportunities for participating practices.

  18. The LUGPA Alternative Payment Model for Initial Therapy of Newly Diagnosed Patients With Organ-confined Prostate Cancer: Rationale and Development

    PubMed Central

    Kapoor, Deepak A.; Shore, Neal D.; Kirsh, Gary M.; Henderson, Jonathan; Cohen, Todd D.; Latino, Kathleen

    2017-01-01

    Over the past several decades, rapid expansion in healthcare expenditures has exposed the utilization incentives inherent in fee-for-service payment models. The passage of Medicare Access and CHIP Reauthorization Act of 2015 heralded a transition toward value-based care, creating incentives for practitioners to accept bidirectional risk linked to outcome and utilization metrics. At present, the limited availability of these vehicles excludes all but a handful of providers from participation in alternative payment models (APMs). The LUGPA APM supports the goals of the triple aim in improving the patient experience, enhancing population health and reducing expenditures. By requiring utilization of certified electronic health record technologies, tying payment to quality metrics, and requiring practices to bear more than nominal risk, the LUGPA APM qualifies as an advanced APM, thereby easing the reporting burden and creating opportunities for participating practices. PMID:29726849

  19. Advanced Ablative TPS

    NASA Technical Reports Server (NTRS)

    Gasch, Matthew J.

    2011-01-01

    Early NASA missions (Gemini, Apollo, Mars Viking) employed new ablative TPS that were tailored for the entry environment. After 40 years, heritage ablative TPS materials using Viking or Pathfinder era materials are at or near their performance limits and will be inadequate for future exploration missions. Significant advances in TPS materials technology are needed in order to enable any subsequent human exploration missions beyond Low Earth Orbit. This poster summarizes some recent progress at NASA in developing families of advanced rigid/conformable and flexible ablators that could potentially be used for thermal protection in planetary entry missions. In particular, the effort focuses on technologies required to land heavy (approximately 40 metric ton) masses on Mars to facilitate future exploration plans.

  20. Active Debris Removal - A Grand Engineering Challenge for the Twenty-First Century

    NASA Technical Reports Server (NTRS)

    Liou, J.-C.

    2011-01-01

    The collision between Iridium 33 and Cosmos 2251 in 2009 has reignited interest in using active debris removal to remediate the near-Earth orbital debris environment. A recent NASA study shows that, in order to stabilize the environment in the low Earth orbit (LEO) region for the next 200 years, about five large and massive objects (1 to more than 8 metric tons each) must be removed per year. Developing the capability to remove five such objects per year in a cost-effective manner truly represents a grand challenge in engineering and technology development.

  1. Modeling technology innovation: how science, engineering, and industry methods can combine to generate beneficial socioeconomic impacts.

    PubMed

    Stone, Vathsala I; Lane, Joseph P

    2012-05-16

    Government-sponsored science, technology, and innovation (STI) programs support the socioeconomic aspects of public policies, in addition to expanding the knowledge base. For example, beneficial healthcare services and devices are expected to result from investments in research and development (R&D) programs, which assume a causal link to commercial innovation. Such programs are increasingly held accountable for evidence of impact, that is, innovative goods and services resulting from R&D activity. However, the absence of comprehensive models and metrics skews evidence gathering toward bibliometrics about research outputs (published discoveries), with less focus on transfer metrics about development outputs (patented prototypes) and almost none on econometrics related to production outputs (commercial innovations). This disparity is particularly problematic for the expressed intent of such programs, as most measurable socioeconomic benefits result from the last category of outputs. This paper proposes a conceptual framework integrating all three knowledge-generating methods into a logic model, useful for planning, obtaining, and measuring the intended beneficial impacts through the implementation of knowledge in practice. Additionally, the integration of the Context-Input-Process-Product (CIPP) model of evaluation proactively builds relevance into STI policies and programs while sustaining rigor. The resulting logic model framework explicitly traces the progress of knowledge from inputs, following it through the three knowledge-generating processes and their respective knowledge outputs (discovery, invention, innovation), as it generates the intended socio-beneficial impacts. It is a hybrid model for generating technology-based innovations, where best practices in new product development merge with a widely accepted knowledge-translation approach. Given the emphasis on evidence-based practice in the medical and health fields and "bench to bedside" expectations for knowledge transfer, sponsors and grantees alike should find the model useful for planning, implementing, and evaluating innovation processes. High-cost/high-risk industries like healthcare require the market deployment of technology-based innovations to improve domestic society in a global economy. An appropriate balance of relevance and rigor in research, development, and production is crucial to optimize the return on public investment in such programs. The technology-innovation process needs a comprehensive operational model to effectively allocate public funds and thereby deliberately and systematically accomplish socioeconomic benefits.

  2. Modeling technology innovation: How science, engineering, and industry methods can combine to generate beneficial socioeconomic impacts

    PubMed Central

    2012-01-01

    Background Government-sponsored science, technology, and innovation (STI) programs support the socioeconomic aspects of public policies, in addition to expanding the knowledge base. For example, beneficial healthcare services and devices are expected to result from investments in research and development (R&D) programs, which assume a causal link to commercial innovation. Such programs are increasingly held accountable for evidence of impact—that is, innovative goods and services resulting from R&D activity. However, the absence of comprehensive models and metrics skews evidence gathering toward bibliometrics about research outputs (published discoveries), with less focus on transfer metrics about development outputs (patented prototypes) and almost none on econometrics related to production outputs (commercial innovations). This disparity is particularly problematic for the expressed intent of such programs, as most measurable socioeconomic benefits result from the last category of outputs. Methods This paper proposes a conceptual framework integrating all three knowledge-generating methods into a logic model, useful for planning, obtaining, and measuring the intended beneficial impacts through the implementation of knowledge in practice. Additionally, the integration of the Context-Input-Process-Product (CIPP) model of evaluation proactively builds relevance into STI policies and programs while sustaining rigor. Results The resulting logic model framework explicitly traces the progress of knowledge from inputs, following it through the three knowledge-generating processes and their respective knowledge outputs (discovery, invention, innovation), as it generates the intended socio-beneficial impacts. It is a hybrid model for generating technology-based innovations, where best practices in new product development merge with a widely accepted knowledge-translation approach. Given the emphasis on evidence-based practice in the medical and health fields and “bench to bedside” expectations for knowledge transfer, sponsors and grantees alike should find the model useful for planning, implementing, and evaluating innovation processes. Conclusions High-cost/high-risk industries like healthcare require the market deployment of technology-based innovations to improve domestic society in a global economy. An appropriate balance of relevance and rigor in research, development, and production is crucial to optimize the return on public investment in such programs. The technology-innovation process needs a comprehensive operational model to effectively allocate public funds and thereby deliberately and systematically accomplish socioeconomic benefits. PMID:22591638

  3. Wind energy in the United States and materials required for the land-based wind turbine industry from 2010 through 2030

    USGS Publications Warehouse

    Wilburn, David R.

    2011-01-01

    The generation of electricity in the United States from wind-powered turbines is increasing. An understanding of the sources and abundance of raw materials required by the wind turbine industry and the many uses for these materials is necessary to assess the effect of this industry's growth on future demand for selected raw materials relative to the historical demand for these materials. The U.S. Geological Survey developed estimates of future requirements for raw (and some recycled) materials based on the assumption that wind energy will supply 20 percent of the electricity consumed in the United States by 2030. Economic, environmental, political, and technological considerations and trends reported for 2009 were used as a baseline. Estimates for the quantity of materials in typical "current generation" and "next generation" wind turbines were developed. In addition, estimates for the annual and total material requirements were developed based on the growth necessary for wind energy when converted in a wind powerplant to generate 20 percent of the U.S. supply of electricity by 2030. The results of the study suggest that achieving the market goal of 20 percent by 2030 would require an average annual consumption of about 6.8 million metric tons of concrete, 1.5 million metric tons of steel, 310,000 metric tons of cast iron, 40,000 metric tons of copper, and 380 metric tons of the rare-earth element neodymium. With the exception of neodymium, these material requirements represent less than 3 percent of the U.S. apparent consumption for 2008. Recycled material could supply about 3 percent of the total steel required for wind turbine production from 2010 through 2030, 4 percent of the aluminum required, and 3 percent of the copper required. The data suggest that, with the possible exception of rare-earth elements, there should not be a shortage of the principal materials required for electricity generation from wind energy. There may, however, be selective manufacturing shortages if the total demand for raw materials from all markets is greater than the available supply of these materials or the capacity of industry to manufacture components. Changing economic conditions could also affect the development schedule of anticipated capacity.

  4. Integrated Technology Assessment Center (ITAC) Update

    NASA Technical Reports Server (NTRS)

    Taylor, J. L.; Neely, M. A.; Curran, F. M.; Christensen, E. R.; Escher, D.; Lovell, N.; Morris, Charles (Technical Monitor)

    2002-01-01

    The Integrated Technology Assessment Center (ITAC) has developed a flexible systems analysis framework to identify long-term technology needs, quantify payoffs for technology investments, and assess the progress of ASTP-sponsored technology programs in the hypersonics area. For this, ITAC has assembled an experienced team representing a broad sector of the aerospace community and developed a systematic assessment process complete with supporting tools. Concepts for transportation systems are selected based on relevance to the ASTP, and integrated concept models (ICM) of these concepts are developed. Key technologies of interest are identified, and projections are made of their characteristics with respect to their impacts on key aspects of the specific concepts of interest. Both the models and the technology projections are then fed into ITAC's probabilistic systems analysis framework in ModelCenter. This framework permits rapid sensitivity analysis, single-point design assessment, and a full probabilistic assessment of each concept with respect to both embedded and enhancing technologies. Probabilistic outputs are weighed against metrics of interest to ASTP using a multivariate decision-making process to provide inputs for technology prioritization within the ASTP. The ITAC program is currently finishing the assessment of a two-stage-to-orbit (TSTO), rocket-based combined cycle (RBCC) concept and a TSTO turbine-based combined cycle (TBCC) concept developed by the team with inputs from NASA. A baseline all-rocket TSTO concept is also being developed for comparison. Boeing has recently submitted a performance model for their Flexible Aerospace System Solution for Tomorrow (FASST) concept, and the ISAT program will provide inputs for a single-stage-to-orbit (SSTO) TBCC-based concept in the near term. Both of these latter concepts will be analyzed within the ITAC framework over the summer. This paper provides a status update of the ITAC program.

  5. Develop metrics of tire debris on Texas highways : [project summary].

    DOT National Transportation Integrated Search

    2017-05-01

    This research developed metrics on the amount and characteristics of tire debris generated on Texas highways. These metrics provide numerical, data-based rates for districts to anticipate the amounts and characteristics of tire debris and to plan rem...

  6. System Analyses of Pneumatic Technology for High Speed Civil Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Mavris, Dimitri N.; Tai, Jimmy C.; Kirby, Michelle M.; Roth, Bryce A.

    1999-01-01

    The primary aspiration of this study was to objectively assess the feasibility of applying a low-speed pneumatic technology, in particular Circulation Control (CC), to an HSCT concept. Circulation Control has been chosen as an enabling technology to be applied on a generic High Speed Civil Transport (HSCT). This technology has been proven for various subsonic vehicles, including flight tests on a Navy A-6 and computational application on a Boeing 737. Yet CC has not been widely accepted for general commercial fixed-wing use, although its potential has been extensively investigated for decades in wind tunnels across the globe for application to rotorcraft. More recently, an experimental investigation was performed at Georgia Tech Research Institute (GTRI) with application to an HSCT-type configuration. The data from those experiments were to be applied to a full-scale vehicle to assess the impact from a system-level point of view. Hence, this study attempted to quantitatively assess the impact of this technology on an HSCT. The study objective was achieved in three primary steps: 1) defining the need for CC technology; 2) wind tunnel data reduction; 3) detailed takeoff/landing performance assessment. Defining the need for CC technology on an HSCT encompassed a preliminary system-level analysis. This was accomplished through the utilization of recent developments in modern aircraft design theory at the Aerospace Systems Design Laboratory (ASDL). These developments include the creation of techniques and methods needed for the identification of technical feasibility show-stoppers. These techniques and methods allow the designer to rapidly assess a design space and disciplinary metric enhancements to enlarge or improve the design space. The takeoff and landing field lengths were identified as the concept "show-stoppers". Once the need for CC was established, the actual application of data and trends was assessed. This assessment entailed a reduction of the wind tunnel data from the experiments performed by Mr. Bob Englar at GTRI. Relevant data were identified and manipulated based on the required format of the analysis tools utilized. Propulsive, aerodynamic, duct sizing, and vehicle sizing investigations were performed and the information supplied to a detailed takeoff and landing tool. From the assessments, CC was shown to improve the low-speed performance metrics, which were previously not satisfied. An HSCT with CC augmentation does show potential for full-scale application. Yet an economic assessment of an HSCT with and without CC showed that a moderate penalty was incurred from the increased RDT&E costs associated with developing the CC technology and slight increases in empty weight.

  7. Extensible Open-Source Zero-Footprint Web Viewer for Oncologic Imaging Research | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    The Tumor Imaging Metrics Core (TIMC), a CCSG Shared-Resource of the Dana-Farber/Harvard Cancer Center, has developed software for managing the workflow and image measurements for oncology clinical trials. This system currently is in use across the five Harvard hospitals to manage over 600 active clinical trials, with 800 users, and has been licensed and implemented at several other Cancer Centers, including Yale, Utah/Huntsman Cancer Institute, and UW/Seattle Cancer Care Alliance.

  8. Analysis of Metrics for Human Detection behind Walls Using Experimental 3-D Synthetic Aperture Radar Imagery

    DTIC Science & Technology

    2014-12-01

    …the Queen (in Right of Canada), as represented by the Minister of National Defence, 2014. Abstract… Defence Research and Development… International Symposium on Phased Array Systems and Technology, 551-558 (2010). [10] Chetty, K., Smith, G. E., Woodbridge, K., "Through-the-wall sensing of…radar," IEEE International Conference on Communications and Signal Processing, 579-583 (2011). [13] Sévigny, P., DiFilippo, D., Laneve, T., Chan, B

  9. The future of simulation technologies for complex cardiovascular procedures.

    PubMed

    Cates, Christopher U; Gallagher, Anthony G

    2012-09-01

    Changing work practices and the evolution of more complex interventions in cardiovascular medicine are forcing a paradigm shift in the way doctors are trained. Implantable cardioverter defibrillator (ICD), transcatheter aortic valve implantation (TAVI), carotid artery stenting (CAS), and acute stroke intervention procedures are forcing these changes at a faster pace than in other disciplines. As a consequence, cardiovascular medicine has had to develop a sophisticated understanding of precisely what is meant by 'training' and 'skill'. An evolving conclusion is that procedure training on a virtual reality (VR) simulator presents a viable current solution. These simulations should characterize the important performance characteristics of procedural skill, with metrics derived from and benchmarked against experienced operators (i.e., a proficiency level). Simulation training is optimal with metric-based feedback, particularly formative trainee error assessments, proximate to the trainee's performance. In prospective, randomized studies, learners who trained to a benchmarked proficiency level on the simulator performed significantly better than learners who were traditionally trained. In addition, cardiovascular medicine now has available the most sophisticated virtual reality simulators in medicine, and these have been used for the roll-out of interventions such as CAS in the USA and globally with cardiovascular society and industry-partnered training programmes. The Food and Drug Administration has advocated the use of VR simulation as part of the approval of new devices, and the American Board of Internal Medicine has adopted simulation as part of its maintenance of certification. Simulation is rapidly becoming a mainstay of cardiovascular education, training, certification, and the safe adoption of new technology. If cardiovascular medicine is to continue to lead in the adoption and integration of simulation, then it must take a proactive position in the development of metric-based simulation curricula, the adoption of proficiency benchmarking definitions, and a commitment of resources so as to continue to lead this revolution in physician training.

  10. Label-free cell separation and sorting in microfluidic systems

    PubMed Central

    Gossett, Daniel R.; Weaver, Westbrook M.; Mach, Albert J.; Hur, Soojung Claire; Tse, Henry Tat Kwong; Lee, Wonhee; Amini, Hamed

    2010-01-01

    Cell separation and sorting are essential steps in cell biology research and in many diagnostic and therapeutic methods. Recently, there has been interest in methods which avoid the use of biochemical labels; numerous intrinsic biomarkers have been explored to identify cells, including size, electrical polarizability, and hydrodynamic properties. This review highlights microfluidic techniques used for label-free discrimination and fractionation of cell populations. Microfluidic systems have been adopted to precisely handle single cells and interface with other tools for biochemical analysis. We analyzed many of these techniques, detailing their mode of separation, while concentrating on recent developments and evaluating their prospects for application. Furthermore, this was done from a perspective where inertial effects are considered important, and general performance metrics were proposed which would ease comparison of reported technologies. Lastly, we assess the current state of these technologies and suggest directions which may make them more accessible. [Figure: a wide range of microfluidic technologies have been developed to separate and sort cells by taking advantage of differences in their intrinsic biophysical properties.] PMID:20419490

  11. Using Technology Readiness Level (TRL), Life Cycle Cost (LCC), and Other Metrics to Supplement Equivalent System Mass (ESM) in Advanced Life Support (ALS)

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2003-01-01

    The ALS project plan goals are reducing cost, improving performance, and achieving flight readiness. ALS selects projects to advance the mission readiness of low cost, high performance technologies. The role of metrics is to help select good projects and report progress. The Equivalent Mass (EM) of a system is the sum of the estimated mass of the hardware, of its required materials and spares, and of the pressurized volume, power supply, and cooling system needed to support the hardware in space. EM is the total payload launch mass needed to provide and support a system. EM is directly proportional to the launch cost.
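
    The EM definition above reduces to a short formula: hardware and spares mass plus mass-equivalents for the pressurized volume, power, and cooling the hardware consumes. A minimal sketch follows; the equivalency factors and subsystem numbers are placeholders chosen for illustration, not ALS reference values.

        def equivalent_mass(hardware_kg, spares_kg, volume_m3, power_kw, cooling_kw,
                            veq_kg_per_m3=66.7, peq_kg_per_kw=237.0, ceq_kg_per_kw=60.0):
            """EM = hardware + spares + mass-equivalents of volume, power, cooling.

            The equivalency factors (kg per m^3 or per kW) are illustrative
            placeholders standing in for mission-specific infrastructure costs.
            """
            return (hardware_kg + spares_kg
                    + volume_m3 * veq_kg_per_m3
                    + power_kw * peq_kg_per_kw
                    + cooling_kw * ceq_kg_per_kw)

        em = equivalent_mass(hardware_kg=500, spares_kg=120,
                             volume_m3=4.0, power_kw=2.5, cooling_kw=2.5)
        print(f"EM = {em:.0f} kg-equivalent")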

  12. Empirical Evaluation of Hunk Metrics as Bug Predictors

    NASA Astrophysics Data System (ADS)

    Ferzund, Javed; Ahsan, Syed Nadeem; Wotawa, Franz

    Reducing the number of bugs is a crucial issue during software development and maintenance. Software process and product metrics are good indicators of software complexity. These metrics have been used to build bug-predictor models to help developers maintain the quality of software. In this paper we empirically evaluate the use of hunk metrics as predictors of bugs. We present a technique for bug prediction that works at the smallest units of code change, called hunks. We build bug prediction models using random forests, an efficient machine-learning classifier. Hunk metrics are used to train the classifier, and each hunk metric is evaluated for its bug prediction capabilities. Our classifier can classify individual hunks as buggy or bug-free with 86% accuracy, 83% buggy-hunk precision, and 77% buggy-hunk recall. We find that history-based and change-level hunk metrics are better predictors of bugs than code-level hunk metrics.
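
    A minimal rendering of the classification step described, training a random forest on per-hunk metrics labeled buggy or bug-free, can be sketched with scikit-learn as below. The feature names and the synthetic data are stand-ins; the paper's actual hunk metrics and corpus are not reproduced.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import precision_score, recall_score

        rng = np.random.default_rng(0)
        # Synthetic stand-ins for hunk metrics, e.g. [lines added, lines deleted,
        # past bug fixes touching the file, distinct authors of the file].
        X = rng.poisson(lam=[5, 3, 2, 4], size=(1000, 4)).astype(float)
        y = (X[:, 2] + rng.normal(0, 1, 1000) > 3).astype(int)  # toy "buggy" label

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        print(precision_score(y_te, pred), recall_score(y_te, pred))
        print(clf.feature_importances_)   # per-metric predictive value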

  13. A Step Towards Developing Adaptive Robot-Mediated Intervention Architecture (ARIA) for Children With Autism

    PubMed Central

    Bekele, Esubalew T; Lahiri, Uttama; Swanson, Amy R.; Crittendon, Julie A.; Warren, Zachary E.; Sarkar, Nilanjan

    2013-01-01

    Emerging technology, especially robotic technology, has been shown to be appealing to children with autism spectrum disorders (ASD). Such interest may be leveraged to provide repeatable, accurate and individualized intervention services to young children with ASD based on quantitative metrics. However, existing robot-mediated systems tend to have limited adaptive capability that may impact individualization. Our current work seeks to bridge this gap by developing an adaptive and individualized robot-mediated technology for children with ASD. The system is composed of a humanoid robot with its vision augmented by a network of cameras for real-time head tracking using a distributed architecture. Based on the cues from the child’s head movement, the robot intelligently adapts itself in an individualized manner to generate prompts and reinforcements with potential to promote skills in the ASD core deficit area of early social orienting. The system was validated for feasibility, accuracy, and performance. Results from a pilot usability study involving six children with ASD and a control group of six typically developing (TD) children are presented. PMID:23221831

  14. Automatic extraction and visualization of object-oriented software design metrics

    NASA Astrophysics Data System (ADS)

    Lakshminarayana, Anuradha; Newman, Timothy S.; Li, Wei; Talburt, John

    2000-02-01

    Software visualization is a graphical representation of software characteristics and behavior. Certain modes of software visualization can be useful in isolating problems and identifying unanticipated behavior. In this paper we present a new approach to aid understanding of object-oriented software through 3D visualization of software metrics that can be extracted from the design phase of software development. The focus of the paper is a metric extraction method and a new collection of glyphs for multi-dimensional metric visualization. Our approach utilizes the extensibility interface of a popular CASE tool to access and automatically extract the metrics from Unified Modeling Language class diagrams. Following the extraction of the design metrics, a 3D visualization of these metrics is generated for each class in the design, utilizing intuitively meaningful 3D glyphs that are representative of the ensemble of metrics. Extraction and visualization of design metrics can aid software developers in the early study and understanding of design complexity.

  15. Use of Normalized Difference Vegetation Index (NDVI) habitat models to predict breeding birds on the San Pedro River, Arizona

    USGS Publications Warehouse

    McFarland, Tiffany Marie; van Riper, Charles

    2013-01-01

    Successful management of avian populations depends on understanding relationships between birds and their habitat, especially in rare habitats, such as riparian areas of the desert Southwest. Remote-sensing technology has become popular in habitat modeling, but most of these models focus on single species, leaving their applicability to understanding broader community structure and function largely untested. We investigated the usefulness of two Normalized Difference Vegetation Index (NDVI) habitat models to model avian abundance and species richness on the upper San Pedro River in southeastern Arizona. Although NDVI was positively correlated with our bird metrics, the amount of explained variation was low. We then investigated the addition of vegetation metrics and other remote-sensing metrics to improve our models. Although both vegetation metrics and remotely sensed metrics increased the power of our models, the overall explained variation was still low, suggesting that general avian community structure may be too complex for NDVI models.
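
    For reference, NDVI is computed per pixel from near-infrared and red reflectance as (NIR - Red) / (NIR + Red), bounded in [-1, 1], with denser green vegetation scoring higher. The sketch below computes a plot-level mean NDVI of the kind such habitat models use as a covariate; the arrays are toy values, not data from this study.

        import numpy as np

        def ndvi(nir, red):
            """Normalized Difference Vegetation Index, per pixel, in [-1, 1]."""
            nir, red = np.asarray(nir, float), np.asarray(red, float)
            return (nir - red) / (nir + red + 1e-9)   # epsilon guards divide-by-zero

        # Toy reflectance rasters for one survey plot.
        nir = np.array([[0.52, 0.61], [0.48, 0.70]])
        red = np.array([[0.10, 0.08], [0.20, 0.06]])
        plot_ndvi = ndvi(nir, red).mean()   # plot-level habitat covariate
        print(round(plot_ndvi, 3))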

  16. Search Pathways: Modeling GeoData Search Behavior to Support Usable Application Development

    NASA Astrophysics Data System (ADS)

    Yarmey, L.; Rosati, A.; Tressel, S.

    2014-12-01

    Recent technical advances have enabled development of new scientific data discovery systems. Metadata brokering, linked data, and other mechanisms allow users to discover scientific data of interest across growing volumes of heterogeneous content. Matching this complex content with existing discovery technologies, people looking for scientific data are presented with an ever-growing array of features to sort, filter, subset, and scan through search returns to help them find what they are looking for. This paper examines the applicability of available technologies in connecting searchers with the data of interest. What metrics can be used to track success given shifting baselines of content and technology? How well do existing technologies map to steps in user search patterns? Taking a user-driven development approach, the team behind the Arctic Data Explorer interdisciplinary data discovery application invested heavily in usability testing and user search behavior analysis. Building on earlier library community search behavior work, models were developed to better define the diverse set of thought processes and steps users took to find data of interest, here called 'search pathways'. This research builds a deeper understanding of the user community that seeks to reuse scientific data. This approach ensures that development decisions are driven by clearly articulated user needs instead of ad hoc technology trends. Initial results from this research will be presented along with lessons learned for other discovery platform development and future directions for informatics research into search pathways.

  17. Research and Technology Development for Construction of 3d Video Scenes

    NASA Astrophysics Data System (ADS)

    Khlebnikova, Tatyana A.

    2016-06-01

    For the last two decades, surface information in the form of conventional digital and analogue topographic maps has been supplemented by new digital geospatial products, also known as 3D models of real objects. It is shown that there are currently no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. Requirements for source data, and for their capture and transfer to create 3D scenes, have not yet been defined. The accuracy of 3D video scenes used for measuring purposes is rarely addressed in publications. The practicability of developing, researching, and implementing a technology for constructing 3D video scenes is substantiated by the capability of such scenes to expand data analysis for environmental monitoring, urban planning, and managerial decision problems. A technology for constructing 3D video scenes that meets specified metric requirements is offered, together with recommended techniques and methodological background for building 3D video scenes from DTMs created from satellite and aerial survey data. The results of an accuracy estimation of the resulting 3D video scenes are presented.

  18. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.
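
    The benchmark's engine simulation and metric suite are far richer than can be shown here; the sketch below only illustrates the general shape of such a scoring loop: noisy snapshot residuals with a fault injected partway through, a simple threshold detector, and tallies of false alarms and detection latency. The detector and all numbers are invented for illustration.

        import random

        def run_engine(snapshots=100, fault_at=60, shift=1.5, noise=1.0, thresh=2.0):
            """Score a threshold detector on one simulated engine's snapshot history."""
            false_alarms, detected_at = 0, None
            for k in range(snapshots):
                # Residual: measurement noise, plus a bias once the fault occurs.
                residual = random.gauss(0, noise) + (shift if k >= fault_at else 0.0)
                if abs(residual) > thresh:
                    if k < fault_at:
                        false_alarms += 1
                    elif detected_at is None:
                        detected_at = k
            latency = None if detected_at is None else detected_at - fault_at
            return false_alarms, latency

        random.seed(1)
        # (false alarms before the fault, snapshots from fault onset to first detection)
        print(run_engine())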

  19. 2.0 AEDL Systems Engineering

    NASA Technical Reports Server (NTRS)

    Graves, Claude

    2005-01-01

    This presentation outlines the following engineering topics: Some Initial Thoughts; Capability Description; Capability State-of-the-Art; Capability Requirements; Systems Engineering; Capability Roadmap; Capability Maturity; Candidate Technologies; and Metrics.

  20. Mobile Robotic Telepresence Solutions for the Education of Hospitalized Children.

    PubMed

    Soares, Neelkamal; Kay, Jeffrey C; Craven, Geoff

    2017-01-01

    Hospitalization affects children's school attendance, resulting in poor academic and sociodevelopmental outcomes. The increasing ubiquity of mobile and tablet technology in educational and healthcare environments, and the growth of the mobile robotic telepresence (MRT) industry, offer opportunities for the use of MRT to connect hospitalized children to their school environments. This article describes an approach at one rural healthcare center in collaboration with local school districts with the aim of describing strategies and limitations of MRT use. Future research is needed on MRT implementation, from user experiences to operational strategies, and outcome metrics need to be developed to measure academic and socioemotional outcomes. By partnering with educational systems and using this technology, hospital information technology personnel can help hospitalized children engage with their school environments to maintain connections with peers and access academic instruction.

  1. U50: A New Metric for Measuring Assembly Output Based on Non-Overlapping, Target-Specific Contigs.

    PubMed

    Castro, Christina J; Ng, Terry Fei Fan

    2017-11-01

    Advances in next-generation sequencing technologies enable routine genome sequencing, generating millions of short reads. A crucial step for full genome analysis is the de novo assembly, and currently, the performance of different assembly methods is measured by a metric called N50. However, the N50 value can produce skewed, inaccurate results when complex data are analyzed, especially for viral and microbial datasets. To provide a better assessment of assembly output, we developed a new metric called U50. The U50 identifies unique, target-specific contigs by using a reference genome as a baseline, aiming to circumvent some limitations that are inherent to the N50 metric. Specifically, the U50 program removes overlapping sequence among multiple contigs by utilizing a mask array, so the performance of the assembly is measured using only unique contigs. We compared simulated and real datasets using U50 and N50, and our results demonstrated that U50 has the following advantages over N50: (1) reducing erroneously large N50 values due to a poor assembly, (2) eliminating overinflated N50 values caused by large measurements from overlapping contigs, (3) eliminating diminished N50 values caused by an abundance of small contigs, and (4) allowing comparisons across different platforms or samples based on the new percentage-based metric UG50%. The use of the U50 metric allows for a more accurate measure of assembly performance by analyzing only the unique, non-overlapping contigs. In addition, most viral and microbial sequencing datasets have high background noise (i.e., host and other non-target reads), which contributes to a skewed, misrepresentative N50 value; this is corrected by U50. Also, the UG50% can be used to compare assembly results from different samples or studies, cross-comparisons that cannot be performed with N50.
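
    In outline, the U50 computation maps each contig to the reference, masks reference bases that are already covered so that overlaps are counted only once, and then takes the familiar N50-style weighted median over the unique covered lengths. The sketch below implements that outline from precomputed alignment coordinates; real use would take coordinates from an aligner, and this simplification ignores mismatches and split alignments.

        def u50(alignments, ref_len):
            """alignments: (start, end) reference coordinates of each contig's hit.
            Returns U50: the length L such that unique (non-overlapping) contig
            coverage in pieces >= L spans half the total unique coverage."""
            mask = [False] * ref_len                  # reference-base mask array
            unique_lens = []
            for start, end in alignments:
                new = [i for i in range(start, end) if not mask[i]]
                for i in new:
                    mask[i] = True                    # each base counted once only
                if new:
                    unique_lens.append(len(new))
            unique_lens.sort(reverse=True)
            half, run = sum(unique_lens) / 2, 0
            for length in unique_lens:
                run += length
                if run >= half:
                    return length

        # Two overlapping contigs and one distinct contig on a 1 kb reference.
        print(u50([(0, 400), (300, 700), (800, 950)], ref_len=1000))  # 300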

  2. Optical nano artifact metrics using silicon random nanostructures

    NASA Astrophysics Data System (ADS)

    Matsumoto, Tsutomu; Yoshida, Naoki; Nishio, Shumpei; Hoga, Morihisa; Ohyagi, Yasuyuki; Tate, Naoya; Naruse, Makoto

    2016-08-01

    Nano-artifact metrics exploit unique physical attributes of nanostructured matter for authentication and clone resistance, which is vitally important in the age of the Internet of Things, where securing identities is critical. However, previous studies have required expensive and bulky experimental apparatus, such as scanning electron microscopy. Herein, we demonstrate an optical approach to characterise the nanoscale-precision signatures of silicon random structures towards realising low-cost and high-value information security technology. Unique and versatile silicon nanostructures are generated via resist collapse phenomena and contain dimensions well below the diffraction limit of light. We exploit the nanoscale precision of confocal laser microscopy in the height dimension; our experimental results demonstrate that this vertical measurement precision is essential in satisfying the performance required for artifact metrics. Furthermore, using state-of-the-art nanostructuring technology, we experimentally fabricate clones of the genuine devices. We demonstrate that the statistical properties of the genuine and cloned devices can be successfully exploited, showing that the liveness-detection-type approach, which is widely deployed in biometrics, is valid for artificially constructed solid-state nanostructures. These findings pave the way for practical yet sufficiently secure principles for information security based on silicon random nanostructures and optical technologies.

  3. TechTracS: NASA's commercial technology management system

    NASA Astrophysics Data System (ADS)

    Barquinero, Kevin; Cannon, Douglas

    1996-03-01

    The Commercial Technology Mission is a primary NASA mission, comparable in importance to those in aeronautics and space. This paper discusses TechTracS, NASA's Commercial Technology Management System, put into place in FY 1995 to implement this mission. The system is designed to identify and capture the NASA technologies that have commercial potential in an off-the-shelf database application, and then track each technology's progress in realizing that potential through collaborations with industry. The management system consists of four stages. The first is to develop an inventory database of the agency's entire technology portfolio and assess it for relevance to the commercial marketplace. In the second stage, technologies identified as having commercial potential are actively marketed to appropriate industries. The third stage is when a NASA-industry partnership is entered into for the purpose of commercializing the technology. The final stage is to track the technology's success or failure in the marketplace. Collecting this information in TechTracS enables metrics evaluation and can accelerate the establishment of direct contacts between a NASA technologist and an industry technologist. This connection is the beginning of the technology commercialization process.

  4. Department of Defense Energy and Logistics: Implications of Historic and Future Cost, Risk, and Capability Analysis

    NASA Astrophysics Data System (ADS)

    Tisa, Paul C.

    Every year the DoD spends billions of dollars satisfying its large petroleum demand. This spending is highly sensitive to uncontrollable and poorly understood market forces. Additionally, while some stakeholders may not prioritize its monetary cost and risk, energy is fundamentally coupled to other critical factors. Energy, operational capability, and logistics are heavily intertwined and depend on uncertain security environments and technology futures. These components and their relationships are less well understood, and without better characterization, future capabilities may be significantly limited by present-day acquisition decisions. One attempt to demonstrate these costs and risks to decision makers has been a metric known as the Fully Burdened Cost of Energy (FBCE), defined as the commodity price for fuel plus many of these hidden costs. The metric encouraged a valuable conversation and is still required by law. However, most FBCE development stopped before the lessons from that conversation were incorporated; the current implementation is easy to employ but creates little value. Properly characterizing the costs and risks of energy and placing them in a useful tradespace requires a new framework. This research aims to highlight energy's complex role in many aspects of military operations, the critical need to incorporate it in decisions, and a novel framework for doing so. It is broken into five parts. The first describes the motivation behind FBCE, the limits of its current implementation, and the outline of a new decision-aiding framework. The second presents a historic analysis of the connections between military capabilities and energy; the third analyzes the recent evolution of this conversation within the DoD; and the fourth pulls the historic analysis into a revised framework. The final part quantifies the potential impacts of deeply uncertain futures and technological development and introduces an expanded framework that brings capability, energy, and their uncertainty into the same tradespace. The work is intended to inform better policies and investment decisions for military acquisitions. The discussion highlights areas where the DoD's understanding of energy could improve or where development has faltered. The new metric discussed allows the DoD to better manage and plan for long-term energy-related costs and risks.

  5. TH-F-209-01: Pitfalls: Reliability and Performance of Diagnostic X-Ray Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behling, R.

    Purpose: The performance and reliability of medical X-ray tubes for imaging are crucial from ethical, clinical, and economic perspectives. This lecture delivers insight into the aspects to consider when deciding to invest in X-ray imaging equipment. Outdated metrics still hamper realistic product comparison; it is time to change this and to comply with the latest standards, which reflect current technology. Failure modes and ways to avoid equipment down-time are discussed. In view of the increasing number of interventional procedures and the hazards associated with ionizing radiation, toxic contrast agents, and the combination thereof, the aspect of system reliability is of paramount importance. Methods: A comprehensive picture of trends for different modalities (CT, angiography, general radiology) has been drawn and has led to the development of novel X-ray tube technology. Results: Recent X-ray tubes feature enhanced reliability and unprecedented performance. Relevant metrics for product comparison still have to be implemented in practice. Conclusion: The pace of scientific and industrial development of new diagnostic and therapeutic X-ray sources remains tremendous. Still, users suffer from gaps between desire and reality in day-to-day diagnostic routine, and X-ray sources still limit cutting-edge medical procedures. Side effects of wear and tear, limitations of the clinical workflow, costs, the characteristics of the X-ray spectrum, and other topics need to be further addressed. New applications and modalities, such as detection-based color-resolved X-ray and phase-contrast/dark-field imaging, will shape the course of new X-ray source developments. Learning Objectives: (1) understand the basic requirements on medical diagnostic X-ray sources per modality; (2) learn to select the optimal equipment employing state-of-the-art metrics; (3) know causes of failures, depending on the way X-ray sources are operated; (4) understand methods to remediate critical situations; (5) understand the meaning of different warranty models. Disclosure: R. Behling is an employee of Royal Philips; no external funding.

  6. TH-F-209-00: Pitfalls: Reliability and Performance of Diagnostic X-Ray Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Purpose: The performance and reliability of medical X-ray tubes for imaging are crucial from ethical, clinical, and economic perspectives. This lecture delivers insight into the aspects to consider when deciding to invest in X-ray imaging equipment. Outdated metrics still hamper realistic product comparison; it is time to change this and to comply with the latest standards, which reflect current technology. Failure modes and ways to avoid equipment down-time are discussed. In view of the increasing number of interventional procedures and the hazards associated with ionizing radiation, toxic contrast agents, and the combination thereof, the aspect of system reliability is of paramount importance. Methods: A comprehensive picture of trends for different modalities (CT, angiography, general radiology) has been drawn and has led to the development of novel X-ray tube technology. Results: Recent X-ray tubes feature enhanced reliability and unprecedented performance. Relevant metrics for product comparison still have to be implemented in practice. Conclusion: The pace of scientific and industrial development of new diagnostic and therapeutic X-ray sources remains tremendous. Still, users suffer from gaps between desire and reality in day-to-day diagnostic routine, and X-ray sources still limit cutting-edge medical procedures. Side effects of wear and tear, limitations of the clinical workflow, costs, the characteristics of the X-ray spectrum, and other topics need to be further addressed. New applications and modalities, such as detection-based color-resolved X-ray and phase-contrast/dark-field imaging, will shape the course of new X-ray source developments. Learning Objectives: (1) understand the basic requirements on medical diagnostic X-ray sources per modality; (2) learn to select the optimal equipment employing state-of-the-art metrics; (3) know causes of failures, depending on the way X-ray sources are operated; (4) understand methods to remediate critical situations; (5) understand the meaning of different warranty models. Disclosure: R. Behling is an employee of Royal Philips; no external funding.

  7. Rice by Weight, Other Produce by Bulk, and Snared Iguanas at So Much Per One. A Talk on Measurement Standards and on Metric Conversion.

    ERIC Educational Resources Information Center

    Allen, Harold Don

    This script for a short radio broadcast on measurement standards and metric conversion begins by tracing the rise of the metric system in the international marketplace. Metric units are identified and briefly explained. Arguments for conversion to metric measures are presented. The history of the development and acceptance of the metric system is…

  8. Surveillance metrics sensitivity study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamada, Michael S.; Bierbaum, Rene Lynn; Robertson, Alix A.

    2011-09-01

    In September 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose of the metrics was to provide a more quantitative and/or qualitative description of how realized or non-realized surveillance activities affect our confidence in reporting reliability and assessing the stockpile. As part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intended to answer level-of-confidence questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but rather the adequacy of surveillance. This report gives a short description of the four metric types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.
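
    The report's tolerance-limit and power calculations are not reproduced in the abstract. Purely as a hedged illustration of the level-of-confidence questions involved, the sketch below computes, under a simple binomial assumption and with made-up numbers, the probability that a random sample of n units reveals at least one defect when the true defect fraction is p.

    ```python
    # Minimal sketch of a surveillance power calculation (illustrative only;
    # not the Tri-Lab metrics). Under a binomial model, the probability that
    # a random sample of n units contains at least one defective unit when
    # the true defect fraction is p:
    import math

    def detection_power(n: int, p: float) -> float:
        return 1.0 - (1.0 - p) ** n

    def sample_size_for_power(p: float, power: float) -> int:
        """Smallest n whose detection probability is at least `power`."""
        return math.ceil(math.log(1.0 - power) / math.log(1.0 - p))

    if __name__ == "__main__":
        print(detection_power(n=30, p=0.05))              # ~0.785
        print(sample_size_for_power(p=0.05, power=0.95))  # 59
    ```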

  9. Surveillance Metrics Sensitivity Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bierbaum, R; Hamada, M; Robertson, A

    2011-11-01

    In September 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose of the metrics was to provide a more quantitative and/or qualitative description of how realized or non-realized surveillance activities affect our confidence in reporting reliability and assessing the stockpile. As part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intended to answer level-of-confidence questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but rather the adequacy of surveillance. This report gives a short description of the four metric types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.

  10. Framework for performance evaluation of face, text, and vehicle detection and tracking in video: data, metrics, and protocol.

    PubMed

    Kasturi, Rangachar; Goldgof, Dmitry; Soundararajan, Padmanabhan; Manohar, Vasant; Garofolo, John; Bowers, Rachel; Boonstra, Matthew; Korzhova, Valentina; Zhang, Jing

    2009-02-01

    Common benchmark data sets, standardized performance metrics, and baseline algorithms have demonstrated considerable impact on research and development in a variety of application domains. These resources provide both consumers and developers of technology with a common framework to objectively compare the performance of different algorithms and algorithmic improvements. In this paper, we present such a framework for evaluating object detection and tracking in video: specifically for face, text, and vehicle objects. This framework includes the source video data, ground-truth annotations (along with guidelines for annotation), performance metrics, evaluation protocols, and tools including scoring software and baseline algorithms. For each detection and tracking task and supported domain, we developed a 50-clip training set and a 50-clip test set. Each data clip is approximately 2.5 minutes long and has been completely spatially/temporally annotated at the I-frame level. Each task/domain, therefore, has an associated annotated corpus of approximately 450,000 frames. The scope of such annotation is unprecedented and was designed to begin to support the necessary quantities of data for robust machine learning approaches, as well as a statistically significant comparison of the performance of algorithms. The goal of this work was to systematically address the challenges of object detection and tracking through a common evaluation framework that permits a meaningful objective comparison of techniques, provides the research community with sufficient data for the exploration of automatic modeling techniques, encourages the incorporation of objective evaluation into the development process, and contributes useful lasting resources of a scale and magnitude that will prove to be extremely useful to the computer vision research community for years to come.
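
    The framework's actual scoring software is not shown in the abstract. Purely as an illustration of the kind of frame-level matching such performance metrics rely on, here is a sketch that greedily matches detected boxes to ground-truth boxes by intersection-over-union; the 0.5 threshold and all names are assumptions, not the framework's protocol.

    ```python
    # Rough sketch of a frame-level detection score (illustrative; not the
    # framework's scoring software). Boxes are (x1, y1, x2, y2).

    def iou(a, b):
        """Intersection-over-union of two axis-aligned boxes."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    def frame_scores(detections, ground_truth, thresh=0.5):
        """Greedy one-to-one matching; returns (precision, recall)."""
        unmatched = list(ground_truth)
        tp = 0
        for det in detections:
            best = max(unmatched, key=lambda g: iou(det, g), default=None)
            if best is not None and iou(det, best) >= thresh:
                tp += 1
                unmatched.remove(best)
        precision = tp / len(detections) if detections else 0.0
        recall = tp / len(ground_truth) if ground_truth else 0.0
        return precision, recall

    if __name__ == "__main__":
        gt = [(10, 10, 50, 50), (60, 60, 100, 100)]
        det = [(12, 12, 52, 52), (200, 200, 240, 240)]
        print(frame_scores(det, gt))  # (0.5, 0.5)
    ```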

  11. The development of a virtual reality training curriculum for colonoscopy.

    PubMed

    Sugden, Colin; Aggarwal, Rajesh; Banerjee, Amrita; Haycock, Adam; Thomas-Gibson, Siwan; Williams, Christopher B; Darzi, Ara

    2012-07-01

    The aim was the development of a structured virtual reality (VR) training curriculum for colonoscopy using high-fidelity simulation. Colonoscopy requires detailed knowledge and technical skill, and changes to working practices in recent times have reduced the availability of traditional training opportunities. Much might therefore be achieved by applying novel technologies such as VR simulation to colonoscopy. Scientifically developed, device-specific curricula aim to maximize the yield of laboratory-based training by focusing on validated modules and linking progression to the attainment of benchmarked proficiency criteria. Fifty participants, comprising 30 novice (<10 colonoscopies), 10 intermediate (100 to 500 colonoscopies), and 10 experienced (>500 colonoscopies) colonoscopists, were recruited. Surrogates of proficiency, such as the number of procedures undertaken, determined prospective allocation to 1 of the 3 groups (novice, intermediate, and experienced). Construct validity and learning value (comparison between groups and within groups, respectively) for each task and metric on the chosen simulator model determined suitability for inclusion in the curriculum. Eight tasks in possession of construct validity and significant learning curves were included in the curriculum: 3 abstract tasks, 4 part-procedural tasks, and 1 procedural task. The whole-procedure task was valid for 11 metrics, including "time taken to complete the task" (1238, 343, and 293 s; P < 0.001) and "insertion length with embedded tip" (23.8, 3.6, and 4.9 cm; P = 0.005). Learning curves consistently plateaued at or beyond the ninth attempt. Valid metrics were used to define benchmarks, derived from the performance of the experienced cohort, for each included task. A comprehensive, stratified, benchmarked, whole-procedure curriculum has been developed for a modern high-fidelity VR colonoscopy simulator.

  12. Energy Technology Investments: Maximizing Efficiency Through a Maritime Energy Portfolio Interface and Decision Aid

    DTIC Science & Technology

    2012-02-09

    Return on Investment (ROI) and Break Even Point (BEP). These metrics are essential for determining whether an initiative would be worth pursuing. [Remainder is presentation-slide residue; the recoverable content is an Energy Decision Framework with the steps: 1. Improve Energy Efficiency; 2. Perform Analyses; 3. Examine Technology Candidates; 4. Identify Inefficiencies. Distribution is unlimited.]

  13. Personalized Technologies in Chronic Gastrointestinal Disorders: Self-monitoring and Remote Sensor Technologies

    PubMed Central

    Riaz, Muhammad Safwan; Atreja, Ashish

    2016-01-01

    With increased access to high-speed Internet and smartphone devices, patients have started to use mobile applications (apps) for various health needs. These mobile apps are now increasingly used in integration with telemedicine and wearables to support fitness, health education, symptom tracking, and collaborative disease management and care coordination. More recently, evidence (especially around remote patient monitoring) has started to build in some chronic diseases, and some of the digital health technologies have received approval from the Food and Drug Administration. With the changing healthcare landscape and push for value-based care, adoption of these digital health initiatives among providers is bound to increase. Although so far there is a dearth of published evidence about effectiveness of these apps in gastroenterology care, there are ongoing trials to determine whether remote patient monitoring can lead to improvement in process metrics or outcome metrics for patients with chronic gastrointestinal diseases. PMID:27189911

  14. Comprehensive Metric Education Project: Implementing Metrics at a District Level Administrative Guide.

    ERIC Educational Resources Information Center

    Borelli, Michael L.

    This document details the administrative issues associated with guiding a school district through its metrication efforts. Issues regarding staff development, curriculum development, and the acquisition of instructional resources are considered. Alternative solutions are offered. Finally, an overall implementation strategy is discussed with…

  15. Metric Education; A Position Paper for Vocational, Technical and Adult Education.

    ERIC Educational Resources Information Center

    Cooper, Gloria S.; And Others

    Part of an Office of Education three-year project on metric education, the position paper is intended to alert and prepare teachers, curriculum developers, and administrators in vocational, technical, and adult education to the change over to the metric system. The five chapters cover issues in metric education, what the metric system is all…

  16. Space Launch System Spacecraft/Payloads Integration and Evolution Office Advanced Development FY 2014 Annual Report

    NASA Technical Reports Server (NTRS)

    Crumbly, C. M.; Bickley, F. P.; Hueter, U.

    2015-01-01

    The Advanced Development Office (ADO), part of the Space Launch System (SLS) program, provides SLS with the advanced development needed to evolve the vehicle from an initial Block 1 payload capability of 70 metric tons (t) to an eventual Block 2 capability of 130 t, with intermediate evolution options possible. ADO takes existing technologies and matures them to the point that insertion into the mainline program minimizes risk. The ADO portfolio covers a broad range of technical development activities, supporting advanced boosters, upper stages, and other advanced development work benefiting the SLS program. A total of 36 separate tasks were funded by ADO in FY 2014.

  17. Attack-Resistant Trust Metrics

    NASA Astrophysics Data System (ADS)

    Levien, Raph

    The Internet is an amazingly powerful tool for connecting people, unmatched in human history. Yet with that power comes great potential for spam and abuse. Trust metrics attempt to compute which people are trustworthy and which are likely attackers. This chapter presents two specific trust metrics developed and deployed on the Advogato Website, a community blog for free software developers. This real-world experience demonstrates that the trust metrics fulfilled their goals, but that for good results it is important to match the assumptions of the abstract trust metric computation to the real-world implementation.
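
    The Advogato metrics are flow-based. The sketch below is a simplified, illustrative variant, not Advogato's exact algorithm: trust capacity flows outward from a seed along certification edges with a shrinking per-depth budget, so any one bad certification can only admit a bounded number of attacker accounts. The capacity schedule and all names are assumptions.

    ```python
    # Hedged sketch of a flow-style trust metric in the spirit of (but not
    # identical to) the Advogato metric. Trust flows outward from a seed
    # along certification edges, with budgets shrinking by BFS depth.
    from collections import deque

    def trusted_set(graph, seed, capacities=(800, 200, 50, 12, 4, 2, 1)):
        """graph: {node: [nodes it certifies]}; returns accepted nodes.

        capacities[d] is the certification budget given to each node first
        reached at BFS depth d; the schedule is an illustrative choice.
        """
        depth = {seed: 0}
        budget = {seed: capacities[0]}
        accepted = {seed}
        queue = deque([seed])
        while queue:
            node = queue.popleft()
            d = depth[node]
            if d + 1 >= len(capacities):
                continue  # too far from the seed to certify anyone
            for peer in graph.get(node, []):
                if peer in depth or budget[node] <= 0:
                    continue
                budget[node] -= 1          # spend one certification slot
                depth[peer] = d + 1
                budget[peer] = capacities[d + 1]
                accepted.add(peer)
                queue.append(peer)
        return accepted

    if __name__ == "__main__":
        g = {"seed": ["alice", "bob"], "alice": ["carol"], "bob": ["mallory"]}
        print(sorted(trusted_set(g, "seed")))
    ```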

  18. Developing image processing meta-algorithms with data mining of multiple metrics.

    PubMed

    Leung, Kelvin; Cunha, Alexandre; Toga, A W; Parker, D Stott

    2014-01-01

    People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation.
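
    As a hedged sketch of the meta-algorithm idea, assuming higher-is-better metrics and a simple rank-sum aggregation rule (the paper's actual mining procedure is not specified in the abstract): score each candidate image-processing result with a battery of metrics, rank the candidates per metric, and select the best aggregate rank.

    ```python
    # Illustrative sketch of selecting the best of several registration
    # results by mining a battery of metrics (names and the rank-sum rule
    # are assumptions, not the paper's code).

    def rank_aggregate(candidates, metrics):
        """Pick the candidate with the best (lowest) summed rank.

        candidates: {name: result}; metrics: {name: fn(result) -> float},
        where higher scores are better for every metric in this sketch.
        """
        totals = {c: 0 for c in candidates}
        for fn in metrics.values():
            scores = {c: fn(res) for c, res in candidates.items()}
            ranked = sorted(scores, key=scores.get, reverse=True)
            for rank, c in enumerate(ranked):
                totals[c] += rank  # rank 0 is best under this metric
        return min(totals, key=totals.get)

    if __name__ == "__main__":
        # Toy "results": pairs of (similarity-like, overlap-like) scores.
        candidates = {"rigid": (0.90, 0.80), "affine": (0.95, 0.70),
                      "bspline": (0.92, 0.85)}
        metrics = {"similarity": lambda r: r[0], "overlap": lambda r: r[1]}
        print(rank_aggregate(candidates, metrics))  # "bspline"
    ```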

  19. Proceedings of the 66th National Conference on Weights and Measures, 1981

    NASA Astrophysics Data System (ADS)

    Wollin, H. F.; Barbrow, L. E.; Heffernan, A. P.

    1981-12-01

    Major issues discussed included measurement science education, enforcement uniformity, national type approval, inch-pound and metric labeling provisions, new design and performance requirements for weighing and measuring technology, metric conversion of retail gasoline dispensers, weights and measures program evaluation, studies of model State laws and regulations and their adoption by citation or other means by State and local jurisdictions, and reports of States conducting grain moisture meter testing programs.

  20. Lunar Station: The Next Logical Step in Space Development

    NASA Technical Reports Server (NTRS)

    Pittman, Robert Bruce; Harper, Lynn; Newfield, Mark; Rasky, Daniel J.

    2014-01-01

    The International Space Station (ISS) is the product of the efforts of sixteen nations over the course of several decades. It is now complete, operational, and has been continuously occupied since November of 2000. Since then the ISS has been carrying out a wide variety of research and technology development experiments and is starting to produce some pleasantly startling results. The ISS has a mass of 420 metric tons, supports a crew of six with a yearly resupply requirement of around 30 metric tons, within a pressurized volume of 916 cubic meters and a habitable volume of 388 cubic meters. Its solar arrays produce up to 84 kilowatts of power. In the course of developing the ISS, many lessons were learned and much valuable expertise was gained. Where do we go from here? The ISS offers an existence proof of the feasibility of sustained human occupation and operations in space over decades. It also demonstrates the ability of many countries to work collaboratively on a very complex and expensive project in space over an extended period of time to achieve a common goal. By harvesting best practices and lessons learned, the ISS can also serve as a useful model for exploring architectures for space development beyond low-Earth orbit (LEO). This paper will explore the concept and feasibility of a Lunar Station. The Station concept can be implemented either by putting the equivalent capability of the ISS down on the surface of the Moon, or by developing the required capabilities through a combination of delivered materials and equipment and in situ resource utilization (ISRU). Scenarios that leverage existing technologies and capabilities, as well as capabilities under development and expected to be available within the next 3-5 years, will be examined. This paper will also explore how best practices and expertise gained from developing and operating the ISS and other relevant programs can be applied to effectively developing a Lunar Station.

1. Development of a Multidisciplinary Approach to Assess Sustainability

    EPA Science Inventory

    There are a number of established, scientifically supported metrics of sustainability. Many of the metrics are data intensive and require extensive effort to collect data and compute the metrics. Moreover, individual metrics do not capture all aspects of a system that are relevan...

  2. Flicker

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2013-03-01

    Solid-state lighting program technology fact sheet that discusses flicker metrics, contributing factors, and consequences in addition to comparing the flicker attributes of a sample of conventional and LED sources.

  3. Evaluative Usage-Based Metrics for the Selection of E-Journals.

    ERIC Educational Resources Information Center

    Hahn, Karla L.; Faulkner, Lila A.

    2002-01-01

    Explores electronic journal usage statistics and develops three metrics and three benchmarks based on those metrics. Topics include earlier work that assessed the value of print journals and was modified for the electronic format; the evaluation of potential purchases; and implications for standards development, including the need for content…

  4. The Impact of Postgraduate Health Technology Innovation Training: Outcomes of the Stanford Biodesign Fellowship.

    PubMed

    Wall, James; Hellman, Eva; Denend, Lyn; Rait, Douglas; Venook, Ross; Lucian, Linda; Azagury, Dan; Yock, Paul G; Brinton, Todd J

    2017-05-01

    Stanford Biodesign launched its Innovation Fellowship in 2001 as a first-of-its-kind postgraduate training experience for teaching biomedical technology innovators a need-driven process for developing medical technologies and delivering them to patients. Since then, many design-oriented educational programs have been initiated, yet the impact of this type of training remains poorly understood. This study measures the career focus, leadership trajectory, and productivity of 114 Biodesign Innovation Fellowship alumni based on survey data and public career information. It also compares alumni, on certain publicly available metrics, to finalists interviewed but not selected. Overall, 60% of alumni are employed in health technology, in contrast to 35% of finalists interviewed but not selected. On leadership, 72% of alumni hold managerial or higher positions, compared to 48% of the finalist group. A total of 67% of alumni reported that the fellowship had been "extremely beneficial" to their careers. As a measure of technology translation, more than 440,000 patients have been reached with technologies developed directly out of the Biodesign Innovation Fellowship, with another 1,000,000+ aided by solutions initiated by alumni after their training. This study suggests a positive impact of the fellowship program on the career focus, leadership, and productivity of its alumni.

  5. Quantifying and Mapping Global Data Poverty

    PubMed Central

    2015-01-01

    Digital information technologies, such as the Internet, mobile phones and social media, provide vast amounts of data for decision-making and resource management. However, access to these technologies, as well as their associated software and training materials, is not evenly distributed: since the 1990s there has been concern about a "Digital Divide" between the data-rich and the data-poor. We present an innovative metric for evaluating international variations in access to digital data: the Data Poverty Index (DPI). The DPI is based on Internet speeds, numbers of computer owners and Internet users, mobile phone ownership and network coverage, as well as provision of higher education. The datasets used to produce the DPI are provided annually for almost all the countries of the world and can be freely downloaded. The index that we present in this ‘proof of concept’ study is the first to quantify and visualise the problem of global data poverty, using the most recent datasets, for 2013. The effects of severe data poverty, particularly limited access to geoinformatic data, free software and online training materials, are discussed in the context of sustainable development and disaster risk reduction. The DPI highlights countries where support is needed for improving access to the Internet and for the provision of training in geoinformatics. We conclude that the DPI is of value as a potential metric for monitoring the Sustainable Development Goals and the Sendai Framework for Disaster Risk Reduction. PMID:26560884

  6. The performance measurement manifesto.

    PubMed

    Eccles, R G

    1991-01-01

    The leading indicators of business performance cannot be found in financial data alone. Quality, customer satisfaction, innovation, market share--metrics like these often reflect a company's economic condition and growth prospects better than its reported earnings do. Depending on an accounting department to reveal a company's future will leave it hopelessly mired in the past. More and more managers are changing their company's performance measurement systems to track nonfinancial measures and reinforce new competitive strategies. Five activities are essential: developing an information architecture; putting the technology in place to support this architecture; aligning bonuses and other incentives with the new system; drawing on outside resources; and designing an internal process to ensure the other four activities occur. New technologies and more sophisticated databases have made the change to nonfinancial performance measurement systems possible and economically feasible. Industry and trade associations, consulting firms, and public accounting firms that already have well-developed methods for assessing market share and other performance metrics can add to the revolution's momentum--as well as profit from the business opportunities it presents. Every company will have its own key measures and distinctive process for implementing the change. But making it happen will always require careful preparation, perseverance, and the conviction of the CEO that it must be carried through. When one leading company can demonstrate the long-term advantage of its superior performance on quality or innovation or any other nonfinancial measure, it will change the rules for all its rivals forever.

  7. Development of quality metrics for ambulatory pediatric cardiology: Infection prevention.

    PubMed

    Johnson, Jonathan N; Barrett, Cindy S; Franklin, Wayne H; Graham, Eric M; Halnon, Nancy J; Hattendorf, Brandy A; Krawczeski, Catherine D; McGovern, James J; O'Connor, Matthew J; Schultz, Amy H; Vinocur, Jeffrey M; Chowdhury, Devyani; Anderson, Jeffrey B

    2017-12-01

    In 2012, the American College of Cardiology's (ACC) Adult Congenital and Pediatric Cardiology Council established a program to develop quality metrics to guide ambulatory practices for pediatric cardiology. The council chose five areas on which to focus its efforts: chest pain, Kawasaki Disease, tetralogy of Fallot, transposition of the great arteries after arterial switch, and infection prevention. Here, we describe the process, evaluation, and results of the Infection Prevention Committee's metric design effort. The infection prevention metrics team consisted of 12 members from 11 institutions in North America. The group agreed to work on specific infection prevention topics including antibiotic prophylaxis for endocarditis, rheumatic fever, and asplenia/hyposplenism; influenza vaccination and respiratory syncytial virus prophylaxis (palivizumab); preoperative methods to reduce intraoperative infections; vaccinations after cardiopulmonary bypass; hand hygiene; and testing to identify splenic function in patients with heterotaxy. An extensive literature review was performed. When available, previously published guidelines were used fully in determining metrics. The committee chose eight metrics to submit to the ACC Quality Metric Expert Panel for review. Ultimately, metrics regarding hand hygiene and influenza vaccination recommendations for patients did not pass the RAND analysis. Both endocarditis prophylaxis metrics and the RSV/palivizumab metric passed the RAND analysis but fell out during the open comment period. Three metrics passed all analyses: those for antibiotic prophylaxis in patients with heterotaxy/asplenia, for influenza vaccination compliance in healthcare personnel, and for adherence to recommended regimens of secondary prevention of rheumatic fever. The lack of convincing data to guide quality improvement initiatives in pediatric cardiology is widespread, particularly in infection prevention. Despite this, three metrics could be developed for use in the ACC's quality efforts for ambulatory practice. © 2017 Wiley Periodicals, Inc.

  8. Food safety objective approach for controlling Clostridium botulinum growth and toxin production in commercially sterile foods.

    PubMed

    Anderson, N M; Larkin, J W; Cole, M B; Skinner, G E; Whiting, R C; Gorris, L G M; Rodriguez, A; Buchanan, R; Stewart, C M; Hanlin, J H; Keener, L; Hall, P A

    2011-11-01

    As existing technologies are refined and novel microbial inactivation technologies are developed, there is a growing need for a metric that can be used to judge equivalent levels of hazard control stringency to ensure the safety of commercially sterile foods. A food safety objective (FSO) is an output-oriented metric that designates the maximum level of a hazard (e.g., a pathogenic microorganism or toxin) tolerated in a food at the end of the food supply chain, at the moment of consumption, without specifying the measures by which the hazard level is controlled. Using a risk-based approach, when the total outcome of controlling initial levels (H0), reducing levels (ΣR), and preventing an increase in levels (ΣI) is less than or equal to the target FSO, the product is considered safe. A cross-disciplinary international consortium of specialists from industry, academia, and government was organized with the objective of developing a document to illustrate the FSO approach for controlling Clostridium botulinum toxin in commercially sterile foods. This article outlines the general principles of an FSO risk management framework for controlling C. botulinum growth and toxin production in commercially sterile foods. Topics include historical approaches to establishing commercial sterility; a perspective on the establishment of an appropriate target FSO; a discussion of controlling initial levels, reducing levels, and preventing an increase in levels of the hazard; and deterministic and stochastic examples that illustrate the impact that various combinations of control measures have on the safety of well-established commercially sterile products and the ways in which variability at all levels of control can heavily influence estimates in the FSO risk management framework. This risk-based framework should encourage the development of innovative technologies that result in microbial safety levels equivalent to those achieved with traditional processing methods.
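
    The control relationship described above is commonly written as the inequality H0 - ΣR + ΣI ≤ FSO, with all terms in log10 units. A small worked check, using illustrative numbers rather than values from this article, is sketched below.

    ```python
    # Worked check of the FSO inequality H0 - sum(R) + sum(I) <= FSO,
    # all in log10 units. The numbers below are illustrative only, not
    # values taken from the article.

    def meets_fso(h0, reductions, increases, fso):
        """Return the final hazard level and whether it meets the FSO."""
        final_level = h0 - sum(reductions) + sum(increases)
        return final_level, final_level <= fso

    if __name__ == "__main__":
        # e.g. initial contamination of 2 log, a 12-log (12D) thermal
        # reduction, no growth permitted by formulation/storage, against
        # a hypothetical target FSO of -9 log.
        level, ok = meets_fso(h0=2.0, reductions=[12.0], increases=[0.0],
                              fso=-9.0)
        print(level, ok)  # -10.0 True
    ```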

  9. Toward objective image quality metrics: the AIC Eval Program of the JPEG

    NASA Astrophysics Data System (ADS)

    Richter, Thomas; Larabi, Chaker

    2008-08-01

    Objective quality assessment of lossy image compression codecs is an important part of the recent call of the JPEG for Advanced Image Coding (AIC). The target of the AIC ad-hoc group is twofold: first, to receive state-of-the-art still image codecs and to propose suitable technology for standardization; and second, to study objective image quality metrics to evaluate the performance of such codecs. Even though the performance of an objective metric is defined by how well it predicts the outcome of a subjective assessment, one can also study the usefulness of a metric indirectly, in a non-traditional way: by measuring the subjective quality improvement of a codec that has been optimized for that specific objective metric. This approach is demonstrated here on the recently proposed HDPhoto format introduced by Microsoft and an SSIM-tuned version of it by one of the authors. We compare these two implementations with JPEG in two variations and a visually and PSNR-optimized JPEG2000 implementation. To this end, we use subjective and objective tests based on the multiscale SSIM and a new DCT-based metric.

  10. 75 FR 51272 - Proposed Collection; Comment Request; STAR METRICS-Science and Technology in America's...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-19

    ... data collection projects, the Office of Science Policy Analysis (OSPA), the National Institutes of..., Health Science Policy Analyst, Office of Science and Technology Policy, OSP, OD; NIH, Building 1, Room... this publication. Dated: August 12, 2010. Lynn D. Hudson, Director, Office of Science Policy Analysis...

  11. Toward Precision and Reproducibility of Diffusion Tensor Imaging: A Multicenter Diffusion Phantom and Traveling Volunteer Study.

    PubMed

    Palacios, E M; Martin, A J; Boss, M A; Ezekiel, F; Chang, Y S; Yuh, E L; Vassar, M J; Schnyer, D M; MacDonald, C L; Crawford, K L; Irimia, A; Toga, A W; Mukherjee, P

    2017-03-01

    Precision medicine is an approach to disease diagnosis, treatment, and prevention that relies on quantitative biomarkers that minimize the variability of individual patient measurements. The aim of this study was to assess the intersite variability, after harmonization, of a high-angular-resolution 3T diffusion tensor imaging protocol across 13 scanners at the 11 academic medical centers participating in the Transforming Research and Clinical Knowledge in Traumatic Brain Injury multisite study. Diffusion MR imaging was acquired from a novel isotropic diffusion phantom developed at the National Institute of Standards and Technology and from the brain of a traveling volunteer on thirteen 3T MR imaging scanners representing 3 major vendors (GE Healthcare, Philips Healthcare, and Siemens). Means of the DTI parameters and their coefficients of variation across scanners were calculated for each DTI metric and white matter tract. For the National Institute of Standards and Technology diffusion phantom, the coefficient of variation of the apparent diffusion coefficient across the 13 scanners was <3.8% for a range of diffusivities from 0.4 to 1.1 × 10^-6 mm^2/s. For the volunteer, the coefficients of variation across scanners of the 4 primary DTI metrics, each averaged over the entire white matter skeleton, were all <5%. In individual white matter tracts, large central pathways showed good reproducibility, with coefficients of variation consistently below 5%. However, smaller tracts showed more variability, with the coefficients of variation of some DTI metrics reaching 10%. The results suggest the feasibility of standardizing DTI across 3T scanners from different MR imaging vendors in a large-scale neuroimaging research study. © 2017 by American Journal of Neuroradiology.
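
    The reproducibility statistic used throughout is the coefficient of variation, CV = standard deviation / mean, computed across scanners. A minimal sketch with illustrative numbers (not the study's data) follows.

    ```python
    # Coefficient of variation across scanners, the harmonization metric
    # reported above (standard definition; the data here are illustrative).
    import statistics

    def cv_percent(values):
        """Sample standard deviation over mean, as a percentage."""
        return 100.0 * statistics.stdev(values) / statistics.mean(values)

    if __name__ == "__main__":
        # e.g. fractional anisotropy of one tract measured on 13 scanners
        fa = [0.51, 0.50, 0.52, 0.49, 0.50, 0.51, 0.50,
              0.52, 0.49, 0.51, 0.50, 0.50, 0.51]
        print(round(cv_percent(fa), 1), "%")  # ~1.9 %
    ```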

  12. How might renewable energy technologies fit in the food-water-energy nexus?

    NASA Astrophysics Data System (ADS)

    Newmark, R. L.; Macknick, J.; Heath, G.; Ong, S.; Denholm, P.; Margolis, R.; Roberts, B.

    2011-12-01

    Feeding the growing population in the U.S. will require additional land for crop and livestock production. Similarly, a growing population will require additional sources of energy. Renewable energy is likely to play an increased role in meeting the new demands of electricity consumers. Renewable energy technologies can differ from conventional technologies in their operation and their siting locations. Many renewable energy technologies have a lower energy density than conventional technologies and can also have large land use requirements. Much of the prime area suitable for renewable energy development in the U.S. has historically been used for agricultural production, and there is some concern that renewable energy installations could displace land currently producing food crops. In addition to requiring vast expanses of land, both agriculture and renewable energy can require water. The agriculture and energy sectors are responsible for the majority of water withdrawals in the U.S. Increases in both agricultural and energy demand can lead to increases in water demands, depending on crop management and energy technologies employed. Water is utilized in the energy industry primarily for power plant cooling, but it is also required for steam cycle processes and cleaning. Recent characterizations of water use by different energy and cooling system technologies demonstrate the choice of fuel and cooling system technologies can greatly impact the withdrawals and the consumptive use of water in the energy industry. While some renewable and conventional technology configurations can utilize more water per unit of land than irrigation-grown crops, other renewable technology configurations utilize no water during operations and could lead to reduced stress on water resources. Additionally, co-locating agriculture and renewable energy production is also possible with many renewable technologies, avoiding many concerns about reductions in domestic food production. Various metrics exist for defining land use impacts of energy technologies, with little consensus on how much total land is impacted or is necessary. Here we characterize the land use requirements of energy technologies by comparing various metrics from different studies, providing ranges of the potential land impact from alternative energy scenarios. Land use requirements for energy needs under these scenarios are compared with projected land use requirements for agriculture to support a growing population. The water implications of various energy and food scenarios are analyzed to provide insights into potential regional impacts or conflicts between sectors.

  13. A Teacher's Guide to Metrics. A Series of In-Service Booklets Designed for Adult Educators.

    ERIC Educational Resources Information Center

    Wendel, Robert, Ed.; And Others

    This series of seven booklets is designed to train teachers of adults in metrication, as a prerequisite to offering metrics in adult basic education and general educational development programs. The seven booklets provide a guide representing an integration of metric teaching methods and metric materials to place the adult in an active learning…

  14. Research Overview and Analysis.

    DTIC Science & Technology

    1982-04-01

    they have the infrastructure in place to respond to a significant increase in demand for the development of metric standards should such a demand... Conversion of Standards: The Views of Nine Selected Major Standards Development Bodies, U.S. Metric Board, in press (1982). A Study of Metric Conversion... Reports Developed under Contract AA-80-SAC-XB604; 4--Office of Public Awareness and Education; 5--Three Reports Developed under Contract AA-80-SAC-X8602

  15. Development of a multidisciplinary approach to assess regional sustainability

    EPA Science Inventory

    There are a number of established, scientifically supported metrics of sustainability. Many of the metrics are data intensive and require extensive effort to collect data and compute the metrics. Moreover, individual metrics do not capture all aspects of a system that are relev...

  16. Software metrics: The key to quality software on the NCC project

    NASA Technical Reports Server (NTRS)

    Burns, Patricia J.

    1993-01-01

    Network Control Center (NCC) Project metrics are captured during the implementation and testing phases of the NCCDS software development lifecycle. The metrics data collection and reporting function has interfaces with all elements of the NCC project. Close collaboration with all project elements has resulted in the development of a defined and repeatable set of metrics processes. The resulting data are used to plan and monitor release activities on a weekly basis. The use of graphical outputs facilitates the interpretation of progress and status. The successful application of metrics throughout the NCC project has been instrumental in the delivery of quality software. The use of metrics on the NCC Project supports the needs of the technical and managerial staff. This paper describes the project, the functions supported by metrics, the data that are collected and reported, how the data are used, and the improvements in the quality of deliverable software since the metrics processes and products have been in use.

  17. A Validation of Object-Oriented Design Metrics as Quality Indicators

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio

    1997-01-01

    This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in another work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work in which the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method, and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful for predicting class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development process.

  18. Metric analysis and data validation across FORTRAN projects

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Selby, Richard W., Jr.; Phillips, Tsai-Yun

    1983-01-01

    The desire to predict the effort of developing software, or to explain its quality, has led to the proposal of several metrics. As a step toward validating these metrics, the Software Engineering Laboratory (SEL) has analyzed the software science metrics, cyclomatic complexity, and various standard program measures for their relation to effort (including design through acceptance testing), development errors (both discrete and weighted according to the amount of time needed to locate and fix them), and one another. The data investigated were collected from a FORTRAN project environment and examined across several projects at once, within individual projects, and with reporting-accuracy checks demonstrating the need to validate a database. When the data come from individual programmers or certain validated projects, the metrics' correlations with actual effort are strongest. For modules developed entirely by individual programmers, the validity ratios induce a statistically significant ordering of several of the metrics' correlations. When comparing the strongest correlations, neither software science's E metric, cyclomatic complexity, nor source lines of code appears to relate convincingly better with effort than the others.
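
    Two of the metrics compared here have standard closed-form definitions: McCabe's cyclomatic complexity, v(G) = E - N + 2P over the control-flow graph, and Halstead's software-science effort E = D × V. The sketch below computes both from raw counts; the formulas are the standard ones, and the counts are illustrative only.

    ```python
    # Standard formulas for two metrics discussed above (counts illustrative).
    import math

    def cyclomatic(edges: int, nodes: int, components: int = 1) -> int:
        """McCabe's v(G) = E - N + 2P over a control-flow graph."""
        return edges - nodes + 2 * components

    def halstead_effort(n1, n2, N1, N2):
        """Halstead's software-science effort E = D * V.

        n1/n2: distinct operators/operands; N1/N2: total occurrences.
        """
        vocabulary = n1 + n2
        length = N1 + N2
        volume = length * math.log2(vocabulary)     # V = N log2(n)
        difficulty = (n1 / 2.0) * (N2 / n2)         # D = (n1/2)(N2/n2)
        return difficulty * volume

    if __name__ == "__main__":
        print(cyclomatic(edges=11, nodes=9))           # 4
        print(round(halstead_effort(10, 7, 30, 20)))   # ~2920
    ```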

  19. A Validation of Object-Oriented Design Metrics

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Briand, Lionel; Melo, Walcelio L.

    1995-01-01

    This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber and Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li and Henry, 1993], where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method, and the C++ programming language. Based on the experimental results, the advantages and drawbacks of these OO metrics are discussed and suggestions for improvement are provided. Several of Chidamber and Kemerer's OO metrics appear to be adequate for predicting class fault-proneness during the early phases of the life-cycle. We also showed that they are, on our data set, better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development process.

  20. The Extended HANDS Characterization and Analysis of Metric Biases

    NASA Astrophysics Data System (ADS)

    Kelecy, T.; Knox, R.; Cognion, R.

    The Extended High Accuracy Network Determination System (Extended HANDS) consists of a network of low cost, high accuracy optical telescopes designed to support space surveillance and development of space object characterization technologies. Comprising off-the-shelf components, the telescopes are designed to provide sub arc-second astrometric accuracy. The design and analysis team are in the process of characterizing the system through development of an error allocation tree whose assessment is supported by simulation, data analysis, and calibration tests. The metric calibration process has revealed 1-2 arc-second biases in the right ascension and declination measurements of reference satellite position, and these have been observed to have fairly distinct characteristics that appear to have some dependence on orbit geometry and tracking rates. The work presented here outlines error models developed to aid in development of the system error budget, and examines characteristic errors (biases, time dependence, etc.) that might be present in each of the relevant system elements used in the data collection and processing, including the metric calibration processing. The relevant reference frames are identified, and include the sensor (CCD camera) reference frame, Earth-fixed topocentric frame, topocentric inertial reference frame, and the geocentric inertial reference frame. The errors modeled in each of these reference frames, when mapped into the topocentric inertial measurement frame, reveal how errors might manifest themselves through the calibration process. The error analysis results that are presented use satellite-sensor geometries taken from periods where actual measurements were collected, and reveal how modeled errors manifest themselves over those specific time periods. These results are compared to the real calibration metric data (right ascension and declination residuals), and sources of the bias are hypothesized. In turn, the actual right ascension and declination calibration residuals are also mapped to other relevant reference frames in an attempt to validate the source of the bias errors. These results will serve as the basis for more focused investigation into specific components embedded in the system and system processes that might contain the source of the observed biases.

  1. Technological advances for studying human behavior

    NASA Technical Reports Server (NTRS)

    Roske-Hofstrand, Renate J.

    1990-01-01

    Technological advances for studying human behavior are noted in viewgraph form. It is asserted that performance-aiding systems are proliferating without a fundamental understanding of how they would interact with the humans who must control them. Two views of automation research, the hardware view and the human-centered view, are listed. Other viewgraphs give information on vital elements for human-centered research, a continuum of the research process, available technologies, new technologies for persistent problems, a sample research infrastructure, the need for metrics, and examples of data-link technology.

  2. Systematic study of source mask optimization and verification flows

    NASA Astrophysics Data System (ADS)

    Ben, Yu; Latypov, Azat; Chua, Gek Soon; Zou, Yi

    2012-06-01

    Source mask optimization (SMO) has emerged as a powerful resolution enhancement technique (RET) for advanced technology nodes. However, there is a plethora of flows and verification metrics in the field, confounding the end user of the technique, and a systematic study of the different flows and their possible unification is missing. This contribution is intended to reveal the pros and cons of different SMO approaches and verification metrics, to understand their commonalities and differences, and to provide a generic guideline for RET selection via SMO. The paper discusses three types of variation that commonly arise in SMO, namely pattern preparation and selection, the availability of a relevant OPC recipe for a freeform source, and the metrics used in source verification. Several pattern selection algorithms are compared and the advantages of systematic pattern selection algorithms are discussed. In the absence of a full resist model for SMO, an alternative SMO flow without a full resist model is reviewed. A preferred verification flow with the quality metrics of DOF and MEEF is examined.

  3. Crew and Thermal Systems Strategic Communications Initiatives in Support of NASA's Strategic Goals

    NASA Technical Reports Server (NTRS)

    Paul, Heather L.; Lamberth, Erika Guillory; Jennings, Mallory A.

    2012-01-01

    NASA has defined strategic goals to invest in next-generation technologies and innovations, inspire students to become the future leaders of space exploration, and expand partnerships with industry and academia around the world. The Crew and Thermal Systems Division (CTSD) at the NASA Johnson Space Center actively supports these NASA initiatives. In July 2011, CTSD created a strategic communications team to communicate CTSD capabilities, technologies, and personnel to external technical audiences for business development and collaborative initiatives, and to students, educators, and the general public for education and public outreach efforts. This paper summarizes the CTSD Strategic Communications efforts and metrics through the first half of fiscal year 2012 with projections for end of fiscal year data.

  4. Crew and Thermal Systems Strategic Communications Initiatives in Support of NASA's Strategic Goals

    NASA Technical Reports Server (NTRS)

    Paul, Heather L.

    2012-01-01

    NASA has defined strategic goals to invest in next-generation technologies and innovations, to inspire students to become the future leaders of space exploration, and to expand partnerships with industry and academia around the world. The Crew and Thermal Systems Division (CTSD) at the NASA Johnson Space Center actively supports these NASA initiatives. In July 2011, CTSD created a strategic communications team to communicate CTSD capabilities, technologies, and personnel to internal NASA and external technical audiences for business development and collaborative initiatives, and to students, educators, and the general public for education and public outreach efforts. This paper summarizes the CTSD Strategic Communications efforts and metrics through the first nine months of fiscal year 2012.

  5. New technologies for solar energy silicon - Cost analysis of BCL process

    NASA Technical Reports Server (NTRS)

    Yaws, C. L.; Li, K.-Y.; Fang, C. S.; Lutwack, R.; Hsu, G.; Leven, H.

    1980-01-01

    New technologies for producing polysilicon are being developed to provide lower cost material for solar cells which convert sunlight into electricity. This article presents results for the BCL Process, which produces the solar-cell silicon by reduction of silicon tetrachloride with zinc vapor. Cost, sensitivity, and profitability analysis results are presented based on a preliminary process design of a plant to produce 1000 metric tons/year of silicon by the BCL Process. Profitability analysis indicates a sales price of $12.1-19.4 per kg of silicon (1980 dollars) at a 0-25 per cent DCF rate of return on investment after taxes. These results indicate good potential for meeting the goal of providing lower cost material for silicon solar cells.

  6. Developing Image Processing Meta-Algorithms with Data Mining of Multiple Metrics

    PubMed Central

    Cunha, Alexandre; Toga, A. W.; Parker, D. Stott

    2014-01-01

    People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation. PMID:24653748
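
    A hedged sketch of the meta-algorithm idea (an assumed illustration, not the authors' code): score each candidate registration result with a small battery of metrics, rank the candidates per metric, and select the candidate with the best mean rank.

        import numpy as np

        def mse(a, b):
            """Mean squared error; lower is better."""
            return float(np.mean((a - b) ** 2))

        def ncc(a, b):
            """Normalized cross-correlation; higher is better."""
            a, b = a - a.mean(), b - b.mean()
            return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        rng = np.random.default_rng(0)
        reference = rng.random((64, 64))
        candidates = [reference + s * rng.random((64, 64)) for s in (0.02, 0.05, 0.10)]

        # One row per candidate; negate NCC so that lower is better for every column.
        scores = np.array([[mse(c, reference), -ncc(c, reference)] for c in candidates])
        ranks = scores.argsort(axis=0).argsort(axis=0)
        best = int(ranks.mean(axis=1).argmin())     # mine the metric battery by mean rank
        print("selected candidate:", best)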

  7. Positive technology: using interactive technologies to promote positive functioning.

    PubMed

    Riva, Giuseppe; Baños, Rosa M; Botella, Cristina; Wiederhold, Brenda K; Gaggioli, Andrea

    2012-02-01

    It is generally assumed that technology assists individuals in improving the quality of their lives. However, the impact of new technologies and media on well-being and positive functioning is still somewhat controversial. In this paper, we contend that the quality of experience should become the guiding principle in the design and development of new technologies, as well as a primary metric for the evaluation of their applications. The emerging discipline of Positive Psychology provides a useful framework to address this challenge. Positive Psychology is the scientific study of optimal human functioning and flourishing. Instead of drawing on a "disease model" of human behavior, it focuses on factors that enable individuals and communities to thrive and build the best in life. In this paper, we propose the "Positive Technology" approach--the scientific and applied approach to the use of technology for improving the quality of our personal experience through its structuring, augmentation, and/or replacement--as a way of framing a suitable object of study in the field of cyberpsychology and human-computer interaction. Specifically, we suggest that it is possible to use technology to influence three specific features of our experience--affective quality, engagement/actualization, and connectedness--that serve to promote adaptive behaviors and positive functioning. In this framework, positive technologies are classified according to their effects on a specific feature of personal experience. Moreover, for each level, we have identified critical variables that can be manipulated to guide the design and development of positive technologies.

  8. Evaluation of an Integrated Framework for Biodiversity with a New Metric for Functional Dispersion

    PubMed Central

    Presley, Steven J.; Scheiner, Samuel M.; Willig, Michael R.

    2014-01-01

    Growing interest in understanding ecological patterns from phylogenetic and functional perspectives has driven the development of metrics that capture variation in evolutionary histories or ecological functions of species. Recently, an integrated framework based on Hill numbers was developed that measures three dimensions of biodiversity based on abundance, phylogeny and function of species. This framework is highly flexible, allowing comparison of those diversity dimensions, including different aspects of a single dimension and their integration into a single measure. The behavior of those metrics with regard to variation in data structure has not been explored in detail, yet is critical for ensuring an appropriate match between the concept and its measurement. We evaluated how each metric responds to particular data structures and developed a new metric for functional biodiversity. The phylogenetic metric is sensitive to variation in the topology of phylogenetic trees, including variation in the relative lengths of basal, internal and terminal branches. In contrast, the functional metric exhibited multiple shortcomings: (1) species that are functionally redundant contribute nothing to functional diversity and (2) a single highly distinct species causes functional diversity to approach the minimum possible value. We introduced an alternative, improved metric based on functional dispersion that solves both of these problems. In addition, the new metric exhibited more desirable behavior when based on multiple traits. PMID:25148103
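
    The framework rests on Hill numbers, so a minimal sketch of the abundance-based case may help fix ideas; the paper's phylogenetic and functional extensions, and its new dispersion metric, are not reproduced here.

        import numpy as np

        def hill_number(abundances, q):
            """Effective number of species of order q."""
            p = np.asarray(abundances, dtype=float)
            p = p / p.sum()
            p = p[p > 0]
            if np.isclose(q, 1.0):              # limit q -> 1 is exp(Shannon entropy)
                return float(np.exp(-(p * np.log(p)).sum()))
            return float((p ** q).sum() ** (1.0 / (1.0 - q)))

        print(hill_number([10, 10, 10, 10], q=0))   # 4.0: plain species richness
        print(hill_number([97, 1, 1, 1], q=2))      # ~1.06: dominance-weighted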

  9. Evaluating software development characteristics: Assessment of software measures in the Software Engineering Laboratory. [reliability engineering

    NASA Technical Reports Server (NTRS)

    Basili, V. R.

    1981-01-01

    Work on metrics is discussed. Factors that affect software quality are reviewed. Metrics are discussed in terms of criteria achievement, reliability, and fault tolerance. Subjective and objective metrics are distinguished. Product/process and cost/quality metrics are characterized and discussed.

  10. Aircraft operability methods applied to space launch vehicles

    NASA Astrophysics Data System (ADS)

    Young, Douglas

    1997-01-01

    The commercial space launch market requirement for low vehicle operations costs necessitates the application of methods and technologies developed and proven for complex aircraft systems. The ``building in'' of reliability and maintainability, which is applied extensively in the aircraft industry, has yet to be applied to the maximum extent possible on launch vehicles. Use of vehicle system and structural health monitoring, automated ground systems and diagnostic design methods derived from aircraft applications support the goal of achieving low cost launch vehicle operations. Transforming these operability techniques to space applications where diagnostic effectiveness has significantly different metrics is critical to the success of future launch systems. These concepts will be discussed with reference to broad launch vehicle applicability. Lessons learned and techniques used in the adaptation of these methods will be outlined drawing from recent aircraft programs and implementation on phase 1 of the X-33/RLV technology development program.

  11. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.
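
    As a concrete illustration of the kind of blind-test-case metrics such benchmarks report, the sketch below computes detection and false-alarm rates from per-case truth and predictions. The labels are invented stand-ins, not ProDiMES data or its exact metric definitions.

        def detection_metrics(truth, predicted):
            """truth/predicted: 0 (no fault) or 1 (fault) per test case."""
            tp = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 1)
            fp = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 1)
            pos = sum(truth)
            neg = len(truth) - pos
            return tp / pos if pos else 0.0, fp / neg if neg else 0.0

        tpr, fpr = detection_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
        print(f"detection rate={tpr:.2f}, false-alarm rate={fpr:.2f}")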

  12. Bilingual Metric Education Modules for Postsecondary and Adult Vocational Education. Final Report.

    ERIC Educational Resources Information Center

    Ellis Associates, Inc., College Park, MD.

    A project was conducted to develop three metric education modules for use with bilingual (Spanish and English) students in postsecondary and adult vocational education programs. Developed for the first section of each module, five instructional units cover basic metric concepts: (1) measuring length and finding area, (2) measuring volume, (3)…

  13. A Classification Scheme for Smart Manufacturing Systems’ Performance Metrics

    PubMed Central

    Lee, Y. Tina; Kumaraguru, Senthilkumaran; Jain, Sanjay; Robinson, Stefanie; Helu, Moneer; Hatim, Qais Y.; Rachuri, Sudarsan; Dornfeld, David; Saldana, Christopher J.; Kumara, Soundar

    2017-01-01

    This paper proposes a classification scheme for performance metrics for smart manufacturing systems. The discussion focuses on three such metrics: agility, asset utilization, and sustainability. For each of these metrics, we discuss classification themes, which we then use to develop a generalized classification scheme. In addition to the themes, we discuss a conceptual model that may form the basis for the information necessary for performance evaluations. Finally, we present future challenges in developing robust, performance-measurement systems for real-time, data-intensive enterprises. PMID:28785744

  14. A software quality model and metrics for risk assessment

    NASA Technical Reports Server (NTRS)

    Hyatt, L.; Rosenberg, L.

    1996-01-01

    A software quality model and its associated attributes are defined and used as the basis for a discussion of risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined from the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric and their usability and applicability are discussed.

  15. Aircraft System Analysis of Technology Benefits to Civil Transport Rotorcraft

    NASA Technical Reports Server (NTRS)

    Wilkerson, Joseph B.; Smith, Roger L.

    2008-01-01

    An aircraft systems analysis was conducted to evaluate the net benefits of advanced technologies on two conceptual civil transport rotorcraft and to quantify the potential of future civil rotorcraft to become operationally viable and economically competitive, with the ultimate goal of alleviating congestion in our airways, runways, and terminals. These questions are three of many that must be resolved for the successful introduction of civil transport rotorcraft: 1) Can civil transport rotorcraft actually relieve current airport congestion and improve overall air traffic and passenger throughput at busy hub airports? What is that operational scenario? 2) Can advanced technology make future civil rotorcraft economically competitive in scheduled passenger transport? What are those enabling technologies? 3) What level of investment is necessary to mature the key enabling technologies? This study addresses the first two questions, and several others, by applying a systems analysis approach to a broad spectrum of potential advanced technologies at a conceptual level of design. The method was to identify those advanced technologies that showed the most promise and to quantify their benefits to the design, development, production, and operation of future civil rotorcraft. Adjustments are made to sizing data by subject matter experts to reflect the introduction of new technologies that offer improved performance, reduced weight, reduced maintenance, or reduced cost. This study used projected benefits from new, advanced technologies, generally based on research results, analysis, or small-scale test data. The technologies are identified, categorized, and quantified in the report. The net benefit of selected advanced technologies is quantified for two civil transport rotorcraft concepts, a Single Main Rotor Compound (SMRC) helicopter designed for 250 ktas cruise airspeed and a Civil Tilt Rotor (CTR) designed for 350 ktas cruise airspeed. A baseline design of each concept was sized for a representative civil passenger transport mission using current technology. Individual advanced technologies are quantified and applied to resize the aircraft, thereby quantifying the net benefit of that technology to the rotorcraft. Estimates of development cost, production cost, and operating and support costs are made with a commercial cost estimating program, calibrated to Boeing products with adjustments for future civil production processes. A cost metric of cash direct operating cost per available seat-mile (DOC/ASM) is used to compare the cost benefit of the technologies. The same metric is used to compare results with turboprop operating costs. Reduced engine SFC was the most advantageous advanced technology for both rotorcraft concepts. Structural weight reduction was the second most beneficial technology, followed by advanced drive systems and then by technology for rotorcraft performance. Most of the technologies evaluated in this report should apply similarly to conventional helicopters. The implicit assumption is that resources will become available to mature the technologies for full-scale production aircraft. That assumption is certainly the weak link in any forecast of future possibilities. The analysis serves the purpose of identifying which technologies offer the most potential benefit, and thus the ones that should receive the highest priority for continued development. This study directly addressed the following NASA Subsonic Rotary Wing (SRW) subtopics: SRW.4.8.I.J, Establish capability for rotorcraft system analysis, and SRW.4.8.I.4, Conduct limited technology benefit assessment on baseline rotorcraft configurations.
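
    The DOC/ASM metric itself is simple arithmetic, sketched below with purely hypothetical numbers; the report's actual cost figures are not reproduced.

        def doc_per_asm(cash_doc_per_block_hour, block_speed_mph, seats):
            """Cash direct operating cost per available seat-mile (DOC/ASM)."""
            return cash_doc_per_block_hour / (block_speed_mph * seats)

        # Hypothetical inputs: $3,000 cash DOC per block hour, 90 seats;
        # cruise speeds of 350 and 250 ktas converted to roughly 403 and 288 mph.
        print(f"CTR : ${doc_per_asm(3000.0, 403.0, 90):.3f}/ASM")
        print(f"SMRC: ${doc_per_asm(3000.0, 288.0, 90):.3f}/ASM")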

  16. QuEST: Qualifying Environmentally Sustainable Technologies. Volume 2

    NASA Technical Reports Server (NTRS)

    Brown, Christina (Editor)

    2007-01-01

    TEERM focuses its validation efforts on technologies that have shown promise in laboratory testing but lack testing under realistic or field environments. Mature technologies have advantages over those that are still in the developmental stage, such as being more likely to be transitioned into a working environment. One way TEERM begins to evaluate the suitability of technologies is through Technology Readiness Levels (TRLs). TRLs are a systematic metric/measurement system that supports assessments of the maturity of a particular technology and consistent comparison of maturity between different types of technology. TEERM generally works on demonstrating and validating alternatives that fall within TRLs 5-9. In instances where a mature technology does not exist for a particular Agency application, TEERM works with technology development groups and programs such as NASA's Innovative Partnerships Program (IPP). The IPP's purpose is to identify and document available technologies in light of NASA's needs, evaluate and prioritize those technologies, and reach out to find new partners. All TEERM projects involve multiple partners. Partnering reduces the duplication of effort that might otherwise occur if individuals worked their problems alone. Partnering also helps reduce individual contributors' shares of the total cost of technology validation. Through collaboration and financial commitment from project stakeholders and third-party sources, it is possible to fully fund expensive demonstration and validation efforts.

  17. Usability Evaluations of a Wearable Inertial Sensing System and Quality of Movement Metrics for Stroke Survivors by Care Professionals.

    PubMed

    Klaassen, Bart; van Beijnum, Bert-Jan F; Held, Jeremia P; Reenalda, Jasper; van Meulen, Fokke B; Veltink, Peter H; Hermens, Hermie J

    2017-01-01

    Inertial motion capture systems are used in many applications, such as measuring movement quality in stroke survivors. The absence of clinical-effectiveness and usability evidence for these assistive technologies in rehabilitation has delayed the transition of research into clinical practice. Recently, a new inertial motion capture system was developed in a project called INTERACTION to objectively measure the quality of movement (QoM) in stroke survivors during daily-life activity. With INTERACTION, we are able to investigate what happens with patients after discharge from the hospital. The resulting QoM metrics, where a metric is defined as a measure of some property, are subsequently presented to care professionals. Metrics include, for example, reaching distance, walking speed, and hand distribution plots; the latter shows a density plot of the hand position in the transversal plane. The objective of this study is to investigate the opinions of care professionals on using these metrics obtained from INTERACTION, and on its usability. Two patient reports were presented by means of a semi-structured interview guided by a presentation. Each report includes several QoM metric results (such as reaching distance, hand position density plots, and shoulder abduction) obtained during daily-life measurements and in clinic, and the reports were evaluated by care professionals not related to the project. The results were compared with those of care professionals involved within the INTERACTION project. Furthermore, two questionnaires (a 5-point Likert questionnaire and an open questionnaire) were handed out to rate the usability of the metrics and to investigate whether participants would like such a system in their clinic. Eleven interviews were conducted, each including either two or three care professionals as a group, in Switzerland and The Netherlands. Evaluation of the case reports (CRs) by participants and INTERACTION members showed a high correlation for both lower and upper extremity metrics. Participants were most in favor of hand distribution plots during daily-life activities. All participants mentioned that visualizing the QoM of stroke survivors over time during daily-life activities offers more possibilities than current clinical assessments. They also mentioned that these metrics could be important for self-evaluation by stroke survivors. The results showed that most participants were able to understand the metrics presented in the CRs. For a few metrics, it remained difficult to assess the underlying cause of the QoM; hence, a combination of metrics is needed to gain better insight into the patient. Furthermore, it remains important to report the patient's state (e.g., how the patient feels), the surroundings (outside, inside the house, on a slippery surface), and the details of specific activities (does the patient grasp a piece of paper or a heavy cooking pan, and are there dual tasks). Altogether, it remains a question how to determine what the patient is doing and where the patient is doing his or her activities.

  18. Integrating Human Factors Engineering and Information Processing Approaches to Facilitate Evaluations in Criminal Justice Technology Research.

    PubMed

    Salvemini, Anthony V; Piza, Eric L; Carter, Jeremy G; Grommon, Eric L; Merritt, Nancy

    2015-06-01

    Evaluations are routinely conducted by government agencies and research organizations to assess the effectiveness of technology in criminal justice. Interdisciplinary research methods are salient to this effort. Technology evaluations face a number of challenges, including (1) the need to facilitate effective communication between social science researchers, technology specialists, and practitioners, (2) the need to better understand the procedural and contextual aspects of a given technology, and (3) the need to generate findings that can be readily used for decision making and policy recommendations. Process and outcome evaluations of technology can be enhanced by integrating concepts from human factors engineering and information processing. This systemic approach, which focuses on the interaction between humans, technology, and information, enables researchers to better assess how a given technology is used in practice. Examples are drawn from complex technologies currently deployed within the criminal justice system where traditional evaluations have primarily focused on outcome metrics. Although this evidence-based approach has significant value, it may fail to fully account for the human and structural complexities that compose technology operations. Guiding principles for technology evaluations are described for identifying and defining key study metrics, facilitating communication within an interdisciplinary research team, and understanding the interaction between users, technology, and information. The approach posited here can also enable researchers to better assess factors that may facilitate or degrade the operational impact of the technology and to answer fundamental questions concerning whether the technology works as intended, at what level, and at what cost. © The Author(s) 2015.

  19. Metrication: What Can HRD Specialists Do?

    ERIC Educational Resources Information Center

    Short, Larry G.

    1978-01-01

    First discusses some features of the Metric Conversion Act which established federal support of metric system usage in the United States. Then covers the following: what HRD (Human Resources Development) specialists can do to assist their company managers during the conversion process; metric training strategies; and how to prepare for metric…

  20. Defining Sustainability Metric Targets in an Institutional Setting

    ERIC Educational Resources Information Center

    Rauch, Jason N.; Newman, Julie

    2009-01-01

    Purpose: The purpose of this paper is to expand on the development of university and college sustainability metrics by implementing an adaptable metric target strategy. Design/methodology/approach: A combined qualitative and quantitative methodology is derived that both defines what a sustainable metric target might be and describes the path a…

  1. Characterizing the impact of spatiotemporal variations in stormwater infrastructure on hydrologic conditions

    NASA Astrophysics Data System (ADS)

    Jovanovic, T.; Mejia, A.; Hale, R. L.; Gironas, J. A.

    2015-12-01

    Urban stormwater infrastructure design has evolved in time, reflecting changes in stormwater policy and regulations, and in engineering design. This evolution makes urban basins heterogeneous socio-ecological-technological systems. We hypothesize that this heterogeneity creates unique impact trajectories in time and impact hotspots in space within and across cities. To explore this, we develop and implement a network hydro-engineering modeling framework based on high-resolution digital elevation and stormwater infrastructure data. The framework also accounts for climatic, soils, land use, and vegetation conditions in an urban basin, thus making it useful to study the impacts of stormwater infrastructure across cities. Here, to evaluate the framework, we apply it to urban basins in the metropolitan areas of Phoenix, Arizona. We use it to estimate different metrics to characterize the storm-event hydrologic response. We estimate both traditional metrics (e.g., peak flow, time to peak, and runoff volume) as well as new metrics (e.g., basin-scale dispersion mechanisms). We also use the dispersion mechanisms to assess the scaling characteristics of urban basins. Ultimately, we find that the proposed framework can be used to understand and characterize the impacts associated with stormwater infrastructure on hydrologic conditions within a basin. Additionally, we find that the scaling approach helps in synthesizing information but it requires further validation using additional urban basins.
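
    A sketch of the traditional storm-event metrics named above, computed from a discharge time series; the hydrograph is synthetic, not the Phoenix model output.

        import numpy as np

        t_min = np.arange(0.0, 120.0, 5.0)                    # minutes
        q = 4.0 * np.exp(-((t_min - 40.0) / 20.0) ** 2)       # discharge, m^3/s

        peak_flow = q.max()                                   # m^3/s
        time_to_peak = t_min[q.argmax()]                      # minutes from start
        t_s = t_min * 60.0                                    # seconds
        runoff_volume = float(((q[1:] + q[:-1]) / 2.0 * np.diff(t_s)).sum())  # m^3, trapezoid rule

        print(f"peak={peak_flow:.2f} m^3/s, t_peak={time_to_peak:.0f} min, "
              f"volume={runoff_volume:.0f} m^3")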

  2. Benchmarking homogenization algorithms for monthly data

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratiannil, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.; Willett, K.

    2013-09-01

    The COST (European Cooperation in Science and Technology) Action ES0601, Advances in homogenization methods of climate series: an integrated approach (HOME), has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies. The algorithms were validated against a realistic benchmark dataset. Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous values at various averaging scales, (ii) the error in linear trend estimates, and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can now perform as well as manual ones.
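
    Two of the performance metrics described above are easy to state in code; the sketch below computes a centered RMSE against the true homogeneous series and the error in the linear trend estimate. The series are synthetic, not the HOME benchmark data.

        import numpy as np

        def centered_rmse(series, truth):
            """RMSE after removing each series' mean, as in metric (i)."""
            d = (series - series.mean()) - (truth - truth.mean())
            return float(np.sqrt(np.mean(d ** 2)))

        def trend_error(series, truth, t):
            """Difference in fitted linear-trend slopes, as in metric (ii)."""
            return float(np.polyfit(t, series, 1)[0] - np.polyfit(t, truth, 1)[0])

        t = np.arange(600)                                   # 50 years of monthly data
        truth = 0.001 * t + np.sin(2 * np.pi * t / 12.0)
        homogenized = truth + np.random.default_rng(1).normal(0.0, 0.1, t.size)

        print(f"centered RMSE = {centered_rmse(homogenized, truth):.3f}")
        print(f"trend error   = {trend_error(homogenized, truth, t):+.2e} per month")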

  3. Quality and efficiency successes leveraging IT and new processes.

    PubMed

    Chaiken, Barry P; Christian, Charles E; Johnson, Liz

    2007-01-01

    Today, healthcare annually invests billions of dollars in information technology, including clinical systems, electronic medical records and interoperability platforms. While continued investment and parallel development of standards are critical to secure exponential benefits from clinical information technology, intelligent and creative redesign of processes through path innovation is necessary to deliver meaningful value. Reports from two organizations included in this report review the steps taken to reinvent clinical processes that best leverage information technology to deliver safer and more efficient care. Good Samaritan Hospital, Vincennes, Indiana, implemented electronic charting, point-of-care bar coding of medications prior to administration, and integrated clinical documentation for nursing, laboratory, radiology and pharmacy. Tenet Healthcare, during its implementation and deployment of multiple clinical systems across several hospitals, focused on planning that included team-based process redesign. In addition, Tenet constructed valuable and measurable metrics that link outcomes with its strategic goals.

  4. Photogrammetry for rapid prototyping: development of noncontact 3D reconstruction technologies

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.

    2002-04-01

    An important stage of rapid prototyping technology is generating a computer 3D model of the object to be reproduced. A wide variety of techniques for 3D model generation exists, ranging from manual 3D model generation to fully automated reverse engineering systems. The progress in CCD sensors and computers provides the background for integrating photogrammetry, as an accurate source of 3D data, with CAD/CAM. The paper presents the results of developing photogrammetric methods for non-contact spatial coordinate measurement and generation of computer 3D models of real objects. The technology is based on processing convergent images of the object to calculate its 3D coordinates and reconstruct its surface. The hardware used for spatial coordinate measurement is based on a PC as the central processing unit and a video camera as the image acquisition device. The original software for Windows 9X implements the complete 3D reconstruction technology for rapid input of geometry data into CAD/CAM systems. Technical characteristics of the developed systems are given, along with the results of applying them to various 3D reconstruction tasks. The paper describes the techniques used for non-contact measurement and the methods providing the metric characteristics of the reconstructed 3D model. The results of applying the system to 3D reconstruction of complex industrial objects are also presented.

  5. A Program in Air Transportation Technology (Joint University Program)

    NASA Technical Reports Server (NTRS)

    Stengel, Robert F.

    1996-01-01

    The Joint University Program on Air Transportation Technology was conducted at Princeton University from 1971 to 1995. Our vision was to further understanding of the design and operation of transport aircraft, of the effects of the atmospheric environment on aircraft flight, and of the development and utilization of the National Airspace System. As an adjunct, the program emphasized the independent research of both graduate and undergraduate students. Recent principal goals were to develop and verify new methods for the design and analysis of intelligent flight control systems, aircraft guidance logic for recovery from wake vortex encounter, and robust flight control systems. Our research scope subsumed problems associated with multidisciplinary aircraft design synthesis and analysis based on flight physics, providing a theoretical basis for developing innovative control concepts that enhance aircraft performance and safety. Our research focus was of direct interest not only to NASA but to manufacturers of aircraft and their associated systems. Our approach, metrics, and future directions are described in the remainder of the report.

  6. Process Design and Economics for the Conversion of Lignocellulosic Biomass to Hydrocarbon Fuels. Thermochemical Research Pathways with In Situ and Ex Situ Upgrading of Fast Pyrolysis Vapors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dutta, Abhijit; Sahir, Asad; Tan, Eric

    This report was developed as part of the U.S. Department of Energy’s Bioenergy Technologies Office’s efforts to enable the development of technologies for the production of infrastructure-compatible, cost-competitive liquid hydrocarbon fuels from biomass. Specifically, this report details two conceptual designs based on projected product yields and quality improvements via catalyst development and process integration. It is expected that these research improvements will be made within the 2022 timeframe. The two conversion pathways detailed are (1) in situ and (2) ex situ upgrading of vapors produced from the fast pyrolysis of biomass. While the base case conceptual designs and underlying assumptions outline performance metrics for feasibility, it should be noted that these are only two of many other possibilities in this area of research. Other promising process design options emerging from the research will be considered for future techno-economic analysis.

  7. Biosensors in the small scale: methods and technology trends.

    PubMed

    Senveli, Sukru U; Tigli, Onur

    2013-03-01

    This study presents a review of biosensors with an emphasis on recent developments in the field. A brief history, accompanied by a detailed description of biosensor concepts, is followed by rising trends observed in contemporary micro- and nanoscale biosensors. Performance metrics to quantify and compare different detection mechanisms are presented. A comprehensive analysis of various types and subtypes of biosensors is given. The fields of interest within the scope of this review are label-free electrical, mechanical, and optical biosensors, as well as other emerging and popular technologies. In particular, the latter half of the last decade is reviewed for the types, methods, and results of the most prominently researched detection mechanisms. Tables are provided for comparison of various competing technologies in the literature. The conclusion summarises the noteworthy advantages and disadvantages of all biosensors reviewed in this study. Furthermore, future directions that micro- and nanoscale biosensing technologies are expected to take are provided along with the immediate outlook.

  8. Automated Metrics in a Virtual-Reality Myringotomy Simulator: Development and Construct Validity.

    PubMed

    Huang, Caiwen; Cheng, Horace; Bureau, Yves; Ladak, Hanif M; Agrawal, Sumit K

    2018-06-15

    The objectives of this study were: 1) to develop and implement a set of automated performance metrics into the Western myringotomy simulator, and 2) to establish construct validity. Prospective simulator-based assessment study. The Auditory Biophysics Laboratory at Western University, London, Ontario, Canada. Eleven participants were recruited from the Department of Otolaryngology-Head & Neck Surgery at Western University: four senior otolaryngology consultants and seven junior otolaryngology residents. Educational simulation. Discrimination between expert and novice participants on five primary automated performance metrics: 1) time to completion, 2) surgical errors, 3) incision angle, 4) incision length, and 5) the magnification of the microscope. Automated performance metrics were developed, programmed, and implemented into the simulator. Participants were given a standardized simulator orientation and instructions on myringotomy and tube placement. Each participant then performed 10 procedures and automated metrics were collected. The metrics were analyzed using the Mann-Whitney U test with Bonferroni correction. All metrics discriminated senior otolaryngologists from junior residents with a significance of p < 0.002. Junior residents had 2.8 times more errors compared with the senior otolaryngologists. Senior otolaryngologists took significantly less time to completion compared with junior residents. The senior group also had significantly longer incision lengths, more accurate incision angles, and lower magnification keeping both the umbo and annulus in view. Automated quantitative performance metrics were successfully developed and implemented, and construct validity was established by discriminating between expert and novice participants.
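
    The reported statistical comparison (a Mann-Whitney U test per metric, with significance at p < 0.002, i.e., a Bonferroni correction of 0.01 over five metrics) can be sketched as follows; the scores are synthetic stand-ins, not the study data.

        from scipy.stats import mannwhitneyu

        # Hypothetical expert vs. novice scores for two of the five metrics.
        metrics = {
            "time_to_completion_s": ([62, 58, 71, 66], [118, 140, 126, 131]),
            "errors_per_trial":     ([1, 0, 1, 2],     [3, 4, 2, 5]),
        }
        alpha = 0.01 / 5      # Bonferroni correction over the five primary metrics

        for name, (experts, novices) in metrics.items():
            stat, p = mannwhitneyu(experts, novices, alternative="two-sided")
            print(f"{name}: U={stat:.1f}, p={p:.4f}, significant={p < alpha}")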

  9. Increasing Flexibility and Agility at the National Reconnaissance Office: Lessons from Modular Design, Occupational Surprise, and Commercial Research and Development Processes

    DTIC Science & Technology

    2013-01-01

    with newly evolved technology. An example from the music industry, which was brought to our attention by one of the modularity experts with whom we... Super CD. Yet, Super CDs do not exist today. What happened? The reason that Super CDs have not been commercialized is because the music industry got it... music industry either failed to identify this parallelism or simply followed the wrong metric. The lesson is that predicting future needs is not

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The Department of Energy (DOE) has contracted with Asea Brown Boveri-Combustion Engineering (ABB-CE) to provide information on the capability of ABB-CE's System 80+ Advanced Light Water Reactor (ALWR) to transform, through reactor burnup, 100 metric tonnes (MT) of weapons-grade plutonium (Pu) into a form which is not readily useable in weapons. This information is being developed as part of DOE's Plutonium Disposition Study, initiated by DOE in response to Congressional action. This document, Volume 2, provides a discussion of: Plutonium Fuel Cycle; Technology Needs; Regulatory Considerations; Cost and Schedule Estimates; and Deployment Strategy.

  11. Getting to low-cost algal biofuels: A monograph on conventional and cutting-edge harvesting and extraction technologies

    DOE PAGES

    Coons, James E.; Kalb, Daniel M.; Dale, Taraka; ...

    2014-08-31

    Among the most formidable challenges to algal biofuels is the ability to harvest algae and extract intracellular lipids at low cost and with a positive energy balance. Here, we construct two paradigms that contrast energy requirements and costs of conventional and cutting-edge Harvesting and Extraction (H&E) technologies. By application of the parity criterion and the moderate condition reference state, an energy–cost paradigm is created that allows 1st stage harvesting technologies to be compared with easy reference to the National Alliance for Advanced Biofuels and Bioproducts (NAABB) target of $0.013/gallon of gasoline equivalent (GGE) and to the U.S. DOE's Bioenergy Technologies Office 2022 cost metrics. Drawing from the moderate condition reference state, a concentration-dependency paradigm is developed for extraction technologies, making easier comparison to the National Algal Biofuels Technology Roadmap (NABTR) target of less than 10% total energy. This monograph identifies cost-bearing factors for a variety of H&E technologies, describes a design basis for ultrasonic harvesters, and provides a framework to measure future technological advancements toward reducing H&E costs. Finally, we show that ultrasonic harvesters and extractors are uniquely capable of meeting both NAABB and NABTR targets. Ultrasonic technologies require further development and scale-up before they can achieve low-cost performance at industrially relevant scales. But, the advancement of this technology would greatly reduce H&E costs and accelerate the commercial viability of algae-based biofuels.

  12. Word Maturity: A New Metric for Word Knowledge

    ERIC Educational Resources Information Center

    Landauer, Thomas K.; Kireyev, Kirill; Panaccione, Charles

    2011-01-01

    A new metric, Word Maturity, estimates the development by individual students of knowledge of every word in a large corpus. The metric is constructed by Latent Semantic Analysis modeling of word knowledge as a function of the reading that a simulated learner has done and is calibrated by its developing closeness in information content to that of a…

  13. ECONOMIC AND ENVIRONMENTAL ANALYSIS OF TECHNOLOGIES TO TREAT MERCURY AND DISPOSE IN A MONOFILL

    EPA Science Inventory

    If all of the chlor alkali plants in the world shut down, it is estimated that 25-30,000 metric tons of mercury would be available worldwide. This presentation is intended to describe the economic and environmental analysis of a number of technologies for the long term management...

  14. Market Assessment of Forward-Looking Turbulence Sensing Systems

    NASA Technical Reports Server (NTRS)

    Kauffmann, Paul; Sousa-Poza, Andres

    2001-01-01

    In recognition of the importance of turbulence mitigation as a tool to improve aviation safety, NASA's Aviation Safety Program developed a Turbulence Detection and Mitigation Sub-element. The objective of this effort is to develop highly reliable turbulence detection technologies for commercial transport aircraft that sense dangerous turbulence with sufficient warning time so that defensive measures can be implemented to prevent passenger and crew injuries. Current research involves three forward sensing products to improve cockpit awareness of possible turbulence hazards. X-band radar enhancements will improve the capabilities of current weather radar to detect turbulence associated with convective activity. LIDAR (Light Detection and Ranging) is a laser-based technology that is capable of detecting turbulence in clear air. Finally, a possible Radar-LIDAR hybrid sensor is envisioned to detect the full range of convective and clear air turbulence. To support decisions relating to the development of these three forward-looking turbulence sensor technologies, the objective of this study was to examine cost and implementation metrics. Tasks performed included the identification of cost factors and certification issues, the development and application of an implementation model, and the development of cost budgets/targets for installing the turbulence sensors and associated software into the commercial transport fleet.

  15. Summer temperature metrics for predicting brook trout (Salvelinus fontinalis) distribution in streams

    USGS Publications Warehouse

    Parrish, Donna; Butryn, Ryan S.; Rizzo, Donna M.

    2012-01-01

    We developed a methodology to predict brook trout (Salvelinus fontinalis) distribution using summer temperature metrics as predictor variables. Our analysis used long-term fish and hourly water temperature data from the Dog River, Vermont (USA). Commonly used metrics (e.g., mean, maximum, maximum 7-day maximum) tend to smooth the data, so information on temperature variation is lost. Therefore, we developed a new set of metrics (called event metrics) to capture temperature variation by describing the frequency, area, duration, and magnitude of events that exceeded a user-defined temperature threshold. We used thresholds of 16, 18, 20, and 22°C. We built linear discriminant models and tested and compared the event metrics against the commonly used metrics. Correct classification of the observations was 66% with event metrics and 87% with commonly used metrics; however, the combined event and commonly used metrics correctly classified 92%. Of the four individual temperature thresholds, it was difficult to assess which had the “best” accuracy: the 16°C threshold had slightly fewer misclassifications, while the 20°C threshold had the fewest extreme misclassifications. Our method leveraged the volumes of existing long-term data and provided a simple, systematic, and adaptable framework for monitoring changes in fish distribution, specifically in the case of irregular, extreme temperature events.
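
    The event metrics lend themselves to a short sketch: for one threshold, find contiguous runs of hourly temperatures above the threshold and report their frequency, durations, magnitudes, and areas in degree-hours. The data below are synthetic; only the thresholds follow the study.

        import numpy as np

        def event_metrics(temps_c, threshold):
            above = temps_c > threshold
            # Edges of contiguous exceedance runs: +1 marks a start, -1 an end.
            edges = np.diff(above.astype(int), prepend=0, append=0)
            starts = np.where(edges == 1)[0]
            ends = np.where(edges == -1)[0]
            return {
                "frequency": len(starts),
                "duration_h": (ends - starts).tolist(),
                "magnitude_c": [float(temps_c[s:e].max() - threshold)
                                for s, e in zip(starts, ends)],
                "area_ch": [float((temps_c[s:e] - threshold).sum())
                            for s, e in zip(starts, ends)],
            }

        rng = np.random.default_rng(7)
        temps = 18.0 + 4.0 * np.sin(np.linspace(0.0, 6.0 * np.pi, 72)) + rng.normal(0.0, 1.0, 72)
        print(event_metrics(temps, threshold=20.0))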

  16. Does cone beam CT actually ameliorate stab wound analysis in bone?

    PubMed

    Gaudio, D; Di Giancamillo, M; Gibelli, D; Galassi, A; Cerutti, E; Cattaneo, C

    2014-01-01

    This study aims at verifying the potential of a recent radiological technology, cone beam CT (CBCT), for producing digital 3D models that allow the user to examine the inner morphology of sharp-force wounds within bone tissue. Several sharp-force wounds were produced by both single and double cutting-edge weapons on cancellous and cortical bone and then acquired by cone beam CT scan. The lesions were analysed with different software (a DICOM file viewer and reverse engineering software). Results verified the limited performance of this technology for lesions on cortical bone, whereas reliable models were obtained for cancellous bone, in which the precise morphology within the bone tissue was visible. On the basis of these results, a method for the differential diagnosis of cutmarks made by sharp tools with one versus two cutting edges can be proposed. On the other hand, the computerised metric analysis of lesions highlights a clear increase in error range for measurements under 3 mm. Metric data taken by different operators show a strong dispersion (percent relative standard deviation). This pilot study shows that the use of CBCT technology can improve the investigation of the morphology of stab wounds on cancellous bone. Conversely, metric analysis of the lesions, as well as morphological analysis of wound dimensions under 3 mm, does not seem to be reliable.

  17. Quality measurement for rhinosinusitis: a review from the Quality Improvement Committee of the American Rhinologic Society.

    PubMed

    Rudmik, Luke; Mattos, Jose; Schneider, John; Manes, Peter R; Stokken, Janalee K; Lee, Jivianne; Higgins, Thomas S; Schlosser, Rodney J; Reh, Douglas D; Setzen, Michael; Soler, Zachary M

    2017-09-01

    Measuring quality outcomes is an important prerequisite to improve quality of care. Rhinosinusitis represents a high value target to improve quality of care because it has a high prevalence of disease, large economic burden, and large practice variation. In this study we review the current state of quality measurement for management of both acute (ARS) and chronic rhinosinusitis (CRS). The major national quality metric repositories and clearinghouses were queried. Additional searches included the American Academy of Otolaryngology-Head and Neck Surgery database, PubMed, and Google to attempt to capture any additional quality metrics. Seven quality metrics for ARS and 4 quality metrics for CRS were identified. ARS metrics focused on appropriateness of diagnosis (n = 1), antibiotic prescribing (n = 4), and radiologic imaging (n = 2). CRS quality metrics focused on appropriateness of diagnosis (n = 1), radiologic imaging (n = 1), and measurement of patient quality of life (n = 2). The Physician Quality Reporting System (PQRS) currently tracks 3 ARS quality metrics and 1 CRS quality metric. There are no outcome-based rhinosinusitis quality metrics and no metrics that assess domains of safety, patient-centeredness, and timeliness of care. The current status of quality measurement for rhinosinusitis has focused primarily on the quality domain of efficiency and process measures for ARS. More work is needed to develop, validate, and track outcome-based quality metrics along with CRS-specific metrics. Although there has been excellent work done to improve quality measurement for rhinosinusitis, there remain major gaps and challenges that need to be considered during the development of future metrics. © 2017 ARS-AAOA, LLC.

  18. What Research Says to the Teacher: Metric Education.

    ERIC Educational Resources Information Center

    Goldbecker, Sheralyn S.

    How measurement systems developed is briefly reviewed, followed by comments on the international conversion to the metric system and a lengthier discussion of the history of the metric controversy in the U.S. Statements made by supporters of the customary and metric systems are listed. The role of education is detailed in terms of teacher…

  19. 24 CFR 84.15 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    Title 24 (Housing and Urban Development), 2014-04-01 edition: ... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS, Pre-Award Requirements, § 84.15 Metric system of measurement ... declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act ...

  20. 24 CFR 84.15 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    Title 24 (Housing and Urban Development), 2011-04-01 edition: ... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS, Pre-Award Requirements, § 84.15 Metric system of measurement ... declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act ...

  1. 24 CFR 84.15 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    Title 24 (Housing and Urban Development), 2013-04-01 edition: ... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS, Pre-Award Requirements, § 84.15 Metric system of measurement ... declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act ...

  2. 24 CFR 84.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    Title 24 (Housing and Urban Development), 2012-04-01 edition: ... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS, Pre-Award Requirements, § 84.15 Metric system of measurement ... declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act ...

  3. 24 CFR 84.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Title 24 (Housing and Urban Development), 2010-04-01 edition: ... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS, Pre-Award Requirements, § 84.15 Metric system of measurement ... declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act ...

  4. NATO Code of Best Practice for Command and Control Assessment (Code OTAN des meilleures pratiques pour l’evaluation du commandement et du controle)

    DTIC Science & Technology

    2004-01-01

    Based Research Inc. ... (Sarin, 2000). Collaboration C2 Metrics: The following collaboration metrics have evolved out of work done by Evidence Based Research, Inc. for the ... Enemy, Terrain, Troops, Time, and Civil considerations. OOTW: Operations Other Than War. PESTLE: Political, Economic, Social, Technological

  5. Social image quality

    NASA Astrophysics Data System (ADS)

    Qiu, Guoping; Kheiri, Ahmed

    2011-01-01

    Current subjective image quality assessments have been developed in laboratory environments, under controlled conditions, and are dependent on the participation of limited numbers of observers. In this research, with the help of Web 2.0 and social media technology, a new method for building a subjective image quality metric has been developed in which the observers are Internet users. A website with a simple user interface has been constructed that enables Internet users from anywhere, at any time, to vote for the better-quality version of a pair of the same image. Users' votes are recorded and used to rank the images according to their perceived visual qualities. We have developed three rank aggregation algorithms to process the recorded pair-comparison data: the first uses a naive approach, the second employs a Condorcet method, and the third uses the Dykstra extension of the Bradley-Terry method. The website has been collecting data for about three months and has accumulated over 10,000 votes at the time of writing this paper. Results show that the Internet and its allied technologies, such as crowdsourcing, offer a promising new paradigm for image and video quality assessment in which hundreds of thousands of Internet users can contribute to building more robust image quality metrics. We have made the Internet-user-generated social image quality (SIQ) data of a public image database available online (http://www.hdri.cs.nott.ac.uk/siq/) to provide the image quality research community with a new source of ground truth data. The website continues to collect votes and will include more public image databases; it will also be extended to include videos to collect social video quality (SVQ) data. All data will be made publicly available on the website in due course.
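
    For the pair-comparison data, a minimal Bradley-Terry fit conveys the idea (a standard iterative algorithm; the Dykstra extension used in the paper is not reproduced here). In the sketch, wins[i][j] counts votes preferring image i over image j.

        import numpy as np

        def bradley_terry(wins, iters=200):
            """Zermelo/MM iteration for Bradley-Terry strength parameters."""
            n = wins.shape[0]
            p = np.ones(n)
            for _ in range(iters):
                for i in range(n):
                    num = wins[i].sum()
                    den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                              for j in range(n) if j != i)
                    if den > 0:
                        p[i] = num / den
                p /= p.sum()
            return p

        wins = np.array([[0, 8, 9], [2, 0, 6], [1, 4, 0]], dtype=float)
        print(np.argsort(-bradley_terry(wins)))    # images ranked best to worst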

  6. SU-F-E-01: Pitfalls: Reliability and Performance of Diagnostic X-Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behling, R

    2016-06-15

    Purpose: Performance and reliability of medical X-ray tubes for imaging are crucial from ethical, clinical, and economic perspectives. This lecture will deliver insight into the aspects to consider during the decision-making process for investing in X-ray imaging equipment. Outdated metrics still hamper realistic product comparison. It is time to change this and to comply with the latest standards, which consider current technology. Failure modes and ways to avoid down-time of the equipment shall be discussed. In view of the increasing number of interventional procedures and the hazards associated with ionizing radiation, toxic contrast agents, and the combination thereof, the aspect of system reliability is of paramount importance. Methods: A comprehensive picture of trends for different modalities (CT, angiography, general radiology) has been drawn and has led to the development of novel X-ray tube technology. Results: Recent X-ray tubes feature enhanced reliability and unprecedented performance. Relevant metrics for product comparison still have to be implemented in practice. Conclusion: The speed of scientific and industrial development of new diagnostic and therapeutic X-ray sources remains tremendous. Still, users suffer from gaps between desire and reality in day-to-day diagnostic routine. X-ray sources still limit cutting-edge medical procedures. Side effects of wear and tear, limitations of the clinical workflow, costs, the characteristics of the X-ray spectrum, and other topics need to be further addressed. New applications and modalities, like detection-based color-resolved X-ray and phase-contrast/dark-field imaging, will impact the course of new developments of X-ray sources. The author is an employee of Royal Philips.

  7. Comparative Simulation Study of Glucose Control Methods Designed for Use in the Intensive Care Unit Setting via a Novel Controller Scoring Metric.

    PubMed

    DeJournett, Jeremy; DeJournett, Leon

    2017-11-01

    Effective glucose control in the intensive care unit (ICU) setting has the potential to decrease morbidity and mortality rates and thereby decrease health care expenditures. To evaluate what constitutes effective glucose control, typically several metrics are reported, including time in range, time in mild and severe hypoglycemia, coefficient of variation, and others. To date, there is no one metric that combines all of these individual metrics to give a number indicative of overall performance. We proposed a composite metric that combines 5 commonly reported metrics, and we used this composite metric to compare 6 glucose controllers. We evaluated the following controllers: Ideal Medical Technologies (IMT) artificial-intelligence-based controller, Yale protocol, Glucommander, Wintergerst et al. PID controller, GRIP, and NICE-SUGAR. We evaluated each controller across 80 simulated patients, 4 clinically relevant exogenous dextrose infusions, and one nonclinical infusion as a test of the controller's ability to handle difficult situations. This gave a total of 2400 5-day simulations, and 585,604 individual glucose values for analysis. We used a random walk sensor error model that gave a 10% MARD. For each controller, we calculated severe hypoglycemia (<40 mg/dL), mild hypoglycemia (40-69 mg/dL), normoglycemia (70-140 mg/dL), hyperglycemia (>140 mg/dL), and coefficient of variation (CV), as well as our novel controller metric. For the controllers tested, we achieved the following median values for our novel controller scoring metric: IMT: 88.1, YALE: 46.7, GLUC: 47.2, PID: 50, GRIP: 48.2, NICE: 46.4. The novel scoring metric employed in this study shows promise as a means for evaluating new and existing ICU-based glucose controllers, and it could be used in the future to compare results of glucose control studies in critical care. The IMT AI-based glucose controller demonstrated the most consistent performance results based on this new metric.
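
    The five individual metrics are straightforward to compute from a glucose trace, as sketched below; the paper's exact formula for combining them into the composite score is not given here, so no combination is attempted.

        import numpy as np

        def glucose_metrics(glucose_mg_dl):
            """Percent time in each glycemic band plus coefficient of variation."""
            g = np.asarray(glucose_mg_dl, dtype=float)
            return {
                "severe_hypo_pct": 100.0 * np.mean(g < 40),
                "mild_hypo_pct":   100.0 * np.mean((g >= 40) & (g < 70)),
                "normo_pct":       100.0 * np.mean((g >= 70) & (g <= 140)),
                "hyper_pct":       100.0 * np.mean(g > 140),
                "cv_pct":          100.0 * g.std() / g.mean(),
            }

        trace = np.random.default_rng(3).normal(115.0, 18.0, 1440)  # simulated samples
        print(glucose_metrics(trace))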

  8. Thought Leaders during Crises in Massive Social Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corley, Courtney D.; Farber, Robert M.; Reynolds, William

    The vast amount of social media data that can be gathered from the internet, coupled with workflows that utilize both commodity systems and massively parallel supercomputers such as the Cray XMT, opens new vistas for research to support health, defense, and national security. Computer technology now enables the analysis of graph structures containing more than 4 billion vertices joined by 34 billion edges, along with metrics and massively parallel algorithms that exhibit near-linear scalability with the number of processors. The challenge lies in making this massive data and analysis comprehensible to analysts and end-users who require actionable knowledge to carry out their duties. Simply stated, we have developed language- and content-agnostic techniques to reduce large graphs built from vast media corpora into forms people can understand. Specifically, our tools and metrics act as a survey tool to identify 'thought leaders' -- those members that lead or reflect the thoughts and opinions of an online community, independent of the source language.
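
    The report's language- and content-agnostic metrics are not spelled out in this abstract. As a rough illustration of the idea, the sketch below ranks members of a directed interaction graph by PageRank centrality (via the networkx package) as a stand-in "thought leader" score; the graph and the choice of PageRank are assumptions for demonstration only.

        import networkx as nx

        # Rank community members by PageRank over a directed interaction graph
        # (edges point from a replier/reader to the member they engage with).
        # PageRank is an illustrative proxy, not the report's own metric.
        def thought_leaders(edges, top_k=3):
            g = nx.DiGraph()
            g.add_edges_from(edges)
            scores = nx.pagerank(g, alpha=0.85)
            return sorted(scores, key=scores.get, reverse=True)[:top_k]

        # "B" is engaged with by everyone else, so it surfaces first.
        print(thought_leaders([("A", "B"), ("C", "B"), ("D", "B"), ("B", "C")]))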

  9. Addressing the Real-World Challenges in the Development of Propulsion IVHM Technology Experiment (PITEX)

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Chicatelli, Amy; Fulton, Christopher E.; Balaban, Edward; Sweet, Adam; Hayden, Sandra Claire; Bajwa, Anupa

    2005-01-01

    The Propulsion IVHM Technology Experiment (PITEX) has been an ongoing research effort conducted over several years. PITEX has developed and applied a model-based diagnostic system for the main propulsion system of the X-34 reusable launch vehicle, a space-launch technology demonstrator. The application was simulation-based, using detailed models of the propulsion subsystem to generate nominal and failure scenarios during captive carry, which is the most safety-critical portion of the X-34 flight. Since no system-level testing of the X-34 Main Propulsion System (MPS) was performed, these simulated data were used to verify and validate the software system. Advanced diagnostic and signal processing algorithms were developed and tested in real time on flight-like hardware. In an attempt to expose potential performance problems, the PITEX algorithms were subjected to numerous real-world effects in the simulated data, including noise, sensor resolution, command/valve talkback information, and nominal build variations. The current research has demonstrated the potential benefits of model-based diagnostics, defined the performance metrics required to evaluate the diagnostic system, and studied the impact of real-world challenges encountered when monitoring propulsion subsystems.
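
    The PITEX software itself is not reproduced here; the sketch below merely illustrates two of the real-world effects the abstract says were injected into the simulated data, namely additive sensor noise and finite sensor resolution. All parameter values are invented.

        import numpy as np

        # Degrade a clean simulated sensor trace with additive Gaussian noise,
        # then quantize it to a finite sensor resolution.
        def add_real_world_effects(signal, noise_std=0.5, resolution=0.25, seed=0):
            rng = np.random.default_rng(seed)
            noisy = signal + rng.normal(0.0, noise_std, size=signal.shape)
            return np.round(noisy / resolution) * resolution

        clean = np.linspace(100.0, 102.0, 5)   # e.g., a slowly rising pressure trace
        print(add_real_world_effects(clean))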

  10. Resilience Metrics for the Electric Power System: A Performance-Based Approach.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vugrin, Eric D.; Castillo, Andrea R; Silva-Monroy, Cesar Augusto

    Grid resilience is a concept related to a power system's ability to continue operating and delivering power even when low-probability, high-consequence disruptions such as hurricanes, earthquakes, and cyber-attacks occur. Grid resilience objectives focus on managing and, ideally, minimizing potential consequences that occur as a result of these disruptions. Currently, no formal grid resilience definitions, metrics, or analysis methods have been universally accepted. This document describes an effort to develop and describe grid resilience metrics and analysis methods. The metrics and methods described herein extend the Resilience Analysis Process (RAP) developed by Watson et al. for the 2015 Quadrennial Energy Review. The extension allows both outputs from system models and historical data to serve as the basis for creating grid resilience metrics and informing grid resilience planning and response decision-making. Demonstration of the metrics and methods is shown through a set of illustrative use cases.
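
    As a toy illustration of a performance-based resilience metric of the kind described, the consequence of a disruption class can be summarized as a probability-weighted expectation over scenarios. The scenario data below are invented, and unserved energy is just one possible consequence measure.

        # Expected consequence of a hurricane-class disruption, from (invented)
        # scenario probabilities and unserved energy in MWh. Scenario data may
        # come from system models or from historical records.
        scenarios = [
            {"p": 0.60, "unserved_mwh": 0.0},    # disruption absorbed
            {"p": 0.30, "unserved_mwh": 120.0},  # partial, quickly restored outage
            {"p": 0.10, "unserved_mwh": 900.0},  # severe, slow-recovery outage
        ]
        expected = sum(s["p"] * s["unserved_mwh"] for s in scenarios)
        print(f"expected unserved energy: {expected:.0f} MWh")   # 126 MWh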

  11. Hybrid Communication Architectures for Distributed Smart Grid Applications

    DOE PAGES

    Zhang, Jianhua; Hasandka, Adarsh; Wei, Jin; ...

    2018-04-09

    Wired and wireless communications both play an important role in the blend of communications technologies necessary to enable future smart grid communications. Hybrid networks exploit independent mediums to extend network coverage and improve performance. However, while individual technologies have been applied in simulated networks, to our knowledge little attention has been paid to developing a suite of hybrid communication simulation models for communications system design. Hybrid simulation models are needed to capture the mixed communication technologies and IP addressing mechanisms in one simulation. To close this gap, we have developed a suite of hybrid communication system simulation models to validate the critical system design criteria for a distributed solar photovoltaic (PV) communications system: a single-trip latency of 300 ms, a throughput of 9.6 kbps, and a packet loss rate of 1%. The results show that three low-power wireless personal area network (LoWPAN)-based hybrid architectures can satisfy these three performance metrics, which are critical for distributed energy resource communications.
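
    A trivial check of the three quoted design criteria might look like the sketch below; the measured values passed in are placeholders, not results from the paper.

        # The three critical design criteria from the abstract, checked against
        # (hypothetical) measured performance.
        CRITERIA = {"latency_ms": 300.0, "throughput_kbps": 9.6, "loss_pct": 1.0}

        def meets_criteria(latency_ms, throughput_kbps, loss_pct):
            return (latency_ms <= CRITERIA["latency_ms"] and
                    throughput_kbps >= CRITERIA["throughput_kbps"] and
                    loss_pct <= CRITERIA["loss_pct"])

        print(meets_criteria(latency_ms=180.0, throughput_kbps=11.2, loss_pct=0.4))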

  12. Hybrid Communication Architectures for Distributed Smart Grid Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jianhua; Hasandka, Adarsh; Wei, Jin

    Wired and wireless communications both play an important role in the blend of communications technologies necessary to enable future smart grid communications. Hybrid networks exploit independent mediums to extend network coverage and improve performance. However, while individual technologies have been applied in simulated networks, to our knowledge little attention has been paid to developing a suite of hybrid communication simulation models for communications system design. Hybrid simulation models are needed to capture the mixed communication technologies and IP addressing mechanisms in one simulation. To close this gap, we have developed a suite of hybrid communication system simulation models to validate the critical system design criteria for a distributed solar photovoltaic (PV) communications system: a single-trip latency of 300 ms, a throughput of 9.6 kbps, and a packet loss rate of 1%. The results show that three low-power wireless personal area network (LoWPAN)-based hybrid architectures can satisfy these three performance metrics, which are critical for distributed energy resource communications.

  13. Implicit Contractive Mappings in Modular Metric and Fuzzy Metric Spaces

    PubMed Central

    Hussain, N.; Salimi, P.

    2014-01-01

    The notion of modular metric spaces, a natural generalization of classical modulars over linear spaces such as Lebesgue, Orlicz, Musielak-Orlicz, Lorentz, Orlicz-Lorentz, and Calderon-Lozanovskii spaces, was recently introduced. In this paper we investigate the existence of fixed points of generalized α-admissible modular contractive mappings in modular metric spaces. As applications, we derive some new fixed point theorems in partially ordered modular metric spaces, Suzuki-type fixed point theorems in modular metric spaces, and new fixed point theorems for integral contractions. In the last section, we develop an important relation between fuzzy metrics and modular metrics and deduce certain new fixed point results in triangular fuzzy metric spaces. Moreover, some examples are provided to illustrate the usability of the obtained results. PMID:25003157
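
    For readers unfamiliar with the notion, the standard axioms of a modular metric w on a set X are reproduced below in Chistyakov's formulation; the paper's own notation may differ.

        % Modular metric axioms, writing w_\lambda(x,y) for w(\lambda,x,y),
        % where w : (0,\infty) \times X \times X \to [0,\infty].
        \begin{align*}
          &\text{(i)}\;\; x = y \iff w_\lambda(x,y) = 0 \text{ for all } \lambda > 0;\\
          &\text{(ii)}\;\; w_\lambda(x,y) = w_\lambda(y,x) \text{ for all } \lambda > 0,\ x,y \in X;\\
          &\text{(iii)}\;\; w_{\lambda+\mu}(x,z) \le w_\lambda(x,y) + w_\mu(y,z)
             \text{ for all } \lambda,\mu > 0,\ x,y,z \in X.
        \end{align*}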

  14. The Role of Science and Technology in Competitiveness. Hearings before the Subcommittee on Science, Research and Technology of the Committee on Science, Space, and Technology, House of Representatives. One Hundredth Congress, First Session, April 28-30, 1987.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Science, Space and Technology.

    This document contains the text of three days of hearings before the United States House of Representatives Subcommittee on Science, Research, and Technology. The hearings were held to consider the role of science and engineering in world competitiveness, particularly with regard to the United States' nonmetric posture in a metric world. Testimony…

  15. Summary Report on Federal Laboratory Technology Transfer: FY 2003 Activity Metrics and Outcomes. 2004 Report to the President and the Congress under the Technology Transfer and Commercialization Act

    DTIC Science & Technology

    2004-12-01

    Agency, FY 1999-2003. Table 1.1 – Overview of the Types of Information on Federal Lab Technology Transfer Collected in the... invention disclosure, patenting, and licensing. ...results. In addition, ARS hosts a Textile Manufacturing Symposium and a Cotton Ginning Symposium at gin and textile labs to benefit county extension...

  16. Aerothermal Instrumentation Loads To Implement Aeroassist Technology in Future Robotic and Human Missions to MARS and Other Locations Within the Solar System

    NASA Technical Reports Server (NTRS)

    Parmar, Devendra S.; Shams, Qamar A.

    2002-01-01

    The strategy of NASA to explore space objects in the vicinity of Earth and other planets of the solar system includes robotic and human missions. This strategy requires a road map for technology development that will support robotic exploration and provide safety for the humans traveling to other celestial bodies. Aeroassist is one of the key elements of technology planning for the success of future robotic and human exploration missions to other celestial bodies. Measurement of aerothermodynamic parameters such as temperature, pressure, and acceleration is of prime importance for aeroassist technology implementation and for the safety and affordability of the mission. This report reviews instrumentation and methods to measure such parameters in view of past practices, the current commercial availability of instrumentation technology, and the prospects for improvement and upgrade according to the requirements. The usability of each identified instrument is analyzed in terms of cost, weight-volume ratio, power requirements, accuracy, sample rate, and other appropriate metrics such as harsh-environment survivability.

  17. Imaginable Technologies for Human Missions to Mars

    NASA Technical Reports Server (NTRS)

    Bushnell, Dennis M.

    2007-01-01

    The thesis of the present discussion is that the simultaneous cost and inherent safety issues of human on-site exploration of Mars will require advanced-to-revolutionary technologies. The major crew safety issues as currently identified include reduced gravity, radiation, potentially extremely toxic dust, and the requisite reliability for years-long missions. Additionally, this discussion examines various technological areas which could significantly impact Human-Mars cost and safety. Cost reduction for space access is a major metric, including approaches to significantly reduce the overall up-mass. Besides fuel, propulsion, and power systems, the up-mass consists of the infrastructure and supplies required to keep humans healthy and the equipment for executing exploration mission tasks. Hence, the major technological areas of interest for potential cost reductions include propulsion, in-space and on-planet power, life support systems, materials, and overall architecture, systems, and systems-of-systems approaches. This discussion is specifically offered in response to and as a contribution to goal 3 of the Presidential Exploration Vision: "Develop the innovative technologies, knowledge, and infrastructures both to explore and to support decisions about the destinations for human exploration".

  18. TH-E-9A-01: Medical Physics 1.0 to 2.0, Session 4: Computed Tomography, Ultrasound and Nuclear Medicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samei, E; Nelson, J; Hangiandreou, N

    Medical Physics 2.0 is a bold vision for an existential transition of clinical imaging physics in the face of the new realities of value-based and evidence-based medicine, comparative effectiveness, and meaningful use. It speaks to how clinical imaging physics can expand beyond traditional insular models of inspection and acceptance testing, oriented toward compliance, towards team-based models of operational engagement, prospective definition and assurance of effective use, and retrospective evaluation of clinical performance. Organized into four sessions of the AAPM, this particular session focuses on three specific modalities as outlined below. CT 2.0: CT has been undergoing a dramatic transition in the last few decades. While the changes in the technology merit discussions of their own, an important question is how clinical medical physicists are expected to effectively engage with the new realities of CT technology and practice. Consistent with the upcoming paradigm of Medical Physics 2.0, this CT presentation aims to provide definitions and demonstrations of the components of the new clinical medical physics practice pertaining to CT. The topics covered include physics metrics and analytics that aim to provide higher-order, clinically relevant quantification of system performance as it pertains to new (and not so new) technologies. These include the new radiation and dose metrics (SSDE, organ dose, risk indices), image quality metrology (MTF/NPS/d'), task-based phantoms, and the effect of patient size. That will be followed by a discussion of the testing implications of new CT hardware (detectors, tubes), acquisition methods (innovative helical geometries, AEC, wide-beam CT, dual energy, inverse geometry, application specialties), and image processing and analysis (iterative reconstructions, quantitative CT, advanced renditions). The presentation will conclude with a discussion of clinical and operational aspects of Medical Physics 2.0, including training and communication, use optimization (dose and technique factors), automated analysis and data management (automated QC methods, protocol tracking, dose monitoring, issue tracking), and meaningful QC considerations. US 2.0: Ultrasound imaging is evolving at a rapid pace, adding new imaging functions and modes that continue to enhance its clinical utility and benefits to patients. The ultrasound talk will look ahead 10-15 years and consider how medical physicists can bring maximal value to the clinical ultrasound practices of the future. The roles of physics in accreditation and regulatory compliance, image quality and exam optimization, clinical innovation, and education of staff and trainees will all be considered. A detailed examination of expected technology evolution and impact on image quality metrics will be presented. Clinical implementation of comprehensive physics services will also be discussed. Nuclear Medicine 2.0: Although the basic science of nuclear imaging has remained relatively unchanged since its inception, advances in instrumentation continue to advance the field into new territories. With a great number of these advances occurring over the past decade, the role and testing strategies of clinical nuclear medicine physicists must evolve in parallel. The Nuclear Medicine 2.0 presentation is designed to highlight some of the recent advances from a clinical medical physicist perspective and to provide ideas and motivation for designing better evaluation strategies. Topics include improvement of traditional physics metrics and analytics, testing implications of hybrid imaging and advanced detector technologies, and strategies for effective implementation into the clinic. Learning Objectives: Become familiar with new physics metrics and analytics in nuclear medicine, CT, and ultrasound. Become familiar with the major new developments in clinical physics support. Understand the physics testing implications of new technologies, hardware, software, and applications. Identify approaches for implementing comprehensive medical physics services in future imaging practices.

  19. Evaluation of Candidate Millimeter Wave Sensors for Synthetic Vision

    NASA Technical Reports Server (NTRS)

    Alexander, Neal T.; Hudson, Brian H.; Echard, Jim D.

    1994-01-01

    The goal of the Synthetic Vision Technology Demonstration Program was to demonstrate and document the capabilities of current technologies to achieve safe aircraft landing, take off, and ground operation in very low visibility conditions. Two of the major thrusts of the program were (1) sensor evaluation in measured weather conditions on a tower overlooking an unused airfield and (2) flight testing of sensor and pilot performance via a prototype system. The presentation first briefly addresses the overall technology thrusts and goals of the program and provides a summary of MMW sensor tower-test and flight-test data collection efforts. Data analysis and calibration procedures for both the tower tests and flight tests are presented. The remainder of the presentation addresses the MMW sensor flight-test evaluation results, including the processing approach for determination of various performance metrics (e.g., contrast, sharpness, and variability). The variation of the very important contrast metric in adverse weather conditions is described. Design trade-off considerations for Synthetic Vision MMW sensors are presented.
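
    The evaluation's exact contrast definition is not given in this summary. A common proxy, sketched below with synthetic data, is the Weber contrast of a target region (e.g., a runway) against its background; the array values and region choices are assumptions.

        import numpy as np

        # Weber contrast of a target region against background -- an illustrative
        # proxy for the program's contrast metric, not its actual definition.
        def weber_contrast(target, background):
            t, b = float(np.mean(target)), float(np.mean(background))
            return (t - b) / b

        rng = np.random.default_rng(1)
        frame = rng.uniform(0.2, 0.4, (64, 64))   # synthetic MMW image background
        frame[24:40, 24:40] += 0.3                # bright runway-like patch
        print(weber_contrast(frame[24:40, 24:40], frame[:16, :16]))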

  20. Real Time Metrics and Analysis of Integrated Arrival, Departure, and Surface Operations

    NASA Technical Reports Server (NTRS)

    Sharma, Shivanjli; Fergus, John

    2017-01-01

    To address the Integrated Arrival, Departure, and Surface (IADS) challenge, NASA is developing and demonstrating trajectory-based departure automation under a collaborative effort with the FAA and industry known as Airspace Technology Demonstration 2 (ATD-2). ATD-2 builds upon and integrates previous NASA research capabilities that include the Spot and Runway Departure Advisor (SARDA), the Precision Departure Release Capability (PDRC), and the Terminal Sequencing and Spacing (TSAS) capability. As trajectory-based departure scheduling and collaborative decision-making tools are introduced to reduce delays and uncertainties in taxi and climb operations across the National Airspace System, users of the tools across a number of roles benefit from a real-time system that enables common situational awareness. A real-time dashboard was developed to present users with notifications and integrated information regarding airport surface operations. The dashboard is a supplement to capabilities and tools that incorporate arrival, departure, and surface air-traffic operations concepts in a NextGen environment. In addition to shared situational awareness, the dashboard offers the ability to compute real-time metrics and analysis to inform users about the capacity, predictability, and efficiency of the system as a whole. This paper describes the architecture of the real-time dashboard as well as an initial proposed set of metrics. The potential impact of the real-time dashboard is studied at the site identified for initial deployment and demonstration in 2017: Charlotte-Douglas International Airport (CLT). The architecture of implementing such a tool as well as potential uses are presented for operations at CLT. Metrics computed in real time illustrate the opportunity to provide common situational awareness and inform users of system delay, throughput, taxi time, and airport capacity. In addition, common awareness of delays and the impact of takeoff and departure restrictions stemming from traffic flow management initiatives are explored. The potential of the real-time tool to inform users of the predictability and efficiency of using a trajectory-based departure scheduling system is also discussed.
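
    As an illustration of the kind of surface metrics such a dashboard computes, the sketch below derives mean taxi-out time and departure throughput from pushback and takeoff event timestamps. The timestamps are invented; this is not the ATD-2 implementation.

        from datetime import datetime

        # (pushback_time, takeoff_time) pairs for three hypothetical departures.
        events = [
            (datetime(2017, 5, 1, 13, 0), datetime(2017, 5, 1, 13, 14)),
            (datetime(2017, 5, 1, 13, 5), datetime(2017, 5, 1, 13, 21)),
            (datetime(2017, 5, 1, 13, 9), datetime(2017, 5, 1, 13, 27)),
        ]
        taxi_min = [(off - out).total_seconds() / 60 for out, off in events]
        window = max(off for _, off in events) - min(out for out, _ in events)
        print(f"mean taxi-out: {sum(taxi_min) / len(taxi_min):.1f} min")
        print(f"throughput: {len(events) / (window.total_seconds() / 3600):.1f} departures/hr")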

  1. Status and Outlook of Metric Conversion of Standards: The Views of Nine Selected Major Standards Development Bodies.

    DTIC Science & Technology

    1982-05-01

    insufficient need for a hard metric version of the ASME Boiler and Pressure Vessel Code, and industry would not support the metric version. The Code is not... aircraft industry is concerned with certification requirements in metric units. The inch-pound Boiler and Pressure Vessel Code is the current standard

  2. New light field camera based on physical based rendering tracing

    NASA Astrophysics Data System (ADS)

    Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung

    2014-03-01

    Even though light field technology was first invented more than 50 years ago, it did not gain popularity due to the limitations imposed by the computing technology of the time. With the rapid advancement of computer technology over the last decade, that limitation has been lifted, and light field technology has quickly returned to the spotlight of the research stage. In this paper, PBRT (Physical Based Rendering Tracing) was introduced to overcome the limitations of using a traditional optical simulation approach to study light field camera technology. More specifically, a traditional optical simulation approach can only present light energy distribution but typically lacks the capability to present pictures of realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation in creating realistic photos. Furthermore, we integrated optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provided us with a way to establish a link between the virtual scene and the real measurement results. Several images developed based on the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It will be shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. The detailed operational constraints, performance metrics, computation resources needed, etc., associated with this newly developed light field camera technique are presented in detail.

  3. Essential metrics for assessing sex & gender integration in health research proposals involving human participants.

    PubMed

    Day, Suzanne; Mason, Robin; Tannenbaum, Cara; Rochon, Paula A

    2017-01-01

    Integrating sex and gender in health research is essential to produce the best possible evidence to inform health care. Comprehensive integration of sex and gender requires considering these variables from the very beginning of the research process, starting at the proposal stage. To promote excellence in sex and gender integration, we have developed a set of metrics to assess the quality of sex and gender integration in research proposals. These metrics are designed to assist both researchers in developing proposals and reviewers in making funding decisions. We developed this tool through an iterative three-stage method involving 1) review of existing sex and gender integration resources and initial metrics design, 2) expert review and feedback via anonymous online survey (Likert scale and open-ended questions), and 3) analysis of feedback data and collective revision of the metrics. We received feedback on the initial metrics draft from 20 reviewers with expertise in conducting sex- and/or gender-based health research. The majority of reviewers responded positively to questions regarding the utility, clarity and completeness of the metrics, and all reviewers provided responses to open-ended questions about suggestions for improvements. Coding and analysis of responses identified three domains for improvement: clarifying terminology, refining content, and broadening applicability. Based on this analysis we revised the metrics into the Essential Metrics for Assessing Sex and Gender Integration in Health Research Proposals Involving Human Participants, which outlines criteria for excellence within each proposal component and provides illustrative examples to support implementation. By enhancing the quality of sex and gender integration in proposals, the metrics will help to foster comprehensive, meaningful integration of sex and gender throughout each stage of the research process, resulting in better quality evidence to inform health care for all.

  4. Metrics to assess injury prevention programs for young workers in high-risk occupations: a scoping review of the literature.

    PubMed

    Smith, Jennifer; Purewal, Birinder Praneet; Macpherson, Alison; Pike, Ian

    2018-05-01

    Despite legal protections for young workers in Canada, youth aged 15-24 are at high risk of traumatic occupational injury. While many injury prevention initiatives targeting young workers exist, the challenge faced by youth advocates and employers is deciding what aspect(s) of prevention will be the most effective focus for their efforts. A review of the academic and grey literature was undertaken to compile the metrics (both the indicators being evaluated and the methods of measurement) commonly used to assess injury prevention programs for young workers. Metrics are standards of measurement through which the efficiency, performance, progress, or quality of a plan, process, or product can be assessed. A PICO framework was used to develop search terms. Medline, PubMed, OVID, EMBASE, CCOHS, PsychINFO, CINAHL, NIOSHTIC, Google Scholar, and the grey literature were searched for articles in English published between 1975-2015. Two independent reviewers screened the resulting list and categorized the metrics in three domains of injury prevention: Education, Environment, and Enforcement. Of 174 acquired articles meeting the inclusion criteria, 21 both described and assessed an intervention. Half were educational in nature (N=11). Commonly assessed metrics included: knowledge, perceptions, self-reported behaviours or intentions, hazardous exposures, injury claims, and injury counts. One study outlined a method for developing metrics to predict injury rates. Metrics specific to the evaluation of young worker injury prevention programs are needed, as current metrics are insufficient to predict reduced injuries following program implementation. One study, which the review brought to light, could be an appropriate model for future research to develop valid leading metrics specific to young workers, and then apply these metrics to injury prevention programs for youth.

  5. Essential metrics for assessing sex & gender integration in health research proposals involving human participants

    PubMed Central

    Mason, Robin; Tannenbaum, Cara; Rochon, Paula A.

    2017-01-01

    Integrating sex and gender in health research is essential to produce the best possible evidence to inform health care. Comprehensive integration of sex and gender requires considering these variables from the very beginning of the research process, starting at the proposal stage. To promote excellence in sex and gender integration, we have developed a set of metrics to assess the quality of sex and gender integration in research proposals. These metrics are designed to assist both researchers in developing proposals and reviewers in making funding decisions. We developed this tool through an iterative three-stage method involving 1) review of existing sex and gender integration resources and initial metrics design, 2) expert review and feedback via anonymous online survey (Likert scale and open-ended questions), and 3) analysis of feedback data and collective revision of the metrics. We received feedback on the initial metrics draft from 20 reviewers with expertise in conducting sex- and/or gender-based health research. The majority of reviewers responded positively to questions regarding the utility, clarity and completeness of the metrics, and all reviewers provided responses to open-ended questions about suggestions for improvements. Coding and analysis of responses identified three domains for improvement: clarifying terminology, refining content, and broadening applicability. Based on this analysis we revised the metrics into the Essential Metrics for Assessing Sex and Gender Integration in Health Research Proposals Involving Human Participants, which outlines criteria for excellence within each proposal component and provides illustrative examples to support implementation. By enhancing the quality of sex and gender integration in proposals, the metrics will help to foster comprehensive, meaningful integration of sex and gender throughout each stage of the research process, resulting in better quality evidence to inform health care for all. PMID:28854192

  6. MJO simulation in CMIP5 climate models: MJO skill metrics and process-oriented diagnosis

    NASA Astrophysics Data System (ADS)

    Ahn, Min-Seop; Kim, Daehyun; Sperber, Kenneth R.; Kang, In-Sik; Maloney, Eric; Waliser, Duane; Hendon, Harry

    2017-12-01

    The Madden-Julian Oscillation (MJO) simulation diagnostics developed by the MJO Working Group and the process-oriented MJO simulation diagnostics developed by the MJO Task Force are applied to 37 Coupled Model Intercomparison Project phase 5 (CMIP5) models in order to assess model skill in representing amplitude, period, and coherent eastward propagation of the MJO, and to establish a link between MJO simulation skill and parameterized physical processes. Process-oriented diagnostics include the Relative Humidity Composite based on Precipitation (RHCP), Normalized Gross Moist Stability (NGMS), and the Greenhouse Enhancement Factor (GEF). Numerous scalar metrics are developed to quantify the results. Most CMIP5 models underestimate MJO amplitude, especially when outgoing longwave radiation (OLR) is used in the evaluation, and exhibit too fast phase speed while lacking coherence between eastward propagation of precipitation/convection and the wind field. The RHCP-metric, indicative of the sensitivity of simulated convection to low-level environmental moisture, and the NGMS-metric, indicative of the efficiency of a convective atmosphere for exporting moist static energy out of the column, show robust correlations with a large number of MJO skill metrics. The GEF-metric, indicative of the strength of the column-integrated longwave radiative heating due to cloud-radiation interaction, is also correlated with the MJO skill metrics, but shows relatively lower correlations compared to the RHCP- and NGMS-metrics. Our results suggest that modifications to processes associated with moisture-convection coupling and the gross moist stability might be the most fruitful for improving simulations of the MJO. Though the GEF-metric exhibits lower correlations with the MJO skill metrics, the longwave radiation feedback is highly relevant for simulating the weak precipitation anomaly regime that may be important for the establishment of shallow convection and the transition to deep convection.
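
    As one concrete example of a scalar metric of this kind, MJO amplitude can be scored as the ratio of simulated to observed standard deviation of band-pass-filtered OLR. The sketch below substitutes random series for the filtered data, so only the shape of the calculation is meaningful.

        import numpy as np

        # Scalar MJO "amplitude" metric: std of 20-100-day filtered model OLR
        # divided by the observed counterpart. Random stand-ins replace the
        # filtered series; a ratio below 1 corresponds to the underestimation
        # reported for most CMIP5 models.
        rng = np.random.default_rng(0)
        olr_obs = rng.normal(0.0, 15.0, 3650)   # stand-in: observed filtered OLR
        olr_mod = rng.normal(0.0, 10.0, 3650)   # stand-in: model filtered OLR
        print(f"amplitude metric: {np.std(olr_mod) / np.std(olr_obs):.2f}")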

  7. MJO simulation in CMIP5 climate models: MJO skill metrics and process-oriented diagnosis

    DOE PAGES

    Ahn, Min-Seop; Kim, Daehyun; Sperber, Kenneth R.; ...

    2017-03-23

    The Madden-Julian Oscillation (MJO) simulation diagnostics developed by the MJO Working Group and the process-oriented MJO simulation diagnostics developed by the MJO Task Force are applied to 37 Coupled Model Intercomparison Project phase 5 (CMIP5) models in order to assess model skill in representing amplitude, period, and coherent eastward propagation of the MJO, and to establish a link between MJO simulation skill and parameterized physical processes. Process-oriented diagnostics include the Relative Humidity Composite based on Precipitation (RHCP), Normalized Gross Moist Stability (NGMS), and the Greenhouse Enhancement Factor (GEF). Numerous scalar metrics are developed to quantify the results. Most CMIP5 models underestimate MJO amplitude, especially when outgoing longwave radiation (OLR) is used in the evaluation, and exhibit too fast phase speed while lacking coherence between eastward propagation of precipitation/convection and the wind field. The RHCP-metric, indicative of the sensitivity of simulated convection to low-level environmental moisture, and the NGMS-metric, indicative of the efficiency of a convective atmosphere for exporting moist static energy out of the column, show robust correlations with a large number of MJO skill metrics. The GEF-metric, indicative of the strength of the column-integrated longwave radiative heating due to cloud-radiation interaction, is also correlated with the MJO skill metrics, but shows relatively lower correlations compared to the RHCP- and NGMS-metrics. Our results suggest that modifications to processes associated with moisture-convection coupling and the gross moist stability might be the most fruitful for improving simulations of the MJO. Though the GEF-metric exhibits lower correlations with the MJO skill metrics, the longwave radiation feedback is highly relevant for simulating the weak precipitation anomaly regime that may be important for the establishment of shallow convection and the transition to deep convection.

  8. MJO simulation in CMIP5 climate models: MJO skill metrics and process-oriented diagnosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Min-Seop; Kim, Daehyun; Sperber, Kenneth R.

    The Madden-Julian Oscillation (MJO) simulation diagnostics developed by the MJO Working Group and the process-oriented MJO simulation diagnostics developed by the MJO Task Force are applied to 37 Coupled Model Intercomparison Project phase 5 (CMIP5) models in order to assess model skill in representing amplitude, period, and coherent eastward propagation of the MJO, and to establish a link between MJO simulation skill and parameterized physical processes. Process-oriented diagnostics include the Relative Humidity Composite based on Precipitation (RHCP), Normalized Gross Moist Stability (NGMS), and the Greenhouse Enhancement Factor (GEF). Numerous scalar metrics are developed to quantify the results. Most CMIP5 models underestimate MJO amplitude, especially when outgoing longwave radiation (OLR) is used in the evaluation, and exhibit too fast phase speed while lacking coherence between eastward propagation of precipitation/convection and the wind field. The RHCP-metric, indicative of the sensitivity of simulated convection to low-level environmental moisture, and the NGMS-metric, indicative of the efficiency of a convective atmosphere for exporting moist static energy out of the column, show robust correlations with a large number of MJO skill metrics. The GEF-metric, indicative of the strength of the column-integrated longwave radiative heating due to cloud-radiation interaction, is also correlated with the MJO skill metrics, but shows relatively lower correlations compared to the RHCP- and NGMS-metrics. Our results suggest that modifications to processes associated with moisture-convection coupling and the gross moist stability might be the most fruitful for improving simulations of the MJO. Though the GEF-metric exhibits lower correlations with the MJO skill metrics, the longwave radiation feedback is highly relevant for simulating the weak precipitation anomaly regime that may be important for the establishment of shallow convection and the transition to deep convection.

  9. A formulation of multidimensional growth models for the assessment and forecast of technology attributes

    NASA Astrophysics Data System (ADS)

    Danner, Travis W.

    Developing technology systems requires all manner of investment: engineering talent, prototypes, test facilities, and more. Even for simple design problems the investment can be substantial; for complex technology systems, the development costs can be staggering. The profitability of a corporation in a technology-driven industry is crucially dependent on maximizing the effectiveness of research and development investment. Decision-makers charged with allocation of this investment are forced to choose between the further evolution of existing technologies and the pursuit of revolutionary technologies. At risk on the one hand is excessive investment in an evolutionary technology which has only limited room for further improvement. On the other hand, the pursuit of a revolutionary technology may mean abandoning momentum and the potential for substantial evolutionary improvement resulting from the years of accumulated knowledge. The informed answer to this question, evolutionary or revolutionary, requires knowledge of the expected rate of improvement and the potential a technology offers for further improvement. This research is dedicated to formulating the assessment and forecasting tools necessary to acquire this knowledge. The same physical laws and principles that enable the development and improvement of specific technologies also limit the ultimate capability of those technologies. Researchers have long used this concept as the foundation for modeling technological advancement through extrapolation by analogy to biological growth models. These models are employed to depict technology development as it asymptotically approaches limits established by the fundamental principles on which the technological approach is based. This has proven an effective and accurate approach to modeling and forecasting simple single-attribute technologies. With increased system complexity and the introduction of multiple system objectives, however, the usefulness of this modeling technique begins to diminish. With the introduction of multiple objectives, researchers often abandon technology growth models for scoring models and technology frontiers. While both approaches possess advantages over current growth models for the assessment of multi-objective technologies, each lacks a necessary dimension for comprehensive technology assessment. By collapsing multiple system metrics into a single, non-intuitive technology measure, scoring models provide a succinct framework for multi-objective technology assessment and forecasting. Yet, with no consideration of physical limits, scoring models provide no insight as to the feasibility of a particular combination of system capabilities. They only indicate that a given combination of system capabilities yields a particular score. Conversely, technology frontiers are constructed with the distinct objective of providing insight into the feasibility of system capability combinations. Yet again, upper limits to overall system performance are ignored. Furthermore, the data required to forecast subsequent technology frontiers is often prohibitive. In an attempt to reincorporate the fundamental nature of technology advancement as bound by physical principles, researchers have sought to normalize multi-objective systems whereby the variability of a single system objective is eliminated as a result of changes in the remaining objectives.
This drastically limits the applicability of the resulting technology model because it is only applicable for a single setting of all other system attributes. Attempts to maintain the interaction between the growth curves of each technical objective of a complex system have thus far been limited to qualitative and subjective consideration. This research proposes the formulation of multidimensional growth models as an approach to simulating the advancement of multi-objective technologies towards their upper limits. Multidimensional growth models were formulated by noticing and exploiting the correlation between technology growth models and technology frontiers. Both are frontiers in actuality. The technology growth curve is a frontier between capability levels of a single attribute and time, while a technology frontier is a frontier between the capability levels of two or more attributes. Multidimensional growth models are formulated by exploiting the mathematical significance of this correlation. The result is a model that can capture both the interaction between multiple system attributes and their expected rates of improvement over time. The fundamental nature of technology development is maintained, and interdependent growth curves are generated for each system metric with minimal data requirements. Being founded on the basic nature of technology advancement, relative to physical limits, the availability for further improvement can be determined for a single metric relative to other system measures of merit. A by-product of this modeling approach is a single n-dimensional technology frontier linking all n system attributes with time. This provides an environment capable of forecasting future system capability in the form of advancing technology frontiers. The ability of a multidimensional growth model to capture the expected improvement of a specific technological approach is dependent on accurately identifying the physical limitations to each pertinent attribute. This research investigates two potential approaches to identifying those physical limits, a physics-based approach and a regression-based approach. The regression-based approach has found limited acceptance among forecasters, although it does show potential for estimating upper limits with a specified degree of uncertainty. Forecasters have long favored physics-based approaches for establishing the upper limit to unidimensional growth models. The task of accurately identifying upper limits has become increasingly difficult with the extension of growth models into multiple dimensions. A lone researcher may be able to identify the physical limitation to a single attribute of a simple system; however, as system complexity and the number of attributes increases, the attention of researchers from multiple fields of study is required. Thus, limit identification is itself an area of research and development requiring some level of investment. Whether estimated by physics or regression-based approaches, predicted limits will always have some degree of uncertainty. This research takes the approach of quantifying the impact of that uncertainty on model forecasts rather than heavily endorsing a single technique to limit identification. In addition to formulating the multidimensional growth model, this research provides a systematic procedure for applying that model to specific technology architectures. 
Researchers and decision-makers are able to investigate the potential for additional improvement within that technology architecture and to estimate the expected cost of each incremental improvement relative to the cost of past improvements. In this manner, multidimensional growth models provide the necessary information to set reasonable program goals for the further evolution of a particular technological approach or to establish the need for revolutionary approaches in light of the constraining limits of conventional approaches.
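
    A minimal single-attribute growth model of the kind this dissertation generalizes is the logistic curve rising asymptotically toward a physical upper limit. The sketch below uses invented parameters purely to show the shape of such a model.

        import numpy as np

        # Logistic (Pearl) growth curve: capability of one technology attribute
        # approaching its physical upper limit `limit`. Parameters are illustrative.
        def logistic_growth(t, limit=100.0, rate=0.35, t_mid=2005.0):
            return limit / (1.0 + np.exp(-rate * (t - t_mid)))

        years = np.array([1990.0, 2000.0, 2005.0, 2010.0, 2020.0])
        print(np.round(logistic_growth(years), 1))   # asymptotic to the 100-unit limit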

  10. Robot-Aided Neurorehabilitation: A Pediatric Robot for Ankle Rehabilitation

    PubMed Central

    Michmizos, Konstantinos P.; Rossi, Stefano; Castelli, Enrico; Cappa, Paolo; Krebs, Hermano Igo

    2015-01-01

    This paper presents the pediAnklebot, an impedance-controlled, low-friction, backdrivable robotic device developed at the Massachusetts Institute of Technology that trains the ankle of neurologically impaired children aged 6-10 years. The design attempts to overcome the known limitations of lower extremity robotics and the unknown difficulties of what constitutes an appropriate therapeutic interaction with children. The robot's pilot clinical evaluation is ongoing, and it incorporates our recent findings on ankle sensorimotor control in neurologically intact subjects, namely the speed-accuracy tradeoff, the deviation from an ideally smooth ankle trajectory, and the reaction time. We used these concepts to develop the kinematic and kinetic performance metrics that guided the ankle therapy, in a similar fashion to what we have done for our upper extremity devices. Here we report on the use of the device in at least 9 training sessions for 3 neurologically impaired children. Results demonstrated a statistically significant improvement in the performance metrics assessing explicit and implicit motor learning. Based on these initial results, we are confident that the device will become an effective tool that harnesses plasticity to guide habilitation during childhood. PMID:25769168

  11. Academic health sciences library Website navigation: an analysis of forty-one Websites and their navigation tools.

    PubMed

    Brower, Stewart M

    2004-10-01

    The analysis included forty-one academic health sciences library (HSL) Websites as captured in the first two weeks of January 2001. Home pages and persistent navigational tools (PNTs) were analyzed for layout, technology, and links, and other general site metrics were taken. Websites were selected based on rank in the National Network of Libraries of Medicine, with regional and resource libraries given preference on the basis that these libraries are recognized as leaders in their regions and would be the most reasonable source of standards for best practice. A three-page evaluation tool was developed based on previous similar studies. All forty-one sites were evaluated in four specific areas: library general information, Website aids and tools, library services, and electronic resources. Metrics taken for electronic resources included whether bibliographic databases were organized alphabetically by title or by subject area, and whether links to specifically named databases were provided. Based on the results, a formula for determining obligatory links was developed, listing items that should appear on all academic HSL Web home pages and PNTs. These obligatory links demonstrate a series of best practices that may be followed in the design and construction of academic HSL Websites.
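
    The article's actual formula is not reproduced in this abstract. One plausible shape for it, sketched below, counts how many of the surveyed home pages carry each link and flags those above a frequency threshold; the 75% cutoff and the link labels are assumptions.

        # Flag a link as "obligatory" when it appears on at least `threshold`
        # of the surveyed sites. Threshold and data are illustrative only.
        def obligatory_links(site_links, threshold=0.75):
            n = len(site_links)
            counts = {}
            for links in site_links:              # one set of link labels per site
                for label in links:
                    counts[label] = counts.get(label, 0) + 1
            return {label for label, c in counts.items() if c / n >= threshold}

        print(obligatory_links([{"catalog", "databases", "hours"},
                                {"catalog", "databases"},
                                {"catalog", "contact"}]))   # -> {'catalog'}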

  12. Development and evaluation of aperture-based complexity metrics using film and EPID measurements of static MLC openings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Götstedt, Julia; Karlsson Hauer, Anna; Bäck, Anna, E-mail: anna.back@vgregion.se

    Purpose: Complexity metrics have been suggested as a complement to measurement-based quality assurance for intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT). However, these metrics have not yet been sufficiently validated. This study develops and evaluates new aperture-based complexity metrics in the context of static multileaf collimator (MLC) openings and compares them to previously published metrics. Methods: This study develops the converted aperture metric and the edge area metric. The converted aperture metric is based on small and irregular parts within the MLC opening that are quantified as measured distances between MLC leaves. The edge area metric is based on the relative size of the region around the edges defined by the MLC. Another metric suggested in this study is the circumference/area ratio. Earlier defined aperture-based complexity metrics (the modulation complexity score, the edge metric, the ratio monitor units (MU)/Gy, the aperture area, and the aperture irregularity) are compared to the newly proposed metrics. A set of small and irregular static MLC openings are created which simulate individual IMRT/VMAT control points of various complexities. These are measured with both an amorphous silicon electronic portal imaging device and EBT3 film. The differences between calculated and measured dose distributions are evaluated using a pixel-by-pixel comparison with two global dose difference criteria of 3% and 5%. The extent of the dose differences, expressed in terms of pass rate, is used as a measure of the complexity of the MLC openings and for the evaluation of the metrics compared in this study. The different complexity scores are calculated for each created static MLC opening. The correlation between the calculated complexity scores and the extent of the dose differences (pass rate) is analyzed in scatter plots and using Pearson's r-values. Results: The complexity scores calculated by the edge area metric, converted aperture metric, circumference/area ratio, edge metric, and MU/Gy ratio show good linear correlation to the complexity of the MLC openings, expressed as the 5% dose difference pass rate, with Pearson's r-values of −0.94, −0.88, −0.84, −0.89, and −0.82, respectively. The overall trends for the 3% and 5% dose difference evaluations are similar. Conclusions: New complexity metrics are developed. The calculated scores correlate to the complexity of the created static MLC openings. The complexity of the MLC opening is dependent on the penumbra region relative to the area of the opening. The aperture-based complexity metrics that combined either the distances between the MLC leaves or the MLC opening circumference with the aperture area show the best correlation with the complexity of the static MLC openings.
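
    Of the metrics compared, the circumference/area ratio is the simplest to state. The sketch below computes it for a binary aperture mask, estimating the circumference by counting exposed pixel edges; this discretization is illustrative, not the authors' implementation.

        import numpy as np

        # Circumference/area ratio of a binary MLC aperture mask (1 = open).
        # The perimeter is estimated by counting open pixels whose neighbour in
        # each of the four directions is closed.
        def circumference_area_ratio(mask, pixel_mm=1.0):
            pad = np.pad(np.asarray(mask, dtype=bool), 1, constant_values=False)
            edges = 0
            for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                edges += np.count_nonzero(pad & ~np.roll(pad, shift, axis=(0, 1)))
            area = np.count_nonzero(mask) * pixel_mm ** 2
            return (edges * pixel_mm) / area

        square = np.ones((10, 10))   # compact opening  -> low ratio (0.4)
        slit = np.ones((1, 100))     # long narrow slit -> high ratio (2.02)
        print(circumference_area_ratio(square), circumference_area_ratio(slit))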

  13. Diffusion of robotics into clinical practice in the United States: process, patient safety, learning curves, and the public health.

    PubMed

    Mirheydar, Hossein S; Parsons, J Kellogg

    2013-06-01

    Robotic technology disseminated into urological practice without robust comparative effectiveness data. To review the diffusion of robotic surgery into urological practice, we performed a comprehensive literature review focusing on diffusion patterns, patient safety, learning curves, and comparative costs for robotic radical prostatectomy, partial nephrectomy, and radical cystectomy. Robotic urologic surgery diffused in patterns typical of novel technology spreading among practicing surgeons. Robust evidence-based data comparing outcomes of robotic to open surgery were sparse. Although initial Level 3 evidence for robotic prostatectomy observed complication outcomes similar to open prostatectomy, subsequent population-based Level 2 evidence noted an increased prevalence of adverse patient safety events and genitourinary complications among robotic patients during the early years of diffusion. Level 2 evidence indicated comparable or improved patient safety outcomes for robotic compared to open partial nephrectomy and cystectomy. Learning curve recommendations for robotic urologic surgery have drawn exclusively on Level 4 evidence and subjective, non-validated metrics. The minimum number of cases required to achieve competency for robotic prostatectomy has increased to unrealistically high levels. Most comparative cost analyses have demonstrated that robotic surgery is significantly more expensive than open or laparoscopic surgery. Evidence-based data are limited but suggest an increased prevalence of adverse patient safety events for robotic prostatectomy early in the national diffusion period. Learning curves for robotic urologic surgery are subjective and based on non-validated metrics. The urological community should develop rigorous, evidence-based processes by which future technological innovations may diffuse in an organized and safe manner.

  14. Math Roots: The Beginnings of the Metric System

    ERIC Educational Resources Information Center

    Johnson, Art; Norris, Kit; Adams, Thomasina Lott, Ed.

    2007-01-01

    This article reviews the history of the metric system, from a proposal of a sixteenth-century mathematician to its implementation in Revolutionary France some 200 years later. Recent developments in the metric system are also discussed.

  15. Technology Plan for the Terrestrial Planet Finder Interferometer

    NASA Technical Reports Server (NTRS)

    Lawson, Peter R. (Editor); Dooley, Jennifer A. (Editor)

    2005-01-01

    The technology plan for the Terrestrial Planet Finder Interferometer (TPF-I) describes the breadth of technology development currently envisaged to enable TPF-I to search for habitable worlds around nearby stars. TPF-I is currently in Pre-Phase A (the Advanced Study Phase) of its development. For planning purposes, it is expected to enter into Phase A in 2010 and be launched sometime before 2020. TPF-I is being developed concurrently with the Terrestrial Planet Finder Coronagraph (TPF-C), whose launch is anticipated in 2016. The missions are being designed with the capability to detect Earth-like planets should they exist in the habitable zones of Sun-like (F, G, and K) stars out to a distance of about 60 light-years. Each mission will have the starlight-suppression and spectroscopic capability to enable the characterization of extrasolar planetary atmospheres, identifying biomarkers and signs of life. TPF-C is designed as a visible-light coronagraph; TPF-I is designed as a mid-infrared formation-flying interferometer. The two missions, working together, promise to yield unambiguous detections and characterizations of Earth-like planets. The challenges of planet detection with mid-infrared formation-flying interferometry are described within this technology plan. The approach to developing the technology is described through roadmaps that lead from the current state of the art through the different phases of mission development to launch. Technology metrics and milestones are given to measure progress. The emphasis of the plan is the development and acquisition of technology during Pre-Phase A to establish the feasibility of the mission to enter Phase A sometime around 2010. Plans beyond 2010 are outlined. The plan contains descriptions of the development of new component technology as well as testbeds that demonstrate the viability of new techniques and technology required for the mission. Starlight-suppression (nulling) and formation-flying technology are highlighted. Although the techniques are described herein, the descriptions are only at a high level, and tutorial material is not included. The reader is expected to have some familiarity with the principles of long-baseline mid-infrared interferometry. Selected references to existing literature are given where relevant.

  16. A ride in the time machine: information management capabilities health departments will need.

    PubMed

    Foldy, Seth; Grannis, Shaun; Ross, David; Smith, Torney

    2014-09-01

    We have proposed needed information management capabilities for future US health departments predicated on trends in health care reform and health information technology. Regardless of whether health departments provide direct clinical services (and many will), they will manage unprecedented quantities of sensitive information for the public health core functions of assurance and assessment, including population-level health surveillance and metrics. Absent improved capabilities, health departments risk vestigial status, with consequences for vulnerable populations. Developments in electronic health records, interoperability and information exchange, public information sharing, decision support, and cloud technologies can support information management if health departments have appropriate capabilities. The need for national engagement in and consensus on these capabilities and their importance to health department sustainability make them appropriate for consideration in the context of accreditation.

  17. A reservoir morphology database for the conterminous United States

    USGS Publications Warehouse

    Rodgers, Kirk D.

    2017-09-13

    The U.S. Geological Survey, in cooperation with the Reservoir Fisheries Habitat Partnership, combined multiple national databases to create one comprehensive national reservoir database and to calculate new morphological metrics for 3,828 reservoirs. These new metrics include, but are not limited to, shoreline development index, index of basin permanence, development of volume, and other descriptive metrics based on established morphometric formulas. The new database also contains modeled chemical and physical metrics. Because of the nature of the existing databases used to compile the Reservoir Morphology Database and the inherent missing data, some metrics were not populated. One comprehensive database will assist water-resource managers in their understanding of local reservoir morphology and water chemistry characteristics throughout the continental United States.
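
    Two of the named metrics have compact standard limnology definitions, sketched below with made-up input values: the shoreline development index compares shoreline length with the circumference of a circle of equal area, and development of volume compares reservoir volume with that of a cone having the same surface area and maximum depth.

        import math

        # Shoreline development index: D_L = L / (2 * sqrt(pi * A)).
        def shoreline_development_index(shoreline_km, area_km2):
            return shoreline_km / (2.0 * math.sqrt(math.pi * area_km2))

        # Development of volume: V_d = 3V / (A * z_max).
        def development_of_volume(volume_km3, area_km2, max_depth_km):
            return 3.0 * volume_km3 / (area_km2 * max_depth_km)

        # Made-up reservoir: 42 km of shoreline around 18 km^2 of surface area.
        print(shoreline_development_index(shoreline_km=42.0, area_km2=18.0))
        print(development_of_volume(volume_km3=0.35, area_km2=18.0, max_depth_km=0.06))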

  18. Variational and robust density fitting of four-center two-electron integrals in local metrics

    NASA Astrophysics Data System (ADS)

    Reine, Simen; Tellgren, Erik; Krapp, Andreas; Kjærgaard, Thomas; Helgaker, Trygve; Jansik, Branislav; Høst, Stinne; Salek, Paweł

    2008-09-01

    Density fitting is an important method for speeding up quantum-chemical calculations. Linear-scaling developments in Hartree-Fock and density-functional theories have highlighted the need for linear-scaling density-fitting schemes. In this paper, we present a robust variational density-fitting scheme that allows for solving the fitting equations in local metrics instead of the traditional Coulomb metric, as required for linear scaling. Results of fitting four-center two-electron integrals in the overlap and the attenuated Gaussian damped Coulomb metric are presented, and we conclude that density fitting can be performed in local metrics at little loss of chemical accuracy. We further propose to use this theory in linear-scaling density-fitting developments.
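
    To make the algebra concrete, the following minimal sketch (not the authors' code) solves the fitting equations in a local metric and applies the robust three-term correction to a four-index integral; the matrices are random symmetric positive-definite stand-ins for the true two- and three-index integrals, and all names are illustrative.

    ```python
    import numpy as np

    # Minimal sketch of robust density fitting with a local metric.
    # M_loc  : two-index metric matrix (e.g. overlap or attenuated Coulomb metric)
    # M_coul : two-index Coulomb metric matrix, used in the robust correction
    # b_*    : three-index integrals contracted in the local metric
    # v_*    : three-index integrals contracted in the Coulomb metric
    # All arrays here are random stand-ins for real integrals.
    rng = np.random.default_rng(0)
    n_aux = 8                                   # size of the auxiliary basis

    def random_spd(n):
        """Random symmetric positive-definite matrix (stand-in for a metric)."""
        a = rng.normal(size=(n, n))
        return a @ a.T + n * np.eye(n)

    M_loc, M_coul = random_spd(n_aux), random_spd(n_aux)
    b_ab, b_cd = rng.normal(size=n_aux), rng.normal(size=n_aux)
    v_ab, v_cd = rng.normal(size=n_aux), rng.normal(size=n_aux)

    # Solve the fitting equations in the local metric: M_loc c = b
    c_ab = np.linalg.solve(M_loc, b_ab)
    c_cd = np.linalg.solve(M_loc, b_cd)

    # Robust (first-order error-free) approximation to the four-index integral:
    # (ab|cd) ~ (ab|~cd) + (~ab|cd) - (~ab|~cd)
    approx = v_ab @ c_cd + c_ab @ v_cd - c_ab @ M_coul @ c_cd
    print(f"robust fitted integral: {approx:.6f}")
    ```

    With the Coulomb metric itself used as the local metric, the three terms collapse to the ordinary single fitted term; the point of the paper is that cheaper, more local metrics can replace the Coulomb metric in the solve with little loss of accuracy.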

  19. Variational and robust density fitting of four-center two-electron integrals in local metrics.

    PubMed

    Reine, Simen; Tellgren, Erik; Krapp, Andreas; Kjaergaard, Thomas; Helgaker, Trygve; Jansik, Branislav; Host, Stinne; Salek, Paweł

    2008-09-14

    Density fitting is an important method for speeding up quantum-chemical calculations. Linear-scaling developments in Hartree-Fock and density-functional theories have highlighted the need for linear-scaling density-fitting schemes. In this paper, we present a robust variational density-fitting scheme that allows for solving the fitting equations in local metrics instead of the traditional Coulomb metric, as required for linear scaling. Results of fitting four-center two-electron integrals in the overlap and the attenuated Gaussian damped Coulomb metric are presented, and we conclude that density fitting can be performed in local metrics at little loss of chemical accuracy. We further propose to use this theory in linear-scaling density-fitting developments.

  20. 77 FR 11558 - Submission for OMB Review; Comment Request; STAR METRICS (Science and Technology for America's...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-27

    ... Research on Innovation, Competitiveness and Science) SUMMARY: Under the provisions of section 3507(a)(1)(D... (NIH) has submitted to the Office of Management and Budget (OMB) a request to review and approve the... (Science and Technology for America's Reinvestment: Measuring the EffecTs of Research on Innovation...

  1. 77 FR 11135 - Submission for OMB Review; Comment Request: STAR METRICS (Science and Technology for America's...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-24

    ... Research on Innovation, Competitiveness and Science) Summary: Under the provisions of section 3507(a)(1)(D... (NIH) has submitted to the Office of Management and Budget (OMB) a request to review and approve the... Technology for America's Reinvestment: Measuring the EffecTs of Research on Innovation, Competitiveness and...

  2. Improving the Effectiveness of Program Managers

    DTIC Science & Technology

    2006-05-03

    Improving the Effectiveness of Program Managers Systems and Software Technology Conference Salt Lake City, Utah May 3, 2006 Presented by GAO’s...Companies’ best practices Motorola Caterpillar Toyota FedEx NCR Teradata Boeing Hughes Space and Communications Disciplined software and management...and total ownership costs Collection of metrics data to improve software reliability Technology readiness levels and design maturity Statistical

  3. Implementation and Evaluation of Multiple Adaptive Control Technologies for a Generic Transport Aircraft Simulation

    NASA Technical Reports Server (NTRS)

    Campbell, Stefan F.; Kaneshige, John T.; Nguyen, Nhan T.; Krishnakumar, Kalmanje S.

    2010-01-01

    Presented here is the evaluation of multiple adaptive control technologies for a generic transport aircraft simulation. For this study, seven model reference adaptive control (MRAC) based technologies were considered. Each technology was integrated into an identical dynamic-inversion control architecture and tuned using a methodology based on metrics and specific design requirements. Simulation tests were then performed to evaluate each technology's sensitivity to time-delay, flight condition, model uncertainty, and artificially induced cross-coupling. The resulting robustness and performance characteristics were used to identify potential strengths, weaknesses, and integration challenges of the individual adaptive control technologies.
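
    For readers unfamiliar with the MRAC structure underlying these technologies, the sketch below implements the simplest scalar model-reference adaptive controller; the plant, reference model, and adaptation gain are invented for illustration and are unrelated to the transport simulation evaluated in the paper.

    ```python
    # Minimal scalar MRAC sketch: plant x' = a*x + b*u with unknown a, b > 0;
    # reference model x_m' = a_m*x_m + b_m*r; control u = kx*x + kr*r with
    # Lyapunov-based adaptation driven by the tracking error e = x - x_m.
    a, b = 1.0, 3.0            # "unknown" plant parameters (used only to simulate)
    a_m, b_m = -4.0, 4.0       # stable reference model
    gamma = 2.0                # adaptation gain
    dt, T = 1e-3, 10.0

    x = x_m = 0.0
    kx = kr = 0.0              # adaptive gains, start from zero
    for k in range(int(T / dt)):
        t = k * dt
        r = 1.0 if (t % 4.0) < 2.0 else -1.0   # square-wave reference command
        u = kx * x + kr * r
        e = x - x_m
        # Adaptive laws (sign(b) absorbed into gamma since b > 0)
        kx -= gamma * x * e * dt
        kr -= gamma * r * e * dt
        # Euler integration of plant and reference model
        x += (a * x + b * u) * dt
        x_m += (a_m * x_m + b_m * r) * dt

    print(f"final tracking error: {x - x_m:+.4f}, kx={kx:.3f}, kr={kr:.3f}")
    ```

    The ideal gains here are kx = (a_m - a)/b and kr = b_m/b; metrics such as time-delay margin are then measured around the adapted closed loop.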

  4. A Novel Approach for Using Dielectric Spectroscopy to Predict Viable Cell Volume (VCV) in Early Process Development

    PubMed Central

    Downey, Brandon J; Graham, Lisa J; Breit, Jeffrey F; Glutting, Nathaniel K

    2014-01-01

    Online monitoring of viable cell volume (VCV) is essential to the development, monitoring, and control of bioprocesses. The commercial availability of steam-sterilizable dielectric-spectroscopy probes has enabled successful adoption of this technology as a key noninvasive method to measure VCV for cell-culture processes. Technological challenges still exist, however. For some cell lines, the technique's accuracy in predicting the VCV from probe-permittivity measurements declines as the viability of the cell culture decreases. To investigate the cause of this decrease in accuracy, divergences in predicted vs. actual VCV measurements were directly related to the shape of dielectric frequency scans collected during a cell culture. The changes in the shape of the beta dispersion, which are associated with changes in cell state, are quantified by applying a novel “area ratio” (AR) metric to frequency-scanning data from the dielectric-spectroscopy probes. The AR metric is then used to relate the shape of the beta dispersion to single-frequency permittivity measurements to accurately predict the offline VCV throughout an entire fed-batch run, regardless of cell state. This work demonstrates the feasibility of quantifying the shape of the beta dispersion, determined from frequency-scanning data, for enhanced measurement of VCV in mammalian cell cultures by applying a novel shape-characterization technique. In addition, this work demonstrates the utility of using changes in the shape of the beta dispersion to quantify cell health. © 2013 American Institute of Chemical Engineers Biotechnol. Prog., 30:479–487, 2014 PMID:24851255
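
    The published "area ratio" definition is not reproduced in this record, but the underlying idea, quantifying the shape of the beta dispersion by comparing areas under the frequency scan, can be sketched as follows; the dispersion model, frequency grid, and band split are illustrative assumptions, not the paper's definition.

    ```python
    import numpy as np

    # Illustrative "area ratio"-style shape metric for a dielectric frequency scan.
    # The beta dispersion is modeled as a single Cole-Cole-like permittivity step;
    # the metric compares the area below the critical frequency to the total area.
    freqs = np.logspace(5, 7, 200)            # 0.1-10 MHz scan (Hz), assumed grid
    f_c, d_eps, alpha = 1.0e6, 10.0, 0.1      # critical frequency, magnitude, broadening

    def beta_dispersion(f, f_c, d_eps, alpha):
        """Permittivity increment of a Cole-Cole-type beta dispersion (real part)."""
        x = (f / f_c) ** (1.0 - alpha)
        return d_eps / (1.0 + x ** 2)

    eps = beta_dispersion(freqs, f_c, d_eps, alpha)

    # Area ratio: area below the critical frequency over the total area,
    # integrated on a log-frequency axis so each decade counts equally.
    log_f = np.log10(freqs)
    low = freqs <= f_c
    ar = np.trapz(eps[low], log_f[low]) / np.trapz(eps, log_f)
    print(f"area ratio = {ar:.3f}")
    ```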

  5. TECHNOLOGICAL INNOVATION IN NEUROSURGERY: A QUANTITATIVE STUDY

    PubMed Central

    Marcus, Hani J; Hughes-Hallett, Archie; Kwasnicki, Richard M; Darzi, Ara; Yang, Guang-Zhong; Nandi, Dipankar

    2015-01-01

    Object: Technological innovation within healthcare may be defined as the introduction of a new technology that initiates a change in clinical practice. Neurosurgery is a particularly technologically intensive surgical discipline, and new technologies have preceded many of the major advances in operative neurosurgical technique. The aim of the present study was to quantitatively evaluate technological innovation in neurosurgery using patents and peer-reviewed publications as metrics of technology development and clinical translation, respectively. Methods: A patent database was searched between 1960 and 2010 using the search terms “neurosurgeon” OR “neurosurgical” OR “neurosurgery”. The top 50 performing patent codes were then grouped into technology clusters. Patent and publication growth curves were then generated for these technology clusters. A top-performing technology cluster was then selected as an exemplar for more detailed analysis of individual patents. Results: In all, 11,672 patents and 208,203 publications relating to neurosurgery were identified. The top performing technology clusters over the 50 years were: image guidance devices, clinical neurophysiology devices, neuromodulation devices, operating microscopes, and endoscopes. Image guidance and neuromodulation devices demonstrated a highly correlated rapid rise in patents and publications, suggesting they are areas of technology expansion. In-depth analysis of neuromodulation patents revealed that the majority of high-performing patents were related to Deep Brain Stimulation (DBS). Conclusions: Patent and publication data may be used to quantitatively evaluate technological innovation in neurosurgery. PMID:25699414

  6. GRC GSFC TDRSS Waveform Metrics Report

    NASA Technical Reports Server (NTRS)

    Mortensen, Dale J.

    2013-01-01

    The report presents software metrics and porting metrics for the GGT Waveform. The porting was from a ground-based COTS SDR, the SDR-3000, to the CoNNeCT JPL SDR. The report does not address any of the Operating Environment (OE) software development, nor the original TDRSS waveform development at GSFC for the COTS SDR. With regard to STRS, the report presents compliance data and lessons learned.

  7. An Overview of Controls and Flying Qualities Technology on the F/A-18 High Alpha Research Vehicle

    NASA Technical Reports Server (NTRS)

    Pahle, Joseph W.; Wichman, Keith D.; Foster, John V.; Bundick, W. Thomas

    1996-01-01

    The NASA F/A-18 High Alpha Research Vehicle (HARV) has been the flight test bed of a focused technology effort to significantly increase maneuvering capability at high angles of attack. Development and flight test of control law design methodologies, handling qualities metrics, performance guidelines, and flight evaluation maneuvers are described. The HARV has been modified to include two research control effectors, thrust vectoring, and actuated forebody strakes in order to provide increased control power at high angles of attack. A research flight control system has been used to provide a flexible, easily modified capability for high-angle-of-attack research controls. Different control law design techniques have been implemented and flight-tested, including eigenstructure assignment, variable gain output feedback, pseudo controls, and model-following. Extensive piloted simulation has been used to develop nonlinear performance guidelines and handling qualities criteria for high angles of attack. This paper reviews the development and evaluation of technologies useful for high-angle-of-attack control. Design, development, and flight test of the research flight control system, control laws, flying qualities specifications, and flight test maneuvers are described. Flight test results are used to illustrate some of the lessons learned during flight test and handling qualities evaluations.

  8. Energy-Based Metrics for Arthroscopic Skills Assessment.

    PubMed

    Poursartip, Behnaz; LeBel, Marie-Eve; McCracken, Laura C; Escoto, Abelardo; Patel, Rajni V; Naish, Michael D; Trejos, Ana Luisa

    2017-08-05

    Minimally invasive skills assessment methods are essential in developing efficient surgical simulators and implementing consistent skills evaluation. Although numerous methods have been investigated in the literature, there is still a need to further improve the accuracy of surgical skills assessment. Energy expenditure can be an indication of motor skills proficiency. The goals of this study are to develop objective metrics based on energy expenditure, normalize these metrics, and investigate classifying trainees using these metrics. To this end, different forms of energy consisting of mechanical energy and work were considered and their values were divided by the related value of an ideal performance to develop normalized metrics. These metrics were used as inputs for various machine learning algorithms including support vector machines (SVM) and neural networks (NNs) for classification. The accuracy of the combination of the normalized energy-based metrics with these classifiers was evaluated through a leave-one-subject-out cross-validation. The proposed method was validated using 26 subjects at two experience levels (novices and experts) in three arthroscopic tasks. The results showed that there are statistically significant differences between novices and experts for almost all of the normalized energy-based metrics. The accuracy of classification using SVM and NN methods was between 70% and 95% for the various tasks. The results show that the normalized energy-based metrics and their combination with SVM and NN classifiers are capable of providing accurate classification of trainees. The assessment method proposed in this study can enhance surgical training by providing appropriate feedback to trainees about their level of expertise and can be used in the evaluation of proficiency.
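
    The classification protocol maps directly onto scikit-learn's leave-one-group-out tools, treating each subject as a group; the feature matrix below is synthetic and merely stands in for the normalized energy-based metrics.

    ```python
    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(42)

    # Synthetic stand-in: 26 subjects, 5 trials each, 4 normalized energy metrics.
    # Experts (label 1) are given systematically lower energy values than novices.
    n_subjects, n_trials, n_metrics = 26, 5, 4
    groups = np.repeat(np.arange(n_subjects), n_trials)       # subject id per trial
    labels = np.repeat(rng.integers(0, 2, n_subjects), n_trials)
    X = rng.normal(loc=1.0, scale=0.3, size=(labels.size, n_metrics))
    X[labels == 1] *= 0.6                                     # experts expend less energy

    # Leave-one-subject-out cross-validation of an SVM on the energy metrics.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, X, labels, groups=groups, cv=LeaveOneGroupOut())
    print(f"mean leave-one-subject-out accuracy: {scores.mean():.2f}")
    ```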

  9. State of the States 2009. Renewable Energy Development and the Role of Policy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doris, Elizabeth; McLaren, Joyce; Healey, Victoria

    2009-10-01

    This report tracks the progress of U.S. renewable energy development at the state level, with metrics on development status and reviews of relevant policies. The analysis offers state-by-state policy suggestions and develops performance-based evaluation metrics to accelerate and improve renewable energy development.

  10. The use of smartphones to influence lifestyle changes in overweight and obese youth with congenital heart disease: a single-arm study: Pilot and feasibility study protocol: Smart Heart Trial.

    PubMed

    Rombeek, Meghan; De Jesus, Stefanie; Altamirano-Diaz, Luis; Welisch, Eva; Prapavessis, Harry; Seabrook, Jamie A; Norozi, Kambiz

    2017-01-01

    Both obesity and congenital heart disease (CHD) are risk factors for the long-term cardiovascular health of children and adolescents. The addition of smart mobile technology to conventional lifestyle counseling for weight management offers great potential to appeal to technologically literate youth and can address a large geographical area with minimal burden to participants. This pilot study seeks to examine the influence of a 1-year lifestyle intervention on nutrition and physical activity-related health outcomes in overweight or obese children and adolescents with CHD. This is a pilot and feasibility study which utilizes a single-arm, prospective design with a goal to recruit 40 overweight and obese patients. The feasibility metrics will evaluate the integrity of the study protocol, data collection and questionnaires, recruitment and consent, and acceptability of the intervention protocol and primary outcome measures. The primary clinical outcome metrics are anthropometry, body composition, and cardiorespiratory exercise capacity. The secondary clinical metrics include quality of life, nutrition and physical activity behavior, lung and muscle function, and cardio-metabolic risk factors. Outcomes are assessed at baseline, 6 months, and 1 year. To date, a total of 36 children and youth (11 girls), aged 7-17 years (mean = 14.4 years), have commenced the intervention. Recruitment for the study was initiated in June 2012 and is currently ongoing. The information provided in this paper is intended to help researchers and health professionals with the development and evaluation of similar lifestyle intervention programs. Since the application of smartphones to pediatric cardiac health and obesity management is a novel approach, and continued research in this area is warranted, this paper may serve as a foundation for further exploration of this health frontier and inform the development of a broader strategy for obesity management in pediatric cardiology. This pilot study was retrospectively registered at the www.ClinicalTrials.gov registry as NCT02980393 in November 2016, with the study commencing in May 2012. Study protocol version 15OCT2014.

  11. Inflatable Space Structures Technology Development for Large Radar Antennas

    NASA Technical Reports Server (NTRS)

    Freeland, R. E.; Helms, Richard G.; Willis, Paul B.; Mikulas, M. M.; Stuckey, Wayne; Steckel, Gary; Watson, Judith

    2004-01-01

    There has been recent interest in inflatable space-structures technology for possible applications on U.S. Department of Defense (DOD) missions because of the technology's potential for high mechanical-packaging efficiency, variable stowed geometry, and deployment reliability. In recent years, the DOD sponsored Large Radar Antenna (LRA) Program applied this new technology to a baseline concept: a rigidizable/inflatable (RI) perimeter-truss structure supporting a mesh/net parabolic reflector antenna. The program addressed: (a) truss concept development, (b) rigidizable materials concepts assessment, (c) mesh/net concept selection and integration, and (d) development of potential mechanical-system performance estimates. Critical and enabling technologies were validated, most notably the orbital-radiation-durable rigidized materials and the high-modulus, inflatable-deployable truss members. These results in conjunction with conclusions from previous mechanical-packaging studies by the U.S. Defense Advanced Research Projects Agency (DARPA) Special Program Office (SPO) were the impetus for the initiation of the DARPA/SPO Innovative Space-based Antenna Technology (ISAT) Program. The sponsor's baseline concept consisted of an inflatable-deployable truss structure for support of a large number of rigid, active radar panels. The program's goal was to determine the risk associated with the application of these new RI structures to the latest in radar technologies. The approach used to define the technology maturity level of critical structural elements was to: (a) develop truss concept baseline configurations, (b) assess specific inflatable-rigidizable materials technologies, and (c) estimate potential mechanical performance. The results of the structures portion of the program indicated there was high risk without the essential materials technology flight experiments, but only moderate risk if the appropriate on-orbit demonstrations were performed. This paper covers both programs (LRA and ISAT) in two sections, Parts 1 and 2 respectively. Please note that the terms strut, tube, and column are all used interchangeably and refer to the basic strut element of a truss. Also, the paper contains a mix of English and metric dimensional descriptions that reflect prevailing technical discipline conventions and common usage.

  12. Image quality assessment metric for frame accumulated image

    NASA Astrophysics Data System (ADS)

    Yu, Jianping; Li, Gang; Wang, Shaohui; Lin, Ling

    2018-01-01

    The medical image quality determines the accuracy of diagnosis, and gray-scale resolution is an important parameter for measuring image quality. Current objective metrics, however, are not well suited to assessing medical images obtained by frame-accumulation technology: they pay little attention to gray-scale resolution, are based primarily on spatial resolution, and are limited to the 256-level gray scale of existing display devices. Thus, this paper proposes a metric, "mean signal-to-noise ratio" (MSNR), based on signal-to-noise ratio, as a more reasonable way to evaluate frame-accumulated medical image quality. We demonstrate its potential application through a series of images under a constant illumination signal. Here, the mean image of a sufficiently large set of images was regarded as the reference image. Several groups of images with different numbers of accumulated frames were formed, and their MSNR values were calculated. The results of the experiment show that, compared with other quality assessment methods, the metric is simpler, more effective, and more suitable for assessing frame-accumulated images whose gray scale and precision surpass those of the original images.
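
    One plausible reading of the MSNR computation is sketched below: the mean of many frames serves as the reference image, and the metric is the ratio of the mean reference signal to the RMS deviation of a k-frame accumulated image from that reference; the image model and the exact normalization are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic stand-in for a constant-illumination scene: a smooth "true" image
    # observed through additive noise, captured many times.
    h, w, n_frames = 64, 64, 256
    truth = 100.0 + 20.0 * np.sin(np.linspace(0, np.pi, w))[None, :]
    frames = truth + rng.normal(scale=10.0, size=(n_frames, h, w))

    reference = frames.mean(axis=0)     # mean of enough frames = reference image

    def msnr(accumulated, reference):
        """Mean signal-to-noise ratio of a frame-accumulated image (one reading
        of the metric: mean reference signal over RMS deviation from reference)."""
        noise_rms = np.sqrt(np.mean((accumulated - reference) ** 2))
        return reference.mean() / noise_rms

    # MSNR rises as more frames are accumulated, tracking gray-scale resolution.
    for k in (1, 4, 16, 64):
        acc = frames[:k].mean(axis=0)
        print(f"{k:3d} frames: MSNR = {msnr(acc, reference):7.1f}")
    ```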

  13. Polycrystalline silicon availability for photovoltaic and semiconductor industries

    NASA Technical Reports Server (NTRS)

    Ferber, R. R.; Costogue, E. N.; Pellin, R.

    1982-01-01

    Markets, applications, and production techniques for Siemens process-produced polycrystalline silicon are surveyed. It is noted that as of 1982 a total of six Si materials suppliers were servicing a worldwide total of over 1000 manufacturers of Si-based devices. Besides solar cells, the Si wafers are employed for thyristors, rectifiers, bipolar power transistors, and discrete components for control systems. An estimated 3890 metric tons of semiconductor-grade polycrystalline Si will be used in 1982, and 6200 metric tons by 1985. Although the amount is expected to nearly triple between 1982-89, research is being carried out on the formation of thin films and ribbons for solar cells, thereby eliminating the waste produced in slicing Czochralski-grown crystals. The free-world Si production in 1982 is estimated to be 3050 metric tons. Various new technologies for the formation of polycrystalline Si at lower costs and with less waste are considered. New entries into the industrial Si formation field are projected to produce a 2000 metric ton excess by 1988.

  14. JPDO Portfolio Analysis of NextGen

    DTIC Science & Technology

    2009-09-01

    runways. C. Metrics The JPDO Interagency Portfolio & Systems Analysis ( IPSA ) division continues to coordinate, develop, and refine the metrics and...targets associated with the NextGen initiatives with the partner agencies & stakeholder communities. IPSA has formulated a set of top-level metrics as...metrics are calculated from system performance measures that constitute outputs of the American Institute of Aeronautics and Astronautics 8 IPSA

  15. An Introduction to the SI Metric System. Inservice Guide for Teaching Measurement, Kindergarten Through Grade Eight.

    ERIC Educational Resources Information Center

    California State Dept. of Education, Sacramento.

    This handbook was designed to serve as a reference for teacher workshops that: (1) introduce the metric system and help teachers gain confidence with metric measurement, and (2) develop classroom measurement activities. One chapter presents the history and basic features of SI metrics. A second chapter presents a model for the measurement program.…

  16. 40 CFR 98.67 - Records that must be retained.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... aluminum production in metric tons. (b) Type of smelter technology used. (c) The following PFC-specific information on a monthly basis: (1) Perfluoromethane and perfluoroethane emissions from anode effects in...

  17. 40 CFR 98.67 - Records that must be retained.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... aluminum production in metric tons. (b) Type of smelter technology used. (c) The following PFC-specific information on a monthly basis: (1) Perfluoromethane and perfluoroethane emissions from anode effects in...

  18. 40 CFR 98.67 - Records that must be retained.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... aluminum production in metric tons. (b) Type of smelter technology used. (c) The following PFC-specific information on a monthly basis: (1) Perfluoromethane and perfluoroethane emissions from anode effects in...

  19. Developing Metrics in Systems Integration (ISS Program COTS Integration Model)

    NASA Technical Reports Server (NTRS)

    Lueders, Kathryn

    2007-01-01

    This viewgraph presentation reviews some of the complications in developing metrics for systems integration. Specifically it reviews a case study of how two programs within NASA try to develop and measure performance while meeting the encompassing organizational goals.

  20. Specification-based software sizing: An empirical investigation of function metrics

    NASA Technical Reports Server (NTRS)

    Jeffery, Ross; Stathis, John

    1993-01-01

    For some time the software industry has espoused the need for improved specification-based software size metrics. This paper reports on a study of nineteen recently developed systems in a variety of application domains. The systems were developed by a single software services corporation using a variety of languages. The study investigated several metric characteristics. It shows that: earlier research into inter-item correlation within the overall function count is partially supported; a priori function counts, in themselves, do not explain the majority of the effort variation in software development in the organization studied; documentation quality is critical to accurate function identification; and rater error is substantial in manual function counting. The implications of these findings for organizations using function-based metrics are explored.

  1. Sustainability assessment of tertiary wastewater treatment technologies: a multi-criteria analysis.

    PubMed

    Plakas, K V; Georgiadis, A A; Karabelas, A J

    2016-01-01

    The multi-criteria analysis gives researchers, designers, and decision-makers the opportunity to examine decision options in a multi-dimensional fashion. On this basis, four tertiary wastewater treatment (WWT) technologies were assessed regarding their sustainability performance in producing recycled wastewater, considering a 'triple bottom line' approach (i.e. economic, environmental, and social). These are powdered activated carbon adsorption coupled with ultrafiltration membrane separation (PAC-UF), reverse osmosis, ozone/ultraviolet-light oxidation, and heterogeneous photo-catalysis coupled with low-pressure membrane separation (photocatalytic membrane reactor, PMR). The participatory method called simple multi-attribute rating technique exploiting ranks (SMARTER) was employed for assigning weights to selected sustainability indicators. This sustainability assessment approach resulted in the development of a composite index as a final metric for each WWT technology evaluated. The PAC-UF technology appears to be the most appropriate technology, attaining the highest composite value regarding the sustainability performance. A scenario analysis confirmed the results of the original scenario in five out of seven cases. In parallel, the PMR was highlighted as the technology with the least variability in its performance. Nevertheless, additional actions and approaches are proposed to strengthen the objectivity of the final results.
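
    The SMARTER procedure assigns rank-order-centroid weights from the indicator ranking alone, after which the composite index is a weighted sum of normalized scores; the indicator names and scores below are invented for illustration, not the paper's data.

    ```python
    # Rank-order-centroid (ROC) weights, as used by SMARTER, plus a composite index.

    def roc_weights(n):
        """ROC weight for rank i (1 = most important): w_i = (1/n) * sum_{k=i}^{n} 1/k."""
        return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

    # Indicators ranked from most to least important (assumed ordering).
    indicators = ["economic cost", "environmental impact", "social acceptance"]
    weights = roc_weights(len(indicators))      # [0.611, 0.278, 0.111] for n = 3

    # Normalized (0-1) performance scores of one technology on each indicator.
    scores = {"economic cost": 0.7, "environmental impact": 0.8, "social acceptance": 0.6}

    composite = sum(w * scores[name] for w, name in zip(weights, indicators))
    print(f"weights = {[round(w, 3) for w in weights]}, composite index = {composite:.3f}")
    ```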

  2. 48 CFR 2811.002 - Policy.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... whenever metric is the accepted industry system. Whenever possible, commercially developed metric... include requirements documents stated in soft metric, hybrid, or dual systems, except when impractical or... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Policy. 2811.002 Section...

  3. 48 CFR 2811.002 - Policy.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... whenever metric is the accepted industry system. Whenever possible, commercially developed metric... include requirements documents stated in soft metric, hybrid, or dual systems, except when impractical or... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Policy. 2811.002 Section...

  4. 48 CFR 2811.002 - Policy.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... whenever metric is the accepted industry system. Whenever possible, commercially developed metric... include requirements documents stated in soft metric, hybrid, or dual systems, except when impractical or... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false Policy. 2811.002 Section...

  5. 48 CFR 2811.002 - Policy.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... whenever metric is the accepted industry system. Whenever possible, commercially developed metric... include requirements documents stated in soft metric, hybrid, or dual systems, except when impractical or... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Policy. 2811.002 Section...

  6. 48 CFR 2811.002 - Policy.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... whenever metric is the accepted industry system. Whenever possible, commercially developed metric... include requirements documents stated in soft metric, hybrid, or dual systems, except when impractical or... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Policy. 2811.002 Section...

  7. Laser Processed Silver Nanowire Network Transparent Electrodes for Novel Electronic Devices

    NASA Astrophysics Data System (ADS)

    Spechler, Joshua Allen

    Silver nanowire network transparent conducting layers are poised to make headway into a space previously dominated by transparent conducting oxides, owing to the promise of a flexible, scalable, lab-atmosphere-processable alternative. However, many challenges stand between research-scale use and consumer-scale adoption of this technology. In this thesis we explore many, and overcome a few, of these challenges. We address the poor conductivity at the narrow nanowire-nanowire junction points in the network by developing a laser-based process to weld nanowires together on a microscopic scale. We address the need for a comparative metric for transparent conductors in general by taking a device-level rather than a component-level view of these layers. We also address the mechanical, physical, and thermal limitations of silver nanowire networks by making composites from materials including a colorless polyimide and titania sol-gel. Additionally, we verify our findings by integrating these processes into devices. Studying a hybrid organic/inorganic heterojunction photovoltaic device, we show the benefits of a laser-processed electrode. Green phosphorescent organic light-emitting diodes fabricated on a solution-processed silver nanowire electrode show favorable device metrics compared with a control based on a conductive oxide electrode. The work in this thesis is intended to push the adoption of silver nanowire networks to further allow new device architectures, and thereby new device applications.

  8. Pilot study of methods and equipment for in-home noise level measurements.

    PubMed

    Neitzel, Richard L; Heikkinen, Maire S A; Williams, Christopher C; Viet, Susan Marie; Dellarco, Michael

    2015-01-15

    Knowledge of the auditory and non-auditory effects of noise has increased dramatically over the past decade, but indoor noise exposure measurement methods have not advanced appreciably, despite the introduction of applicable new technologies. This study evaluated various conventional and smart devices for exposure assessment in the National Children's Study. Three devices were tested: a sound level meter (SLM), a dosimeter, and a smart device with a noise measurement application installed. Instrument performance was evaluated in a series of semi-controlled tests in office environments over 96-hour periods, followed by measurements made continuously in two rooms (a child's bedroom and a most used room) in nine participating homes over a 7-day period with subsequent computation of a range of noise metrics. The SLMs and dosimeters yielded similar A-weighted average noise levels. Levels measured by the smart devices often differed substantially (showing both positive and negative bias, depending on the metric) from those measured via SLM and dosimeter, and demonstrated attenuation in some frequency bands in spectral analysis compared to SLM results. Virtually all measurements exceeded the Environmental Protection Agency's 45 dBA day-night limit for indoor residential exposures. The measurement protocol developed here can be employed in homes, demonstrates the possibility of measuring long-term noise exposures in homes with technologies beyond traditional SLMs, and highlights potential pitfalls associated with measurements made by smart devices.
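
    The A-weighted average level and the EPA day-night level referenced above follow standard formulas, sketched below from hourly levels; the hourly values are invented for the example, where a real protocol would log them from the SLM or dosimeter.

    ```python
    import numpy as np

    # Hourly A-weighted equivalent levels (dBA) for one day, index 0 = midnight.
    hourly_laeq = np.array([42, 41, 40, 40, 41, 43, 47, 52, 55, 54, 53, 52,
                            53, 52, 52, 53, 55, 56, 54, 51, 48, 46, 44, 43], float)

    def energy_mean(levels_db):
        """Energetic (logarithmic) average of decibel levels: 10*log10(mean(10^(L/10)))."""
        return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

    laeq_24h = energy_mean(hourly_laeq)

    # Day-night average level (Ldn): +10 dB penalty on night hours (22:00-07:00).
    night = np.zeros(24, dtype=bool)
    night[22:] = True
    night[:7] = True
    ldn = energy_mean(hourly_laeq + 10.0 * night)

    print(f"24-h LAeq = {laeq_24h:.1f} dBA, Ldn = {ldn:.1f} dBA "
          f"(EPA indoor residential guideline: Ldn <= 45 dBA)")
    ```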

  9. Development of a Multimetric Indicator of Pelagic Zooplankton ...

    EPA Pesticide Factsheets

    We used zooplankton data collected for the 2012 National Lakes Assessment (NLA) to develop multimetric indices (MMIs) for five aggregated ecoregions of the conterminous USA (Coastal Plains, Eastern Highlands, Plains, Upper Midwest, and Western Mountains and Xeric ["West"]). We classified candidate metrics into six categories, evaluated the performance of the candidate metrics, and used the metrics that passed these screens to calculate all possible candidate MMIs that included at least one metric from each category. We selected the candidate MMI that had high responsiveness, a reasonable value for repeatability, low mean pairwise correlation among component metrics, and, when possible, a maximum pairwise correlation among component metrics that was <0.7. We were able to develop MMIs that were sufficiently responsive and repeatable to assess ecological condition for the NLA without the need to reduce the effects of natural variation using models. We did not observe effects of lake size, lake origin, or site depth on the MMIs. The MMIs appear to respond more strongly to increased nutrient concentrations than to shoreline habitat conditions. Improving our understanding of how zooplankton assemblages respond to increased human disturbance, and obtaining more complete autecological information for zooplankton taxa, would likely improve MMIs developed for future assessments.

  10. Associations between forest characteristics and socio-economic development: a case study from Portugal.

    PubMed

    Ribeiro, Sónia Carvalho; Lovett, Andrew

    2009-07-01

    The integration of socio-economic and environmental objectives is a major challenge in developing strategies for sustainable landscapes. We investigated associations between socio-economic variables, landscape metrics and measures of forest condition in the context of Portugal. The main goals of the study were to 1) investigate relationships between forest conditions and measures of socio-economic development at national and regional scales, 2) test the hypothesis that a systematic variation in forest landscape metrics occurs according to the stage of socio-economic development and, 3) assess the extent to which landscape metrics can inform strategies to enhance forest sustainability. A ranking approach and statistical techniques such as Principal Component Analysis were used to achieve these objectives. Relationships between socio-economic characteristics, landscape metrics and measures of forest condition were only significant in the regional analysis of municipalities in Northern Portugal. Landscape metrics for different tree species displayed significant variations across socio-economic groups of municipalities and these differences were consistent with changes in characteristics suggested by the forest transition model. The use of metrics also helped inform place-specific strategies to improve forest management, though it was also apparent that further work was required to better incorporate differences in forest functions into sustainability planning.

  11. An annual plant growth proxy in the Mojave Desert using MODIS-EVI data

    USGS Publications Warehouse

    Wallace, C.S.A.; Thomas, K.A.

    2008-01-01

    In the arid Mojave Desert, the phenological response of vegetation is largely dependent upon the timing and amount of rainfall, and maps of annual plant cover at any one point in time can vary widely. Our study developed relative annual plant growth models as proxies for annual plant cover using metrics that captured phenological variability in Moderate-Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI) satellite images. We used landscape phenologies revealed in MODIS data together with ecological knowledge of annual plant seasonality to develop a suite of metrics to describe annual growth on a yearly basis. Each of these metrics was applied to temporally-composited MODIS-EVI images to develop a relative model of annual growth. Each model was evaluated by testing how well it predicted field estimates of annual cover collected during 2003 and 2005 at the Mojave National Preserve. The best performing metric was the spring difference metric, which compared the average of three spring MODIS-EVI composites of a given year to that of 2002, a year of record drought. The spring difference metric showed correlations with annual plant cover of R² = 0.61 for 2005 and R² = 0.47 for 2003. Although the correlation is moderate, we consider it supportive given the characteristics of the field data, which were collected for a different study in a localized area and are not ideal for calibration to MODIS pixels. A proxy for annual growth potential was developed from the spring difference metric of 2005 for use as an environmental data layer in desert tortoise habitat modeling. The application of the spring difference metric to other imagery years presents potential for other applications such as fuels, invasive species, and dust-emission monitoring in the Mojave Desert.
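
    As described, the spring difference metric is a per-pixel difference of spring EVI averages against the 2002 drought baseline; the sketch below assumes three spring composites per year, with synthetic arrays standing in for MODIS-EVI data.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic stand-ins for three spring MODIS-EVI composites (rows x cols),
    # for the drought baseline year (2002) and a target year (e.g., 2005).
    shape = (3, 50, 50)                                      # 3 spring composites per year
    evi_2002 = rng.uniform(0.05, 0.15, shape)                # record-drought baseline
    evi_target = evi_2002 + rng.uniform(0.0, 0.25, shape)    # wetter target year

    def spring_difference(evi_year, evi_baseline):
        """Per-pixel spring difference metric: mean of the three spring composites
        of the target year minus the same mean for the 2002 baseline year."""
        return evi_year.mean(axis=0) - evi_baseline.mean(axis=0)

    growth_proxy = spring_difference(evi_target, evi_2002)
    print(f"annual growth proxy range: {growth_proxy.min():.3f} to {growth_proxy.max():.3f}")
    ```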

  12. An Annual Plant Growth Proxy in the Mojave Desert Using MODIS-EVI Data.

    PubMed

    Wallace, Cynthia S A; Thomas, Kathryn A

    2008-12-03

    In the arid Mojave Desert, the phenological response of vegetation is largely dependent upon the timing and amount of rainfall, and maps of annual plant cover at any one point in time can vary widely. Our study developed relative annual plant growth models as proxies for annual plant cover using metrics that captured phenological variability in Moderate-Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI) satellite images. We used landscape phenologies revealed in MODIS data together with ecological knowledge of annual plant seasonality to develop a suite of metrics to describe annual growth on a yearly basis. Each of these metrics was applied to temporally-composited MODIS-EVI images to develop a relative model of annual growth. Each model was evaluated by testing how well it predicted field estimates of annual cover collected during 2003 and 2005 at the Mojave National Preserve. The best performing metric was the spring difference metric, which compared the average of three spring MODIS-EVI composites of a given year to that of 2002, a year of record drought. The spring difference metric showed correlations with annual plant cover of R² = 0.61 for 2005 and R² = 0.47 for 2003. Although the correlation is moderate, we consider it supportive given the characteristics of the field data, which were collected for a different study in a localized area and are not ideal for calibration to MODIS pixels. A proxy for annual growth potential was developed from the spring difference metric of 2005 for use as an environmental data layer in desert tortoise habitat modeling. The application of the spring difference metric to other imagery years presents potential for other applications such as fuels, invasive species, and dust-emission monitoring in the Mojave Desert.

  13. Utility of Thin-Film Solar Cells on Flexible Substrates for Space Power

    NASA Technical Reports Server (NTRS)

    Dickman, J. E.; Hepp, A. F.; Morel, D. L.; Ferekides, C. S.; Tuttle, J. R.; Hoffman, D. J.; Dhere, N. G.

    2004-01-01

    The thin-film solar cell program at NASA GRC is developing solar cell technologies for space applications which address two critical metrics: specific power (power per unit mass) and launch stowed volume. To be competitive for many space applications, an array using thin film solar cells must significantly increase specific power while reducing stowed volume when compared to the present baseline technology utilizing crystalline solar cells. The NASA GRC program is developing two approaches. Since the vast majority of the mass of a thin film solar cell is in the substrate, a thin film solar cell on a very lightweight flexible substrate (polymer or metal films) is being developed as the first approach. The second approach is the development of multijunction thin film solar cells. Total cell efficiency can be increased by stacking multiple cells having bandgaps tuned to convert the spectrum passing through the upper cells to the lower cells. Once developed, the two approaches will be merged to yield a multijunction, thin film solar cell on a very lightweight, flexible substrate. The ultimate utility of such solar cells in space requires the development of monolithic interconnections, lightweight array structures, and ultra-lightweight support and deployment techniques.

  14. Engineering performance metrics

    NASA Astrophysics Data System (ADS)

    Delozier, R.; Snyder, N.

    1993-03-01

    Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved the normal systems design phases: conceptual design, detailed design, implementation, and integration. The lessons learned from this effort are explored in this paper and may provide a starting point for other large engineering organizations seeking to institute a performance measurement system. To facilitate this effort, a team consisting of customers and Engineering staff members was chartered to ensure that the needs and views of the customers were considered in the development of the performance measurements. The development of a system of metrics is no different from the development of any other type of system. It includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.

  15. Comparison of selective transmitters for solar thermal applications.

    PubMed

    Taylor, Robert A; Hewakuruppu, Yasitha; DeJarnette, Drew; Otanicar, Todd P

    2016-05-10

    Solar thermal collectors are radiative heat exchangers. Their efficacy is dictated predominantly by their absorption of short wavelength solar radiation and, importantly, by their emission of long wavelength thermal radiation. In conventional collector designs, the receiver is coated with a selectively absorbing surface (Black Chrome, TiNOx, etc.), which serves both of these aims. As the leading commercial absorber, TiNOx consists of several thin, vapor deposited layers (of metals and ceramics) on a metal substrate. In this technology, the solar absorption to thermal emission ratio can exceed 20. If a solar system requires an analogous transparent component, one which transmits the full AM1.5 solar spectrum but reflects long wavelength thermal emission, the technology is much less developed. Bespoke "heat mirrors" are available from optics suppliers at high cost, but the closest mass-produced commercial technology is low-e glass. Low-e glasses are designed for visible light transmission and, as such, they reflect up to 50% of available solar energy. To address this technical gap, this study investigated selected combinations of thin films that could be deposited to serve as transparent, selective solar covers. A comparative numerical analysis of feasible materials and configurations was investigated using a nondimensional metric termed the efficiency factor for selectivity (EFS). This metric is dependent on the operation temperature and solar concentration ratio of the system, so our analysis covered the practical range for these parameters. It was found that thin films of indium tin oxide (ITO) and ZnS-Ag-ZnS provided the highest EFS. Of these, ITO represents the more commercially viable solution for large-scale development. Based on these optimized designs, proof-of-concept ITO depositions were fabricated and compared to commercial depositions. Overall, this study presents a systematic guide for creating a new class of selective, transparent optics for solar thermal collectors.

  16. Landscape of Innovation for Cardiovascular Pharmaceuticals: From Basic Science to New Molecular Entities.

    PubMed

    Beierlein, Jennifer M; McNamee, Laura M; Walsh, Michael J; Kaitin, Kenneth I; DiMasi, Joseph A; Ledley, Fred D

    2017-07-01

    This study examines the complete timelines of translational science for new cardiovascular therapeutics from the initiation of basic research leading to identification of new drug targets through clinical development and US Food and Drug Administration (FDA) approval of new molecular entities (NMEs) based on this research. This work extends previous studies by examining the association between the growth of research on drug targets and approval of NMEs associated with these targets. Drawing on research on innovation in other technology sectors, where technological maturity is an important determinant in the success or failure of new product development, an analytical model was used to characterize the growth of research related to the known targets for all 168 approved cardiovascular therapeutics. Categorizing and mapping the technological maturity of cardiovascular therapeutics reveal that (1) there has been a distinct transition from phenotypic to targeted methods for drug discovery, (2) the durations of clinical and regulatory processes were significantly influenced by changes in FDA practice, and (3) the longest phase of the translational process was the time required for technology to advance from initiation of research to a statistically defined established point of technology maturation (mean, 30.8 years). This work reveals a normative association between metrics of research maturation and approval of new cardiovascular therapeutics and suggests strategies for advancing translational science by accelerating basic and applied research and improving the synchrony between the maturation of this research and drug development initiatives. Copyright © 2017 Elsevier HS Journals, Inc. All rights reserved.
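
    The analytical model of technology maturation can be sketched as a logistic fit to cumulative publication counts for a drug target; the synthetic counts, the logistic form, and the choice of 90% of saturation as the "established point" are assumptions for illustration, not the paper's exact model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, L, k, t0):
        """Logistic growth curve: L / (1 + exp(-k * (t - t0)))."""
        return L / (1.0 + np.exp(-k * (t - t0)))

    # Synthetic cumulative publication counts for one drug-target research area.
    years = np.arange(1970, 2015)
    rng = np.random.default_rng(1)
    counts = logistic(years, L=1200, k=0.25, t0=1992) + rng.normal(scale=25.0, size=years.size)

    # Fit the logistic model and locate an assumed "established point": the year
    # at which the fitted curve reaches 90% of its saturation level.
    (L, k, t0), _ = curve_fit(logistic, years, counts, p0=(counts.max(), 0.1, 1990))
    t_established = t0 + np.log(9.0) / k        # solve logistic(t) = 0.9 * L
    print(f"fitted saturation: {L:.0f} publications; established point ~ {t_established:.0f}")
    ```

    The initiation-to-establishment span reported in the study (mean, 30.8 years) corresponds to the gap between the start of research on a target and this kind of statistically defined maturation point.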

  17. Construction of self-dual codes in the Rosenbloom-Tsfasman metric

    NASA Astrophysics Data System (ADS)

    Krisnawati, Vira Hari; Nisa, Anzi Lina Ukhtin

    2017-12-01

    Linear codes are very basic codes and very useful in coding theory. Generally, a linear code is a code over a finite field in the Hamming metric. Among the most interesting families of codes, the family of self-dual codes is a very important one, because it contains some of the best-known error-correcting codes. The concept of the Hamming metric has been developed into the Rosenbloom-Tsfasman metric (RT-metric). The inner product in the RT-metric is different from the Euclidean inner product that is used to define duality in the Hamming metric. Most of the codes that are self-dual in the Hamming metric are not so in the RT-metric. Moreover, the generator matrix is very important for constructing a code because it contains a basis of the code. Therefore, in this paper, we give some theorems and methods for constructing self-dual codes in the RT-metric by considering properties of the inner product and the generator matrix. We also illustrate examples for every kind of construction.
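
    A small brute-force check of RT-metric self-duality over GF(2) is sketched below; it takes the RT inner product to be the coordinate-reversed form <u, v> = Σ u_i v_{n+1-i}, which is the form commonly used for RT duality and should be verified against the paper's conventions. The generator matrix is an invented toy example.

    ```python
    import itertools

    # Brute-force self-duality check over GF(2) under an RT-type inner product.
    def rt_inner(u, v):
        """Coordinate-reversed inner product <u, v> = sum_i u_i * v_{n-1-i} mod 2."""
        n = len(u)
        return sum(u[i] * v[n - 1 - i] for i in range(n)) % 2

    def span_gf2(gen):
        """All GF(2) linear combinations of the generator rows."""
        code = set()
        for coeffs in itertools.product((0, 1), repeat=len(gen)):
            word = tuple(sum(c * g for c, g in zip(coeffs, col)) % 2
                         for col in zip(*gen))
            code.add(word)
        return code

    def rt_dual(code, n):
        """All length-n words orthogonal (in the RT inner product) to the code."""
        return {w for w in itertools.product((0, 1), repeat=n)
                if all(rt_inner(w, c) == 0 for c in code)}

    # Toy generator matrix: each row is RT-isotropic and the rows are mutually
    # orthogonal under the reversed inner product, so the code is self-dual.
    gen = [(1, 0, 0, 1), (0, 1, 1, 0)]
    code = span_gf2(gen)
    print("self-dual in the RT sense:", code == rt_dual(code, 4))
    ```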

  18. NATO Code of Best Practice for C2 Assessment (revised)

    DTIC Science & Technology

    2002-10-01

    Based Research, Inc . for the United States Office of Naval Research. These collaboration metrics focus on individual and team cognitive/awareness, team...Troops, Time, and Civil considerations OOTW – Operations Other Than War PESTLE – Political, Economic, Social, Technological, Legal, and Environmental...Operational Analysis OOTW Operations Other Than War OR Operations Research P PESTLE Political, Economic, Social, Technological, Legal, and Environmental

  19. Entry, Descent, and Landing for Human Mars Missions

    NASA Technical Reports Server (NTRS)

    Munk, Michelle M.; Dwyer Cianciolo, Alicia M.

    2012-01-01

    One of the most challenging aspects of a human mission to Mars is landing safely on the Martian surface. Mars has such low atmospheric density that decelerating large masses (tens of metric tons) requires methods that have not yet been demonstrated, and are not yet planned in future Mars missions. To identify the most promising options for Mars entry, descent, and landing, and to plan development of the needed technologies, NASA's Human Architecture Team (HAT) has refined candidate methods for emplacing needed elements of the human Mars exploration architecture (such as ascent vehicles and habitats) on the Mars surface. This paper explains the detailed, optimized simulations that have been developed to define the mass needed at Mars arrival to accomplish the entry, descent, and landing functions. Based on previous work, technology options for hypersonic deceleration include rigid, mid-L/D (lift-to-drag ratio) aeroshells, and inflatable aerodynamic decelerators (IADs). The hypersonic IADs, or HIADs, are about 20% less massive than the rigid vehicles, but both have their technology development challenges. For the supersonic regime, supersonic retropropulsion (SRP) is an attractive option, since a propulsive stage must be carried for terminal descent and can be ignited at higher speeds. The use of SRP eliminates the need for an additional deceleration system, but SRP is at a low Technology Readiness Level (TRL) in that the interacting plumes are not well-characterized, and their effect on vehicle stability has not been studied, to date. These architecture-level assessments have been used to define the key performance parameters and a technology development strategy for achieving the challenging mission of landing large payloads on Mars.

  20. Performance metrics for the evaluation of hyperspectral chemical identification systems

    NASA Astrophysics Data System (ADS)

    Truslow, Eric; Golowich, Steven; Manolakis, Dimitris; Ingle, Vinay

    2016-02-01

    Remote sensing of chemical vapor plumes is a difficult but important task for many military and civilian applications. Hyperspectral sensors operating in the long-wave infrared regime have well-demonstrated detection capabilities. However, the identification of a plume's chemical constituents, based on a chemical library, is a multiple hypothesis testing problem which standard detection metrics do not fully describe. We propose using an additional performance metric for identification based on the so-called Dice index. Our approach partitions and weights a confusion matrix to develop both the standard detection metrics and identification metric. Using the proposed metrics, we demonstrate that the intuitive system design of a detector bank followed by an identifier is indeed justified when incorporating performance information beyond the standard detection metrics.
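
    The Dice index itself is elementary to compute from the set of library chemicals reported by the identifier and the set actually present in the plume; the chemical names below are placeholders, and the weighted confusion-matrix partitioning of the paper is not reproduced here.

    ```python
    # Dice index between the identified chemical set and the true plume contents:
    # D = 2 * |A & B| / (|A| + |B|).

    def dice_index(identified: set, truth: set) -> float:
        if not identified and not truth:
            return 1.0          # nothing present, nothing reported: perfect score
        return 2.0 * len(identified & truth) / (len(identified) + len(truth))

    truth = {"SF6", "NH3", "CH3OH"}            # chemicals actually in the plume
    identified = {"SF6", "NH3", "C2H4"}        # chemicals reported by the identifier

    print(f"Dice index = {dice_index(identified, truth):.3f}")   # 2*2/(3+3) = 0.667
    ```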

  1. MonALISA, an agent-based monitoring and control system for the LHC experiments

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Kcira, D.; Mughal, A.; Newman, H.; Spiropulu, M.; Vlimant, J. R.

    2017-10-01

    MonALISA, which stands for Monitoring Agents using a Large Integrated Services Architecture, has been developed over the last fifteen years by the California Institute of Technology (Caltech) and its partners with the support of the software and computing programs of the CMS and ALICE experiments at the Large Hadron Collider (LHC). The framework is based on a Dynamic Distributed Service Architecture and is able to provide complete system monitoring, performance metrics of applications, jobs, or services, system control, and global optimization services for complex systems. A short overview and status of MonALISA are given in this paper.

  2. Using TRACI for Sustainability Metrics

    EPA Science Inventory

    TRACI, the Tool for the Reduction and Assessment of Chemical and other environmental Impacts, has been developed for sustainability metrics, life cycle impact assessment, and product and process design impact assessment for developing increasingly sustainable products, processes,...

  3. Architecture and Functionality of the Advanced Life Support On-Line Project Information System (OPIS)

    NASA Technical Reports Server (NTRS)

    Hogan, John A.; Levri, Julie A.; Morrow, Rich; Cavazzoni, Jim; Rodriquez, Luis F.; Riano, Rebecca; Whitaker, Dawn R.

    2004-01-01

    An ongoing effort is underway at NASA Ames Research Center (ARC) to develop an On-line Project Information System (OPIS) for the Advanced Life Support (ALS) Program. The objective of this three-year project is to develop, test, revise, and deploy OPIS to enhance the quality of decision-making metrics and attainment of Program goals through improved knowledge sharing. OPIS will centrally locate detailed project information solicited from investigators on an annual basis and make it readily accessible by the ALS Community via a web-accessible interface. The data will be stored in an object-oriented relational database (created in MySQL(TM)) located on a secure server at NASA ARC. OPIS will simultaneously serve several functions, including being an R&TD status information hub that can potentially serve as the primary annual reporting mechanism. Using OPIS, ALS managers and element leads will be able to make informed research and technology development investment decisions, and analysts will be able to perform accurate systems evaluations. Additionally, the range and specificity of information solicited will serve to educate technology developers about programmatic needs. OPIS will collect comprehensive information from all ALS projects as well as highly detailed information specific to technology development in each ALS area (Waste, Water, Air, Biomass, Food, Thermal, and Control). Because the scope of needed information can vary dramatically between areas, element-specific technology information is being compiled with the aid of multiple specialized working groups. This paper presents the current development status in terms of the architecture and functionality of OPIS. Possible implementation approaches for OPIS are also discussed.

  4. Standardised metrics for global surgical surveillance.

    PubMed

    Weiser, Thomas G; Makary, Martin A; Haynes, Alex B; Dziekan, Gerald; Berry, William R; Gawande, Atul A

    2009-09-26

    Public health surveillance relies on standardised metrics to evaluate disease burden and health system performance. Such metrics have not been developed for surgical services despite increasing volume, substantial cost, and high rates of death and disability associated with surgery. The Safe Surgery Saves Lives initiative of WHO's Patient Safety Programme has developed standardised public health metrics for surgical care that are applicable worldwide. We assembled an international panel of experts to develop and define metrics for measuring the magnitude and effect of surgical care in a population, while taking into account economic feasibility and practicability. This panel recommended six measures for assessing surgical services at a national level: number of operating rooms, number of operations, number of accredited surgeons, number of accredited anaesthesia professionals, day-of-surgery death ratio, and postoperative in-hospital death ratio. We assessed the feasibility of gathering such statistics at eight diverse hospitals in eight countries and incorporated them into the WHO Guidelines for Safe Surgery, in which methods for data collection, analysis, and reporting are outlined.

  5. Advanced Metrics for Assessing Holistic Care: The "Epidaurus 2" Project.

    PubMed

    Foote, Frederick O; Benson, Herbert; Berger, Ann; Berman, Brian; DeLeo, James; Deuster, Patricia A; Lary, David J; Silverman, Marni N; Sternberg, Esther M

    2018-01-01

    In response to the challenge of military traumatic brain injury and posttraumatic stress disorder, the US military developed a wide range of holistic care modalities at the new Walter Reed National Military Medical Center, Bethesda, MD, from 2001 to 2017, guided by civilian expert consultation via the Epidaurus Project. These projects spanned a range from healing buildings to wellness initiatives and healing through nature, spirituality, and the arts. The next challenge was to develop whole-body metrics to guide the use of these therapies in clinical care. Under the "Epidaurus 2" Project, a national search produced 5 advanced metrics for measuring whole-body therapeutic effects: genomics, integrated stress biomarkers, language analysis, machine learning, and "Star Glyphs." This article describes the metrics, their current use in guiding holistic care at Walter Reed, and their potential for operationalizing personalized care, patient self-management, and the improvement of public health. Development of these metrics allows the scientific integration of holistic therapies with organ-system-based care, expanding the powers of medicine.

  6. Technical Interchange Meeting Guidelines Breakout

    NASA Technical Reports Server (NTRS)

    Fong, Rob

    2002-01-01

    Along with concept developers, the Systems Evaluation and Assessment (SEA) sub-element of VAMS will develop those scenarios and metrics required for testing the new concepts that reside within the System-Level Integrated Concepts (SLIC) sub-element in the VAMS project. These concepts will come from the NRA process, space act agreements, a university group, and other NASA researchers. The emphasis of those concepts is to increase capacity while at least maintaining the current safety level. The concept providers will initially develop their own scenarios and metrics for self-evaluation. In about a year, the SEA sub-element will become responsible for conducting initial evaluations of the concepts using a common scenario and metric set. This set may derive many components from the scenarios and metrics used by the concept providers. Ultimately, the common scenario/metric set will be used to help determine the most feasible and beneficial concepts. A set of 15 questions and issues, discussed below, pertaining to the scenario and metric set, and its use for assessing concepts, was submitted by the SEA sub-element for consideration during the breakout session. The questions were divided among the three breakout groups. Each breakout group deliberated on its set of questions and provided a report on its discussion.

  7. Force-Sensing Enhanced Simulation Environment (ForSense) for laparoscopic surgery training and assessment.

    PubMed

    Cundy, Thomas P; Thangaraj, Evelyn; Rafii-Tari, Hedyeh; Payne, Christopher J; Azzie, Georges; Sodergren, Mikael H; Yang, Guang-Zhong; Darzi, Ara

    2015-04-01

    Excessive or inappropriate tissue interaction force during laparoscopic surgery is a recognized contributor to surgical error, especially for robotic surgery. Measurement of force at the tool-tissue interface is, therefore, a clinically relevant skill assessment variable that may improve effectiveness of surgical simulation. Popular box trainer simulators lack the necessary technology to measure force. The aim of this study was to develop a force sensing unit that may be integrated easily with existing box trainer simulators and to (1) validate multiple force variables as objective measurements of laparoscopic skill, and (2) determine concurrent validity of a revised scoring metric. A base plate unit sensitized to a force transducer was retrofitted to a box trainer. Participants of 3 different levels of operative experience performed 5 repetitions of a peg transfer and suture task. Multiple outcome variables of force were assessed as well as a revised scoring metric that incorporated a penalty for force error. Mean, maximum, and overall magnitudes of force were significantly different among the 3 levels of experience, as well as force error. Experts were found to exert the least force and fastest task completion times, and vice versa for novices. Overall magnitude of force was the variable most correlated with experience level and task completion time. The revised scoring metric had similar predictive strength for experience level compared with the standard scoring metric. Current box trainer simulators can be adapted for enhanced objective measurements of skill involving force sensing. These outcomes are significantly influenced by level of expertise and are relevant to operative safety in laparoscopic surgery. Conventional proficiency standards that focus predominantly on task completion time may be integrated with force-based outcomes to be more accurately reflective of skill quality. Copyright © 2015 Elsevier Inc. All rights reserved.
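
    The force variables named above can be computed directly from a sampled force signal. In the sketch below, interpreting "overall magnitude of force" as the time-integral of force is our assumption; the paper's exact definition is not given in the abstract.

```python
# Sketch of the force variables named above; interpreting "overall
# magnitude" as the time-integral of force is our assumption.
import numpy as np

def force_variables(force, dt):
    """force: tool-tissue force samples (N); dt: sample interval (s)."""
    f = np.asarray(force, dtype=float)
    return {
        "mean_force": f.mean(),               # N
        "max_force": f.max(),                 # N
        "overall_magnitude": f.sum() * dt,    # approx. integral of force, N*s
    }
```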

  8. Can vehicle longitudinal jerk be used to identify aggressive drivers? An examination using naturalistic driving data.

    PubMed

    Feng, Fred; Bao, Shan; Sayer, James R; Flannagan, Carol; Manser, Michael; Wunderlich, Robert

    2017-07-01

    This paper investigated the characteristics of vehicle longitudinal jerk (the rate of change of acceleration with respect to time) using vehicle sensor data from an existing naturalistic driving study. The main objective was to examine whether vehicle jerk contains useful information that could potentially be used to identify aggressive drivers. Initial investigation showed that there are unique characteristics of vehicle jerk in drivers' gas and brake pedal operations. Thus two jerk-based metrics were examined: (1) a driver's frequency of using large positive jerk when pressing the gas pedal, and (2) a driver's frequency of using large negative jerk when pressing the brake pedal. To validate the performance of the two metrics, drivers were first divided into an aggressive group and a normal group using three classification methods: (1) traveling at excessive speed (speeding), (2) following too closely behind a front vehicle (tailgating), and (3) their association with crashes or near-crashes in the dataset. The results show that aggressive drivers defined using any of the three methods above were associated with significantly higher values of the two jerk-based metrics. Between the two metrics, the frequency of using large negative jerk appears to perform better in identifying aggressive drivers. A sensitivity analysis shows the findings were largely consistent under varying parameters in the analysis. The potential applications of this work include developing quantitative surrogate safety measures to identify aggressive drivers and aggressive driving, which could potentially be used to, for example, provide real-time or post-ride performance feedback to drivers, or warn surrounding drivers or vehicles using connected vehicle technologies. Copyright © 2017 Elsevier Ltd. All rights reserved.
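
    The two metrics reduce to counting threshold exceedances of the differentiated acceleration signal during pedal application. A minimal sketch, in which the 10 Hz sample rate and the ±5 m/s³ thresholds are illustrative assumptions, not the paper's values:

```python
# Sketch of the two jerk-based metrics; sample rate and thresholds assumed.
import numpy as np

def jerk_metrics(accel, dt=0.1, pos_thresh=5.0, neg_thresh=-5.0,
                 gas_pedal=None, brake_pedal=None):
    """Return events per hour of (large positive jerk on gas,
    large negative jerk on brake).

    accel: longitudinal acceleration samples (m/s^2)
    gas_pedal / brake_pedal: boolean masks marking pedal application
    """
    jerk = np.gradient(accel, dt)                     # d(accel)/dt, m/s^3
    if gas_pedal is None:
        gas_pedal = np.ones_like(jerk, dtype=bool)
    if brake_pedal is None:
        brake_pedal = np.ones_like(jerk, dtype=bool)
    hours_gas = gas_pedal.sum() * dt / 3600.0         # hours of gas-pedal use
    hours_brake = brake_pedal.sum() * dt / 3600.0
    pos_rate = np.sum((jerk > pos_thresh) & gas_pedal) / hours_gas
    neg_rate = np.sum((jerk < neg_thresh) & brake_pedal) / hours_brake
    return pos_rate, neg_rate
```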

  9. Installed Cost Benchmarks and Deployment Barriers for Residential Solar Photovoltaics with Energy Storage: Q1 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ardani, Kristen; O'Shaughnessy, Eric; Fu, Ran

    2016-12-01

    In this report, we fill a gap in the existing knowledge about PV-plus-storage system costs and value by providing detailed component- and system-level installed cost benchmarks for residential systems. We also examine other barriers to increased deployment of PV-plus-storage systems in the residential sector. The results are meant to help technology manufacturers, installers, and other stakeholders identify cost-reduction opportunities and inform decision makers about regulatory, policy, and market characteristics that impede solar plus storage deployment. In addition, our periodic cost benchmarks will document progress in cost reductions over time. To analyze costs for PV-plus-storage systems deployed in the first quarter of 2016, we adapt the National Renewable Energy Laboratory's component- and system-level cost-modeling methods for standalone PV. In general, we attempt to model best-in-class installation techniques and business operations from an installed-cost perspective. In addition to our original analysis, model development, and review of published literature, we derive inputs for our model and validate our draft results via interviews with industry and subject-matter experts. One challenge to analyzing the costs of PV-plus-storage systems is choosing an appropriate cost metric. Unlike standalone PV, energy storage lacks universally accepted cost metrics, such as dollars per watt of installed capacity and lifetime levelized cost of energy. We explain the difficulty of arriving at a standard approach for reporting storage costs and then provide the rationale for using the total installed costs of a standard PV-plus-storage system as our primary metric, rather than using a system-size-normalized metric.

  10. PROCESS TRANSFER FUNCTIONS TO RELATE STREAM ECOLOGICAL CONDITION METRICS TO NITRATE RETENTION

    EPA Science Inventory

    Ecologists have developed hydrological metrics to characterize the nutrient processing capability of streams. In most cases these are used qualitatively to draw inferences on ecological function. In this work, several of these metrics have been integrated in a nonsteady state adv...

  11. Traffic data collection and anonymous vehicle detection using wireless sensor networks : research summary.

    DOT National Transportation Integrated Search

    2012-05-01

    Problem: : Most Intelligent Transportation System (ITS) applications require distributed : acquisition of various traffic metrics such as traffic speed, volume, and density. : The existing measurement technologies, such as inductive loops, infrared, ...

  12. Using fish communities to assess streams in Romania: Initial development of an index of biotic integrity

    USGS Publications Warehouse

    Angermeier, P.L.; Davideanu, G.

    2004-01-01

    Multimetric biotic indices increasingly are used to complement physicochemical data in assessments of stream quality. We initiated development of multimetric indices, based on fish communities, to assess biotic integrity of streams in two physiographic regions of central Romania. Unlike previous efforts to develop such indices for European streams, our metrics and scoring criteria were selected largely on the basis of empirical relations in the regions of interest. We categorised 54 fish species with respect to ten natural-history attributes, then used this information to compute 32 candidate metrics of five types (taxonomic, tolerance, abundance, reproductive, and feeding) for each of 35 sites. We assessed the utility of candidate metrics for detecting anthropogenic impact based on three criteria: (a) range of values taken, (b) relation to a site-quality index (SQI), which incorporated information on hydrologic alteration, channel alteration, land-use intensity, and water chemistry, and (c) metric redundancy. We chose seven metrics from each region to include in preliminary multimetric indices (PMIs). Both PMIs included taxonomic, tolerance, and feeding metrics, but only two metrics were common to both PMIs. Although we could not validate our PMIs, their strong association with the SQI in each region suggests that such indices would be valuable tools for assessing stream quality and could provide more comprehensive assessments than the traditional approaches based solely on water chemistry.
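
    The three screening criteria can be expressed as a simple filter over candidate metrics. The sketch below is our reading of the procedure, with invented thresholds; the authors' actual cutoffs are not given in the abstract.

```python
# Our reading of the three screening criteria; all thresholds are invented.
import numpy as np
from scipy.stats import spearmanr

def screen_metrics(metric_values, sqi, min_range=1e-9, min_abs_rho=0.4,
                   max_redundancy=0.8):
    """metric_values: {metric name: array of values over sites};
    sqi: site-quality index over the same sites.
    Returns the metric names passing criteria (a)-(c)."""
    # (a) the metric must actually vary across sites
    cand = {m: v for m, v in metric_values.items() if np.ptp(v) > min_range}
    # (b) association with the site-quality index (Spearman rank correlation)
    cand = {m: v for m, v in cand.items()
            if abs(spearmanr(v, sqi)[0]) >= min_abs_rho}
    # (c) discard metrics redundant with one already kept
    kept = []
    for m, v in cand.items():
        if all(abs(spearmanr(v, cand[k])[0]) < max_redundancy for k in kept):
            kept.append(m)
    return kept
```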

  13. Tile-based rigidization surface parametric design study

    NASA Astrophysics Data System (ADS)

    Giner Munoz, Laura; Luntz, Jonathan; Brei, Diann; Kim, Wonhee

    2018-03-01

    Inflatable technologies have proven useful in consumer goods as well as in more recent applications including civil structures, aerospace, medical, and robotics. However, inflatable technologies are typically lacking in their ability to provide rigid structural support. Particle jamming improves upon this by providing structures which are normally flexible and moldable but become rigid when air is removed. Because these are based on an airtight bladder filled with loose particles, they always occupy the full volume of their rigid state, even when not rigidized. More recent developments in layer jamming have created thin, compact rigidizing surfaces, replacing the loose volume of particles with thinly layered surface materials. Work in this area has been applied to several specific applications with positive results but has not generally provided the broader understanding of rigidization performance as a function of design parameters required for directly adapting layer rigidization technology to other applications. This paper presents a parametric design study of a new layer jamming vacuum rigidization architecture: tile-based vacuum rigidization. This form of rigidization is based on layers of tiles contained within a thin vacuum bladder which can be bent, rolled, or otherwise compactly stowed, but when deployed flat, can be vacuumed to form a large, flat, rigid plate capable of supporting large forces, both localized and distributed over the surface. The general architecture and operation, detailing rigidization and compliance mechanisms, is introduced. To quantitatively characterize the rigidization behavior, prototype rigidization surfaces are fabricated and an experimental technique based on a 3-point bending test is developed. Performance evaluation metrics are developed to describe the stiffness, load-bearing capacity, and internal slippage of tested prototypes. A set of experimental parametric studies is performed to better understand the impact of variations in geometric design parameters, operating parameters, and architectural variations on the performance evaluation metrics. The results of this study bring insight into the rigidization behavior of this architecture, and provide design guidelines and expose tradeoffs that form the basis for the design of tile-based rigidization surfaces for a wide range of applications.

  14. The metrics and correlates of physician migration from Africa.

    PubMed

    Arah, Onyebuchi A

    2007-05-17

    Physician migration from poor to rich countries is considered an important contributor to the growing health workforce crisis in the developing world. This is particularly true for Africa. The perceived magnitude of such migration for each source country might, however, depend on the choice of metrics used in the analysis. This study examined the influence of the choice of migration metrics on the rankings of African countries that suffered the most physician migration, and investigated the correlates of physician migration. Ranking and correlational analyses were conducted on African physician migration data adjusted for bilateral net flows, and supplemented with developmental, economic and health system data. The setting was the 53 African birth countries of African-born physicians working in nine wealthier destination countries. Three metrics of physician migration were used: total number of physician émigrés; emigration fraction, defined as the proportion of the potential physician pool working in destination countries; and physician migration density, defined as the number of physician émigrés per 1000 population of the African source country. Rankings based on any of the migration metrics differed substantially from those based on the other two metrics. Although the emigration fraction and physician migration density metrics gave proportionality to the migration crisis, only the latter was consistently associated with source countries' workforce capacity, health, health spending, economic and development characteristics. As such, higher physician migration density was seen among African countries with relatively higher health workforce capacity (0.401 ≤ r ≤ 0.694, p ≤ 0.011), health status, health spending, and development. The perceived magnitude of physician migration is sensitive to the choice of metrics. Complementing the emigration fraction, the physician migration density is a metric which gives a different but proportionate picture of which African countries stand to lose relatively more of their physicians to unchecked migration. The nature of health policies geared at health-worker migration can be expected to depend on the choice of migration metrics.
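
    Written out, the three metrics defined above are (symbols ours):

```latex
% E: physician emigres working in the destination countries;
% P: physicians still working in the source country (E + P is the
% "potential physician pool"); N: population of the source country.
\text{total \'emigr\'es} = E, \qquad
\text{emigration fraction} = \frac{E}{E+P}, \qquad
\text{migration density} = \frac{1000\,E}{N}
```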

  15. Workshop summary: 'Integrating air quality and climate mitigation - is there a need for new metrics to support decision making?'

    NASA Astrophysics Data System (ADS)

    von Schneidemesser, E.; Schmale, J.; Van Aardenne, J.

    2013-12-01

    Air pollution and climate change are often treated at national and international level as separate problems under different regulatory or thematic frameworks and different policy departments. With air pollution and climate change being strongly linked with regard to their causes, effects and mitigation options, the integration of policies that steer air pollutant and greenhouse gas emission reductions might result in cost-efficient, more effective and thus more sustainable tackling of the two problems. To support informed decision making and to work towards an integrated air quality and climate change mitigation policy requires the identification, quantification and communication of present-day and potential future co-benefits and trade-offs. The identification of co-benefits and trade-offs requires the application of appropriate metrics that are well rooted in science, easy to understand, and reflect the needs of policy, industry and the public for informed decision making. For the purpose of this workshop, metrics were loosely defined as a quantified measure of effect or impact used to inform decision-making and to evaluate mitigation measures. The workshop, held on October 9 and 10 and co-organized by the European Environment Agency and the Institute for Advanced Sustainability Studies, brought together representatives from science, policy, NGOs, and industry to discuss whether currently available metrics are 'fit for purpose' or whether there is a need to develop alternative metrics or to reassess the way current metrics are used and communicated. Based on the workshop outcome, the presentation will (a) summarize the informational needs and current application of metrics by the end-users, who, depending on their field and area of operation, might require health, policy, and/or economically relevant parameters at different scales, (b) provide an overview of the state of the science of currently used and newly developed metrics, and the scientific validity of these metrics, and (c) identify gaps in the current information base, whether in the scientific development of metrics or in their application by different users.

  16. Flight Validation of a Metrics Driven L(sub 1) Adaptive Control

    NASA Technical Reports Server (NTRS)

    Dobrokhodov, Vladimir; Kitsios, Ioannis; Kaminer, Isaac; Jones, Kevin D.; Xargay, Enric; Hovakimyan, Naira; Cao, Chengyu; Lizarraga, Mariano I.; Gregory, Irene M.

    2008-01-01

    The paper addresses the initial steps involved in the development and flight implementation of a new metrics-driven L1 adaptive flight control system. The work concentrates on (i) definition of appropriate control-driven metrics that account for control surface failures; (ii) tailoring the recently developed L1 adaptive controller to the design of adaptive flight control systems that explicitly address these metrics in the presence of control surface failures and dynamic changes under adverse flight conditions; (iii) development of a flight control system for implementation of the resulting algorithms onboard a small UAV; and (iv) conducting a comprehensive flight test program that demonstrates the performance of the developed adaptive control algorithms in the presence of failures. As the initial milestone, the paper concentrates on the adaptive flight system setup and initial efforts addressing the ability of a commercial off-the-shelf autopilot (AP), with and without adaptive augmentation, to recover from control surface failures.

  17. Development of Quality Metrics in Ambulatory Pediatric Cardiology.

    PubMed

    Chowdhury, Devyani; Gurvitz, Michelle; Marelli, Ariane; Anderson, Jeffrey; Baker-Smith, Carissa; Diab, Karim A; Edwards, Thomas C; Hougen, Tom; Jedeikin, Roy; Johnson, Jonathan N; Karpawich, Peter; Lai, Wyman; Lu, Jimmy C; Mitchell, Stephanie; Newburger, Jane W; Penny, Daniel J; Portman, Michael A; Satou, Gary; Teitel, David; Villafane, Juan; Williams, Roberta; Jenkins, Kathy

    2017-02-07

    The American College of Cardiology Adult Congenital and Pediatric Cardiology (ACPC) Section had attempted to create quality metrics (QM) for ambulatory pediatric practice, but limited evidence made the process difficult. The ACPC sought to develop QMs for ambulatory pediatric cardiology practice. Five areas of interest were identified, and QMs were developed in a 2-step review process. In the first step, an expert panel, using the modified RAND-UCLA methodology, rated each QM for feasibility and validity. The second step sought input from ACPC Section members; final approval was by a vote of the ACPC Council. Work groups proposed a total of 44 QMs. Thirty-one metrics passed the RAND process and, after the open comment period, the ACPC council approved 18 metrics. The project resulted in successful development of QMs in ambulatory pediatric cardiology for a range of ambulatory domains. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  18. Green Chemistry Metrics with Special Reference to Green Analytical Chemistry.

    PubMed

    Tobiszewski, Marek; Marć, Mariusz; Gałuszka, Agnieszka; Namieśnik, Jacek

    2015-06-12

    The concept of green chemistry is widely recognized in chemical laboratories. To properly measure the environmental impact of chemical processes, dedicated assessment tools are required. This paper summarizes the current state of knowledge in the development of green chemistry and green analytical chemistry metrics. The diverse methods used for evaluation of the greenness of organic synthesis, such as eco-footprint, E-Factor, EATOS, and Eco-Scale, are described. Both the well-established and recently developed green analytical chemistry metrics, including NEMI labeling and the analytical Eco-Scale, are presented. Additionally, this paper focuses on the possibility of using multivariate statistics in the evaluation of the environmental impact of analytical procedures. All the above metrics are compared and discussed in terms of their advantages and disadvantages. The current needs and future perspectives in green chemistry metrics are also discussed.
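
    For concreteness, the E-Factor named above is conventionally defined (in Sheldon's formulation) as the mass of waste generated per unit mass of product:

```latex
% Sheldon's E-factor: mass of waste per unit mass of product.
E = \frac{m_{\mathrm{waste}}}{m_{\mathrm{product}}}
  = \frac{\sum_i m_{\mathrm{input},i} - m_{\mathrm{product}}}{m_{\mathrm{product}}}
```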

  19. Improvement of impact noise in a passenger car utilizing sound metric based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Kwon; Kim, Ho-Wuk; Na, Eun-Woo

    2010-08-01

    A new sound metric for impact sound is developed based on the continuous wavelet transform (CWT), a useful tool for the analysis of non-stationary signals such as impact noise. Together with the new metric, two other conventional sound metrics related to sound modulation and fluctuation are also considered. In all, three sound metrics are employed to develop impact sound quality indexes for several specific impact courses on the road. Impact sounds are evaluated subjectively by 25 jurors. The indexes are verified by comparing the correlation between the index output and the results of a subjective evaluation based on a jury test. These indexes are successfully applied to an objective evaluation for improvement of the impact sound quality in cases where some parts of the suspension system of the test car are modified.
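
    A CWT-based impact metric of the general kind described can be sketched as the peak time-scale energy of the signal. The scales, wavelet choice, and the metric itself are assumptions for illustration; the paper's actual metric definition is not given in the abstract.

```python
# Illustrative CWT-based impact metric (peak time-scale energy).
import numpy as np
import pywt

def impact_metric(signal, fs, scales=np.arange(1, 128)):
    coefs, _ = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
    energy = np.abs(coefs) ** 2        # time-scale energy map
    return energy.max()                # peak localized energy of the impact

# impulsive test signal: a decaying 200 Hz burst in low-level noise
fs = 44100
t = np.arange(0, 0.5, 1.0 / fs)
sig = np.exp(-30 * t) * np.sin(2 * np.pi * 200 * t) + 0.01 * np.random.randn(t.size)
print(impact_metric(sig, fs))
```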

  20. New tools for classification and monitoring of autoimmune diseases

    PubMed Central

    Maecker, Holden T.; Lindstrom, Tamsin M.; Robinson, William H.; Utz, Paul J.; Hale, Matthew; Boyd, Scott D.; Shen-Orr, Shai S.; Fathman, C. Garrison

    2012-01-01

    Rheumatologists see patients with a range of autoimmune diseases. Phenotyping these diseases for diagnosis, prognosis and selection of therapies is an ever-increasing problem. Advances in multiplexed assay technology at the gene, protein, and cellular level have enabled the identification of 'actionable biomarkers'; that is, biological metrics that can inform clinical practice. Not only will such biomarkers yield insight into the development, remission, and exacerbation of a disease, they will undoubtedly improve diagnostic sensitivity and accuracy of classification, and ultimately guide treatment. This Review provides an introduction to these powerful technologies that could promote the identification of actionable biomarkers, including mass cytometry, protein arrays, and immunoglobulin and T-cell receptor high-throughput sequencing. In our opinion, these technologies should become part of routine clinical practice for the management of autoimmune diseases. The use of analytical tools to deconvolve the data obtained from use of these technologies is also presented here. These analyses are revealing a more comprehensive and interconnected view of the immune system than ever before and should have an important role in directing future treatment approaches for autoimmune diseases. PMID:22647780

  1. 2014 Summer Series - Ethiraj Venkatapathy - Mary Poppins Approach to Human Mars Mission Entry, Descent and Landing

    NASA Image and Video Library

    2014-06-17

    NASA is investing in a number of technologies to extend Entry, Descent and Landing (EDL) capabilities to enable human missions to Mars. These technologies will also enable robotic science missions. Human missions will require landing payloads of tens of metric tons, which is not possible with today's technology. Decelerating from entry speeds of around 15,000 miles per hour to landing in a matter of minutes will require very large drag or deceleration. One way to achieve the required deceleration is to deploy a large surface that can be stowed during launch and deployed prior to entry. This talk will highlight a simple concept similar to an umbrella. Though the concept is simple, the size required for human Mars missions and the heating encountered during entry are significant challenges. The mechanically deployable system can also enable robotic science missions to Venus and is equally applicable for bringing back cube-satellites and other small payloads. The scalable concept, called Adaptive Deployable Entry and Placement Technology (ADEPT), is under development and is the focus of this talk.

  2. Why Physics in Medicine?

    PubMed

    Samei, Ehsan; Grist, Thomas M

    2018-05-18

    Despite its crucial role in the development of new medical imaging technologies, in clinical practice, physics has primarily been involved in the technical evaluation of technologies. However, this narrow role is no longer adequate. New trajectories in medicine call for a stronger role for physics in the clinic. The movement toward evidence-based, quantitative, and value-based medicine requires physicists to play a more integral role in delivering innovative precision care through the intentional clinical application of physical sciences. There are three aspects of this clinical role: technology assessment based on metrics as they relate to expected clinical performance, optimized use of technologies for patient-centered clinical outcomes, and retrospective analysis of imaging operations to ensure attainment of expectations in terms of quality and variability. These tasks fuel the drive toward high-quality, consistent practice of medical imaging that is patient centered, evidence based, and safe. While this particular article focuses on imaging, this trajectory and paradigm is equally applicable to the multitudes of the applications of physics in medicine. Copyright © 2018 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  3. 22 CFR 226.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    22 Foreign Relations, Part 1 (2010-04-01). Section 226.15, Metric system of measurement. AGENCY FOR INTERNATIONAL DEVELOPMENT, ADMINISTRATION OF ASSISTANCE AWARDS TO U.S. NON-GOVERNMENTAL ORGANIZATIONS, Pre-award Requirements, § 226.15 Metric system of measurement. (a...

  4. Measuring Sustainability of Urban Systems: An Exploration based on Two Metrics for the Chicago Metropolitan Area

    EPA Science Inventory

    This presentation comprises two sustainability metrics that have been developed for the Chicago Metropolitan Area under the SHC research program. The first sustainability metric is Ecological Footprint Analysis. Ecological Footprint Analysis (EFA) has been extensively deploy...

  5. App Usage Factor: A Simple Metric to Compare the Population Impact of Mobile Medical Apps.

    PubMed

    Lewis, Thomas Lorchan; Wyatt, Jeremy C

    2015-08-19

    One factor when assessing the quality of mobile apps is quantifying the impact of a given app on a population. There is currently no metric which can be used to compare the population impact of a mobile app across different health care disciplines. The objective of this study is to create a novel metric to characterize the impact of a mobile app on a population. We developed a simple novel metric, the app usage factor (AUF), defined as the logarithm of the product of the number of active users of a mobile app and the median number of daily uses of the app. The behavior of this metric was modeled through simulation in Python, a general-purpose programming language. Three simulations were conducted to explore the temporal and numerical stability of our metric and a simulated app ecosystem model using a simulated dataset of 20,000 apps. Simulations confirmed the metric was stable between predicted usage limits and remained stable at the extremes of these limits. Analysis of a simulated dataset of 20,000 apps calculated an average value for the app usage factor of 4.90 (SD 0.78). A temporal simulation showed that the metric remained stable over time, and suitable limits for its use were identified. A key component when assessing app risk and potential harm is understanding the potential population impact of each mobile app. Our metric has many potential uses for a wide range of stakeholders in the app ecosystem, including users, regulators, developers, and health care professionals. Furthermore, this metric forms part of the overall estimate of risk and potential for harm or benefit posed by a mobile medical app. We identify the merits and limitations of this metric, as well as potential avenues for future validation and research.
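
    The AUF definition quoted above transcribes directly to code. The base-10 logarithm is an assumption, as the abstract says only "logarithm"; the inputs below are invented and happen to land near the reported simulated average.

```python
# Direct transcription of the quoted AUF definition; base-10 log is assumed.
import math

def app_usage_factor(active_users: int, median_daily_uses: float) -> float:
    """AUF = log(active users x median daily uses of the app)."""
    return math.log10(active_users * median_daily_uses)

print(app_usage_factor(50_000, 1.6))  # ~4.90 with these invented inputs
```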

  6. Economic Incentives for Cybersecurity: Using Economics to Design Technologies Ready for Deployment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vishik, Claire; Sheldon, Frederick T; Ott, David

    Cybersecurity practice lags behind cyber technology achievements. Solutions designed to address many problems may and do exist but frequently cannot be broadly deployed due to economic constraints. Whereas security economics focuses on cost/benefit analysis and supply and demand, we believe that more sophisticated theoretical approaches, such as economic modeling, which are rarely utilized, would deliver greater societal benefits. Unfortunately, today technologists pursuing interesting and elegant solutions have little knowledge of the feasibility of broad deployment of their results and cannot anticipate the influences of other technologies, existing infrastructure, and technology evolution, nor bring the solution lifecycle into the equation. Additionally, potentially viable solutions are not adopted because the risk perceptions of potential providers and users far outweigh the economic incentives to support introduction and adoption of new best practices and technologies that are not well enough defined. In some cases, there is no alignment with predominant and future business models as well as regulatory and policy requirements. This paper provides an overview of the economics of security, reviewing work that helped to define economic models for the Internet economy from the 1990s. We bring forward examples of the potential use of theoretical economics in defining metrics for emerging technology areas, positioning infrastructure investment, and building real-time response capability as part of software development. These diverse examples help us understand the gaps in current research. Filling these gaps will be instrumental for defining viable economic incentives, economic policies, and regulations, as well as early-stage technology development approaches, that can speed up commercialization and deployment of new technologies in cybersecurity.

  7. Methodology for assessing laser-based equipment

    NASA Astrophysics Data System (ADS)

    Pelegrina-Bonilla, Gabriel; Hermsdorf, Jörg; Thombansen, Ulrich; Abels, Peter; Kaierle, Stefan; Neumann, Jörg

    2017-10-01

    Methodologies for the assessment of a technology's maturity are widely used in industry and research. Probably the best known are technology readiness levels (TRLs), initially pioneered by the National Aeronautics and Space Administration (NASA). At the beginning, only descriptively defined TRLs existed, but over time, automated assessment techniques in the form of questionnaires emerged in order to determine TRLs. Originally TRLs targeted equipment for space applications, but the demands on industrially relevant equipment are partly different in terms of, for example, overall costs, product quantities, or the presence of competitors. Therefore, we present a generally valid assessment methodology with the aim of assessing laser-based equipment for industrial use. The assessment is carried out with the help of a questionnaire, which allows for a user-friendly and easily accessible way to monitor progress from the lab-proven state to the application-ready product throughout the complete development period. The assessment result is presented in a multidimensional metric in order to reveal the current specific strengths and weaknesses of the equipment development process, which can be used to steer the remaining development of the equipment in the right direction.

  8. Information technology model for evaluating emergency medicine teaching

    NASA Astrophysics Data System (ADS)

    Vorbach, James; Ryan, James

    1996-02-01

    This paper describes work in progress to develop an Information Technology (IT) model and supporting information system for the evaluation of clinical teaching in the Emergency Medicine (EM) Department of North Shore University Hospital. In the academic hospital setting student physicians, i.e. residents, and faculty function daily in their dual roles as teachers and students respectively, and as health care providers. Databases exist that are used to evaluate both groups in either academic or clinical performance, but rarely has this information been integrated to analyze the relationship between academic performance and the ability to care for patients. The goal of the IT model is to improve the quality of teaching of EM physicians by enabling the development of integrable metrics for faculty and resident evaluation. The IT model will include (1) methods for tracking residents in order to develop experimental databases; (2) methods to integrate lecture evaluation, clinical performance, resident evaluation, and quality assurance databases; and (3) a patient flow system to monitor patient rooms and the waiting area in the Emergency Medicine Department, to record and display status of medical orders, and to collect data for analyses.

  9. Story - Science - Solutions: A new middle school science curriculum that promotes climate-stewardship

    NASA Astrophysics Data System (ADS)

    Cordero, E.; Centeno Delgado, D. C.

    2017-12-01

    Over the last five years, Green Ninja has been developing educational media to help motivate student interest and engagement around climate science and solutions. The adoption of the Next Generation Science Standards (NGSS) offers a unique opportunity where schools are changing both what they teach in a science class and how they teach. Inspired by the new emphasis in NGSS on climate change, human impact, and engineering design, Green Ninja developed a technology-focused, integrative, yearlong science curriculum (6th, 7th, and 8th grade) centered broadly on solutions to environmental problems. The use of technology supports the development of skills valuable for students, while also offering real-time metrics to help measure both student learning and the environmental impact of student actions. During the presentation, we will describe the design philosophy behind our middle school curriculum and share data from a series of classes that have created environmental benefits that transcend the traditional classroom. The notion that formal education, if done correctly, can be leveraged as a viable climate mitigation strategy will be discussed.

  10. The Steinberg-Bernstein Centre for Minimally Invasive Surgery at McGill University.

    PubMed

    Fried, Gerald M

    2005-12-01

    Surgical skills and simulation centers have been developed in recent years to meet the educational needs of practicing surgeons, residents, and students. The rapid pace of innovation in surgical procedures and technology, as well as the overarching desire to enhance patient safety, have driven the development of simulation technology and new paradigms for surgical education. McGill University has implemented an innovative approach to surgical education in the field of minimally invasive surgery. The goal is to measure surgical performance in the operating room using practical, reliable, and valid metrics, which allow the educational needs of the learner to be established and enable feedback and performance to be tracked over time. The GOALS system and the MISTELS program have been developed to measure operative performance and minimally invasive surgical technical skills in the inanimate skills lab, respectively. The MISTELS laparoscopic simulation-training program has been incorporated as the manual skills education and evaluation component of the Fundamentals of Laparoscopic Surgery program distributed by the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES) and the American College of Surgeons.

  11. The analysis of MAI in large scale MIMO-CDMA system

    NASA Astrophysics Data System (ADS)

    Berceanu, Madalina-Georgiana; Voicu, Carmen; Halunga, Simona

    2016-12-01

    Recently, technological development has imposed rapid growth in the use of data carried by cellular services, which also implies the need for higher data rates and lower latency. To meet users' demands, a series of new data processing techniques has been brought into discussion. In this paper, we consider MIMO technology, which uses multiple antennas at the receiver and transmitter ends. To study the performance obtained by this technology, we propose a MIMO-CDMA system in which image transmission is used instead of random data transmission to take advantage of a larger range of quality indicators. In the simulations we increased the number of antennas and observed how the performance of the system changed; based on that, we were able to compare a conventional MIMO system and a Large Scale MIMO system in terms of BER and the MSSIM index, a metric that compares the quality of the image before transmission with that of the received one.
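
    For reference, the MSSIM index mentioned above is the mean over local windows of the standard structural similarity measure; the constants C1 and C2 stabilize the ratio (the paper's exact settings are not stated):

```latex
% Per-window structural similarity; MSSIM is its mean over all windows.
\mathrm{SSIM}(x,y) =
  \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}
       {(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}
```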

  12. Expert Consensus on Metrics to Assess the Impact of Patient-Level Antimicrobial Stewardship Interventions in Acute-Care Settings

    PubMed Central

    Anderson, Deverick J.; Cochran, Ronda L.; Hicks, Lauri A.; Srinivasan, Arjun; Dodds Ashley, Elizabeth S.

    2017-01-01

    Antimicrobial stewardship programs (ASPs) positively impact patient care, but metrics to assess ASP impact are poorly defined. We used a modified Delphi approach to select relevant metrics for assessing patient-level interventions in acute-care settings for the purposes of internal program decision making. An expert panel rated 90 candidate metrics on a 9-point Likert scale for association with 4 criteria: improved antimicrobial prescribing, improved patient care, utility in targeting stewardship efforts, and feasibility in hospitals with electronic health records. Experts further refined, added, or removed metrics during structured teleconferences and re-rated the retained metrics. Six metrics were rated >6 in all criteria: 2 measures of Clostridium difficile incidence, incidence of drug-resistant pathogens, days of therapy over admissions, days of therapy over patient days, and redundant therapy events. Fourteen metrics rated >6 in all criteria except feasibility were identified as targets for future development. PMID:27927866
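
    Two of the six retained metrics are simple utilization rates. A minimal sketch, noting that normalization per 1000 patient-days is the usual convention and an assumption here:

```python
# Two of the six retained metrics as simple rates; counts are illustrative.
def dot_per_admission(days_of_therapy: int, admissions: int) -> float:
    """Days of antimicrobial therapy per admission."""
    return days_of_therapy / admissions

def dot_per_1000_patient_days(days_of_therapy: int, patient_days: int) -> float:
    """Days of therapy per 1000 patient-days (the usual normalization;
    the abstract says only 'days of therapy over patient days')."""
    return 1000.0 * days_of_therapy / patient_days
```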

  13. Power, Performance, and Perception (P3): Integrating Usability Metrics and Technology Acceptance Determinants to Validate a New Model for Predicting System Usage

    DTIC Science & Technology

    1999-12-01

    Ajzen, I. and M. Fishbein. Understanding Attitudes and Predicting Social Behavior. Prentice-Hall, Englewood Cliffs, NJ: 1980. Alwang, Greg. "Speech...Decline: Computer Introduction in the Financial Industry." Technological Forecasting and Social Change 31: 143-154. Fishbein, M. and I. Ajzen. Belief... Theory of Reasoned Action (TRA) (Fishbein and Ajzen, 1980); Theory of Planned Behavior (TPB) (Ajzen, 1991); Technology Acceptance Model

  14. Implementation of microchip electrophoresis instrumentation for future spaceflight missions.

    PubMed

    Willis, Peter A; Creamer, Jessica S; Mora, Maria F

    2015-09-01

    We present a comprehensive discussion of the role that microchip electrophoresis (ME) instrumentation could play in future NASA missions of exploration, as well as the current barriers that must be overcome to make this type of chemical investigation possible. We describe how ME would be able to fill fundamental gaps in our knowledge of the potential for past, present, or future life beyond Earth. Despite the great promise of ME for ultrasensitive portable chemical analysis, to date, it has never been used on a robotic mission of exploration to another world. We provide a current snapshot of the technology readiness level (TRL) of ME instrumentation, where the TRL is the NASA systems engineering metric used to evaluate the maturity of technology, and its fitness for implementation on missions. We explain how the NASA flight implementation process would apply specifically to ME instrumentation, and outline the scientific and technology development issues that must be addressed for ME analyses to be performed successfully on another world. We also outline research demonstrations that could be accomplished by independent researchers to help advance the TRL of ME instrumentation for future exploration missions. The overall approach described here for system development could be readily applied to a wide range of other instrumentation development efforts having broad societal and commercial impact.

  15. New Objective Refraction Metric Based on Sphere Fitting to the Wavefront

    PubMed Central

    Martínez-Finkelshtein, Andreí

    2017-01-01

    Purpose To develop an objective refraction formula based on the ocular wavefront error (WFE) expressed in terms of Zernike coefficients and pupil radius, which would be an accurate predictor of subjective spherical equivalent (SE) for different pupil sizes. Methods A sphere is fitted to the ocular wavefront at the center and at a variable distance, t. The optimal fitting distance, topt, is obtained empirically from a dataset of 308 eyes as a function of objective refraction pupil radius, r0, and used to define the formula of a new wavefront refraction metric (MTR). The metric is tested in another, independent dataset of 200 eyes. Results For pupil radii r0 ≤ 2 mm, the new metric predicts the equivalent sphere with similar accuracy (<0.1D); however, for r0 > 2 mm, the mean error of traditional metrics can increase beyond 0.25D, while the MTR remains accurate. The proposed metric allows clinicians to obtain an accurate clinical spherical equivalent value without rescaling/refitting of the wavefront coefficients. It has the potential to be developed into a metric which will be able to predict full spherocylindrical refraction for the desired illumination conditions and corresponding pupil size. PMID:29104804

  16. New Objective Refraction Metric Based on Sphere Fitting to the Wavefront.

    PubMed

    Jaskulski, Mateusz; Martínez-Finkelshtein, Andreí; López-Gil, Norberto

    2017-01-01

    To develop an objective refraction formula based on the ocular wavefront error (WFE) expressed in terms of Zernike coefficients and pupil radius, which would be an accurate predictor of subjective spherical equivalent (SE) for different pupil sizes. A sphere is fitted to the ocular wavefront at the center and at a variable distance, t. The optimal fitting distance, topt, is obtained empirically from a dataset of 308 eyes as a function of objective refraction pupil radius, r0, and used to define the formula of a new wavefront refraction metric (MTR). The metric is tested in another, independent dataset of 200 eyes. For pupil radii r0 ≤ 2 mm, the new metric predicts the equivalent sphere with similar accuracy (<0.1D); however, for r0 > 2 mm, the mean error of traditional metrics can increase beyond 0.25D, while the MTR remains accurate. The proposed metric allows clinicians to obtain an accurate clinical spherical equivalent value without rescaling/refitting of the wavefront coefficients. It has the potential to be developed into a metric which will be able to predict full spherocylindrical refraction for the desired illumination conditions and corresponding pupil size.

  17. Quality metrics in high-dimensional data visualization: an overview and systematization.

    PubMed

    Bertini, Enrico; Tatu, Andrada; Keim, Daniel

    2011-12-01

    In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research. © 2010 IEEE

  18. Metrication report to the Congress

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The major NASA metrication activity of 1988 concerned the Space Station. Although the metric system was the baseline measurement system for preliminary design studies, solicitations for final design and development of the Space Station Freedom requested use of the inch-pound system because of concerns with cost impact and potential safety hazards. Under that policy, however, use of the metric system would be permitted through waivers where its use was appropriate. Late in 1987, several Department of Defense decisions were made to increase commitment to the metric system, thereby broadening the potential base of metric involvement in U.S. industry. A re-evaluation of Space Station Freedom units-of-measure policy was, therefore, initiated in January 1988.

  19. First benchmark of the Unstructured Grid Adaptation Working Group

    NASA Technical Reports Server (NTRS)

    Ibanez, Daniel; Barral, Nicolas; Krakos, Joshua; Loseille, Adrien; Michal, Todd; Park, Mike

    2017-01-01

    Unstructured grid adaptation is a technology that holds the potential to improve the automation and accuracy of computational fluid dynamics and other computational disciplines. Difficulty producing the highly anisotropic elements necessary for simulation on complex curved geometries that satisfy a resolution request has limited this technology's widespread adoption. The Unstructured Grid Adaptation Working Group is an open gathering of researchers working on adapting simplicial meshes to conform to a metric field. Current members span a wide range of institutions including academia, industry, and national laboratories. The purpose of this group is to create a common basis for understanding and improving mesh adaptation. We present our first major contribution: a common set of benchmark cases, including input meshes and analytic metric specifications, that are publicly available to be used for evaluating any mesh adaptation code. We also present the results of several existing codes on these benchmark cases, to illustrate their utility in identifying key challenges common to all codes and important differences between available codes. Future directions are defined to expand this benchmark to mature the technology necessary to impact practical simulation workflows.
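
    To make "analytic metric specification" concrete: mesh adaptation codes accept a symmetric positive-definite tensor field M(x) whose unit-length edges (e^T M e = 1) encode the requested local resolution. The particular sizing function below is made up for illustration.

```python
# Illustrative analytic metric field: an SPD tensor whose unit-length mesh
# edges are short near the plane y = 0.5; all sizes are made up.
import numpy as np

def metric_field(x, y, z):
    """Return the 3x3 SPD metric M(x, y, z) prescribing target edge sizes."""
    hx = 0.1                              # uniform target size along x
    hy = 0.01 + 0.2 * abs(y - 0.5)        # strong refinement near y = 0.5
    hz = 0.1
    return np.diag([1.0 / hx**2, 1.0 / hy**2, 1.0 / hz**2])

print(metric_field(0.2, 0.5, 0.7))
```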

  20. Use of similarity scoring in the development of oral solid dosage forms.

    PubMed

    Ferreira, Ana P; Olusanmi, Dolapo; Sprockel, Omar; Abebe, Admassu; Nikfar, Faranak; Tobyn, Mike

    2016-12-05

    In the oral solid dosage form space, material physical properties have a strong impact on the behaviour of the formulation during processing. The ability to identify materials with similar characteristics (and thus expected to exhibit similar behaviour) within the company's portfolio can help accelerate drug development by enabling early assessment and prediction of potential challenges associated with the powder properties of a new active pharmaceutical ingredient. Such developments will aid the production of robust dosage forms in an efficient manner. Similarity scoring metrics are widely used in a number of scientific fields. This study proposes a practical implementation of this methodology within pharmaceutical development. The similarity metric developed is based on the Mahalanobis distance. Scanning electron microscopy was used to confirm morphological similarity between the reference material and the closest matches identified by the proposed metric. The results show that the proposed metric is able to successfully identify materials with similar physical properties. Copyright © 2015 Elsevier B.V. All rights reserved.
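
    A minimal sketch of Mahalanobis-distance similarity scoring over material property vectors follows; the property columns and all numbers are illustrative, not from the study.

```python
# Illustrative Mahalanobis-distance similarity scoring; all numbers invented.
import numpy as np
from scipy.spatial.distance import mahalanobis

# rows: library materials; columns: physical properties
# (e.g. particle size (um), bulk density (g/mL), a flowability index)
library = np.array([
    [45.0, 0.52, 1.21],
    [120.0, 0.61, 1.45],
    [60.0, 0.55, 1.30],
    [200.0, 0.70, 1.60],
    [90.0, 0.48, 1.12],
])
reference = np.array([55.0, 0.54, 1.25])   # the new API's property vector

VI = np.linalg.inv(np.cov(library, rowvar=False))   # inverse covariance
distances = [mahalanobis(reference, row, VI) for row in library]
closest = int(np.argmin(distances))        # most similar library material
print(closest, round(distances[closest], 3))
```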

  1. Technological innovation in neurosurgery: a quantitative study.

    PubMed

    Marcus, Hani J; Hughes-Hallett, Archie; Kwasnicki, Richard M; Darzi, Ara; Yang, Guang-Zhong; Nandi, Dipankar

    2015-07-01

    Technological innovation within health care may be defined as the introduction of a new technology that initiates a change in clinical practice. Neurosurgery is a particularly technology-intensive surgical discipline, and new technologies have preceded many of the major advances in operative neurosurgical techniques. The aim of the present study was to quantitatively evaluate technological innovation in neurosurgery using patents and peer-reviewed publications as metrics of technology development and clinical translation, respectively. The authors searched a patent database for patents published between 1960 and 2010 using the Boolean search term "neurosurgeon OR neurosurgical OR neurosurgery." The top 50 performing patent codes were then grouped into technology clusters. Patent and publication growth curves were then generated for these technology clusters. A top-performing technology cluster was then selected as an exemplar for a more detailed analysis of individual patents. In all, 11,672 patents and 208,203 publications related to neurosurgery were identified. The top-performing technology clusters during these 50 years were image-guidance devices, clinical neurophysiology devices, neuromodulation devices, operating microscopes, and endoscopes. In relation to image-guidance and neuromodulation devices, the authors found a highly correlated rapid rise in the numbers of patents and publications, which suggests that these are areas of technology expansion. An in-depth analysis of neuromodulation-device patents revealed that the majority of well-performing patents were related to deep brain stimulation. Patent and publication data may be used to quantitatively evaluate technological innovation in neurosurgery.

  2. Developing a Metric for the Cost of Green House Gas Abatement

    DOT National Transportation Integrated Search

    2017-02-28

    The authors introduce the levelized cost of carbon (LCC), a metric that can be used to evaluate MassDOT CO2 abatement projects in terms of their cost-effectiveness. The study presents ways in which the metric can be used to rank projects. The data ar...

  3. Numerical Calabi-Yau metrics

    NASA Astrophysics Data System (ADS)

    Douglas, Michael R.; Karp, Robert L.; Lukic, Sergio; Reinbacher, René

    2008-03-01

    We develop numerical methods for approximating Ricci flat metrics on Calabi-Yau hypersurfaces in projective spaces. Our approach is based on finding balanced metrics and builds on recent theoretical work by Donaldson. We illustrate our methods in detail for a one parameter family of quintics. We also suggest several ways to extend our results.
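
    Schematically, the balanced-metric approach iterates Donaldson's T-operator on a Hermitian matrix G defined over a basis s_α of the N global sections of L^k, with the balanced metric as its fixed point. Conventions vary between references, so the following should be read as a sketch:

```latex
% Donaldson's T-operator on a Hermitian matrix G over sections s_alpha;
% its fixed point is the balanced metric (schematic; conventions vary).
T(G)_{\alpha\bar\beta}
  = \frac{N}{\operatorname{vol}(X)} \int_X
    \frac{s_\alpha\,\bar{s}_{\bar\beta}}
         {\sum_{\gamma\bar\delta} G^{\gamma\bar\delta}\, s_\gamma\,\bar{s}_{\bar\delta}}
    \,\mathrm{dvol}, \qquad G_{n+1} = T(G_n)
```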

  4. Technology survey on video face tracking

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Gomes, Herman Martins

    2014-03-01

    With the pervasiveness of monitoring cameras installed in public areas, schools, hospitals, workplaces and homes, video analytics technologies for interpreting these video contents are becoming increasingly relevant to people's lives. Among such technologies, human face detection and tracking (and face identification in many cases) are particularly useful in various application scenarios. While plenty of research has been conducted on face tracking and many promising approaches have been proposed, there are still significant challenges in recognizing and tracking people in videos with uncontrolled capturing conditions, largely due to pose and illumination variations, as well as occlusions and cluttered backgrounds. It is especially complex to track and identify multiple people simultaneously in real time due to the large amount of computation involved. In this paper, we present a survey of the literature and software published or developed in recent years on face tracking. The survey covers the following topics: 1) mainstream and state-of-the-art face tracking methods, including features used to model the targets and metrics used for tracking; 2) face identification and face clustering from face sequences; and 3) software packages or demonstrations that are available for algorithm development or trial. A number of publicly available databases for face tracking are also introduced.

  5. Propulsion IVHM Technology Experiment

    NASA Technical Reports Server (NTRS)

    Chicatelli, Amy K.; Maul, William A.; Fulton, Christopher E.

    2006-01-01

    The Propulsion IVHM Technology Experiment (PITEX) successfully demonstrated real-time fault detection and isolation of a virtual reusable launch vehicle (RLV) main propulsion system (MPS). Specifically, the PITEX research project developed and applied a model-based diagnostic system for the MPS of the X-34 RLV, a space-launch technology demonstrator. The demonstration was simulation-based using detailed models of the propulsion subsystem to generate nominal and failure scenarios during captive carry, which is the most safety-critical portion of the X-34 flight. Since no system-level testing of the X-34 Main Propulsion System (MPS) was performed, these simulated data were used to verify and validate the software system. Advanced diagnostic and signal processing algorithms were developed and tested in real time on flight-like hardware. In an attempt to expose potential performance problems, the PITEX diagnostic system was subjected to numerous realistic effects in the simulated data including noise, sensor resolution, command/valve talkback information, and nominal build variations. In all cases, the PITEX system performed as required. The research demonstrated potential benefits of model-based diagnostics, defined performance metrics required to evaluate the diagnostic system, and studied the impact of real-world challenges encountered when monitoring propulsion subsystems.

  6. In-situ resource utilization technologies for Mars life support systems.

    PubMed

    Sridhar, K R; Finn, J E; Kliss, M H

    2000-01-01

    The atmosphere of Mars has many of the ingredients that can be used to support human exploration missions. It can be "mined" and processed to produce oxygen, buffer gas, and water, resulting in significant savings on mission costs. The use of local materials, called ISRU (for in-situ resource utilization), is clearly an essential strategy for a long-term human presence on Mars from the standpoints of self-sufficiency, safety, and cost. Currently a substantial effort is underway by NASA to develop technologies and designs of chemical plants to make propellants from the Martian atmosphere. Consumables for life support, such as oxygen and water, will probably benefit greatly from this ISRU technology development for propellant production. However, the buffer gas needed to dilute oxygen for breathing is not a product of a propellant production plant. The buffer gas needs of each human Mars mission will probably be on the order of metric tons, primarily due to losses during airlock activity. Buffer gas can be separated, compressed, and purified from the Mars atmosphere. This paper discusses the buffer gas needs for a human mission to Mars and considers architectures for the generation of buffer gas, including an option that integrates it with the propellant production plant.

  7. The Development of Expertise in Radiology: In Chest Radiograph Interpretation, "Expert" Search Pattern May Predate "Expert" Levels of Diagnostic Accuracy for Pneumothorax Identification.

    PubMed

    Kelly, Brendan S; Rainford, Louise A; Darcy, Sarah P; Kavanagh, Eoin C; Toomey, Rachel J

    2016-07-01

    Purpose To investigate the development of chest radiograph interpretation skill through medical training by measuring both diagnostic accuracy and eye movements during visual search. Materials and Methods An institutional exemption from full ethical review was granted for the study. Five consultant radiologists were deemed the reference expert group, and four radiology registrars, five senior house officers (SHOs), and six interns formed four clinician groups. Participants were shown 30 chest radiographs, 14 of which showed a pneumothorax, and were asked to give their level of confidence as to whether a pneumothorax was present. Receiver operating characteristic (ROC) curve analysis was carried out on the diagnostic decisions. Eye movements were recorded with a Tobii TX300 (Tobii Technology, Stockholm, Sweden) eye tracker, and four eye-tracking metrics were analyzed. Variables were compared to identify differences between groups, with all data compared using the Friedman nonparametric method. Results The average area under the ROC curve increased with experience (0.947 for consultants, 0.792 for registrars, 0.693 for SHOs, and 0.659 for interns; P = .009). A significant difference in diagnostic accuracy was found between consultants and registrars (P = .046). All four eye-tracking metrics decreased with experience, with significant differences between registrars and SHOs. Total reading time decreased with experience; it was significantly lower for registrars than for SHOs (P = .046) and for SHOs than for interns (P = .025). Conclusion Chest radiograph interpretation skill increased with experience, in terms of both diagnostic accuracy and visual search. The level of experience at which a significant difference was observed was higher for diagnostic accuracy than for the eye-tracking metrics. © RSNA, 2016. Online supplemental material is available for this article.
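
    The record does not include the authors' analysis code. A standard way to compute an area under the ROC curve from reader confidence ratings, as reported in studies of this kind, is sketched below with hypothetical data; scikit-learn treats each rating as a score.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Hypothetical data: 1 = pneumothorax present, 0 = absent, and each
    # reader's confidence rating on an assumed 1-5 scale.
    truth  = np.array([1, 1, 0, 0, 1, 0, 1, 0])
    rating = np.array([5, 4, 2, 1, 3, 2, 5, 3])

    auc = roc_auc_score(truth, rating)  # area under the ROC curve
    print(f"AUC = {auc:.3f}")
    ```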

  8. Characterizing the heterogeneity of tumor tissues from spatially resolved molecular measures

    PubMed Central

    Zavodszky, Maria I.

    2017-01-01

    Background Tumor heterogeneity can manifest itself by sub-populations of cells having distinct phenotypic profiles expressed as diverse molecular, morphological and spatial distributions. This inherent heterogeneity poses challenges in terms of diagnosis, prognosis and efficient treatment. Consequently, tools and techniques are being developed to properly characterize and quantify tumor heterogeneity. Multiplexed immunofluorescence (MxIF) is one such technology that offers molecular insight into both inter-individual and intratumor heterogeneity. It enables the quantification of both the concentration and spatial distribution of 60+ proteins across a tissue section. Upon bioimage processing, protein expression data can be generated for each cell from a tissue field of view. Results The Multi-Omics Heterogeneity Analysis (MOHA) tool was developed to compute tissue heterogeneity metrics from MxIF spatially resolved tissue imaging data. This technique computes the molecular state of each cell in a sample based on a pathway or gene set. Spatial states are then computed based on the spatial arrangements of the cells as distinguished by their respective molecular states. MOHA computes tissue heterogeneity metrics from the distributions of these molecular and spatially defined states. A colorectal cancer cohort of approximately 700 subjects with MxIF data is presented to demonstrate the MOHA methodology. Within this dataset, statistically significant correlations were found between the intratumor AKT pathway state diversity and cancer stage and histological tumor grade. Furthermore, intratumor spatial diversity metrics were found to correlate with cancer recurrence. Conclusions MOHA provides a simple and robust approach to characterize molecular and spatial heterogeneity of tissues. Research projects that generate spatially resolved tissue imaging data can take full advantage of this useful technique. The MOHA algorithm is implemented as a freely available R script (see supplementary information). PMID:29190747
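
    MOHA's exact heterogeneity formulas live in the paper's supplement rather than in this abstract. Purely as an illustration of the kind of metric involved, the sketch below computes a common diversity measure, the Shannon index, over per-cell molecular state labels; it should not be read as the MOHA algorithm itself.

    ```python
    import numpy as np
    from collections import Counter

    def shannon_diversity(states):
        """Shannon diversity H = -sum(p_i * ln p_i) over cell-state labels.

        Illustrative only: an assumed stand-in for a tissue heterogeneity
        metric, not MOHA's published definition.
        """
        counts = np.array(list(Counter(states).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log(p)).sum())

    # e.g. hypothetical AKT-pathway states assigned per cell in one section
    print(shannon_diversity(["high", "high", "low", "mid", "low", "high"]))
    ```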

  9. Interlaboratory Study Characterizing a Yeast Performance Standard for Benchmarking LC-MS Platform Performance*

    PubMed Central

    Paulovich, Amanda G.; Billheimer, Dean; Ham, Amy-Joan L.; Vega-Montoto, Lorenzo; Rudnick, Paul A.; Tabb, David L.; Wang, Pei; Blackman, Ronald K.; Bunk, David M.; Cardasis, Helene L.; Clauser, Karl R.; Kinsinger, Christopher R.; Schilling, Birgit; Tegeler, Tony J.; Variyath, Asokan Mulayath; Wang, Mu; Whiteaker, Jeffrey R.; Zimmerman, Lisa J.; Fenyo, David; Carr, Steven A.; Fisher, Susan J.; Gibson, Bradford W.; Mesri, Mehdi; Neubert, Thomas A.; Regnier, Fred E.; Rodriguez, Henry; Spiegelman, Cliff; Stein, Stephen E.; Tempst, Paul; Liebler, Daniel C.

    2010-01-01

    Optimal performance of LC-MS/MS platforms is critical to generating high-quality proteomics data. Although individual laboratories have developed quality control samples, there is no widely available performance standard of biological complexity (and associated reference data sets) for benchmarking platform performance for analysis of complex biological proteomes across different laboratories in the community. Individual preparations of the yeast Saccharomyces cerevisiae proteome have been used extensively by laboratories in the proteomics community to characterize LC-MS platform performance. The yeast proteome is uniquely attractive as a performance standard because it is the most extensively characterized complex biological proteome and the only one associated with several large-scale studies estimating the abundance of all detectable proteins. In this study, we describe a standard operating protocol for large-scale production of the yeast performance standard and offer aliquots to the community through the National Institute of Standards and Technology, where the yeast proteome is under development as a certified reference material to meet the long-term needs of the community. Using a series of metrics that characterize LC-MS performance, we provide a reference data set demonstrating typical performance of commonly used ion trap instrument platforms in expert laboratories; the results provide a basis for laboratories to benchmark their own performance, to improve upon current methods, and to evaluate new technologies. Additionally, we demonstrate how the yeast reference, spiked with human proteins, can be used to benchmark the power of proteomics platforms for detection of differentially expressed proteins at different levels of concentration in a complex matrix, thereby providing a metric to evaluate and minimize preanalytical and analytical variation in comparative proteomics experiments. PMID:19858499

  10. U.S. Department of Energy Reference Model Program RM1: Experimental Results.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, Craig; Neary, Vincent Sinclair; Gunawan, Budi

    The Reference Model Project (RMP), sponsored by the U.S. Department of Energy's (DOE) Wind and Water Power Technologies Program within the Office of Energy Efficiency & Renewable Energy (EERE), aims at expediting industry growth and efficiency by providing nonproprietary Reference Models (RM) of MHK technology designs as study objects for open-source research and development (Neary et al. 2014a,b). As part of this program, MHK turbine models were tested in a large open channel facility at the University of Minnesota's St. Anthony Falls Laboratory (UMN-SAFL). Reference Model 1 (RM1) is a 1:40 geometric scale dual-rotor axial-flow horizontal-axis device with counter-rotating rotors, each with a rotor diameter d_T = 0.5 m. Precise blade angular position and torque measurements were synchronized with three acoustic Doppler velocimeters (ADVs) aligned with each rotor and the midpoint for RM1. Flow conditions for each case were controlled such that the depth was h = 1 m and the volumetric flow rate was Q_w = 2.425 m³/s, resulting in a hub-height velocity of approximately U_hub = 1.05 m/s and blade chord length Reynolds numbers of Re_c ≈ 3.0×10⁵. Vertical velocity profiles collected in the wake of each device from 1 to 10 rotor diameters are used to estimate the velocity recovery and turbulent characteristics in the wake, as well as the interaction of the counter-rotating rotor wakes. This high-resolution laboratory investigation provides a robust dataset for assessing turbulence performance models and their ability to accurately predict device performance metrics, including computational fluid dynamics (CFD) models that can be used to predict turbulent inflow environments and to reproduce wake velocity deficit, recovery, and higher-order turbulent statistics.
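
    The reported flow numbers can be roughly cross-checked. The chord Reynolds number is Re_c = W·c/ν, where W is the blade-relative flow speed; the tip-speed ratio and chord below are illustrative assumptions, since the record gives neither.

    ```python
    # Rough consistency check on the reported Re_c ~ 3.0e5. Only u_hub and
    # the rotor diameter come from the record; tsr and chord are assumed.
    nu = 1.0e-6        # kinematic viscosity of water, m^2/s (approx., ~20 C)
    u_hub = 1.05       # hub-height inflow velocity, m/s (from the record)
    tsr = 6.0          # assumed tip-speed ratio (not given in the record)
    chord = 0.05       # assumed representative blade chord, m

    w = (u_hub**2 + (tsr * u_hub)**2) ** 0.5   # relative speed near the tip
    re_c = w * chord / nu
    print(f"Re_c ~ {re_c:.2e}")                # ~3e5 under these assumptions
    ```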

  11. Monitoring changes in landscape pattern: use of Ikonos and Quickbird images.

    PubMed

    Alphan, Hakan; Çelik, Nil

    2016-02-01

    This paper analyzes short-term changes in landscape pattern, resulting primarily from building development, on the east coast of Mersin Province (Turkey). Three sites were selected. Ikonos (2003) and Quickbird (2009) images of these sites were classified, and land cover transformations were quantitatively analyzed using cross-tabulation of the classification results. Changes in landscape structure were assessed by comparing the calculated values of area/edge and shape metrics between the earlier and later dates. Area/edge metrics included percentage of land and edge density, while shape metrics included perimeter-area ratio, fractal dimension, and related circumscribing circle (RCC) metrics. Orchards and buildings were the dominant land cover classes. Variations in patch edge, size, and shape were also analyzed and discussed. The degradation of prime agricultural areas due to building development, and the implications of such development for habitat fragmentation, were highlighted.
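
    The shape metrics named here have standard FRAGSTATS-style patch-level definitions. A minimal sketch of the perimeter-area ratio and the fractal dimension index follows; the RCC metric is omitted because it requires a smallest-enclosing-circle routine. The formulas are the usual FRAGSTATS ones, assumed rather than taken from this paper.

    ```python
    import math

    def para(perimeter_m, area_m2):
        """Perimeter-area ratio of a single patch."""
        return perimeter_m / area_m2

    def frac(perimeter_m, area_m2):
        """FRAGSTATS-style fractal dimension index for one patch:
        FRAC = 2 * ln(0.25 * P) / ln(A). Approaches 1 for simple shapes
        (e.g. squares) and 2 for highly convoluted perimeters."""
        return 2.0 * math.log(0.25 * perimeter_m) / math.log(area_m2)

    # a 100 m x 100 m square patch: PARA = 0.04, FRAC = 1.0
    print(para(400.0, 10_000.0), frac(400.0, 10_000.0))
    ```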

  12. [Estimating individual tree aboveground biomass of the mid-subtropical forest using airborne LiDAR technology].

    PubMed

    Liu, Feng; Tan, Chang; Lei, Pi-Feng

    2014-11-01

    Taking the Wugang forest farm in the Xuefeng Mountains as the study object and using airborne light detection and ranging (LiDAR) data acquired under leaf-on conditions together with field data from concomitant plots, this paper assessed the ability of LiDAR technology to estimate the aboveground biomass of mid-subtropical forest. A semi-automated individual-tree segmentation of the LiDAR point cloud was obtained using conditional random fields and optimization methods. Spatial structure, waveform characteristics, and topography were calculated as LiDAR metrics from the segmented objects. Statistical models were then built between aboveground biomass from the field data and these LiDAR metrics. The individual-tree recognition rates were 93%, 86%, and 60% for coniferous, broadleaf, and mixed forests, respectively. The adjusted coefficients of determination (R²_adj) and root mean squared errors (RMSE) for the three forest types were 0.83, 0.81, and 0.74, and 28.22, 29.79, and 32.31 t·hm⁻², respectively. The estimation capability of the model based on canopy geometric volume, tree percentile height, slope, and waveform characteristics was much better than that of a traditional regression model based on tree height. LiDAR metrics derived from individual trees can therefore support better performance in biomass estimation.
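
    The goodness-of-fit statistics reported here (adjusted R² and RMSE) have standard definitions; the sketch below shows how they are typically computed for a fitted biomass model. It is illustrative and not the authors' code.

    ```python
    import numpy as np

    def fit_metrics(y_obs, y_pred, n_predictors):
        """Adjusted R^2 and RMSE, as commonly reported for biomass models."""
        y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
        n = y_obs.size
        ss_res = np.sum((y_obs - y_pred) ** 2)        # residual sum of squares
        ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)  # total sum of squares
        r2 = 1.0 - ss_res / ss_tot
        r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)
        rmse = np.sqrt(ss_res / n)
        return r2_adj, rmse
    ```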

  13. Doped graphene supercapacitors.

    PubMed

    Kumar, Nanjundan Ashok; Baek, Jong-Beom

    2015-12-11

    Heteroatom-doped graphitic frameworks have received great attention in energy research, since doping endows graphitic structures with a wide spectrum of properties, especially critical for electrochemical supercapacitors, which tend to complement or compete with the current lithium-ion battery technology/devices. This article reviews the latest developments in the chemical modification/doping strategies of graphene and highlights the versatility of such heteroatom-doped graphitic structures. Their role as supercapacitor electrodes is discussed in detail. This review is specifically focused on the concept of material synthesis, techniques for electrode fabrication and metrics of performance, predominantly covering the last four years. Challenges and insights into the future research and perspectives on the development of novel electrode architectures for electrochemical supercapacitors based on doped graphene are also discussed.

  14. Doped graphene supercapacitors

    NASA Astrophysics Data System (ADS)

    Ashok Kumar, Nanjundan; Baek, Jong-Beom

    2015-12-01

    Heteroatom-doped graphitic frameworks have received great attention in energy research, since doping endows graphitic structures with a wide spectrum of properties, especially critical for electrochemical supercapacitors, which tend to complement or compete with the current lithium-ion battery technology/devices. This article reviews the latest developments in the chemical modification/doping strategies of graphene and highlights the versatility of such heteroatom-doped graphitic structures. Their role as supercapacitor electrodes is discussed in detail. This review is specifically focused on the concept of material synthesis, techniques for electrode fabrication and metrics of performance, predominantly covering the last four years. Challenges and insights into the future research and perspectives on the development of novel electrode architectures for electrochemical supercapacitors based on doped graphene are also discussed.

  15. Develop metrics of tire debris on Texas highways : technical report.

    DOT National Transportation Integrated Search

    2017-05-01

    This research effort estimated the amount, characteristics, costs, and safety implications of tire debris on Texas highways. The metrics developed by this research are based on several sources of data, including a statewide survey of debris removal p...

  16. The Validation by Measurement Theory of Proposed Object-Oriented Software Metrics

    NASA Technical Reports Server (NTRS)

    Neal, Ralph D.

    1996-01-01

    Moving software development into the engineering arena requires controllability, and to control a process, it must be measurable. Measuring the process does no good if the product is not also measured, i.e., being the best at producing an inferior product does not define a quality process. Also, not every number extracted from software development is a valid measurement. A valid measurement only results when we are able to verify that the number is representative of the attribute that we wish to measure. Many proposed software metrics are used by practitioners without these metrics ever having been validated, leading to costly but often useless calculations. Several researchers have bemoaned the lack of scientific precision in much of the published software measurement work and have called for validation of software metrics by measurement theory. This dissertation applies measurement theory to validate fifty proposed object-oriented software metrics.

  17. Metrics for Operator Situation Awareness, Workload, and Performance in Automated Separation Assurance Systems

    NASA Technical Reports Server (NTRS)

    Strybel, Thomas Z.; Vu, Kim-Phuong L.; Battiste, Vernol; Dao, Arik-Quang; Dwyer, John P.; Landry, Steven; Johnson, Walter; Ho, Nhut

    2011-01-01

    A research consortium of scientists and engineers from California State University Long Beach (CSULB), San Jose State University Foundation (SJSUF), California State University Northridge (CSUN), Purdue University, and The Boeing Company was assembled to evaluate the impact of changes in roles and responsibilities, and of new automated technologies being introduced in the Next Generation Air Transportation System (NextGen), on operator situation awareness (SA) and workload. To meet these goals, consortium members performed systems analyses of NextGen concepts and airspace scenarios, and concurrently evaluated SA, workload, and performance measures to assess their appropriateness for evaluations of NextGen concepts and tools. The following activities and accomplishments were supported by the NRA: a distributed simulation, metric development, systems analysis, part-task simulations, and large-scale simulations. As a result of this NRA, we have gained a greater understanding of situation awareness and its measurement and have shared our knowledge with the scientific community. This network provides a mechanism for consortium members, colleagues, and students to pursue research on other topics in air traffic management and aviation, enabling them to make greater contributions to the field.

  18. Microgravity Science and Applications. Program Tasks and Bibliography for FY 1993

    NASA Technical Reports Server (NTRS)

    1994-01-01

    An annual report published by the Microgravity Science and Applications Division (MSAD) of NASA is presented. It is a compilation of the Division's currently funded ground, flight, and Advanced Technology Development (ATD) tasks. An overview and progress report for these tasks, including progress reports by principal investigators selected from the academic, industry, and government communities, are provided. The document includes a listing of new bibliographic data provided by the principal investigators to reflect the dissemination of research data during FY 1993 via publications and presentations. The document also includes division research metrics and an index of the funded investigators. The document contains three sections and three appendices: Section 1 includes an introduction and metrics data; Section 2 is a compilation of the task reports, ordered by their ground, flight, or ATD status and the science discipline they represent; and Section 3 is the bibliography. The three appendices, in order of presentation, are: Appendix A, a microgravity science acronym list; Appendix B, a list of guest investigators associated with a biotechnology task; and Appendix C, an index of the currently funded principal investigators.

  19. Be aware of the Adjusted Treatment Index.

    PubMed

    Langford, Melvyn

    2015-10-01

    The authors of the interim report on the Review of Operational Productivity in NHS providers, published in June of this year, are, as many will know, developing a set of Adjusted Treatment Index (ATI) metrics, and are also to publish a model of their interpretation of what an estates department should look like in terms of its operational productivity and cost. This article argues that the underlying reason for past failures was the creation of static 'point-value' metrics similar to the proposed ATIs. It contends that this can only be overcome by designing and populating a series of non-linear dynamic simulation models, with feedback control, of an organisation's estate in relation to its asset base and condition over time, together with the resultant financial capital and revenue consequences. It concludes by calling on IHEEM's Council to urgently make representations to the authors of the June 2015 report, and suggests that the Institute's members be fully involved in the design, testing, and interpretation of the estates model and ATIs. IHEEM's Technology Platforms are ideally placed to play a central role in this.

  20. Information fusion performance evaluation for motion imagery data using mutual information: initial study

    NASA Astrophysics Data System (ADS)

    Grieggs, Samuel M.; McLaughlin, Michael J.; Ezekiel, Soundararajan; Blasch, Erik

    2015-06-01

    As technology and internet use grow at an exponential rate, video and imagery data are becoming increasingly important. Various techniques such as Wide Area Motion Imagery (WAMI), Full Motion Video (FMV), and Hyperspectral Imaging (HSI) are used to collect motion data and extract relevant information. Detecting and identifying a particular object in imagery data is an important step in understanding visual imagery, for example in content-based image retrieval (CBIR). In our system, imagery data are segmented, automatically analyzed, and stored in a dynamic and robust database, and we seek to utilize image fusion methods, which in turn require quality metrics. Many Image Fusion (IF) algorithms have been proposed, but only a few metrics are used to evaluate their performance. In this paper, we seek a robust, objective metric for evaluating the performance of IF algorithms that compares the outcome of a given algorithm to ground truth and reports several types of errors. Given the ground truth of motion imagery data, it computes detection failures, false alarms, precision and recall metrics, background and foreground region statistics, as well as splits and merges of foreground regions. Using the Structural Similarity Index (SSIM), Mutual Information (MI), and entropy metrics, experimental results demonstrate the effectiveness of the proposed methodology for object detection, activity exploitation, and CBIR.
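
    The entropy and mutual information metrics named in this record can be estimated from image histograms. The sketch below is a minimal illustration (histogram-based estimators with an assumed 256 bins), not the authors' implementation.

    ```python
    import numpy as np

    def entropy(img, bins=256):
        """Shannon entropy (bits) of a grayscale image from its histogram."""
        hist, _ = np.histogram(img, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def mutual_information(img_a, img_b, bins=256):
        """Mutual information (bits) between two same-sized grayscale images,
        estimated from their joint intensity histogram."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal over rows
        py = pxy.sum(axis=0, keepdims=True)   # marginal over columns
        nz = pxy > 0                          # avoid log(0)
        return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
    ```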
