Sample records for "existing methods require"

  1. ASSESSING AND COMBINING RELIABILITY OF PROTEIN INTERACTION SOURCES

    PubMed Central

    LEACH, SONIA; GABOW, AARON; HUNTER, LAWRENCE; GOLDBERG, DEBRA S.

    2008-01-01

    Integrating diverse sources of interaction information to create protein networks requires strategies sensitive to differences in accuracy and coverage of each source. Previous integration approaches calculate reliabilities of protein interaction information sources based on congruity to a designated ‘gold standard.’ In this paper, we provide a comparison of the two most popular existing approaches and propose a novel alternative for assessing reliabilities which does not require a gold standard. We identify a new method for combining the resultant reliabilities and compare it against an existing method. Further, we propose an extrinsic approach to evaluation of reliability estimates, considering their influence on the downstream tasks of inferring protein function and learning regulatory networks from expression data. Results using this evaluation method show 1) our method for reliability estimation is an attractive alternative to those requiring a gold standard and 2) the new method for combining reliabilities is less sensitive to noise in reliability assignments than the similar existing technique. PMID:17990508
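
    The abstract describes combining per-source reliabilities into a single confidence for an interaction. As a purely illustrative sketch (not the authors' estimator), one common combination rule is noisy-OR, which assumes the sources err independently; the reliability values below are hypothetical.

      # Illustrative sketch only, not the method from the paper: noisy-OR
      # combination of hypothetical per-source reliabilities.
      def combined_confidence(reliabilities):
          """Confidence that an interaction is real, assuming each source is
          wrong independently with probability (1 - reliability)."""
          p_all_wrong = 1.0
          for r in reliabilities:
              p_all_wrong *= (1.0 - r)
          return 1.0 - p_all_wrong

      # e.g. an interaction reported by two sources with reliabilities 0.5 and 0.8
      print(combined_confidence([0.5, 0.8]))  # 0.9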

  2. 75 FR 13 - Alternate Fracture Toughness Requirements for Protection Against Pressurized Thermal Shock Events

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-04

    ...The Nuclear Regulatory Commission (NRC) is amending its regulations to provide alternate fracture toughness requirements for protection against pressurized thermal shock (PTS) events for pressurized water reactor (PWR) pressure vessels. This final rule provides alternate PTS requirements based on updated analysis methods. This action is desirable because the existing requirements are based on unnecessarily conservative probabilistic fracture mechanics analyses. This action reduces regulatory burden for those PWR licensees who expect to exceed the existing requirements before the expiration of their licenses, while maintaining adequate safety, and may choose to comply with the final rule as an alternative to complying with the existing requirements.

  3. 14 CFR § 1251.301 - Existing facilities.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... OF HANDICAP Accessibility § 1251.301 Existing facilities. (a) Accessibility. A recipient shall... entirety it is readily accessible to handicapped persons. This paragraph does not require a recipient to... handicapped persons. (b) Methods. A recipient may comply with the requirement of paragraph (a) of this section...

  4. 40 CFR 63.312 - Existing regulations and requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... National Emission Standards for Coke Oven Batteries § 63.312 Existing regulations and requirements. (a) The..., topside port lids, coke oven doors, and charging operations in effect on September 15, 1992, or which have... method of monitoring in effect on September 15, 1992, and that ensures coke oven emission reductions...

  5. 40 CFR 63.312 - Existing regulations and requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... National Emission Standards for Coke Oven Batteries § 63.312 Existing regulations and requirements. (a) The..., topside port lids, coke oven doors, and charging operations in effect on September 15, 1992, or which have... method of monitoring in effect on September 15, 1992, and that ensures coke oven emission reductions...

  6. 40 CFR 63.312 - Existing regulations and requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... National Emission Standards for Coke Oven Batteries § 63.312 Existing regulations and requirements. (a) The..., topside port lids, coke oven doors, and charging operations in effect on September 15, 1992, or which have... method of monitoring in effect on September 15, 1992, and that ensures coke oven emission reductions...

  7. 40 CFR 63.312 - Existing regulations and requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... National Emission Standards for Coke Oven Batteries § 63.312 Existing regulations and requirements. (a) The..., topside port lids, coke oven doors, and charging operations in effect on September 15, 1992, or which have... method of monitoring in effect on September 15, 1992, and that ensures coke oven emission reductions...

  8. 40 CFR 63.312 - Existing regulations and requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... National Emission Standards for Coke Oven Batteries § 63.312 Existing regulations and requirements. (a) The..., topside port lids, coke oven doors, and charging operations in effect on September 15, 1992, or which have... method of monitoring in effect on September 15, 1992, and that ensures coke oven emission reductions...

  9. 78 FR 40000 - Method for the Determination of Lead in Total Suspended Particulate Matter

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-03

    .... Purpose of the New Reference Method B. Rationale for Selection of the New Reference Method C. Comments on.../files/ambient/criteria/reference-equivalent-methods-list.pdf. C. Comments on the Proposed Rule On... information collection requirements beyond those imposed by the existing Pb monitoring requirements. C...

  10. 34 CFR 104.22 - Existing facilities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., delivery of health, welfare, or other social services at alternate accessible sites, alteration of existing... not required to make structural changes in existing facilities where other methods are effective in... handicapped persons in the most integrated setting appropriate. (c) Small health, welfare, or other social...

  11. Virus Particle Detection by Convolutional Neural Network in Transmission Electron Microscopy Images.

    PubMed

    Ito, Eisuke; Sato, Takaaki; Sano, Daisuke; Utagawa, Etsuko; Kato, Tsuyoshi

    2018-06-01

    A new computational method for the detection of virus particles in transmission electron microscopy (TEM) images is presented. Our approach is to use a convolutional neural network that transforms a TEM image to a probabilistic map that indicates where virus particles exist in the image. Our proposed approach automatically and simultaneously learns both discriminative features and a classifier for virus particle detection by machine learning, in contrast to existing methods that are based on handcrafted features that yield many false positives and require several postprocessing steps. The detection performance of the proposed method was assessed against a dataset of TEM images containing feline calicivirus particles and compared with several existing detection methods, and the state-of-the-art performance of the developed method for detecting viruses was demonstrated. Since our method is based on supervised learning that requires both the input images and their corresponding annotations, it is basically used for detection of already-known viruses. However, the method is highly flexible, and the convolutional networks can adapt themselves to any virus particles by learning automatically from an annotated dataset.
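
    As a minimal sketch of the general idea (an image-to-probability-map convolutional network), the PyTorch fragment below uses placeholder layer sizes; it is not the architecture from the paper.

      # Minimal sketch; layer sizes are placeholders, not the published network.
      import torch
      import torch.nn as nn

      class ParticleMap(nn.Module):
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, kernel_size=1),  # per-pixel logit
              )

          def forward(self, x):
              # x: (batch, 1, H, W) grayscale TEM image
              return torch.sigmoid(self.net(x))  # per-pixel probability of a particle

      model = ParticleMap()
      prob_map = model(torch.rand(1, 1, 256, 256))  # same spatial size as the input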

  12. New Internet search volume-based weighting method for integrating various environmental impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

    Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts as a single index. Weighting factors should be based on society's preferences. However, most previous studies consider only the opinion of some people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts using the Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new weighting method were compared with the existing weighting factors. The resulting Pearson's correlation coefficient between the new and existing weighting factors was from 0.8743 to 0.9889. It turned out that the new weighting method presents reasonable weighting factors. It also requires less time and lower cost compared to existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining the weighting factor. - Highlights: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects public opinion using Internet search volume. • The correlation coefficient between the new and existing weighting factors is over 0.87. • The new weighting method can present reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.
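
    The core calculation the abstract describes (turn search volumes into normalized weighting factors, then correlate them with existing factors) can be sketched as follows; the volumes and existing factors here are invented, not data from the study.

      # Toy sketch; the search volumes and existing factors are invented.
      import numpy as np
      from scipy.stats import pearsonr

      search_volume = np.array([120000., 8000., 15000., 9000., 20000., 5000.])  # six impact categories
      new_weights = search_volume / search_volume.sum()        # normalized weighting factors

      existing_weights = np.array([0.60, 0.05, 0.09, 0.06, 0.14, 0.06])
      r, _ = pearsonr(new_weights, existing_weights)
      print(new_weights.round(3), round(r, 4))                 # factors and their Pearson correlation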

  13. An automated and universal method for measuring mean grain size from a digital image of sediment

    USGS Publications Warehouse

    Buscombe, Daniel D.; Rubin, David M.; Warrick, Jonathan A.

    2010-01-01

    Existing methods for estimating mean grain size of sediment in an image require either complicated sequences of image processing (filtering, edge detection, segmentation, etc.) or statistical procedures involving calibration. We present a new approach which uses Fourier methods to calculate grain size directly from the image without requiring calibration. Based on analysis of over 450 images, we found the accuracy to be within approximately 16% across the full range from silt to pebbles. Accuracy is comparable to, or better than, existing digital methods. The new method, in conjunction with recent advances in technology for taking appropriate images of sediment in a range of natural environments, promises to revolutionize the logistics and speed at which grain-size data may be obtained from the field.
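
    The published algorithm is more involved, but the underlying Fourier idea can be sketched as reading a characteristic grain scale from the dominant radial wavelength of the image's 2-D power spectrum; everything below is an assumption-laden illustration, not the authors' code.

      # Rough spectral sketch only, not the published algorithm.
      import numpy as np

      def dominant_wavelength_px(image):
          img = image - image.mean()
          power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
          h, w = img.shape
          y, x = np.indices((h, w))
          r = np.hypot(x - w // 2, y - h // 2).astype(int)     # radial frequency bin
          radial_power = np.bincount(r.ravel(), weights=power.ravel())
          k = radial_power[1:].argmax() + 1                    # skip the DC component
          return min(h, w) / k                                 # approximate wavelength in pixels

      # grain_size = dominant_wavelength_px(img) * length_per_pixel  (requires the image scale)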

  14. Sociometric Indicators of Leadership: An Exploratory Analysis

    DTIC Science & Technology

    2018-01-01

    streamline existing observational protocols and assessment methods. This research provides an initial test of sociometric badges in the context of the U.S...understand, the requirements of the mission. Traditional research and assessment methods focusing on leader and follower interactions require direct...based methods of social network analysis. Novel Measures of Leadership Building on these findings and earlier research, it is apparent that

  15. Experiences Using Lightweight Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1997-01-01

    This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, formal methods enhanced the existing verification and validation processes, by testing key properties of the evolving requirements, and helping to identify weaknesses. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  16. 10 CFR 851.13 - Compliance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... with the requirements of this rule before February 9, 2007. (b) In the event a contractor has... methods to be added to the existing program, description, or process, that satisfy the requirements of...

  17. Innovating Method of Existing Mechanical Product Based on TRIZ Theory

    NASA Astrophysics Data System (ADS)

    Zhao, Cunyou; Shi, Dongyan; Wu, Han

    The main ways of product development are adaptive design and variant design based on existing products. In this paper, a conceptual design frame and its flow model for innovating products are put forward by combining the methods of conceptual design and TRIZ theory. A process system model of innovating design that includes requirement analysis, total function analysis and decomposition, engineering problem analysis, finding solutions to engineering problems, and preliminary design is constructed; this establishes the base for the innovating design of existing products.

  18. 40 CFR 98.54 - Monitoring and QA/QC requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... in paragraphs (b)(1) through (b)(3) of this section. (1) EPA Method 320, Measurement of Vapor Phase...) Direct measurement (such as using flow meters or weigh scales). (2) Existing plant procedures used for accounting purposes. (d) You must conduct all required performance tests according to the methods in § 98.54...

  19. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method.

    PubMed

    Tuta, Jure; Juric, Matjaz B

    2016-12-06

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments-some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive thus maintenance free and based on Wi-Fi only. We have employed two well-known propagation models-free space path loss and ITU models-which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in the access-point mode. The only input requirements of the method are Wi-Fi access point positions, and positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean error of 2-3 and 3-4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements.
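
    Two of the ingredients the abstract mentions, a propagation model relating signal strength to distance and a position fit against known access-point coordinates, can be sketched as below; the path-loss parameters and AP positions are invented, and this is not the authors' self-calibration procedure.

      # Toy sketch: log-distance path loss plus a least-squares position fit.
      import numpy as np
      from scipy.optimize import least_squares

      def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=3.0):
          # log-distance model: RSSI = P0 - 10 * n * log10(d)
          return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

      ap_positions = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])  # known AP coordinates (m)
      rssi = np.array([-55.0, -62.0, -58.0])                          # measured at the terminal
      distances = rssi_to_distance(rssi)

      def residuals(p):
          return np.linalg.norm(ap_positions - p, axis=1) - distances

      estimate = least_squares(residuals, x0=np.array([5.0, 4.0])).x
      print(estimate)  # estimated (x, y) position in metres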

  20. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method

    PubMed Central

    Tuta, Jure; Juric, Matjaz B.

    2016-01-01

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments—some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive thus maintenance free and based on Wi-Fi only. We have employed two well-known propagation models—free space path loss and ITU models—which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in the access-point mode. The only input requirements of the method are Wi-Fi access point positions, and positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean error of 2–3 and 3–4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements. PMID:27929453

  1. Experiences Using Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1996-01-01

    This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, the formal modeling provided a cost-effective enhancement of the existing verification and validation processes. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  2. Measuring cognition in teams: a cross-domain review.

    PubMed

    Wildman, Jessica L; Salas, Eduardo; Scott, Charles P R

    2014-08-01

    The purpose of this article is twofold: to provide a critical cross-domain evaluation of team cognition measurement options and to provide novice researchers with practical guidance when selecting a measurement method. A vast selection of measurement approaches exist for measuring team cognition constructs including team mental models, transactive memory systems, team situation awareness, strategic consensus, and cognitive processes. Empirical studies and theoretical articles were reviewed to identify all of the existing approaches for measuring team cognition. These approaches were evaluated based on theoretical perspective assumed, constructs studied, resources required, level of obtrusiveness, internal consistency reliability, and predictive validity. The evaluations suggest that all existing methods are viable options from the point of view of reliability and validity, and that there are potential opportunities for cross-domain use. For example, methods traditionally used only to measure mental models may be useful for examining transactive memory and situation awareness. The selection of team cognition measures requires researchers to answer several key questions regarding the theoretical nature of team cognition and the practical feasibility of each method. We provide novice researchers with guidance regarding how to begin the search for a team cognition measure and suggest several new ideas regarding future measurement research. We provide (1) a broad overview and evaluation of existing team cognition measurement methods, (2) suggestions for new uses of those methods across research domains, and (3) critical guidance for novice researchers looking to measure team cognition.

  3. A methodology for highway asset valuation in Indiana.

    DOT National Transportation Integrated Search

    2012-11-01

    The Government Accounting Standards Board (GASB) requires transportation agencies to report the values of their tangible assets. : Numerous valuation methods exist which use different underlying concepts and data items. These traditional methods have...

  4. Estimating Logistics Support of Reusable Launch Vehicles During Conceptual Design

    NASA Technical Reports Server (NTRS)

    Morris, W. D.; White, N. H.; Davies, W. T.; Ebeling, C. E.

    1997-01-01

    Methods exist to define the logistics support requirements for new aircraft concepts but are not directly applicable to new launch vehicle concepts. In order to define the support requirements and to discriminate among new technologies and processing choices for these systems, NASA Langley Research Center (LaRC) is developing new analysis methods. This paper describes several methods under development, gives their current status, and discusses the benefits and limitations associated with their use.

  5. Trial Sequential Methods for Meta-Analysis

    ERIC Educational Resources Information Center

    Kulinskaya, Elena; Wood, John

    2014-01-01

    Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…

  6. Kinematic Determination of an Unmodeled Serial Manipulator by Means of an IMU

    NASA Astrophysics Data System (ADS)

    Ciarleglio, Constance A.

    Kinematic determination for an unmodeled manipulator is usually done through a-priori knowledge of the manipulator physical characteristics or external sensor information. The mathematics of the kinematic estimation, often based on the Denavit-Hartenberg convention, are complex and have high computation requirements, in addition to being unique to the manipulator for which the method is developed. Analytical methods that can compute kinematics on-the-fly have the potential to be highly beneficial in dynamic environments where different configurations and variable manipulator types are often required. This thesis derives a new screw theory based method of kinematic determination, using a single inertial measurement unit (IMU), for use with any serial, revolute manipulator. The method allows the expansion of reconfigurable manipulator design and simplifies the kinematic process for existing manipulators. A simulation is presented where the theory of the method is verified and characterized with error. The method is then implemented on an existing manipulator as a verification of functionality.

  7. 47 CFR 1.1850 - Program accessibility: Existing facilities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... in such alteration or burdens must be made by the Managing Director, in consultation with the Section... not required to make structural changes in existing facilities where other methods are effective in... structural changes in facilities are undertaken, such changes shall be made within three (3) years of the...

  8. Quadrature, Interpolation and Observability

    NASA Technical Reports Server (NTRS)

    Hodges, Lucille McDaniel

    1997-01-01

    Methods of interpolation and quadrature have been used for over 300 years. Improvements in the techniques have been made by many, most notably by Gauss, whose technique applied to polynomials is referred to as Gaussian Quadrature. Stieltjes extended Gauss's method to certain non-polynomial functions as early as 1884. Conditions that guarantee the existence of quadrature formulas for certain collections of functions were studied by Tchebycheff, and his work was extended by others. Today, a class of functions which satisfies these conditions is called a Tchebycheff System. This thesis contains the definition of a Tchebycheff System, along with the theorems, proofs, and definitions necessary to guarantee the existence of quadrature formulas for such systems. Solutions of discretely observable linear control systems are of particular interest, and observability with respect to a given output function is defined. The output function is written as a linear combination of a collection of orthonormal functions. Orthonormal functions are defined, and their properties are discussed. The technique for evaluating the coefficients in the output function involves evaluating the definite integral of functions which can be shown to form a Tchebycheff system. Therefore, quadrature formulas for these integrals exist, and in many cases are known. The technique given is useful in cases where the method of direct calculation is unstable. The condition number of a matrix is defined and shown to be an indication of the degree to which perturbations in data affect the accuracy of the solution. In special cases, the number of data points required for direct calculation is the same as the number required by the method presented in this thesis. But the method is shown to require more data points in other cases. A lower bound for the number of data points required is given.

  9. Polarization-based and specular-reflection-based noncontact latent fingerprint imaging and lifting

    NASA Astrophysics Data System (ADS)

    Lin, Shih-Schön; Yemelyanov, Konstantin M.; Pugh, Edward N., Jr.; Engheta, Nader

    2006-09-01

    In forensic science the finger marks left unintentionally by people at a crime scene are referred to as latent fingerprints. Most existing techniques to detect and lift latent fingerprints require application of a certain material directly onto the exhibit. The chemical and physical processing applied to the fingerprint potentially degrades or prevents further forensic testing on the same evidence sample. Many existing methods also have deleterious side effects. We introduce a method to detect and extract latent fingerprint images without applying any powder or chemicals on the object. Our method is based on the optical phenomena of polarization and specular reflection together with the physiology of fingerprint formation. The recovered image quality is comparable to existing methods. In some cases, such as the sticky side of tape, our method shows unique advantages.

  10. Earbuds: A Method for Analyzing Nasality in the Field

    ERIC Educational Resources Information Center

    Stewart, Jesse; Kohlberger, Martin

    2017-01-01

    Existing methods for collecting and analyzing nasality data are problematic for linguistic fieldworkers: aerodynamic equipment can be expensive and difficult to transport, and acoustic analyses require large amounts of optimally-recorded data. In this paper, a highly mobile and low-cost method is proposed. By connecting low impedance earbuds into…

  11. Self-adaptive method for high frequency multi-channel analysis of surface wave method

    USDA-ARS?s Scientific Manuscript database

    When the high frequency multi-channel analysis of surface waves (MASW) method is conducted to explore soil properties in the vadose zone, existing rules for selecting the near offset and spread lengths cannot satisfy the requirements of planar dominant Rayleigh waves for all frequencies of interest ...

  12. Defense Small Business Innovation Research Program (SBIR) FY 1984.

    DTIC Science & Technology

    1984-01-12

    nuclear submarine non-metallic, lightweight, high-strength piping. Includes the development of adequate fabrication procedures for attaching pipe ...waste heat economizer methods, require development. Improved conventional and hybrid heat pipes and/or two-phase transport devices are required...DESCRIPTION: A need exists to conceive, design, fabricate and test a method of adjusting the length of the individual legs of nylon or Kevlar rope sling

  13. An Extraction Method of an Informative DOM Node from a Web Page by Using Layout Information

    NASA Astrophysics Data System (ADS)

    Tsuruta, Masanobu; Masuyama, Shigeru

    We propose an informative DOM node extraction method from a Web page for preprocessing of Web content mining. Our proposed method LM uses layout data of DOM nodes generated by a generic Web browser, and the learning set consists of hundreds of Web pages and the annotations of informative DOM nodes of those Web pages. Our method does not require large-scale crawling of the whole Web site to which the target Web page belongs. We design LM so that it uses the information of the learning set more efficiently in comparison to the existing method that uses the same learning set. By experiments, we evaluate the methods obtained by combining a method for extracting the informative DOM node (either the proposed method or the existing methods) with the existing noise elimination methods: Heur removes advertisements and link-lists by some heuristics, and CE removes the DOM nodes existing in the Web pages of the same Web site to which the target Web page belongs. Experimental results show that 1) LM outperforms other methods for extracting the informative DOM node, and 2) the combination method (LM, {CE(10), Heur}) based on LM (precision: 0.755, recall: 0.826, F-measure: 0.746) outperforms other combination methods.

  14. Making the transition to workload-based staffing: using the Workload Indicators of Staffing Need method in Uganda.

    PubMed

    Namaganda, Grace; Oketcho, Vincent; Maniple, Everd; Viadro, Claire

    2015-08-31

    Uganda's health workforce is characterized by shortages and inequitable distribution of qualified health workers. To ascertain staffing levels, Uganda uses fixed government-approved norms determined by facility type. This approach cannot distinguish between facilities of the same type that have different staffing needs. The Workload Indicators of Staffing Need (WISN) method uses workload to determine number and type of staff required in a given facility. The national WISN assessment sought to demonstrate the limitations of the existing norms and generate evidence to influence health unit staffing and staff deployment for efficient utilization of available scarce human resources. A national WISN assessment (September 2012) used purposive sampling to select 136 public health facilities in 33/112 districts. The study examined staffing requirements for five cadres (nursing assistants, nurses, midwives, clinical officers, doctors) at health centres II (n = 59), III (n = 53) and IV (n = 13) and hospitals (n = 11). Using health management information system workload data (1 July 2010-30 June 2011), the study compared current and required staff, assessed workload pressure and evaluated the adequacy of the existing staffing norms. By the WISN method, all three types of health centres had fewer nurses (42-70%) and midwives (53-67%) than required and consequently exhibited high workload pressure (30-58%) for those cadres. Health centres IV and hospitals lacked doctors (39-42%) but were adequately staffed with clinical officers. All facilities displayed overstaffing of nursing assistants. For all cadres at health centres III and IV other than nursing assistants, the fixed norms or existing staffing or both fell short of the WISN staffing requirements, with, for example, only half as many nurses and midwives as required. The WISN results demonstrate the inadequacies of existing staffing norms, particularly for health centres III and IV. The results provide an evidence base to reshape policy, adopt workload-based norms, review scopes of practice and target human resource investments. In the near term, the government could redistribute existing health workers to improve staffing equity in line with the WISN results. Longer term revision of staffing norms and investments to effectively reflect actual workloads and ensure provision of quality services at all levels is needed.
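
    A simplified reading of the WISN arithmetic (required staff equals annual workload divided by the standard workload one worker can cover in the available working time) is sketched below with hypothetical numbers, not figures from the Uganda assessment.

      # Simplified WISN arithmetic with hypothetical numbers.
      available_hours_per_year = 1640.0      # working time left after leave, training, etc.
      minutes_per_delivery = 300.0           # assumed service standard for one delivery
      deliveries_per_year = 1200.0           # facility workload from the HMIS

      standard_workload = available_hours_per_year / (minutes_per_delivery / 60.0)  # deliveries one midwife can cover per year
      required_midwives = deliveries_per_year / standard_workload
      current_midwives = 2.0

      print(round(required_midwives, 1))                        # about 3.7 required
      print(round(current_midwives - required_midwives, 1))     # staffing gap
      print(round(current_midwives / required_midwives, 2))     # WISN ratio; below 1 means workload pressure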

  15. Improving Deterministic Reserve Requirements for Security Constrained Unit Commitment and Scheduling Problems in Power Systems

    NASA Astrophysics Data System (ADS)

    Wang, Fengyu

    Traditional deterministic reserve requirements rely on ad-hoc, rule of thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. The modeling of operating reserves in the existing deterministic reserve requirements acquire the operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserve. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict the transfer capabilities and the network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserve by explicitly modelling uncertainties, there are still scalability as well as pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier of improving existing deterministic reserve requirements is its potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch structure, especially with the consideration of renewables, 2) to develop a market settlement scheme of proposed dynamic reserve policies such that the market efficiency is improved, 3) to evaluate the market impacts and price signal of the proposed dynamic reserve policies.

  16. Multimodal Image Registration through Simultaneous Segmentation.

    PubMed

    Aganj, Iman; Fischl, Bruce

    2017-11-01

    Multimodal image registration facilitates the combination of complementary information from images acquired with different modalities. Most existing methods require computation of the joint histogram of the images, while some perform joint segmentation and registration in alternate iterations. In this work, we introduce a new non-information-theoretical method for pairwise multimodal image registration, in which the error of segmentation - using both images - is considered as the registration cost function. We empirically evaluate our method via rigid registration of multi-contrast brain magnetic resonance images, and demonstrate an often higher registration accuracy in the results produced by the proposed technique, compared to those by several existing methods.

  17. University role in astronaut life support systems: Portable thermal control systems

    NASA Technical Reports Server (NTRS)

    Ephrath, A. R.

    1971-01-01

    One of the most vital life support systems is that used to provide the astronaut with an adequate thermal environment. State-of-the-art techniques are reviewed for collecting and rejecting excess heat loads from the suited astronaut. Emphasis is placed on problem areas which exist and which may be suitable topics for university research. Areas covered include thermal control requirements and restrictions, methods of heat absorption and rejection or storage, and comparison between existing methods and possible future techniques.

  18. Using artificial neural networks (ANN) for open-loop tomography

    NASA Astrophysics Data System (ADS)

    Osborn, James; De Cos Juez, Francisco Javier; Guzman, Dani; Butterley, Timothy; Myers, Richard; Guesalaga, Andres; Laine, Jesus

    2011-09-01

    The next generation of adaptive optics (AO) systems require tomographic techniques in order to correct for atmospheric turbulence along lines of sight separated from the guide stars. Multi-object adaptive optics (MOAO) is one such technique. Here, we present a method which uses an artificial neural network (ANN) to reconstruct the target phase given off-axis references sources. This method does not require any input of the turbulence profile and is therefore less susceptible to changing conditions than some existing methods. We compare our ANN method with a standard least squares type matrix multiplication method (MVM) in simulation and find that the tomographic error is similar to the MVM method. In changing conditions the tomographic error increases for MVM but remains constant with the ANN model and no large matrix inversions are required.

  19. Shuttle mission simulator hardware conceptual design report

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1973-01-01

    The detailed shuttle mission simulator hardware requirements are discussed. The conceptual design methods, or existing technology, whereby those requirements will be fulfilled are described. Information of a general nature on the total design problem plus specific details on how these requirements are to be satisfied are reported. The configuration of the simulator is described and the capabilities for various types of training are identified.

  20. Research on Computer Aided Innovation Model of Weapon Equipment Requirement Demonstration

    NASA Astrophysics Data System (ADS)

    Li, Yong; Guo, Qisheng; Wang, Rui; Li, Liang

    Firstly, to overcome the shortcomings of using AD or TRIZ alone and to solve the problems that currently exist in weapon equipment requirement demonstration, the paper constructs a method system for weapon equipment requirement demonstration combining QFD, AD, TRIZ and FA. Then, we construct a CAI model framework for weapon equipment requirement demonstration, which includes a requirement decomposition model, a requirement mapping model and a requirement plan optimization model. Finally, we construct the computer-aided innovation model of weapon equipment requirement demonstration and develop CAI software for equipment requirement demonstration.

  1. Research on Capturing of Customer Requirements Based on Innovation Theory

    NASA Astrophysics Data System (ADS)

    Junwu, Ding; Dongtao, Yang; Zhenqiang, Bao

    To capture customer requirements information exactly and effectively, a new modeling method for capturing customer requirements is proposed. Based on the analysis of the function requirement models of previous products and the application of the technology system evolution laws of the Theory of Inventive Problem Solving (TRIZ), customer requirements can be evolved from existing product designs by modifying the functional requirement units and confirming the direction of the evolutionary design. Finally, a case study is provided to illustrate the feasibility of the proposed approach.

  2. Towards an Automated Acoustic Detection System for Free Ranging Elephants.

    PubMed

    Zeppelzauer, Matthias; Hensman, Sean; Stoeger, Angela S

    The human-elephant conflict is one of the most serious conservation problems in Asia and Africa today. The involuntary confrontation of humans and elephants claims the lives of many animals and humans every year. A promising approach to alleviate this conflict is the development of an acoustic early warning system. Such a system requires the robust automated detection of elephant vocalizations under unconstrained field conditions. Today, no system exists that fulfills these requirements. In this paper, we present a method for the automated detection of elephant vocalizations that is robust to the diverse noise sources present in the field. We evaluate the method on a dataset recorded under natural field conditions to simulate a real-world scenario. The proposed method outperformed existing approaches and robustly and accurately detected elephants. It thus can form the basis for a future automated early warning system for elephants. Furthermore, the method may be a useful tool for scientists in bioacoustics for the study of wildlife recordings.

  3. Solving coupled groundwater flow systems using a Jacobian Free Newton Krylov method

    NASA Astrophysics Data System (ADS)

    Mehl, S.

    2012-12-01

    Jacobian Free Newton Krylov (JFNK) methods can have several advantages for simulating coupled groundwater flow processes versus conventional methods. Conventional methods are defined here as those based on an iterative coupling (rather than a direct coupling) and/or that use Picard iteration rather than Newton iteration. In an iterative coupling, the systems are solved separately, coupling information is updated and exchanged between the systems, and the systems are re-solved, etc., until convergence is achieved. Trusted simulators, such as Modflow, are based on these conventional methods of coupling and work well in many cases. An advantage of the JFNK method is that it only requires calculation of the residual vector of the system of equations and thus can make use of existing simulators regardless of how the equations are formulated. This opens the possibility of coupling different process models via augmentation of a residual vector by each separate process, which often requires substantially fewer changes to the existing source code than if the processes were directly coupled. However, appropriate perturbation sizes need to be determined for accurate approximations of the Frechet derivative, which is not always straightforward. Furthermore, preconditioning is necessary for reasonable convergence of the linear solution required at each Krylov iteration. Existing preconditioners can be used and applied separately to each process which maximizes use of existing code and robust preconditioners. In this work, iteratively coupled parent-child local grid refinement models of groundwater flow and groundwater flow models with nonlinear exchanges to streams are used to demonstrate the utility of the JFNK approach for Modflow models. Use of incomplete Cholesky preconditioners with various levels of fill are examined on a suite of nonlinear and linear models to analyze the effect of the preconditioner. Comparisons of convergence and computer simulation time are made using conventional iteratively coupled methods and those based on Picard iteration to those formulated with JFNK to gain insights on the types of nonlinearities and system features that make one approach advantageous. Results indicate that nonlinearities associated with stream/aquifer exchanges are more problematic than those resulting from unconfined flow.
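
    The two ideas the abstract relies on, that only the residual needs to be evaluated and that the Jacobian-vector products needed by the Krylov solver are approximated by finite differences, can be illustrated on a toy system; this is not the Modflow coupling, and scipy's newton_krylov is used only as a convenient stand-in.

      # Minimal JFNK illustration on a toy residual, not the Modflow coupling.
      import numpy as np
      from scipy.optimize import newton_krylov

      def residual(u):
          # Only the residual F(u) is coded; no Jacobian is ever formed.
          return np.array([u[0] ** 2 + u[1] - 3.0,
                           u[0] + u[1] ** 2 - 5.0])

      # Internally the Krylov solver approximates J(u) v ~ (F(u + eps*v) - F(u)) / eps,
      # which is where the choice of perturbation size eps matters.
      solution = newton_krylov(residual, xin=np.array([1.0, 1.0]))
      print(solution, residual(solution))   # converges to (1, 2) for this toy system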

  4. On a modified streamline curvature method for the Euler equations

    NASA Technical Reports Server (NTRS)

    Cordova, Jeffrey Q.; Pearson, Carl E.

    1988-01-01

    A modification of the streamline curvature method leads to a quasilinear second-order partial differential equation for the streamline coordinate function. The existence of a stream function is not required. The method is applied to subsonic and supersonic nozzle flow, and to axially symmetric flow with swirl. For many situations, the associated numerical method is both fast and accurate.

  5. Developing of method for primary frequency control droop and deadband actual values estimation

    NASA Astrophysics Data System (ADS)

    Nikiforov, A. A.; Chaplin, A. G.

    2017-11-01

    Operation of thermal power plant generation equipment that participates in standardized primary frequency control (SPFC) must meet specific requirements. These requirements are formalized as nine algorithmic criteria, which are used for automatic monitoring of power plant participation in SPFC. One of these criteria, estimation of the actual droop and deadband values of primary frequency control, is considered in detail in this report. Experience shows that the existing estimation method sometimes does not work properly. The author offers an alternative method that allows the actual droop and deadband values to be estimated more accurately. This method was implemented as a software application.

  6. International development of methods of analysis for the presence of products of modern biotechnology.

    PubMed

    Cantrill, Richard C

    2008-01-01

    Methods of analysis for products of modern biotechnology are required for national and international trade in seeds, grain and food in order to meet the labeling or import/export requirements of different nations and trading blocks. Although many methods were developed by the originators of transgenic events, governments, universities, and testing laboratories, trade is less complicated if there exists a set of international consensus-derived analytical standards. In any analytical situation, multiple methods may exist for testing for the same analyte. These methods may be supported by regional preferences and regulatory requirements. However, tests need to be sensitive enough to determine low levels of these traits in commodity grain for regulatory purposes and also to indicate purity of seeds containing these traits. The International Organization for Standardization (ISO) and its European counterpart have worked to produce a suite of standards through open, balanced and consensus-driven processes. Presently, these standards are approaching the time for their first review. In fact, ISO 21572, the "protein standard" has already been circulated for systematic review. In order to expedite the review and revision of the nucleic acid standards an ISO Technical Specification (ISO/TS 21098) was drafted to set the criteria for the inclusion of precision data from collaborative studies into the annexes of these standards.

  7. Extending enterprise architecture modelling with business goals and requirements

    NASA Astrophysics Data System (ADS)

    Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; van Sinderen, Marten

    2011-02-01

    The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling techniques for EA focus on the products, services, processes and applications of an enterprise. In addition, techniques may be provided to describe structured requirements lists and use cases. Little support is available however for modelling the underlying motivation of EAs in terms of stakeholder concerns and the high-level goals that address these concerns. This article describes a language that supports the modelling of this motivation. The definition of the language is based on existing work on high-level goal and requirements modelling and is aligned with an existing standard for enterprise modelling: the ArchiMate language. Furthermore, the article illustrates how EA can benefit from analysis techniques from the requirements engineering domain.

  8. Enhancing to method for extracting Social network by the relation existence

    NASA Astrophysics Data System (ADS)

    Elfida, Maria; Matyuso Nasution, M. K.; Sitompul, O. S.

    2018-01-01

    Obtaining trustworthy information about a social network extracted from the Web requires a reliable method, but optimal results require a method that can cope with the complexity of the information resources. This paper intends to reveal ways to overcome the constraints of social network extraction that lead to high complexity by identifying relationships among social actors. By changing the treatment in the procedure used, we obtain a complexity that is smaller than that of the previous procedure. This is also demonstrated in an experiment using the denial sample.

  9. FY95 limited energy study for the area `a` package boiler. Holston Army Ammunition Plant, Kingsport, Tennessee. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-11-03

    Holston Army Ammunition Plant (HSAAP) in Holston, Tennessee, manufactures explosives from raw materials. The facility comprises two separate areas designated Area "A" and Area "B". Each area is served by a steam plant which produces steam for production processes, equipment operation, space heating, domestic water heating, steam tracing, and product storage heating requirements. The purpose of this study is to identify and evaluate the technical and economic feasibility of alternative methods of meeting the steam requirements of the Area "A" industrial complex. The following items were specifically requested to be evaluated. Evaluate the use of two new gas-fired packaged boilers sized to meet the requirements of the industrial complex. The new boilers would be installed adjacent to the existing steam plant and would utilize the existing smokestacks and steam distribution system. Evaluate using the existing steam distribution system rather than locating multiple boilers at various sites. Existing steam driven chillers will be replaced with electric driven equipment. Evaluate this impact on the steam system requirements. Field survey and test two existing gas-fired packaged boilers located at the Volunteer Army Ammunition Plant in Chattanooga, Tennessee. The two boilers were last used about 1980 and are presently laid away. The boilers are approximately the same capacity and operating characteristics as the ones at HSAAP. Relocation of the existing boilers and ancillary equipment (feedwater pumps, generators, fans, etc.) would be required as well as repairs or modifications necessary to meet current operating conditions and standards.

  10. An overview of data integration methods for regional assessment.

    PubMed

    Locantore, Nicholas W; Tran, Liem T; O'Neill, Robert V; McKinnis, Peter W; Smith, Elizabeth R; O'Connell, Michael

    2004-06-01

    The U.S. Environmental Protection Agency's (U.S. EPA) Regional Vulnerability Assessment (ReVA) program has focused much of its research over the last five years on developing and evaluating integration methods for spatial data. An initial strategic priority was to use existing data from monitoring programs, model results, and other spatial data. Because most of these data were not collected with an intention of integrating into a regional assessment of conditions and vulnerabilities, issues exist that may preclude the use of some methods or require some sort of data preparation. Additionally, to support multi-criteria decision-making, methods need to be able to address a series of assessment questions that provide insights into where environmental risks are a priority. This paper provides an overview of twelve spatial integration methods that can be applied towards regional assessment, along with preliminary results as to how sensitive each method is to data issues that will likely be encountered with the use of existing data.

  11. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
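
    An unconstrained toy version of the prediction-correction idea is sketched below; the objective, step sizes, and sampling period are invented, and the constrained, inverse-free algorithms described in the article are more involved.

      # Toy prediction-correction tracking of a drifting minimizer.
      import numpy as np

      a = lambda t: np.sin(t)                 # minimizer of f(x; t) = 0.5 * (x - a(t))**2
      grad_x  = lambda x, t: x - a(t)         # gradient in x
      grad_tx = lambda x, t: -np.cos(t)       # time derivative of the gradient
      hess_xx = 1.0                           # second derivative in x

      h, alpha, x = 0.1, 0.5, 0.0             # sampling period, correction step, initial point
      for k in range(50):
          t = k * h
          x = x - (h / hess_xx) * grad_tx(x, t)   # prediction: keep the gradient stationary to first order in time
          for _ in range(3):                      # correction: a few gradient steps at the new time
              x = x - alpha * grad_x(x, t + h)
          # x now tracks a(t + h)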

  12. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonetto, Andrea; Dall'Anese, Emiliano

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.

  13. A Unified Approach to Modeling Multidisciplinary Interactions

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.; Bhatia, Kumar G.

    2000-01-01

    There are a number of existing methods to transfer information among various disciplines. For a multidisciplinary application with n disciplines, the traditional methods may be required to model (n^2 - n) interactions. This paper presents a unified three-dimensional approach that reduces the number of interactions from (n^2 - n) to 2n by using a computer-aided design model. The proposed modeling approach unifies the interactions among various disciplines. The approach is independent of specific discipline implementation, and a number of existing methods can be reformulated in the context of the proposed unified approach. This paper provides an overview of the proposed unified approach and reformulations for two existing methods. The unified approach is specially tailored for application environments where the geometry is created and managed through a computer-aided design system. Results are presented for a blended-wing body and a high-speed civil transport.
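
    For example, with n = 6 disciplines a pairwise scheme may need up to 6^2 - 6 = 30 directed transfers, whereas routing every transfer through the shared computer-aided design model needs only 2 x 6 = 12.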

  14. Analyzing the security of an existing computer system

    NASA Technical Reports Server (NTRS)

    Bishop, M.

    1986-01-01

    Most work concerning secure computer systems has dealt with the design, verification, and implementation of provably secure computer systems, or has explored ways of making existing computer systems more secure. The problem of locating security holes in existing systems has received considerably less attention; methods generally rely on thought experiments as a critical step in the procedure. The difficulty is that such experiments require that a large amount of information be available in a format that makes correlating the details of various programs straightforward. This paper describes a method of providing such a basis for the thought experiment by writing a special manual for parts of the operating system, system programs, and library subroutines.

  15. Reinforcement learning for resource allocation in LEO satellite networks.

    PubMed

    Usaha, Wipawee; Barria, Javier A

    2007-06-01

    In this paper, we develop and assess online decision-making algorithms for call admission and routing for low Earth orbit (LEO) satellite networks. It has been shown in a recent paper that, in a LEO satellite system, a semi-Markov decision process formulation of the call admission and routing problem can achieve better performance in terms of an average revenue function than existing routing methods. However, the conventional dynamic programming (DP) numerical solution becomes prohibited as the problem size increases. In this paper, two solution methods based on reinforcement learning (RL) are proposed in order to circumvent the computational burden of DP. The first method is based on an actor-critic method with temporal-difference (TD) learning. The second method is based on a critic-only method, called optimistic TD learning. The algorithms enhance performance in terms of requirements in storage, computational complexity and computational time, and in terms of an overall long-term average revenue function that penalizes blocked calls. Numerical studies are carried out, and the results obtained show that the RL framework can achieve up to 56% higher average revenue over existing routing methods used in LEO satellite networks with reasonable storage and computational requirements.
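
    The learning rule underlying the critic can be illustrated with a generic tabular TD(0) value update; the states, reward, and parameters below are invented, and the paper's semi-Markov actor-critic formulation is more involved.

      # Generic tabular TD(0) sketch, not the paper's SMDP formulation.
      values = {}                      # state -> estimated long-term revenue
      alpha, gamma = 0.1, 0.95         # learning rate and discount factor

      def td_update(state, reward, next_state):
          v, vn = values.get(state, 0.0), values.get(next_state, 0.0)
          values[state] = v + alpha * (reward + gamma * vn - v)   # TD(0) rule

      # e.g. admitting a call in load state 3 earned revenue 1.0 and led to load state 4
      td_update(state=3, reward=1.0, next_state=4)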

  16. Accelerated Colorimetric Micro-assay for Screening Mold Inhibitors

    Treesearch

    Carol A. Clausen; Vina W. Yang

    2014-01-01

    Rapid quantitative laboratory test methods are needed to screen potential antifungal agents. Existing laboratory test methods are relatively time consuming, may require specialized test equipment and rely on subjective visual ratings. A quantitative, colorimetric micro-assay has been developed that uses XTT tetrazolium salt to metabolically assess mold spore...

  17. GBU-X bounding requirements for highly flexible munitions

    NASA Astrophysics Data System (ADS)

    Bagby, Patrick T.; Shaver, Jonathan; White, Reed; Cafarelli, Sergio; Hébert, Anthony J.

    2017-04-01

    This paper will present the results of an investigation into requirements for existing software and hardware solutions for open digital communication architectures that support weapon subsystem integration. The underlying requirements of such a communication architecture would be to achieve the lowest latency possible at a reasonable cost point with respect to the mission objective of the weapon. The determination of the latency requirements of the open architecture software and hardware were derived through the use of control system and stability margins analyses. Studies were performed on the throughput and latency of different existing communication transport methods. The two architectures that were tested in this study include Data Distribution Service (DDS) and Modular Open Network Architecture (MONARCH). This paper defines what levels of latency can be achieved with current technology and how this capability may translate to future weapons. The requirements moving forward within communications solutions are discussed.

  18. TRANSAT-- method for detecting the conserved helices of functional RNA structures, including transient, pseudo-knotted and alternative structures.

    PubMed

    Wiebe, Nicholas J P; Meyer, Irmtraud M

    2010-06-24

    The prediction of functional RNA structures has attracted increased interest, as it allows us to study the potential functional roles of many genes. RNA structure prediction methods, however, assume that there is a unique functional RNA structure and also do not predict functional features required for in vivo folding. In order to understand how functional RNA structures form in vivo, we require sophisticated experiments or reliable prediction methods. So far, there exist only a few experimentally validated transient RNA structures. On the computational side, there exist several computer programs which aim to predict the co-transcriptional folding pathway in vivo, but these make a range of simplifying assumptions and do not capture all features known to influence RNA folding in vivo. We want to investigate if evolutionarily related RNA genes fold in a similar way in vivo. To this end, we have developed a new computational method, Transat, which detects conserved helices of high statistical significance. We introduce the method, present a comprehensive performance evaluation and show that Transat is able to predict the structural features of known reference structures including pseudo-knotted ones as well as those of known alternative structural configurations. Transat can also identify unstructured sub-sequences bound by other molecules and provides evidence for new helices which may define folding pathways, supporting the notion that homologous RNA sequences not only assume a similar reference RNA structure, but also fold similarly. Finally, we show that the structural features predicted by Transat differ from those assuming thermodynamic equilibrium. Unlike the existing methods for predicting folding pathways, our method works in a comparative way. This has the disadvantage of not being able to predict features as a function of time, but has the considerable advantage of highlighting conserved features and of not requiring a detailed knowledge of the cellular environment.

  19. Drift-Free Position Estimation of Periodic or Quasi-Periodic Motion Using Inertial Sensors

    PubMed Central

    Latt, Win Tun; Veluvolu, Kalyana Chakravarthy; Ang, Wei Tech

    2011-01-01

    Position sensing with inertial sensors such as accelerometers and gyroscopes usually requires other aided sensors or prior knowledge of motion characteristics to remove position drift resulting from integration of acceleration or velocity so as to obtain accurate position estimation. A method based on analytical integration has previously been developed to obtain an accurate position estimate of periodic or quasi-periodic motion from inertial sensors using prior knowledge of the motion but without using aided sensors. In this paper, a new method is proposed which employs a linear filtering stage coupled with an adaptive filtering stage to remove drift and attenuation. The prior knowledge of the motion the proposed method requires is only the approximate band of frequencies of the motion. Existing adaptive filtering methods based on Fourier series, such as the weighted-frequency Fourier linear combiner (WFLC) and the band-limited multiple Fourier linear combiner (BMFLC), are modified to combine with the proposed method. To validate and compare the performance of the proposed method with the method based on analytical integration, a simulation study is performed using periodic signals as well as real physiological tremor data, and real-time experiments are conducted using an ADXL-203 accelerometer. Results demonstrate that the proposed method outperforms the existing analytical integration method. PMID:22163935
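    The band-limited Fourier linear combiner mentioned above can be sketched in a few lines: a bank of fixed sin/cos references spanning an assumed frequency band is combined with weights adapted by an LMS rule. The sampling rate, band, and adaptation gain below are placeholders, not values from the paper.

    ```python
    import numpy as np

    # Minimal band-limited multiple Fourier linear combiner (BMFLC) sketch: a bank
    # of fixed sin/cos references over an assumed motion band, with LMS weight
    # adaptation.  Parameter values are illustrative only.

    fs = 250.0                          # sampling rate (Hz), assumed
    freqs = np.arange(6.0, 14.0, 0.5)   # assumed band of the motion (Hz)
    mu = 0.01                           # LMS adaptation gain

    t = np.arange(0, 4, 1 / fs)
    signal = np.sin(2 * np.pi * 9.3 * t)        # stand-in for the sensed motion

    w = np.zeros(2 * len(freqs))
    estimate = np.zeros_like(signal)
    for k, s in enumerate(signal):
        ref = np.concatenate([np.sin(2 * np.pi * freqs * t[k]),
                              np.cos(2 * np.pi * freqs * t[k])])
        y = w @ ref                     # current estimate of the periodic motion
        err = s - y
        w += 2 * mu * err * ref         # LMS update of the combiner weights
        estimate[k] = y

    print("final tracking error:", round(abs(signal[-1] - estimate[-1]), 4))
    ```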

  20. Improving Upon String Methods for Transition State Discovery.

    PubMed

    Chaffey-Millar, Hugh; Nikodem, Astrid; Matveev, Alexei V; Krüger, Sven; Rösch, Notker

    2012-02-14

    Transition state discovery via application of string methods has been researched on two fronts. The first front involves development of a new string method, named the Searching String method, while the second one aims at estimating transition states from a discretized reaction path. The Searching String method has been benchmarked against a number of previously existing string methods and the Nudged Elastic Band method. The developed methods have led to a reduction in the number of gradient calls required to optimize a transition state, as compared to existing methods. The Searching String method reported here places new beads on a reaction pathway at the midpoint between existing beads, such that the resolution of the path discretization in the region containing the transition state grows exponentially with the number of beads. This approach leads to favorable convergence behavior and generates more accurate estimates of transition states from which convergence to the final transition states occurs more readily. Several techniques for generating improved estimates of transition states from a converged string or nudged elastic band have been developed and benchmarked on 13 chemical test cases. Optimization approaches for string methods, and pitfalls therein, are discussed.
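    A toy sketch of the midpoint-insertion idea is given below: starting from a coarse path, beads are repeatedly added at the midpoints of the segments adjacent to the current highest-energy bead, so the discretization near the presumed transition state is refined with each pass. The one-dimensional energy profile is invented for illustration, and the sketch does not reproduce the actual Searching String implementation.

    ```python
    import numpy as np

    # Toy sketch of midpoint bead insertion on a discretized path.  Each
    # refinement adds beads at the midpoints of the segments adjacent to the
    # current highest-energy bead, so local resolution near the presumed
    # transition state roughly doubles with each pass.

    def energy(x):
        # Illustrative 1-D profile with a barrier near x = 0.5.
        return -np.cos(2 * np.pi * x) + 0.5 * x

    path = list(np.linspace(0.0, 1.0, 5))          # initial coarse string
    for _ in range(4):
        i = int(np.argmax([energy(x) for x in path]))
        new = []
        if i > 0:
            new.append(0.5 * (path[i - 1] + path[i]))
        if i < len(path) - 1:
            new.append(0.5 * (path[i] + path[i + 1]))
        path = sorted(path + new)

    print("bead positions:", [round(x, 3) for x in path])
    print("best TS estimate:", round(max(path, key=energy), 3))
    ```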

  1. Development and Application of a Soil Moisture Downscaling Method for Mobility Assessment

    DTIC Science & Technology

    2011-05-01

    Soil ... cells). Thus, a method is required to downscale intermediate-resolution patterns to finer resolutions. Fortunately, fine-resolution variations in ...

  2. 36 CFR 406.150 - Program accessibility: Existing facilities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accessibility requirements to the extent compelled by the Architectural Barriers Act of 1968, as amended (42 U.S.C. 4151-4157), and any regulations implementing it. In choosing among available methods for meeting...

  3. 11 CFR 9420.5 - Program accessibility: Existing facilities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the extent compelled by the Architectural Barriers Act of 1968, as amended, 42 U.S.C. 4151-4157, and any regulations implementing it. In choosing among available methods for meeting the requirements of...

  4. 11 CFR 6.150 - Program accessibility; Existing facilities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the extent compelled by the Architectural Barriers Act of 1968, as amended (42 U.S.C. 4151-4157) and any regulations implementing it. In choosing among available methods for meeting the requirements of...

  5. 13 CFR 136.150 - Program accessibility: Existing facilities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the extent compelled by the Architectural Barriers Act of 1968, as amended (42 U.S.C. 4151-4157), and any regulations implementing it. In choosing among available methods for meeting the requirements of...

  6. 40 CFR 98.264 - Monitoring and QA/QC requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-process phosphoric acid process line. You can use existing plant procedures that are used for accounting... the process line. Conduct the representative bulk sampling using the applicable standard method in the...

  7. A Novel Domain Assembly Routine for Creating Full-Length Models of Membrane Proteins from Known Domain Structures.

    PubMed

    Koehler Leman, Julia; Bonneau, Richard

    2018-04-03

    Membrane proteins composed of soluble and membrane domains are often studied one domain at a time. However, to understand the biological function of entire protein systems and their interactions with each other and drugs, knowledge of full-length structures or models is required. Although a few computational methods exist that could potentially be used to model full-length constructs of membrane proteins, none of these methods are perfectly suited for the problem at hand. Existing methods require an interface or knowledge of the relative orientations of the domains or are not designed for domain assembly, and none of them are developed for membrane proteins. Here we describe the first domain assembly protocol specifically designed for membrane proteins that assembles intra- and extracellular soluble domains and the transmembrane domain into models of the full-length membrane protein. Our protocol does not require an interface between the domains and samples possible domain orientations based on backbone dihedrals in the flexible linker regions, created via fragment insertion, while keeping the transmembrane domain fixed in the membrane. For five examples tested, our method mp_domain_assembly, implemented in RosettaMP, samples domain orientations close to the known structure and is best used in conjunction with experimental data to reduce the conformational search space.

  8. Survey of existing performance requirements in codes and standards for light-frame construction

    Treesearch

    G. E. Sherwood

    1980-01-01

    Present building codes and standards are a combination of specifications and performance criteria. Where specifications prevail, the introduction of new materials or methods can be a long, cumbersome process. To facilitate the introduction of new technology, performance requirements are becoming more prevalent. In some areas, there is a lack of information on which to...

  9. Partitioning Strategy Using Static Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Seo, Yongjin; Soo Kim, Hyeon

    2016-08-01

    Flight software is the software used in satellites' on-board computers. It has requirements such as real-time operation and reliability. The Integrated Modular Avionics (IMA) architecture is used to satisfy these requirements. The IMA architecture introduces the concept of partitions, and this affects the configuration of flight software: software that previously ran on a single system must now be divided into multiple partitions when loaded. To address this issue, existing studies use experience-based partitioning methods; however, these methods cannot be reused. In this respect, this paper proposes a partitioning method that is reusable and consistent.

  10. Some Findings Concerning Requirements in Agile Methodologies

    NASA Astrophysics Data System (ADS)

    Rodríguez, Pilar; Yagüe, Agustín; Alarcón, Pedro P.; Garbajosa, Juan

    Agile methods have appeared as an attractive alternative to conventional methodologies. These methods try to reduce the time to market and, indirectly, the cost of the product through flexible development and deep customer involvement. The processes related to requirements have been extensively studied in the literature, in most cases in the frame of conventional methods. However, conclusions drawn for conventional methodologies are not necessarily valid for Agile; in some issues, conventional and Agile processes are radically different. As recent surveys report, inadequate project requirements are one of the most problematic issues in agile approaches, and a better understanding of this is needed. This paper describes some findings concerning requirements activities in a project developed under an agile methodology. The project intended to evolve an existing product and, therefore, some background information was available. The major difficulties encountered were related to non-functional needs and management of requirements dependencies.

  11. A new family of Polak-Ribiere-Polyak conjugate gradient method with the strong-Wolfe line search

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Mamat, Mustafa; Rivaie, Mohd

    2017-08-01

    The conjugate gradient (CG) method is an important technique in unconstrained optimization, due to its effectiveness and low memory requirements. The focus of this paper is to introduce a new CG method for solving large-scale unconstrained optimization problems. Theoretical proofs show that the new method fulfills the sufficient descent condition if the strong Wolfe-Powell inexact line search is used. Besides, computational results show that our proposed method outperforms other existing CG methods.
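    For context, the sketch below runs a classical Polak-Ribiere-Polyak (PRP+) conjugate gradient iteration on the Rosenbrock function, using scipy's Wolfe line search as a stand-in for the strong Wolfe-Powell search; it illustrates the general scheme, not the new family proposed in the paper.

    ```python
    import numpy as np
    from scipy.optimize import line_search

    # Classical PRP+ conjugate gradient sketch with a Wolfe line search.

    def rosen(x):
        return 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2

    def rosen_grad(x):
        return np.array([-400 * x[0] * (x[1] - x[0] ** 2) - 2 * (1 - x[0]),
                         200 * (x[1] - x[0] ** 2)])

    x = np.array([-1.2, 1.0])
    g = rosen_grad(x)
    d = -g
    for k in range(200):
        alpha = line_search(rosen, rosen_grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
        if alpha is None:              # line search failed; restart along -g
            d, alpha = -g, 1e-3
        x_new = x + alpha * d
        g_new = rosen_grad(x_new)
        if np.linalg.norm(g_new) < 1e-6:
            x = x_new
            break
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)   # PRP+ restart rule
        d = -g_new + beta * d
        x, g = x_new, g_new

    print(k, x)
    ```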

  12. Efficient Bayesian mixed model analysis increases association power in large cohorts

    PubMed Central

    Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L

    2014-01-01

    Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts, and may not optimize power. All existing methods require time cost O(MN^2) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women’s Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633

  13. Disinfection of Cystoscopes by Subatmospheric Steam and Steam and Formaldehyde at 80°C

    PubMed Central

    Alder, V. G.; Gingell, J. C.; Mitchell, J. P.

    1971-01-01

    A new method of disinfection adapted for endoscopic instruments uses low temperature steam at 80°C or steam and formaldehyde at 80°C. The process has considerable advantages over existing methods and more closely approaches the ideal requirements. PMID:5569551

  14. Niépce-Bell or Turing: how to test odour reproduction.

    PubMed

    Harel, David

    2016-12-01

    Decades before the existence of anything resembling an artificial intelligence system, Alan Turing raised the question of how to test whether machines can think, or, in modern terminology, whether a computer claimed to exhibit intelligence indeed does so. This paper raises the analogous issue for olfaction: how to test the validity of a system claimed to reproduce arbitrary odours artificially, in a way recognizable to humans. Although odour reproduction systems are still far from being viable, the question of how to test candidates thereof is claimed to be interesting and non-trivial, and a novel method is proposed. Despite the similarity between the two questions and their surfacing long before the tested systems exist, the present question cannot be answered adequately by a Turing-like method. Instead, our test is very different: it is conditional, requiring from the artificial no more than is required from the original, and it employs a novel method of immersion that takes advantage of the availability of easily recognizable reproduction methods for sight and sound, a la Nicéphore Niépce and Alexander Graham Bell. © 2016 The Authors.

  15. Niépce–Bell or Turing: how to test odour reproduction

    PubMed Central

    2016-01-01

    Decades before the existence of anything resembling an artificial intelligence system, Alan Turing raised the question of how to test whether machines can think, or, in modern terminology, whether a computer claimed to exhibit intelligence indeed does so. This paper raises the analogous issue for olfaction: how to test the validity of a system claimed to reproduce arbitrary odours artificially, in a way recognizable to humans. Although odour reproduction systems are still far from being viable, the question of how to test candidates thereof is claimed to be interesting and non-trivial, and a novel method is proposed. Despite the similarity between the two questions and their surfacing long before the tested systems exist, the present question cannot be answered adequately by a Turing-like method. Instead, our test is very different: it is conditional, requiring from the artificial no more than is required from the original, and it employs a novel method of immersion that takes advantage of the availability of easily recognizable reproduction methods for sight and sound, a la Nicéphore Niépce and Alexander Graham Bell. PMID:28003527

  16. An Improved Aerial Target Localization Method with a Single Vector Sensor

    PubMed Central

    Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin

    2017-01-01

    This paper focuses on problems encountered when processing real data with the existing aerial target localization methods, analyzes their causes, and proposes an improved algorithm. Processing of sea-trial data shows that the existing algorithms place high demands on the accuracy of the angle estimates. The improved algorithm relaxes these accuracy requirements and yields robust estimates. A closest-distance matching estimation algorithm and a horizontal-distance estimation compensation algorithm are proposed. Post-processing the data with a forward-and-backward two-direction double-filtering method improves smoothing, allows the initial-stage data to be filtered, and retains more useful information in the filtering results. The paper also studies methods for measuring the height of aerial targets and gives the corresponding estimation results, so as to realize three-dimensional localization of the aerial target and increase the underwater platform's awareness of aerial targets, giving the underwater platform better mobility and concealment. PMID:29135956

  17. A hybrid voice/data modulation for the VHF aeronautical channels

    NASA Technical Reports Server (NTRS)

    Akos, Dennis M.

    1993-01-01

    A method of improving the spectral efficiency of the existing Very High Frequency (VHF) Amplitude Modulation (AM) voice communication channels is proposed. The technique is to phase modulate the existing voice amplitude modulated carrier with digital data. This allows the transmission of digital information over an existing AM voice channel with no change to the existing AM signal format. No modification of the existing AM receiver is required to demodulate the voice signal, and an additional receiver module can be added to process the digital data. The existing VHF AM transmitter requires only a slight modification for the addition of the digital data signal. The past work in the area is summarized and presented together with an improved system design and the proposed implementation.

  18. StakeMeter: value-based stakeholder identification and quantification framework for value-based software systems.

    PubMed

    Babar, Muhammad Imran; Ghazali, Masitah; Jawawi, Dayang N A; Bin Zaheer, Kashif

    2015-01-01

    Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for the VBS systems is highly desirable. Based on the stakeholder requirements, the innovative or value-based idea is realized. The quality of the VBS system is associated with the concrete set of valuable requirements, and the valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of the VBS systems. However, the focus on the valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for the VBS systems. The existing approaches are time-consuming, complex and inconsistent which makes the initiation process difficult. Moreover, the main motivation of this research is that the existing SIQ approaches do not provide the low level implementation details for SIQ initiation and stakeholder metrics for quantification. Hence, keeping in view the existing SIQ problems, this research contributes in the form of a new SIQ framework called 'StakeMeter'. The StakeMeter framework is verified and validated through case studies. The proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and application procedure as compared to the other methods. The proposed framework solves the issues of stakeholder quantification or prioritization, higher time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for the VBS systems with less judgmental error.

  19. Investigation of aged hot-mix asphalt pavements.

    DOT National Transportation Integrated Search

    2013-09-01

    Over the lifetime of an asphalt concrete (AC) pavement, the roadway requires periodic resurfacing and rehabilitation to provide : acceptable performance. The most popular resurfacing method is an asphalt overlay over the existing roadway. In the desi...

  20. Stormwater BMP Effectiveness Assessment Toolkit

    EPA Science Inventory

    US EPA has identified stormwater BMP effectiveness as a priority research need. Effective protection of biotic integrity requires that processes maintaining the diversity of physical habitats be protected. Methods are needed to evaluate the effectiveness of existing Stormwater ...

  1. 15 CFR 8c.50 - Program accessibility: Existing facilities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... accessibility requirements to the extent compelled by the Architectural Barriers Act of 1968, as amended (42 U.S.C. 4151-4157), and any regulations implementing it. In choosing among available methods for meeting...

  2. A basic guide to overlay design using nondestructive testing equipment data

    NASA Astrophysics Data System (ADS)

    Turner, Vernon R.

    1990-08-01

    The purpose of this paper is to provide a basic and concise guide to designing asphalt concrete (AC) overlays over existing AC pavements. The basis for these designs is deflection data obtained from nondestructive testing (NDT) equipment. These data are used in design procedures that produce the required overlay thickness or an estimate of remaining pavement life. This guide enables one to design overlays or to better monitor designs performed by others. The paper discusses three types of NDT equipment; the Asphalt Institute overlay design procedures by deflection analysis and by the effective thickness method; a method of estimating remaining pavement life; correlations between NDT equipment; and recent correlations in Washington State. Asphalt overlays provide one of the most cost-effective methods of improving existing pavements. They can be used to strengthen existing pavements, reduce maintenance costs, increase pavement life, provide a smoother ride, and improve skid resistance.

  3. Wavenumber selection in Benard convection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Catton, I.

    1988-11-01

    The results of three related studies dealing with wavenumber selection in Rayleigh--Benard convection are reported. The first, an extension of the power integral method, is used to argue for the existence of multi-wavenumbers at all supercritical wavenumbers. Most existing closure schemes are shown to be inadequate. A thermodynamic stability criterion is shown to give reasonable results but requires empirical measurement of one parameter for closure. The third study uses an asymptotic approach based in part on geometric considerations and requires no empiricism to obtain good predictions of the wavenumber. These predictions, however, can only be used for certain planforms of convection.

  4. Neurology education: current and emerging concepts in residency and fellowship training.

    PubMed

    Stern, Barney J; Józefowicz, Ralph F; Kissela, Brett; Lewis, Steven L

    2010-05-01

    This article discusses the current and future state of neurology training. A priority is to attract sufficient numbers of qualified candidates for the existing residency programs. A majority of neurology residents elects additional training in a neurologic subspecialty, and programs will have to be accredited accordingly. Attempts are being made to standardize and strengthen the existing general residency and subspecialty programs through cooperative efforts. Ultimately, residency programs must comply with the increasing requirements and try to adapt these requirements to the unique demands and realities of neurology training. An effort is underway to establish consistent competency-testing methods. Copyright 2010 Elsevier Inc. All rights reserved.

  5. 78 FR 8066 - Method for the Determination of Lead in Total Suspended Particulate Matter

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-05

    ... the Paperwork Reduction Act, 44 U.S.C. 3501 et seq. Burden is defined at 5 CFR 1320.3(b). The proposed... information collection requirements beyond those imposed by the existing Pb monitoring requirements. C... mandates under the provisions of Title II of the Unfunded Mandates Reform Act of 1995 (UMRA), 2 U.S.C. 1531...

  6. High frequency-heated air turbojet

    NASA Technical Reports Server (NTRS)

    Miron, J. H. D.

    1986-01-01

    A description is given of a method to heat air coming from a turbojet compressor to a temperature necessary to produce required expansion without requiring fuel. This is done by high frequency heating, which heats the walls corresponding to the combustion chamber in existing jets, by mounting high frequency coils in them. The current transformer and high frequency generator to be used are discussed.

  7. Investigation of aged hot-mix asphalt pavements : technical summary.

    DOT National Transportation Integrated Search

    2013-09-01

    Over the lifetime of an asphalt concrete (AC) pavement, the roadway requires periodic resurfacing and rehabilitation to provide acceptable performance. The most popular resurfacing method is an asphalt overlay over the existing roadway. In the design...

  8. Research notes : retrofitting culverts for fish.

    DOT National Transportation Integrated Search

    2005-01-01

    Culverts are a well established method to pass a roadway over a waterway. Standard design criteria exist for meeting the hydraulic requirements for moving the water through the culverts. However, the hydraulic conditions resulting from many culvert d...

  9. 20 CFR 365.150 - Program accessibility: Existing facilities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... compelled by the Architectural Barriers Act of 1968, as amended (42 U.S.C. 4151-4157), and any regulations implementing it are met. In choosing among available methods for meeting the requirements of this section, the...

  10. Building a Library Web Server on a Budget.

    ERIC Educational Resources Information Center

    Orr, Giles

    1998-01-01

    Presents a method for libraries with limited budgets to create reliable Web servers with existing hardware and free software available via the Internet. Discusses staff, hardware and software requirements, and security; outlines the assembly process. (PEN)

  11. Structural Optimization of a Knuckle with Consideration of Stiffness and Durability Requirements

    PubMed Central

    Kim, Geun-Yeon

    2014-01-01

    The automobile's knuckle is connected to parts of the steering system and the suspension system and is used to adjust the direction of rotation through its attachment to the wheel. This study replaces the existing GCD450 material with Al6082M and recommends a lightweight design of the knuckle, obtained through optimal design techniques, for installation in small cars. Six shape design variables were selected for the optimization of the knuckle, and criteria relevant to stiffness and durability were considered as design requirements during the optimization process. A metamodel-based optimization method using kriging interpolation was applied. The result shows that all constraints for stiffness and durability are satisfied using Al6082M, while the weight of the knuckle is reduced by 60% compared to that of the existing GCD450. PMID:24995359

  12. Regulatory experience in applying a radiological environmental protection framework for existing and planned nuclear facilities.

    PubMed

    Mihok, S; Thompson, P

    2012-01-01

    Frameworks and methods for the radiological protection of non-human biota have been evolving rapidly at the International Commission on Radiological Protection and through various European initiatives. The International Atomic Energy Agency has incorporated a requirement for environmental protection in the latest revision of its Basic Safety Standards. In Canada, the Canadian Nuclear Safety Commission has been legally obligated to prevent unreasonable risk to the environment since 2000. Licensees have therefore been meeting generic legal requirements to demonstrate adequate control of releases of radioactive substances for the protection of both people and biota for many years. In the USA, in addition to the generic requirements of the Environmental Protection Agency and the Nuclear Regulatory Commission, Department of Energy facilities have also had to comply with specific dose limits after a standard assessment methodology was finalised in 2002. Canadian regulators developed a similar framework for biota dose assessment through a regulatory assessment under the Canadian Environmental Protection Act in the late 1990s. Since then, this framework has been applied extensively to satisfy legal requirements under the Canadian Environmental Assessment Act and the Nuclear Safety and Control Act. After approximately a decade of experience in applying these methods, it is clear that simple methods are fit for purpose, and can be used for making regulatory decisions for existing and planned nuclear facilities. Copyright © 2012. Published by Elsevier Ltd.

  13. HGML: a hypertext guideline markup language.

    PubMed Central

    Hagerty, C. G.; Pickens, D.; Kulikowski, C.; Sonnenberg, F.

    2000-01-01

    Existing text-based clinical practice guidelines can be difficult to put into practice. While a growing number of such documents have gained acceptance in the medical community and contain a wealth of valuable information, the time required to digest them is substantial. Yet the expressive power, subtlety and flexibility of natural language pose challenges when designing computer tools that will help in their application. At the same time, formal computer languages typically lack such expressiveness and the effort required to translate existing documents into these languages may be costly. We propose a method based on the mark-up concept for converting text-based clinical guidelines into a machine-operable form. This allows existing guidelines to be manipulated by machine, and viewed in different formats at various levels of detail according to the needs of the practitioner, while preserving their originally published form. PMID:11079898

  14. NOTE: Solving the ECG forward problem by means of a meshless finite element method

    NASA Astrophysics Data System (ADS)

    Li, Z. S.; Zhu, S. A.; He, Bin

    2007-07-01

    The conventional numerical computational techniques such as the finite element method (FEM) and the boundary element method (BEM) require laborious and time-consuming model meshing. The new meshless FEM only uses the boundary description and the node distribution and no meshing of the model is required. This paper presents the fundamentals and implementation of meshless FEM and the meshless FEM method is adapted to solve the electrocardiography (ECG) forward problem. The method is evaluated on a single-layer torso model, in which the analytical solution exists, and tested in a realistic geometry homogeneous torso model, with satisfactory results being obtained. The present results suggest that the meshless FEM may provide an alternative for ECG forward solutions.

  15. 77 FR 33811 - National Emission Standards for Hazardous Air Pollutants for Reciprocating Internal Combustion...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-07

    ... reduction requirement, the method used must be suitable for the entire range of emissions since pre and post... demand response. The proposed amendments also correct minor mistakes in the pre-existing regulations...: Submit your comments, identified by Docket ID No. EPA-HQ- OAR-2008-0708, by one of the following methods...

  16. Reducing Sensor Noise in MEG and EEG Recordings Using Oversampled Temporal Projection.

    PubMed

    Larson, Eric; Taulu, Samu

    2018-05-01

    Here, we review the theory of suppression of spatially uncorrelated, sensor-specific noise in electro- and magnetoencephalography (EEG and MEG) arrays, and introduce a novel method for suppression. Our method requires only that the signals of interest are spatially oversampled, which is a reasonable assumption for many EEG and MEG systems. Our method is based on a leave-one-out procedure using overlapping temporal windows in a mathematical framework to project spatially uncorrelated noise in the temporal domain. This method, termed "oversampled temporal projection" (OTP), has four advantages over existing methods. First, sparse channel-specific artifacts are suppressed while limiting mixing with other channels, whereas existing linear, time-invariant spatial operators can spread such artifacts to other channels with a spatial distribution which can be mistaken for one produced by an electrophysiological source. Second, OTP minimizes distortion of the spatial configuration of the data. During source localization (e.g., dipole fitting), many spatial methods require corresponding modification of the forward model to avoid bias, while OTP does not. Third, noise suppression factors at the sensor level are maintained during source localization, whereas bias compensation removes the denoising benefit for spatial methods that require such compensation. Fourth, OTP uses a time-window duration parameter to control the tradeoff between noise suppression and adaptation to time-varying sensor characteristics. OTP efficiently optimizes noise suppression performance while controlling for spatial bias of the signal of interest. This is important in applications where sensor noise significantly limits the signal-to-noise ratio, such as high-frequency brain oscillations.
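    A much-simplified leave-one-out temporal projection sketch is given below: within a single window, each channel is replaced by its least-squares projection onto the temporal subspace spanned by the remaining channels, which suppresses noise that is temporally uncorrelated across sensors. The window handling, array size, and noise model are assumptions for illustration, and the authors' overlapping-window implementation is not reproduced.

    ```python
    import numpy as np

    # Simplified leave-one-out temporal projection sketch (inspired by the OTP
    # idea, not the authors' implementation): each channel in one window is
    # replaced by its least-squares reconstruction from the temporal subspace
    # spanned by the remaining channels.

    def loo_temporal_projection(X):
        # X: (n_channels, n_times) data for a single window, assumed oversampled.
        cleaned = np.empty_like(X)
        for i in range(X.shape[0]):
            others = np.delete(X, i, axis=0)      # leave channel i out
            # Orthonormal basis of the span of the other channels' time courses.
            Q, _ = np.linalg.qr(others.T)         # (n_times, n_channels - 1)
            cleaned[i] = Q @ (Q.T @ X[i])         # project onto that subspace
        return cleaned

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 1000)
    source = np.sin(2 * np.pi * 10 * t)           # shared "brain" signal
    gains = rng.normal(size=(30, 1))
    X = gains * source + 0.5 * rng.normal(size=(30, t.size))   # + sensor noise
    X_clean = loo_temporal_projection(X)
    print("noise power before/after:",
          round(np.var(X - gains * source), 3),
          round(np.var(X_clean - gains * source), 3))
    ```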

  17. The European space debris safety and mitigation standard

    NASA Astrophysics Data System (ADS)

    Alby, F.; Alwes, D.; Anselmo, L.; Baccini, H.; Bonnal, C.; Crowther, R.; Flury, W.; Jehn, R.; Klinkrad, H.; Portelli, C.; Tremayne-Smith, R.

    2001-10-01

    A standard has been proposed as one of the series of ECSS Standards intended to be applied together for the management, engineering and product assurance in space projects and applications. The requirements in the Standard are defined in terms of what must be accomplished, rather than in terms of how to organise and perform the necessary work. This allows existing organisational structures and methods within agencies and industry to be applied where they are effective, and for such structures and methods to evolve as necessary, without the need for rewriting the standards. The Standard comprises management requirements, design requirements and operational requirements. The standard was prepared by the European Debris Mitigation Standard Working Group (EDMSWG) involving members from ASI, BNSC, CNES, DLR and ESA.

  18. Single point estimation of phenytoin dosing: a reappraisal.

    PubMed

    Koup, J R; Gibaldi, M; Godolphin, W

    1981-11-01

    A previously proposed method for estimating the phenytoin dosing requirement from a single serum sample obtained 24 hours after an intravenous loading dose (18 mg/kg) has been re-evaluated. Using more realistic values for the volume of distribution of phenytoin (0.4 to 1.2 L/kg), simulations indicate that the proposed method will fail to consistently predict dosage requirements. Additional simulations indicate that two samples obtained during the 24 hour interval following the iv loading dose could be used to more reliably predict the phenytoin dose requirement. Because of the nonlinear relationship that exists between the phenytoin dose administration rate (R0) and the mean steady-state serum concentration (Css), small errors in prediction of the required R0 result in much larger errors in Css.
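    The nonlinearity referred to above is the Michaelis-Menten elimination of phenytoin at steady state, R0 = Vmax * Css / (Km + Css). The worked example below uses assumed typical adult values for Vmax and Km (not taken from the paper) to show how a 5-10% error in the predicted R0 inflates Css far more.

    ```python
    # Worked illustration (assumed typical adult values, not from the paper) of
    # why small errors in the predicted phenytoin dosing rate R0 produce much
    # larger errors in the steady-state concentration Css.  At steady state,
    # Michaelis-Menten elimination gives  R0 = Vmax * Css / (Km + Css), i.e.
    #   Css = Km * R0 / (Vmax - R0).

    Vmax = 500.0   # mg/day, assumed
    Km = 4.0       # mg/L, assumed

    def css(r0):
        return Km * r0 / (Vmax - r0)

    target_r0 = 400.0                   # mg/day chosen to hit Css = 16 mg/L
    for err in (0.0, 0.05, 0.10):       # 0, 5 and 10 % overestimate of R0
        r0 = target_r0 * (1 + err)
        print(f"R0 = {r0:6.1f} mg/day  ->  Css = {css(r0):6.1f} mg/L")
    ```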

  19. Unconstrained and contactless hand geometry biometrics.

    PubMed

    de-Santos-Sierra, Alberto; Sánchez-Ávila, Carmen; Del Pozo, Gonzalo Bailador; Guerra-Casanova, Javier

    2011-01-01

    This paper presents a hand biometric system for contact-less, platform-free scenarios, proposing innovative methods in feature extraction, template creation and template matching. The evaluation of the proposed method considers both the use of three contact-less publicly available hand databases, and the comparison of the performance to two competitive pattern recognition techniques existing in the literature, namely support vector machines (SVM) and k-nearest neighbour (k-NN). Results highlight the fact that the proposed method outperforms existing approaches in the literature in terms of computational cost, accuracy in human identification, number of extracted features and number of samples for template creation. The proposed method is a suitable solution for human identification in contact-less scenarios based on hand biometrics, providing a feasible solution to devices with limited hardware requirements like mobile devices.

  20. Unconstrained and Contactless Hand Geometry Biometrics

    PubMed Central

    de-Santos-Sierra, Alberto; Sánchez-Ávila, Carmen; del Pozo, Gonzalo Bailador; Guerra-Casanova, Javier

    2011-01-01

    This paper presents a hand biometric system for contact-less, platform-free scenarios, proposing innovative methods in feature extraction, template creation and template matching. The evaluation of the proposed method considers both the use of three contact-less publicly available hand databases, and the comparison of the performance to two competitive pattern recognition techniques existing in the literature, namely Support Vector Machines (SVM) and k-Nearest Neighbour (k-NN). Results highlight the fact that the proposed method outperforms existing approaches in the literature in terms of computational cost, accuracy in human identification, number of extracted features and number of samples for template creation. The proposed method is a suitable solution for human identification in contact-less scenarios based on hand biometrics, providing a feasible solution to devices with limited hardware requirements like mobile devices. PMID:22346634

  1. Identifying Stakeholders and Their Preferences about NFR by Comparing Use Case Diagrams of Several Existing Systems

    NASA Astrophysics Data System (ADS)

    Kaiya, Haruhiko; Osada, Akira; Kaijiri, Kenji

    We present a method to identify stakeholders and their preferences about non-functional requirements (NFR) by using use case diagrams of existing systems. We focus on changes in NFR because such changes help stakeholders to identify their preferences. Comparing different use case diagrams of the same domain helps us to find changes that are likely to occur. We utilize the Goal-Question-Metric (GQM) method for identifying variables that characterize NFR, and we can systematically represent changes in NFR using the variables. Use cases that represent system interactions help us to bridge the gap between goals and metrics (variables), and we can easily construct measurable NFR. For validating and evaluating our method, we applied it to the application domain of Mail User Agent (MUA) systems.

  2. Synchronization of Chaotic Systems without Direct Connections Using Reinforcement Learning

    NASA Astrophysics Data System (ADS)

    Sato, Norihisa; Adachi, Masaharu

    In this paper, we propose a control method for the synchronization of chaotic systems that does not require the systems to be connected, unlike existing methods such as that proposed by Pecora and Carroll in 1990. The method is based on the reinforcement learning algorithm. We apply our method to two discrete-time chaotic systems with mismatched parameters and achieve M step delay synchronization. Moreover, we extend the proposed method to the synchronization of continuous-time chaotic systems.

  3. Low Base-Substitution Mutation Rate in the Germline Genome of the Ciliate Tetrahymena thermophila

    DTIC Science & Technology

    2016-09-15

    generations of mutation accumulation (MA). We applied an existing mutation-calling pipeline and developed a new probabilistic mutation detection approach...noise introduced by mismapped reads. We used both our new method and an existing mutation-calling pipeline (Sung, Tucker, et al. 2012) to analyse the...and larger MA experiments will be required to confidently estimate the mutational spectrum of a species with such a low mutation rate. Materials and

  4. StakeMeter: Value-Based Stakeholder Identification and Quantification Framework for Value-Based Software Systems

    PubMed Central

    Babar, Muhammad Imran; Ghazali, Masitah; Jawawi, Dayang N. A.; Zaheer, Kashif Bin

    2015-01-01

    Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for the VBS systems is highly desirable. Based on the stakeholder requirements, the innovative or value-based idea is realized. The quality of the VBS system is associated with the concrete set of valuable requirements, and the valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of the VBS systems. However, the focus on the valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for the VBS systems. The existing approaches are time-consuming, complex and inconsistent which makes the initiation process difficult. Moreover, the main motivation of this research is that the existing SIQ approaches do not provide the low level implementation details for SIQ initiation and stakeholder metrics for quantification. Hence, keeping in view the existing SIQ problems, this research contributes in the form of a new SIQ framework called ‘StakeMeter’. The StakeMeter framework is verified and validated through case studies. The proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and application procedure as compared to the other methods. The proposed framework solves the issues of stakeholder quantification or prioritization, higher time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for the VBS systems with less judgmental error. PMID:25799490

  5. Using Optically Stimulated Electron Emission as an Inspection Method to Monitor Surface Contamination

    NASA Technical Reports Server (NTRS)

    Lingbloom, Mike S.

    2008-01-01

    During redesign of the Space Shuttle reusable solid rocket motor (RSRM), NASA amended the contract with ATK Launch Systems (then Morton Thiokol Inc.) with Change Order 966 to implement a contamination control and cleanliness verification method. The change order required (1) a quantitative inspection method, (2) a written record of actual contamination levels versus a known reject level, and (3) a method more sensitive than the existing methods of visual and black light inspection. Black light inspection is only useful for contaminants that fluoresce near the 365 nm spectral line and is not useful for most silicones, which do not produce strong fluorescence. Black light inspection conducted by a qualified inspector under controlled light can detect Conoco HD-2 grease in gross amounts but is very subjective due to operator sensitivity. Optically stimulated electron emission (OSEE), developed at the Materials and Process Laboratory at Marshall Space Flight Center (MSFC), was selected to satisfy Change Order 966. OSEE offers sensitivity comparable to laboratory methods such as spectroscopy and nonvolatile residue sampling, with important advantages in turnaround time, real-time capability, and full-coverage inspection capability. Laboratory methods require sample gathering and in-lab analysis, which sometimes takes several days to return results; this is not practical in a production environment. In addition, these methods do not offer full-coverage inspection of the large components.

  6. Variable Structure PID Control to Prevent Integrator Windup

    NASA Technical Reports Server (NTRS)

    Hall, C. E.; Hodel, A. S.; Hung, J. Y.

    1999-01-01

    PID controllers are frequently used to control systems requiring zero steady-state error while maintaining requirements for settling time and robustness (gain/phase margins). PID controllers suffer significant loss of performance due to short-term integrator wind-up when used in systems with actuator saturation. We examine several existing and proposed methods for the prevention of integrator wind-up in both continuous and discrete time implementations.
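    As background for the wind-up problem discussed above, the sketch below shows one common remedy, conditional integration (clamping), on a toy first-order plant with a saturated actuator. The gains, limits, and plant are assumptions for illustration, and the paper's variable-structure scheme is not reproduced.

    ```python
    # Minimal discrete PID with conditional integration (one common anti-windup
    # scheme).  The integrator is frozen whenever the actuator is saturated and
    # the error would drive it further into saturation.

    def make_pid(kp, ki, kd, u_min, u_max, dt):
        state = {"i": 0.0, "e_prev": 0.0}

        def controller(error):
            d = (error - state["e_prev"]) / dt
            u_unsat = kp * error + ki * state["i"] + kd * d
            u = min(max(u_unsat, u_min), u_max)
            # Conditional integration: only accumulate when not pushing deeper
            # into saturation.
            if u == u_unsat or u_unsat * error < 0:
                state["i"] += error * dt
            state["e_prev"] = error
            return u

        return controller

    # Example: first-order plant y' = (-y + u) / tau under a saturated actuator.
    pid = make_pid(kp=2.0, ki=1.0, kd=0.0, u_min=-1.0, u_max=1.0, dt=0.01)
    y, tau, setpoint = 0.0, 1.0, 5.0
    for _ in range(2000):
        u = pid(setpoint - y)
        y += 0.01 * (-y + u) / tau
    print("final output:", round(y, 3))
    ```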

  7. Measurement of CO2 diffusivity for carbon sequestration: a microfluidic approach for reservoir-specific analysis.

    PubMed

    Sell, Andrew; Fadaei, Hossein; Kim, Myeongsub; Sinton, David

    2013-01-02

    Predicting carbon dioxide (CO2) security and capacity in sequestration requires knowledge of CO2 diffusion into reservoir fluids. In this paper we demonstrate a microfluidic based approach to measuring the mutual diffusion coefficient of carbon dioxide in water and brine. The approach enables formation of fresh CO2-liquid interfaces; the resulting diffusion is quantified by imaging fluorescence quenching of a pH-dependent dye, and subsequent analyses. This method was applied to study the effects of site-specific variables--CO2 pressure and salinity levels--on the diffusion coefficient. In contrast to established, macro-scale pressure-volume-temperature cell methods that require large sample volumes and testing periods of hours/days, this approach requires only microliters of sample, provides results within minutes, and isolates diffusive mass transport from convective effects. The measured diffusion coefficient of CO2 in water was constant (1.86 [± 0.26] × 10^-9 m^2/s) over the range of pressures (5-50 bar) tested at 26 °C, in agreement with existing models. The effects of salinity were measured with solutions of 0-5 M NaCl, where the diffusion coefficient varied up to 3 times. These experimental data support existing theory and demonstrate the applicability of this method for reservoir-specific testing.
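    As a rough illustration of how a diffusion coefficient can be extracted from such measurements, the sketch below fits a one-dimensional semi-infinite diffusion profile to synthetic concentration data. The geometry, time, and noise level are assumptions standing in for the fluorescence-quenching images, not the authors' analysis.

    ```python
    import numpy as np
    from scipy.special import erfc
    from scipy.optimize import curve_fit

    # Sketch: recover a diffusion coefficient from a concentration profile near a
    # freshly formed interface, assuming 1-D semi-infinite diffusion
    #   C(x, t) = C0 * erfc(x / (2 * sqrt(D * t))).
    # Synthetic data stand in for the fluorescence-quenching measurements.

    D_true = 1.86e-9                    # m^2/s, order of the reported value
    C0 = 1.0
    x = np.linspace(0, 400e-6, 80)      # 0-400 micrometres from the interface
    t = 30.0                            # seconds after interface formation

    rng = np.random.default_rng(1)
    profile = C0 * erfc(x / (2 * np.sqrt(D_true * t))) \
              + 0.02 * rng.normal(size=x.size)

    def model(x, D_nano):               # fit D in units of 1e-9 m^2/s
        return C0 * erfc(x / (2 * np.sqrt(D_nano * 1e-9 * t)))

    D_fit, _ = curve_fit(model, x, profile, p0=[1.0])
    print(f"recovered D = {D_fit[0] * 1e-9:.2e} m^2/s")
    ```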

  8. Linear Regression with a Randomly Censored Covariate: Application to an Alzheimer's Study.

    PubMed

    Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A

    2017-01-01

    The association between maternal age of onset of dementia and amyloid deposition (measured by in vivo positron emission tomography (PET) imaging) in cognitively normal older offspring is of interest. In a regression model for amyloid, special methods are required due to the random right censoring of the covariate of maternal age of onset of dementia. Prior literature has proposed methods to address the problem of censoring due to assay limit of detection, but not random censoring. We propose imputation methods and a survival regression method that do not require parametric assumptions about the distribution of the censored covariate. Existing imputation methods address missing covariates, but not right censored covariates. In simulation studies, we compare these methods to the simple, but inefficient complete case analysis, and to thresholding approaches. We apply the methods to the Alzheimer's study.

  9. Validation studies and proficiency testing.

    PubMed

    Ankilam, Elke; Heinze, Petra; Kay, Simon; Van den Eede, Guy; Popping, Bert

    2002-01-01

    Genetically modified organisms (GMOs) entered the European food market in 1996. Current legislation demands the labeling of food products if they contain >1% GMO, as assessed for each ingredient of the product. To create confidence in the testing methods and to complement enforcement requirements, there is an urgent need for internationally validated methods, which could serve as reference methods. To date, several methods have been submitted to validation trials at an international level; approaches now exist that can be used in different circumstances and for different food matrixes. Moreover, the requirement for the formal validation of methods is clearly accepted; several national and international bodies are active in organizing studies. Further validation studies, especially on the quantitative polymerase chain reaction methods, need to be performed to cover the rising demand for new extraction methods and other background matrixes, as well as for novel GMO constructs.

  10. Ion-acoustic supersolitons and double layers in plasmas with nonthermal electrons

    NASA Astrophysics Data System (ADS)

    Gao, D.-N.; Zhang, J.; Yang, Y.; Duan, W.-S.

    2017-08-01

    A supersoliton (SS) can be characterized mainly in two ways, namely, by focusing on the subsidiary maxima of its electric field or by meeting the requirement that the appropriate Sagdeev pseudopotential (SP) has three local extrema between the equilibrium condition and its amplitude. In this paper, by using the SP method, double layers and ion-acoustic SSs are studied in a plasma with Maxwellian cold electrons, nonthermal hot electrons, and fluid ions. The existence of the SS regime in parameter space is obtained in a methodical fashion. The existence domains for positive solitary waves are also presented. It is found that there are no SSs at the acoustic speed.
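    For reference, the standard Sagdeev pseudopotential conditions for a solitary wave of amplitude \phi_m are sketched below in textbook form; the model-specific pseudopotential for this plasma is not reproduced, and the last line restates the supersoliton criterion used above.

    ```latex
    % Energy integral and standard existence conditions for a solitary wave of
    % amplitude \phi_m (textbook form; model-specific V(\phi) not shown here):
    \frac{1}{2}\left(\frac{d\phi}{d\xi}\right)^{2} + V(\phi) = 0,
    \qquad V(0) = V'(0) = 0,\quad V''(0) < 0,
    \qquad V(\phi_m) = 0,\quad V(\phi) < 0 \ \text{for } 0 < |\phi| < |\phi_m|.
    % Supersoliton criterion: V(\phi) has three local extrema between
    % \phi = 0 and \phi = \phi_m.
    ```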

  11. Single kernel method for detection of 2-acetyl-1-pyrroline in aromatic rice germplasm using SPME-GC/MS

    USDA-ARS?s Scientific Manuscript database

    Aromatic rice, or fragrant rice (Oryza sativa L.), has a strong popcorn-like aroma due to the presence of a five-membered N-heterocyclic ring compound known as 2-acetyl-1-pyrroline (2-AP). To date, existing methods for detecting this compound in rice require the use of several kernels. ...

  12. Modeling the Mental Health Workforce in Washington State: Using State Licensing Data to Examine Provider Supply in Rural and Urban Areas

    ERIC Educational Resources Information Center

    Baldwin, Laura-Mae; Patanian, Miriam M.; Larson, Eric H.; Lishner, Denise M.; Mauksch, Larry B.; Katon, Wayne J.; Walker, Edward; Hart, L. Gary

    2006-01-01

    Context: Ensuring an adequate mental health provider supply in rural and urban areas requires accessible methods of identifying provider types, practice locations, and practice productivity. Purpose: To identify mental health shortage areas using existing licensing and survey data. Methods: The 1998-1999 Washington State Department of Health files…

  13. Mechanistic-empirical asphalt overlay thickness design and analysis system.

    DOT National Transportation Integrated Search

    2009-10-01

    The placement of an asphalt overlay is the most common method used by the Texas Department of Transportation (TxDOT) to rehabilitate : existing asphalt and concrete pavements. The type of overlay and its required thickness are important decisions tha...

  14. On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models

    NASA Astrophysics Data System (ADS)

    Xu, S.; Wang, B.; Liu, J.

    2015-10-01

    In this article we propose two grid generation methods for global ocean general circulation models. Contrary to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries to those with regular boundaries (i.e., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitudinal-longitudinal portion and the overall smooth grid cell size transition. The second method addresses more modern and advanced grid design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids could potentially achieve the alignment of grid lines to the large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the grids are orthogonal curvilinear, they can be easily utilized by the majority of ocean general circulation models that are based on finite difference and require grid orthogonality. The proposed grid generation algorithms can also be applied to the grid generation for regional ocean modeling where complex land-sea distribution is present.

  15. A novel knowledge-based potential for RNA 3D structure evaluation

    NASA Astrophysics Data System (ADS)

    Yang, Yi; Gu, Qi; Zhang, Ben-Gong; Shi, Ya-Zhou; Shao, Zhi-Gang

    2018-03-01

    Ribonucleic acids (RNAs) play a vital role in biology, and knowledge of their three-dimensional (3D) structure is required to understand their biological functions. Recently structural prediction methods have been developed to address this issue, but a series of RNA 3D structures are generally predicted by most existing methods. Therefore, the evaluation of the predicted structures is generally indispensable. Although several methods have been proposed to assess RNA 3D structures, the existing methods are not precise enough. In this work, a new all-atom knowledge-based potential is developed for more accurately evaluating RNA 3D structures. The potential not only includes local and nonlocal interactions but also fully considers the specificity of each RNA by introducing a retraining mechanism. Based on extensive test sets generated from independent methods, the proposed potential correctly distinguished the native state and ranked near-native conformations to effectively select the best. Furthermore, the proposed potential precisely captured RNA structural features such as base-stacking and base-pairing. Comparisons with existing potential methods show that the proposed potential is very reliable and accurate in RNA 3D structure evaluation. Project supported by the National Science Foundation of China (Grants Nos. 11605125, 11105054, 11274124, and 11401448).
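    A minimal example of the knowledge-based idea, an inverse-Boltzmann distance potential U(r) = -ln(P_obs(r) / P_ref(r)), is sketched below on synthetic data. The atom pairs, bins, and reference state are placeholders, and the paper's local/nonlocal terms and per-RNA retraining step are not reproduced.

    ```python
    import numpy as np

    # Minimal inverse-Boltzmann sketch of a distance-dependent knowledge-based
    # potential: U(r) = -ln( P_obs(r) / P_ref(r) ).

    def knowledge_based_potential(observed_distances, reference_distances,
                                  r_max=20.0, n_bins=40, pseudocount=1e-6):
        bins = np.linspace(0.0, r_max, n_bins + 1)
        p_obs, _ = np.histogram(observed_distances, bins=bins, density=True)
        p_ref, _ = np.histogram(reference_distances, bins=bins, density=True)
        return bins, -np.log((p_obs + pseudocount) / (p_ref + pseudocount))

    # Synthetic example: "native-like" pair distances cluster near 6 A, while the
    # reference state is broad, so the potential develops a minimum around 6 A.
    rng = np.random.default_rng(0)
    obs = rng.normal(6.0, 1.0, 5000)
    ref = rng.uniform(0.0, 20.0, 5000)
    bins, U = knowledge_based_potential(obs, ref)
    print("minimum of U near r =", round(bins[np.argmin(U)], 1), "A")
    ```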

  16. A classical Perron method for existence of smooth solutions to boundary value and obstacle problems for degenerate-elliptic operators via holomorphic maps

    NASA Astrophysics Data System (ADS)

    Feehan, Paul M. N.

    2017-09-01

    We prove existence of solutions to boundary value problems and obstacle problems for degenerate-elliptic, linear, second-order partial differential operators with partial Dirichlet boundary conditions using a new version of the Perron method. The elliptic operators considered have a degeneracy along a portion of the domain boundary which is similar to the degeneracy of a model linear operator identified by Daskalopoulos and Hamilton [9] in their study of the porous medium equation or the degeneracy of the Heston operator [21] in mathematical finance. Existence of a solution to the partial Dirichlet problem on a half-ball, where the operator becomes degenerate on the flat boundary and a Dirichlet condition is only imposed on the spherical boundary, provides the key additional ingredient required for our Perron method. Surprisingly, proving existence of a solution to this partial Dirichlet problem with 'mixed' boundary conditions on a half-ball is more challenging than one might expect. Due to the difficulty in developing a global Schauder estimate and due to compatibility conditions arising where the 'degenerate' and 'non-degenerate boundaries' touch, one cannot directly apply the continuity or approximate solution methods. However, in dimension two, there is a holomorphic map from the half-disk onto the infinite strip in the complex plane and one can extend this definition to higher dimensions to give a diffeomorphism from the half-ball onto the infinite 'slab'. The solution to the partial Dirichlet problem on the half-ball can thus be converted to a partial Dirichlet problem on the slab, albeit for an operator which now has exponentially growing coefficients. The required Schauder regularity theory and existence of a solution to the partial Dirichlet problem on the slab can nevertheless be obtained using previous work of the author and C. Pop [16]. Our Perron method relies on weak and strong maximum principles for degenerate-elliptic operators, concepts of continuous subsolutions and supersolutions for boundary value and obstacle problems for degenerate-elliptic operators, and maximum and comparison principle estimates previously developed by the author [13].

  17. Advanced Extra-Vehicular Activity Pressure Garment Requirements Development

    NASA Technical Reports Server (NTRS)

    Ross, Amy; Aitchison, Lindsay; Rhodes, Richard

    2015-01-01

    The NASA Johnson Space Center advanced pressure garment technology development team is addressing requirements development for exploration missions. Lessons learned from the Z-2 high fidelity prototype development have reiterated that clear low-level requirements and verification methods reduce risk to the government, improve efficiency in pressure garment design efforts, and enable the government to be a smart buyer. The expectation is to provide requirements at the specification level that are validated so that their impact on pressure garment design is understood. Additionally, the team will provide defined verification protocols for the requirements. However, in reviewing exploration space suit high level requirements there are several gaps in the team's ability to define and verify related lower level requirements. This paper addresses the efforts in requirement areas such as mobility/fit/comfort and environmental protection (dust, radiation, plasma, secondary impacts) to determine the method by which the requirements can be defined and use of those methods for verification. Gaps exist at various stages. In some cases component level work is underway, but no system level effort has begun; in other cases no effort has been initiated to close the gap. Status of on-going efforts and potential approaches to open gaps are discussed.

  18. Optical image encryption by random shifting in fractional Fourier domains

    NASA Astrophysics Data System (ADS)

    Hennelly, B.; Sheridan, J. T.

    2003-02-01

    A number of methods have recently been proposed in the literature for the encryption of two-dimensional information by use of optical systems based on the fractional Fourier transform. Typically, these methods require random phase screen keys for decrypting the data, which must be stored at the receiver and must be carefully aligned with the received encrypted data. A new technique based on a random shifting, or jigsaw, algorithm is proposed. This method does not require the use of phase keys. The image is encrypted by juxtaposition of sections of the image in fractional Fourier domains. The new method has been compared with existing methods and shows comparable or superior robustness to blind decryption. Optical implementation is discussed, and the sensitivity of the various encryption keys to blind decryption is examined.
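
    A rough sketch of the jigsaw (block permutation) step only, with a keyed pseudo-random permutation standing in for the optical keys; the fractional Fourier transforms applied between shuffling stages are omitted here:

      import numpy as np

      # Toy jigsaw shuffle: split an image into tiles and permute them with a
      # permutation derived from an integer key. Calling the same function with
      # inverse=True restores the original image.
      def jigsaw(img, tile=8, key=1234, inverse=False):
          h, w = img.shape
          tiles = [img[r:r + tile, c:c + tile].copy()
                   for r in range(0, h, tile) for c in range(0, w, tile)]
          perm = np.random.default_rng(key).permutation(len(tiles))
          order = np.argsort(perm) if inverse else perm
          out = np.zeros_like(img)
          k = 0
          for r in range(0, h, tile):
              for c in range(0, w, tile):
                  out[r:r + tile, c:c + tile] = tiles[order[k]]
                  k += 1
          return out

      img = np.arange(64 * 64, dtype=float).reshape(64, 64)
      assert np.allclose(jigsaw(jigsaw(img), inverse=True), img)   # shuffle, then unshuffle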

  19. Max-margin multiattribute learning with low-rank constraint.

    PubMed

    Zhang, Qiang; Chen, Lin; Li, Baoxin

    2014-07-01

    Attribute learning has attracted a lot of interest in recent years for its advantage of being able to model high-level concepts with a compact set of midlevel attributes. Real-world objects often demand multiple attributes for effective modeling. Most existing methods learn attributes independently without explicitly considering their intrinsic relatedness. In this paper, we propose max margin multiattribute learning with low-rank constraint, which learns a set of attributes simultaneously, using only relative ranking of the attributes for the data. By learning all the attributes simultaneously through low-rank constraint, the proposed method is able to capture their intrinsic correlation for improved learning; by requiring only relative ranking, the method avoids restrictive binary labels of attributes that are often assumed by many existing techniques. The proposed method is evaluated on both synthetic data and real visual data including a challenging video data set. Experimental results demonstrate the effectiveness of the proposed method.

  20. A new Lagrangian random choice method for steady two-dimensional supersonic/hypersonic flow

    NASA Technical Reports Server (NTRS)

    Loh, C. Y.; Hui, W. H.

    1991-01-01

    Glimm's (1965) random choice method has been successfully applied to compute steady two-dimensional supersonic/hypersonic flow using a new Lagrangian formulation. The method is easy to program, fast to execute, yet it is very accurate and robust. It requires no grid generation, resolves slipline and shock discontinuities crisply, can handle boundary conditions most easily, and is applicable to hypersonic as well as supersonic flow. It represents an accurate and fast alternative to the existing Eulerian methods. Many computed examples are given.

  1. Configurations and calibration methods for passive sampling techniques.

    PubMed

    Ouyang, Gangfeng; Pawliszyn, Janusz

    2007-10-19

    Passive sampling technology has developed very quickly in the past 15 years, and is widely used for the monitoring of pollutants in different environments. The design and quantification of passive sampling devices require an appropriate calibration method. Current calibration methods that exist for passive sampling, including equilibrium extraction, linear uptake, and kinetic calibration, are presented in this review. A number of state-of-the-art passive sampling devices that can be used for aqueous and air monitoring are introduced according to their calibration methods.
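
    For the kinetic case, the calibration model is typically a first-order uptake curve; a minimal sketch with generic symbols (K: sampler-water partition constant, k_e: exchange rate constant), not tied to any particular device in the review:

      import numpy as np

      # First-order uptake: amount collected n(t) = K * C_w * (1 - exp(-k_e * t)).
      def sampled_amount(C_w, K, k_e, t):
          return K * C_w * (1.0 - np.exp(-k_e * t))

      # Calibration inverts the model to estimate the water concentration C_w.
      def infer_concentration(n_t, K, k_e, t):
          return n_t / (K * (1.0 - np.exp(-k_e * t)))

      n = sampled_amount(C_w=2.5, K=100.0, k_e=0.05, t=24.0)    # synthetic measurement
      print(infer_concentration(n, K=100.0, k_e=0.05, t=24.0))  # recovers ~2.5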

  2. EsPRit: ethics committee proposals for Long Term Medical Data Registries in rapidly evolving research fields - a future-proof best practice approach.

    PubMed

    Oberbichler, S; Hackl, W O; Hörbst, A

    2017-10-18

    Long-term data collection is a challenging task in the domain of medical research. Many effects in medicine require long periods of time to become traceable, e.g. the development of secondary malignancies based on a given radiotherapeutic treatment of the primary disease. Nevertheless, long-term studies often suffer from an initial lack of available information, thus disallowing a standardized approach for their approval by the ethics committee. This is due to several factors, such as the lack of existing case report forms or an explorative research approach in which data elements may change over time. In connection with current medical research and the ongoing digitalization in medicine, Long Term Medical Data Registries (MDR-LT) have become an important means of collecting and analyzing study data. As with any clinical study, ethical aspects must be taken into account when setting up such registries. This work addresses the problem of creating a valid, high-quality ethics committee proposal for medical registries by suggesting groups of tasks (building blocks), information sources and appropriate methods for collecting and analyzing the information, as well as a process model to compile an ethics committee proposal (EsPRit). To derive the building blocks and associated methods, software and requirements engineering approaches were utilized. Furthermore, a process-oriented approach was chosen, as information required in the process of creating ethics committee proposals remains unknown at the beginning of planning an MDR-LT. Here, we derived the needed steps from medical product certification. This was done as the medical product certification itself also communicates a process-oriented approach rather than merely focusing on content. A proposal was created for validation and inspection of applicability by using the proposed building blocks. The proposed best practice was tested and refined within SEMPER (Secondary Malignoma - Prospective Evaluation of the Radiotherapeutics dose distribution as the cause for induction) as a case study. The proposed building blocks cover the topics of "Context Analysis", "Requirements Analysis", "Requirements Validation", "Electronic Case Report (eCRF) Design" and "Overall Concept Creation". Additional methods are attached with regard to each topic. The goals of each block can be met by applying those methods. The proposed methods are proven methods as applied in e.g. existing Medical Data Registry projects, as well as in software or requirements engineering. Several building blocks and attached methods could be identified in the creation of a generic ethics committee proposal. Hence, an Ethics Committee can make informed decisions on the suggested study via said blocks, using the suggested methods such as "Defining Clinical Questions" within the Context Analysis. The study creators have to confirm that they adhere to the proposed procedure within the ethics proposal statement. Additional existing Medical Data Registry projects can be compared to EsPRit for conformity to the proposed procedure. This allows for the identification of gaps, which can lead to amendments requested by the ethics committee.

  3. Wilderness campsite monitoring methods: a sourcebook

    Treesearch

    David N. Cole

    1989-01-01

    Summarizes information on techniques available for monitoring the condition of campsites, particularly those in wilderness. A variety of techniques are described and evaluated; sources of information are also listed. Problems with existing monitoring systems and places where refinement of technique is required are highlighted.

  4. Damage identification using inverse methods.

    PubMed

    Friswell, Michael I

    2007-02-15

    This paper gives an overview of the use of inverse methods in damage detection and location, using measured vibration data. Inverse problems require the use of a model and the identification of uncertain parameters of this model. Damage is often local in nature and although the effect of the loss of stiffness may require only a small number of parameters, the lack of knowledge of the location means that a large number of candidate parameters must be included. This paper discusses a number of problems that exist with this approach to health monitoring, including modelling error, environmental effects, damage localization and regularization.

  5. OPTiM: Optical projection tomography integrated microscope using open-source hardware and software

    PubMed Central

    Andrews, Natalie; Davis, Samuel; Bugeon, Laurence; Dallman, Margaret D.; McGinty, James

    2017-01-01

    We describe the implementation of an OPT plate to perform optical projection tomography (OPT) on a commercial wide-field inverted microscope, using our open-source hardware and software. The OPT plate includes a tilt adjustment for alignment and a stepper motor for sample rotation as required by standard projection tomography. Depending on magnification requirements, three methods of performing OPT are detailed using this adaptor plate: a conventional direct OPT method requiring only the addition of a limiting aperture behind the objective lens; an external optical-relay method allowing conventional OPT to be performed at magnifications >4x; a remote focal scanning and region-of-interest method for improved spatial resolution OPT (up to ~1.6 μm). All three methods use the microscope’s existing incoherent light source (i.e. arc-lamp) and all of its inherent functionality is maintained for day-to-day use. OPT acquisitions are performed on in vivo zebrafish embryos to demonstrate the implementations’ viability. PMID:28700724

  6. A novel algorithm for the reconstruction of an entrance beam fluence from treatment exit patient portal dosimetry images

    NASA Astrophysics Data System (ADS)

    Sperling, Nicholas Niven

    The problem of determining the in vivo dosimetry for patients undergoing radiation treatment has been an area of interest since the development of the field. Most methods which have found clinical acceptance work by use of a proxy dosimeter, e.g.: glass rods, using radiophotoluminescence; thermoluminescent dosimeters (TLD), typically CaF or LiF; Metal Oxide Silicon Field Effect Transistor (MOSFET) dosimeters, using threshold voltage shift; Optically Stimulated Luminescent Dosimeters (OSLD), composed of carbon-doped aluminum oxide crystals; RadioChromic film, using leuko-dye polymers; Silicon Diode dosimeters, typically p-type; and ion chambers. More recent methods employ Electronic Portal Image Devices (EPID), or dosimeter arrays, for entrance or exit beam fluence determination. The difficulty with the proxy in vivo dosimetry methods is the requirement that they be placed at the particular location where the dose is to be determined. This precludes measurements across the entire patient volume. These methods are best suited where the dose at a particular location is required. The more recent methods of in vivo dosimetry make use of detector arrays and reconstruction techniques to determine dose throughout the patient volume. One method uses an array of ion chambers located upstream of the patient. This requires a special hardware device and places an additional attenuator in the beam path, which may not be desirable. A final approach is to use the existing EPID, which is part of most modern linear accelerators, to image the patient using the treatment beam. Methods exist to deconvolve the detector function of the EPID using a series of weighted exponentials. Additionally, this method has been extended to determine in vivo dosimetry. The method developed here employs the use of EPID images and an iterative deconvolution algorithm to reconstruct the impinging primary beam fluence on the patient. This primary fluence may then be employed to determine dose through the entire patient volume. The method requires patient-specific information, including a CT for deconvolution/dose reconstruction. With the large-scale adoption of Cone Beam CT (CBCT) systems on modern linear accelerators, a treatment time CT is readily available for use in this deconvolution and in dose representation.

  7. A Framework For Dynamic Subversion

    DTIC Science & Technology

    2003-06-01

    informal methods. These methods examine the security requirements, security specification, also called the Formal Top Level Specification and its ...not be always invoked due to its possible deactivation by errant or malicious code. Further, the RVM, if no separation exists between the kernel...that this thesis focused on, is the means by which the dynamic portion of the artifice finds space to operate or is loaded, is relocated in its

  8. Evaluation of methods for determining hardware projected life

    NASA Technical Reports Server (NTRS)

    1971-01-01

    An investigation of existing methods of predicting hardware life is summarized by reviewing programs having long life requirements, current research efforts on long life problems, and technical papers reporting work on life predicting techniques. The results indicate that there are no accurate quantitative means to predict hardware life for system level hardware. The effectiveness of test programs and the causes of hardware failures are also considered.

  9. Mapping the ecological networks of microbial communities.

    PubMed

    Xiao, Yandong; Angulo, Marco Tulio; Friedman, Jonathan; Waldor, Matthew K; Weiss, Scott T; Liu, Yang-Yu

    2017-12-11

    Mapping the ecological networks of microbial communities is a necessary step toward understanding their assembly rules and predicting their temporal behavior. However, existing methods require assuming a particular population dynamics model, which is not known a priori. Moreover, those methods require fitting longitudinal abundance data, which are often not informative enough for reliable inference. To overcome these limitations, here we develop a new method based on steady-state abundance data. Our method can infer the network topology and inter-taxa interaction types without assuming any particular population dynamics model. Additionally, when the population dynamics is assumed to follow the classic Generalized Lotka-Volterra model, our method can infer the inter-taxa interaction strengths and intrinsic growth rates. We systematically validate our method using simulated data, and then apply it to four experimental data sets. Our method represents a key step towards reliable modeling of complex, real-world microbial communities, such as the human gut microbiota.
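
    A compact sketch of the Generalized Lotka-Volterra case described above (a simplified stand-in for the paper's algorithm): at any steady state where taxon i is present, r_i + sum_j a_ij x_j* = 0, so with the self-interaction fixed to a_ii = -1 the remaining row entries and growth rate can be estimated by least squares across many steady-state samples from different subcommunities:

      import numpy as np

      # X: (samples x taxa) matrix of steady-state abundances (zeros for absent taxa).
      # Returns the off-diagonal interaction coefficients and growth rate for taxon i,
      # up to the chosen normalization a_ii = -1.
      def infer_glv_row(i, X):
          Xs = X[X[:, i] > 0]                        # samples where taxon i is present
          others = [j for j in range(X.shape[1]) if j != i]
          A = np.hstack([Xs[:, others], np.ones((Xs.shape[0], 1))])
          b = Xs[:, i]                               # a_ii = -1 term moved to the right-hand side
          coef, *_ = np.linalg.lstsq(A, b, rcond=None)
          return coef[:-1], coef[-1]                 # a_ij for j != i, and r_i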

  10. Fitting methods to paradigms: are ergonomics methods fit for systems thinking?

    PubMed

    Salmon, Paul M; Walker, Guy H; M Read, Gemma J; Goode, Natassia; Stanton, Neville A

    2017-02-01

    The issues being tackled within ergonomics problem spaces are shifting. Although existing paradigms appear relevant for modern day systems, it is worth questioning whether our methods are. This paper asks whether the complexities of systems thinking, a currently ubiquitous ergonomics paradigm, are outpacing the capabilities of our methodological toolkit. This is achieved through examining the contemporary ergonomics problem space and the extent to which ergonomics methods can meet the challenges posed. Specifically, five key areas within the ergonomics paradigm of systems thinking are focused on: normal performance as a cause of accidents, accident prediction, system migration, systems concepts and ergonomics in design. The methods available for pursuing each line of inquiry are discussed, along with their ability to respond to key requirements. In doing so, a series of new methodological requirements and capabilities are identified. It is argued that further methodological development is required to provide researchers and practitioners with appropriate tools to explore both contemporary and future problems. Practitioner Summary: Ergonomics methods are the cornerstone of our discipline. This paper examines whether our current methodological toolkit is fit for purpose given the changing nature of ergonomics problems. The findings provide key research and practice requirements for methodological development.

  11. Survey of Existing and Promising New Methods of Surface Preparation

    DTIC Science & Technology

    1982-04-01

    and abroad, a description and analysis are given of applicable methods including: • Equipment employing recycled steel shot and grit. • wet blast...requirements that must be met by these methods. 23. Barrillom, P., “Preservation of Materials in the Marine Environment— Analysis of Replies TO The Enquiry on...conditions, can hydrolyze or give sulfuric acid, causing renewed corrosion. Wet blasting or the use of high pressure water jets appears to be useful in

  12. Micro-optical-mechanical system photoacoustic spectrometer

    DOEpatents

    Kotovsky, Jack; Benett, William J.; Tooker, Angela C.; Alameda, Jennifer B.

    2013-01-01

    All-optical photoacoustic spectrometer sensing systems (PASS system) and methods include all the hardware needed to analyze the presence of a large variety of materials (solid, liquid and gas). Some of the all-optical PASS systems require only two optical-fibers to communicate with the opto-electronic power and readout systems that exist outside of the material environment. Methods for improving the signal-to-noise ratio are provided and enable micro-scale systems and methods for operating such systems.

  13. Agent-based Training: Facilitating Knowledge and Skill Acquisition in a Modern Space Operations Team

    DTIC Science & Technology

    2002-04-01

    face, and being careful to not add to existing problems such as limited display space. This required us to work closely with members of the SBIRS operational community and use research tools such as cognitive task analysis methods.

  14. Cost and benefits design optimization model for fault tolerant flight control systems

    NASA Technical Reports Server (NTRS)

    Rose, J.

    1982-01-01

    Requirements and specifications for a method of optimizing the design of fault-tolerant flight control systems are provided. Algorithms that could be used for developing new and modifying existing computer programs are also provided, with recommendations for follow-on work.

  15. Training of U.S. Air Traffic Controllers. (IDA Report No. R-206).

    ERIC Educational Resources Information Center

    Henry, James H.; And Others

    The report reviews the evolution of existing national programs for air traffic controller training, estimates the number of persons requiring developmental and supplementary training, examines present controller selection and training programs, investigates performance measurement methods, considers standardization and quality control, discusses…

  16. Urban topography for flood modeling by fusion of OpenStreetMap, SRTM and local knowledge

    NASA Astrophysics Data System (ADS)

    Winsemius, Hessel; Donchyts, Gennadii; Eilander, Dirk; Chen, Jorik; Leskens, Anne; Coughlan, Erin; Mawanda, Shaban; Ward, Philip; Diaz Loaiza, Andres; Luo, Tianyi; Iceland, Charles

    2016-04-01

    Topography data is essential for understanding and modeling of urban flood hazard. Within urban areas, much of the topography is defined by highly localized man-made features such as roads, channels, ditches, culverts and buildings. As a result, urban flood models require high-resolution topography, and water-conveying connections within the topography must be considered. In recent years, more and more topography information is collected through LIDAR surveys; however, there are still many cities in the world where high resolution topography data is not available. Furthermore, information on connectivity is required for flood modelling, even when LIDAR data are used. In this contribution, we demonstrate how high resolution terrain data can be synthesized using a fusion between features in OpenStreetMap (OSM) data (including roads, culverts, channels and buildings) and existing low resolution and noisy SRTM elevation data using the Google Earth Engine platform. Our method uses typical existing OSM properties to estimate heights and topology associated with the features, and uses these to correct noise and burn features on top of the existing low resolution SRTM elevation data. The method has been set up in the Google Earth Engine platform so that local stakeholders and mapping teams can propose, include and visualize on the fly the effect of additional features and properties of features, which are deemed important for topography and water conveyance. These features can be included in a workshop environment. We pilot our tool over Dar Es Salaam.
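
    The core fusion step ("burning" rasterized OSM features into the coarse elevation model) can be sketched in a few lines; the real workflow in Google Earth Engine additionally handles reprojection, SRTM noise filtering and feature topology, and the arrays below are placeholders:

      import numpy as np

      # Replace or offset DEM cells wherever a rasterized feature mask is set,
      # e.g. lower drainage channels or raise buildings relative to the terrain.
      def burn_features(dem, feature_mask, height, mode="offset"):
          out = dem.astype(float).copy()
          if mode == "offset":
              out[feature_mask] += height          # relative adjustment (e.g. buildings)
          else:
              out[feature_mask] = height           # absolute level (e.g. a channel invert)
          return out

      dem = np.full((5, 5), 10.0)                             # stand-in for an SRTM tile (m)
      mask = np.zeros((5, 5), dtype=bool); mask[2, :] = True  # a rasterized road-side drain
      print(burn_features(dem, mask, -1.5))                   # drain burned 1.5 m below terrain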

  17. Thermal/structural design verification strategies for large space structures

    NASA Technical Reports Server (NTRS)

    Benton, David

    1988-01-01

    Requirements for space structures of increasing size, complexity, and precision have engendered a search for thermal design verification methods that do not impose unreasonable costs, that fit within the capabilities of existing facilities, and that still adequately reduce technical risk. This requires a combination of analytical and testing methods, using two approaches. The first is to limit thermal testing to sub-elements of the total system only in a compact configuration (i.e., not fully deployed). The second approach is to use a simplified environment to correlate analytical models with test results. These models can then be used to predict flight performance. In practice, a combination of these approaches is needed to verify the thermal/structural design of future very large space systems.

  18. Extended duration Orbiter life support definition

    NASA Technical Reports Server (NTRS)

    Kleiner, G. N.; Thompson, C. D.

    1978-01-01

    Extending the baseline seven-day Orbiter mission to 30 days or longer and operating with a solar power module as the primary source for electrical power requires changes to the existing environmental control and life support (ECLS) system. The existing ECLS system imposes penalties on longer missions which limit the Orbiter capabilities and changes are required to enhance overall mission objectives. Some of these penalties are: large quantities of expendables, the need to dump or store large quantities of waste material, the need to schedule fuel cell operation, and a high landing weight penalty. This paper presents the study ground rules and examines the limitations of the present ECLS system against Extended Duration Orbiter mission requirements. Alternate methods of accomplishing ECLS functions for the Extended Duration Orbiter are discussed. The overall impact of integrating these options into the Orbiter are evaluated and significant Orbiter weight and volume savings with the recommended approaches are described.

  19. Information in medical decision making: how consistent is our management?

    PubMed

    Lorence, Daniel P; Spink, Amanda; Jameson, Robert

    2002-01-01

    The use of outcomes data in clinical environments requires a correspondingly greater variety of information used in decision making, the measurement of quality, and clinical performance. As information becomes integral in the decision-making process, trustworthy decision support data are required. Using data from a national census of certified health information managers, variation in automated data quality management practices was examined. Relatively low overall adoption of automated data management exists in health care organizations, with significant geographic and practice setting variation. Nonuniform regional adoption of computerized data management exists, despite national mandates that promote and in some cases require uniform adoption. Overall, a significant number of respondents (42.7%) indicated that they had not adopted policies and procedures to direct the timeliness of data capture, with 57.3% having adopted such practices. The inconsistency of patient data policy suggests that provider organizations do not use uniform information management methods, despite growing federal mandates to do so.

  20. Initial Assessment of U.S. Refineries for Purposes of Potential Bio-Based Oil Insertions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Charles J.; Jones, Susanne B.; Padmaperuma, Asanga B.

    2013-04-01

    In order to meet U.S. biofuel objectives over the coming decade, the conversion of a broad range of biomass feedstocks, using diverse processing options, will be required. Further, the production of both gasoline and diesel biofuels will employ biomass conversion methods that produce wide boiling range intermediate oils requiring treatment similar to conventional refining processes (i.e. fluid catalytic cracking, hydrocracking, and hydrotreating). As such, it is widely recognized that leveraging existing U.S. petroleum refining infrastructure is key to reducing overall capital demands. This study examines how existing U.S. refining locations, capacities and conversion capabilities match in geography and processing capabilities with the needs projected from anticipated biofuels production.

  1. Design enhancement tools in MSC/NASTRAN

    NASA Technical Reports Server (NTRS)

    Wallerstein, D. V.

    1984-01-01

    Design sensitivity is the calculation of derivatives of constraint functions with respect to design variables. While a knowledge of these derivatives is useful in its own right, the derivatives are required in many efficient optimization methods. Constraint derivatives are also required in some reanalysis methods. It is shown where the sensitivity coefficients fit into the scheme of a basic organization of an optimization procedure. The analyzer is to be taken as MSC/NASTRAN. The terminator program monitors the termination criteria and ends the optimization procedure when the criteria are satisfied. This program can reside in several places: in the optimizer itself, in a user written code, or as part of the MSC/EOS (Engineering Operating System) currently under development. Since several excellent optimization codes exist and since they require very specialized technical knowledge, the optimizer under the new MSC/EOS is considered to be selected and supplied by the user to meet his specific needs and preferences. The one exception to this is a fully stressed design (FSD) based on simple scaling. The gradients are currently supplied by various design sensitivity options now existing in MSC/NASTRAN's design sensitivity analysis (DSA).

  2. Straightening: existence, uniqueness and stability

    PubMed Central

    Destrade, M.; Ogden, R. W.; Sgura, I.; Vergori, L.

    2014-01-01

    One of the least studied universal deformations of incompressible nonlinear elasticity, namely the straightening of a sector of a circular cylinder into a rectangular block, is revisited here and, in particular, issues of existence and stability are addressed. Particular attention is paid to the system of forces required to sustain the large static deformation, including by the application of end couples. The influence of geometric parameters and constitutive models on the appearance of wrinkles on the compressed face of the block is also studied. Different numerical methods for solving the incremental stability problem are compared and it is found that the impedance matrix method, based on the resolution of a matrix Riccati differential equation, is the more precise. PMID:24711723

  3. Analysis of Existing Guidelines for the Systematic Planning Process of Clinical Registries.

    PubMed

    Löpprich, Martin; Knaup, Petra

    2016-01-01

    Clinical registries are a powerful method to observe clinical practice and natural disease history. In contrast to clinical trials, where guidelines and standardized methods exist and are mandatory, only a few initiatives have published methodological guidelines for clinical registries. The objective of this paper was to review these guidelines and systematically assess their completeness, usability and feasibility according to a SWOT analysis. The results show that each guideline has its own strengths and weaknesses. While one supports the systematic planning process, the other discusses clinical registries in great detail. However, the feasibility was mostly limited, and the special requirements of clinical registries, namely their flexible, expandable and adaptable technological structure, were not addressed consistently.

  4. Management of Dynamic Biomedical Terminologies: Current Status and Future Challenges

    PubMed Central

    Dos Reis, J. C.; Pruski, C.

    2015-01-01

    Summary. Objectives: Controlled terminologies and their dependent artefacts provide a consensual understanding of a domain while reducing ambiguities and enabling reasoning. However, the evolution of a domain’s knowledge directly impacts these terminologies and generates inconsistencies in the underlying biomedical information systems. In this article, we review existing work addressing the dynamic aspect of terminologies as well as their effects on mappings and semantic annotations. Methods: We investigate approaches related to the identification, characterization and propagation of changes in terminologies, mappings and semantic annotations including techniques to update their content. Results and conclusion: Based on the explored issues and existing methods, we outline open research challenges requiring investigation in the near future. PMID:26293859

  5. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
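
    For orientation, the iteration itself (not the paper's heuristic stopping rule) can be sketched on a toy two-parameter problem; a0, q and the test problem are illustrative choices:

      import numpy as np

      # Iteratively regularized Gauss-Newton:
      #   x_{k+1} = x_0 + (J^T J + a_k I)^{-1} J^T ( y - F(x_k) + J (x_k - x_0) ),
      # with a geometrically decreasing regularization parameter a_k = a0 * q^k.
      def irgnm(F, J, y, x0, a0=1.0, q=0.5, steps=15):
          x, history = x0.copy(), []
          for k in range(steps):
              Jk, a_k = J(x), a0 * q ** k
              rhs = Jk.T @ (y - F(x) + Jk @ (x - x0))
              x = x0 + np.linalg.solve(Jk.T @ Jk + a_k * np.eye(len(x0)), rhs)
              history.append(x.copy())
          return history   # a stopping rule would pick one index from this history

      F = lambda x: np.array([x[0] ** 2, x[0] * x[1]])
      J = lambda x: np.array([[2 * x[0], 0.0], [x[1], x[0]]])
      y = F(np.array([2.0, 1.0])) + 1e-3                # noisy data for the toy problem
      print(irgnm(F, J, y, x0=np.array([1.5, 0.5]))[-1])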

  6. A new method for constructing analytic elements for groundwater flow.

    NASA Astrophysics Data System (ADS)

    Strack, O. D.

    2007-12-01

    The analytic element method is based upon the superposition of analytic functions that are defined throughout the infinite domain, and can be used to meet a variety of boundary conditions. Analytic elements have been used successfully for a number of problems, mainly dealing with the Poisson equation (see, e.g., Theory and Applications of the Analytic Element Method, Reviews of Geophysics, 41,2/1005 2003 by O.D.L. Strack). The majority of these analytic elements consist of functions that exhibit jumps along lines or curves. Such linear analytic elements have been developed also for other partial differential equations, e.g., the modified Helmholtz equation and the heat equation, and were constructed by integrating elementary solutions, the point sink and the point doublet, along a line. This approach is limiting for two reasons. First, the existence of the elementary solutions is required, and, second, the integration tends to limit the range of solutions that can be obtained. We present a procedure for generating analytic elements that requires merely the existence of a harmonic function with the desired properties; such functions exist in abundance. The procedure to be presented is used to generalize this harmonic function in such a way that the resulting expression satisfies the applicable differential equation. The approach will be applied, along with numerical examples, for the modified Helmholtz equation and for the heat equation, while it is noted that the method is in no way restricted to these equations. The procedure is carried out entirely in terms of complex variables, using Wirtinger calculus.

  7. Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.

    2015-12-01

    Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces and it is necessary to ensure physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high order numerical methods. This is computationally expensive. We demonstrate here the use of a novel control volume - finite element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability. Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher order or discontinuous elements where existing approaches are often excessively diffusive.

  8. Future needs for biomedical transducers

    NASA Technical Reports Server (NTRS)

    Wooten, F. T.

    1971-01-01

    In summary, there are three major classes of transducer improvements required: improvements in existing transducers, needs for unexploited physical science phenomena in transducer design, and needs for unutilized physiological phenomena in transducer design. During the next decade, increasing emphasis will be placed on noninvasive measurement in all of these areas. Patient safety, patient comfort, and the need for efficient utilization of the time of both patient and physician require that noninvasive methods of monitoring be developed.

  9. Audiology practice management in South Africa: What audiologists know and what they should know

    PubMed Central

    Kritzinger, Alta; Soer, Maggi

    2015-01-01

    Background: In future, the South African Department of Health aims to purchase services from accredited private service providers. Successful private audiology practices can assist to address issues of access, equity and quality of health services. It is not sufficient to be an excellent clinician, since audiology practices are businesses that must also be managed effectively. Objective: The objective was to determine the existing and required levels of practice management knowledge as perceived by South African audiologists. Method: An electronic descriptive survey was used to investigate audiology practice management amongst South African audiologists. A total of 147 respondents completed the survey. Results were analysed by calculating descriptive statistics. The Z-proportional test was used to identify significant differences between existing and required levels of practice management knowledge. Results: Significant differences were found between existing and required levels of knowledge regarding all eight practice management tasks, particularly legal and ethical issues and marketing and accounting. There were small differences in the knowledge required for practice management tasks amongst respondents working in public and private settings. Conclusion: Irrespective of their work context, respondents showed that they need significant expansion of practice management knowledge in order to be successful, to compete effectively and to make sense of a complex marketplace. PMID:26809158

  10. Stepper motor

    NASA Technical Reports Server (NTRS)

    Dekramer, Cornelis

    1994-01-01

    The purpose of this document is to describe the more commonly used permanent magnet stepper motors for spaceflight. It will discuss the mechanical and electrical aspects of the devices, their torque behavior, those parameters which need to be controlled and measured, and test methods to be employed. It will also discuss torque margins, compare these to the existing margin requirements, and determine the applicability of these requirements. Finally it will attempt to generate a set of requirements which will be used in any stepper motor procurement and will fully characterize the stepper motor behavior in a consistent and repeatable fashion.

  11. Compensation of Verdet Constant Temperature Dependence by Crystal Core Temperature Measurement

    PubMed Central

    Petricevic, Slobodan J.; Mihailovic, Pedja M.

    2016-01-01

    Compensation of the temperature dependence of the Verdet constant in a polarimetric extrinsic Faraday sensor is of major importance for applying the magneto-optical effect to AC current measurements and magnetic field sensing. This paper presents a method for compensating the temperature effect on the Faraday rotation in a Bi12GeO20 crystal by sensing its optical activity effect on the polarization of a light beam. The method measures the temperature of the same volume of crystal that effects the beam polarization in a magnetic field or current sensing process. This eliminates the effect of temperature difference found in other indirect temperature compensation methods, thus allowing more accurate temperature compensation for the temperature dependence of the Verdet constant. The method does not require additional changes to an existing Δ/Σ configuration and is thus applicable for improving the performance of existing sensing devices. PMID:27706043

  12. Evaluation of techniques for increasing recall in a dictionary approach to gene and protein name identification.

    PubMed

    Schuemie, Martijn J; Mons, Barend; Weeber, Marc; Kors, Jan A

    2007-06-01

    Gene and protein name identification in text requires a dictionary approach to relate synonyms to the same gene or protein, and to link names to external databases. However, existing dictionaries are incomplete. We investigate two complementary methods for automatic generation of a comprehensive dictionary: combination of information from existing gene and protein databases and rule-based generation of spelling variations. Both methods have been reported in literature before, but have hitherto not been combined and evaluated systematically. We combined gene and protein names from several existing databases of four different organisms. The combined dictionaries showed a substantial increase in recall on three different test sets, as compared to any single database. Application of 23 spelling variation rules to the combined dictionaries further increased recall. However, many rules appeared to have no effect and some appear to have a detrimental effect on precision.
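
    Illustrative spelling-variation rules of the kind evaluated (hyphen/space changes and Roman-to-Arabic numerals); the 23 rules used in the paper differ in detail:

      import re

      ROMAN = {"i": "1", "ii": "2", "iii": "3", "iv": "4", "v": "5"}

      def spelling_variants(name):
          variants = {name, name.lower()}
          variants.add(name.replace("-", " "))                       # hyphen -> space
          variants.add(name.replace("-", ""))                        # hyphen removed
          variants.add(re.sub(r"(?<=[a-zA-Z])(?=\d)", "-", name))    # "TNFa2" -> "TNFa-2"
          for rom, arab in ROMAN.items():                            # "type-ii" -> "type-2"
              variants.add(re.sub(rf"\b{rom}\b", arab, name.lower()))
          return variants

      print(spelling_variants("collagen type-II"))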

  13. Strengthening of Existing Bridge Structures for Shear and Bending with Carbon Textile-Reinforced Mortar.

    PubMed

    Herbrand, Martin; Adam, Viviane; Classen, Martin; Kueres, Dominik; Hegger, Josef

    2017-09-19

    Increasing traffic loads and changes in code provisions lead to deficits in shear and flexural capacity of many existing highway bridges. Therefore, a large number of structures are expected to require refurbishment and strengthening in the future. This projection is based on the current condition of many older road bridges. Different strengthening methods for bridges exist to extend their service life, all having specific advantages and disadvantages. By applying a thin layer of carbon textile-reinforced mortar (CTRM) to bridge deck slabs and the webs of pre-stressed concrete bridges, the fatigue and ultimate strength of these members can be increased significantly. The CTRM layer is a combination of a corrosion resistant carbon fiber reinforced polymer (CFRP) fabric and an efficient mortar. In this paper, the strengthening method and the experimental results obtained at RWTH Aachen University are presented.

  14. Strengthening of Existing Bridge Structures for Shear and Bending with Carbon Textile-Reinforced Mortar

    PubMed Central

    Herbrand, Martin; Classen, Martin; Kueres, Dominik; Hegger, Josef

    2017-01-01

    Increasing traffic loads and changes in code provisions lead to deficits in shear and flexural capacity of many existing highway bridges. Therefore, a large number of structures are expected to require refurbishment and strengthening in the future. This projection is based on the current condition of many older road bridges. Different strengthening methods for bridges exist to extend their service life, all having specific advantages and disadvantages. By applying a thin layer of carbon textile-reinforced mortar (CTRM) to bridge deck slabs and the webs of pre-stressed concrete bridges, the fatigue and ultimate strength of these members can be increased significantly. The CTRM layer is a combination of a corrosion resistant carbon fiber reinforced polymer (CFRP) fabric and an efficient mortar. In this paper, the strengthening method and the experimental results obtained at RWTH Aachen University are presented. PMID:28925962

  15. A modified form of conjugate gradient method for unconstrained optimization problems

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Rivaie, Mohd.; Mamat, Mustafa

    2016-06-01

    Conjugate gradient (CG) methods have been recognized as an interesting technique to solve optimization problems, due to their numerical efficiency, simplicity and low memory requirements. In this paper, we propose a new CG method based on the study of Rivaie et al. [7] (Comparative study of conjugate gradient coefficient for unconstrained Optimization, Aus. J. Bas. Appl. Sci. 5(2011) 947-951). Then, we show that our method satisfies the sufficient descent condition and converges globally with exact line search. Numerical results show that our proposed method is efficient for the given standard test problems, compared to other existing CG methods.
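
    A generic nonlinear CG loop for context (a sketch only: the coefficient shown is the RMIL formula of Rivaie et al. that the abstract builds on, not the paper's modified coefficient, and Armijo backtracking stands in for exact line search):

      import numpy as np

      def cg(f, grad, x, iters=500, tol=1e-8):
          g = grad(x); d = -g
          for _ in range(iters):
              if np.linalg.norm(g) < tol:
                  break
              t = 1.0
              while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
                  t *= 0.5                              # backtracking step-size control
              x_new = x + t * d
              g_new = grad(x_new)
              beta = g_new @ (g_new - g) / (d @ d)      # RMIL-type CG coefficient
              d = -g_new + max(beta, 0.0) * d
              x, g = x_new, g_new
          return x

      c = np.array([3.0, -1.0])
      print(cg(lambda x: np.sum((x - c) ** 2), lambda x: 2 * (x - c), np.zeros(2)))  # ~ [3, -1]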

  16. Assessing FAÇADE Visibility in 3d City Models for City Marketing

    NASA Astrophysics Data System (ADS)

    Albrecht, F.; Moser, J.; Hijazi, I.

    2013-08-01

    In city marketing, different applications require the evaluation of the visual impression that displays in the urban environment make on people who visit the city. Therefore, this research focuses on how visual displays on façades for movie performances are perceived during a cultural event triggered by city marketing. We describe the different visibility analysis methods that are applicable to the analysis of façades. The methods advanced from the domains of Geographic Information Science, architecture and computer graphics. A detailed scenario is described in order to perform a requirements analysis identifying the requirements on visibility information. This visibility information needs to describe the visual perception of displays on façades adequately. The requirements are compared to the visibility information that can be provided by the visibility methods. A discussion of the comparison summarizes the advantages and disadvantages of existing visibility analysis methods for describing the visibility of façades. The results show that some of the investigated approaches are able to support the requirements on visibility information. However, they also show that unresolved workflow integration issues remain before the entire analysis workflow can be fully supported.

  17. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity.

  18. OPTIMIZING POTENTIAL GREEN REPLACEMENT CHEMICALS – BALANCING FUNCTION AND RISK

    EPA Science Inventory

    An important focus of green chemistry is the design of new chemicals that are inherently less toxic than the ones they might replace, but still retain required functional properties. A variety of methods exist to measure or model both functional and toxicity surrogates that could...

  19. Noncontaminating technique for making holes in existing process systems

    NASA Technical Reports Server (NTRS)

    Hecker, T. P.; Czapor, H. P.; Giordano, S. M.

    1972-01-01

    Technique is developed for making cleanly-contoured holes in assembled process systems without introducing chips or other contaminants into system. Technique uses portable equipment and does not require dismantling of system. Method was tested on Inconel, stainless steel, ASTMA-53, and Hastelloy X in all positions.

  20. Correlation of rapid hydrometer analysis for select material to existing procedure LDH-TR-407-66 : final report.

    DOT National Transportation Integrated Search

    1968-05-01

    Conditions arise during construction of bases with Portland cement stabilized soils which require close programming of work. Therefore, time is of significant importance. That is the objective of this report: to evaluate a method by which considera...

  1. Development of a press and drag method for hyperlink selection on smartphones.

    PubMed

    Chang, Joonho; Jung, Kihyo

    2017-11-01

    The present study developed a novel touch method for hyperlink selection on smartphones consisting of two sequential finger interactions: press and drag motions. The novel method requires a user to press a target hyperlink, and if a touch error occurs, he/she can immediately correct it by dragging the finger without releasing it in the middle. The method was compared with two existing methods in terms of completion time, error rate, and subjective rating. Forty college students participated in the experiments with different hyperlink sizes (4-pt, 6-pt, 8-pt, and 10-pt) on a touch-screen device. When hyperlink size was small (4-pt and 6-pt), the novel method (time: 826 msec; error: 0.6%) demonstrated better completion time and error rate than the current method (time: 1194 msec; error: 22%). In addition, the novel method (1.15, slightly satisfied, in 7-pt bipolar scale) had significantly higher satisfaction scores than the two existing methods (0.06, neutral). Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Development of functional requirements for electronic health communication: preliminary results from the ELIN project.

    PubMed

    Christensen, Tom; Grimsmo, Anders

    2005-01-01

    User participation is important for developing a functional requirements specification for electronic communication. General practitioners and practising specialists, however, often work in small practices without the resources to develop and present their requirements. It was necessary to find a method that could engage practising doctors in order to promote their needs related to electronic communication. Qualitative research methods were used, starting a process to develop and study documents and collect data from meetings in project groups. Triangulation was used, in that the participants were organised into a panel of experts, a user group, a supplier group and an editorial committee. The panel of experts created a list of functional requirements for electronic communication in health care, consisting of 197 requirements, in addition to 67 requirements selected from an existing Norwegian standard for electronic patient records (EPRs). Elimination of paper copies sent in parallel with electronic messages, optimal workflow, a common electronic 'envelope' with directory services for units and end-users, and defined requirements for content with the possibility of decision support were the most important requirements. The results indicate that we have found a method of developing functional requirements which provides valid results both for practising doctors and for suppliers of EPR systems.

  3. Optimum runway orientation relative to crosswinds

    NASA Technical Reports Server (NTRS)

    Falls, L. W.; Brown, S. C.

    1972-01-01

    Specific magnitudes of crosswinds may exist that could be constraints to the success of an aircraft mission such as the landing of the proposed space shuttle. A method is required to determine the orientation or azimuth of the proposed runway which will minimize the probability of certain critical crosswinds. Two procedures for obtaining the optimum runway orientation relative to minimizing a specified crosswind speed are described and illustrated with examples. The empirical procedure requires only hand calculations on an ordinary wind rose. The theoretical method utilizes wind statistics computed after the bivariate normal elliptical distribution is applied to a data sample of component winds. This method requires only the assumption that the wind components are bivariate normally distributed. This assumption seems to be reasonable. Studies are currently in progress for testing wind components for bivariate normality for various stations. The close agreement between the theoretical and empirical results for the example chosen substantiates the bivariate normal assumption.
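
    The empirical procedure lends itself to a direct computation; a sketch with synthetic wind observations standing in for a station's wind rose (the bivariate-normal variant would replace the empirical count with the analytic probability):

      import numpy as np

      # For each candidate runway azimuth, estimate the probability that the
      # crosswind component exceeds a limit, and keep the azimuth that minimizes it.
      def best_azimuth(speeds, directions_deg, limit, candidates=np.arange(0, 180, 10)):
          best, best_p = None, 1.1
          for az in candidates:
              cross = np.abs(speeds * np.sin(np.radians(directions_deg - az)))
              p = np.mean(cross > limit)
              if p < best_p:
                  best, best_p = az, p
          return best, best_p

      rng = np.random.default_rng(0)
      speeds = rng.rayleigh(6.0, 10000)                      # knots (synthetic)
      directions = rng.normal(240.0, 30.0, 10000) % 360.0    # prevailing wind near 240 deg
      print(best_azimuth(speeds, directions, limit=15.0))    # runway azimuth near 60/240 deg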

  4. Qualitative Maintenance Experience Handbook

    DTIC Science & Technology

    1975-10-20

    differences in type and location of actuators results. DESIRABLE FEATURES: 1. The simpler assist methods are easier to get to usually and are smaller...the wheels differ somewhat in method of removal, there exist no particular features that would qualify as 4 "undesirable." 3. The AV-8 requires special... different airplanes, this survey identifies desirable and undesirable features evident in the various installations of the same component. In essence

  5. The Capability of Virtual Reality to Meet Military Requirements (la Capacite de la rea1ite virtuelle a repondre aux besoins militaires)

    DTIC Science & Technology

    2000-11-01

    Presentations discussed sensory interfaces, measures of effectiveness, the importance of the sensation of presence, and cybersickness. The third day reviewed assessment methods and applications research. Speakers reviewed existing or... Reality technology.

  6. Simplified stereo-optical ultrasound plane calibration

    NASA Astrophysics Data System (ADS)

    Hoßbach, Martin; Noll, Matthias; Wesarg, Stefan

    2013-03-01

    Image guided therapy is a natural concept and commonly used in medicine. In anesthesia, a common task is the injection of an anesthetic close to a nerve under freehand ultrasound guidance. Several guidance systems exist using electromagnetic tracking of the ultrasound probe as well as the needle, providing the physician with a precise projection of the needle into the ultrasound image. This, however, requires additional expensive devices. We suggest using optical tracking with miniature cameras attached to a 2D ultrasound probe to achieve a higher acceptance among physicians. The purpose of this paper is to present an intuitive method to calibrate freehand ultrasound needle guidance systems employing a rigid stereo camera system. State-of-the-art methods are based on a complex series of error-prone coordinate system transformations which makes them susceptible to error accumulation. By reducing the number of calibration steps to a single calibration procedure we provide a calibration method that is equivalent, yet not prone to error accumulation. It requires a linear calibration object and is validated on three datasets utilizing different calibration objects: a 6 mm metal bar and a 1.25 mm biopsy needle were used for experiments. Compared to existing calibration methods for freehand ultrasound needle guidance systems, we are able to achieve higher accuracy results while additionally reducing the overall calibration complexity.

  7. Nonideal isentropic gas flow through converging-diverging nozzles

    NASA Technical Reports Server (NTRS)

    Bober, W.; Chow, W. L.

    1990-01-01

    A method for treating nonideal gas flows through converging-diverging nozzles is described. The method incorporates the Redlich-Kwong equation of state. The Runge-Kutta method is used to obtain a solution. Numerical results were obtained for methane gas. Typical plots of pressure, temperature, and area ratios as functions of Mach number are given. From the plots, it can be seen that there exists a range of reservoir conditions that require the gas to be treated as nonideal if an accurate solution is to be obtained.
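
    As a point of reference, a minimal sketch of the Redlich-Kwong equation of state mentioned above is given below. The methane critical constants are standard handbook values and the sample state point is arbitrary; this is not the report's code.

```python
import numpy as np

# Redlich-Kwong equation of state:
#   P = R*T/(v - b) - a / (sqrt(T) * v * (v + b))
# with a = 0.42748 R^2 Tc^2.5 / Pc and b = 0.08664 R Tc / Pc.
R = 8.314                  # J/(mol K)
Tc, Pc = 190.6, 4.599e6    # methane critical temperature (K) and pressure (Pa)

a = 0.42748 * R**2 * Tc**2.5 / Pc
b = 0.08664 * R * Tc / Pc

def pressure_rk(T, v):
    """Redlich-Kwong pressure (Pa) from temperature (K) and molar volume (m^3/mol)."""
    return R * T / (v - b) - a / (np.sqrt(T) * v * (v + b))

def pressure_ideal(T, v):
    return R * T / v

T, v = 250.0, 1.0e-3
print(pressure_rk(T, v), pressure_ideal(T, v))   # departure from ideal-gas behaviour
```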

  8. Space station contamination control study: Internal combustion, phase 1

    NASA Technical Reports Server (NTRS)

    Ruggeri, Robert T.

    1987-01-01

    Contamination inside Space Station modules was studied to determine the best methods of controlling contamination. The work was conducted in five tasks that identified existing contamination control requirements, analyzed contamination levels, developed an outgassing specification for materials, wrote a contamination control plan, and evaluated the materials offgassing tests currently used by NASA. It is concluded that current contamination control methods can be made to function on the Space Station for up to 1000 days, but that current methods are deficient for periods longer than about 1000 days.

  9. Nonparametric Methods in Astronomy: Think, Regress, Observe—Pick Any Three

    NASA Astrophysics Data System (ADS)

    Steinhardt, Charles L.; Jermyn, Adam S.

    2018-02-01

    Telescopes are much more expensive than astronomers, so it is essential to minimize required sample sizes by using the most data-efficient statistical methods possible. However, the most commonly used model-independent techniques for finding the relationship between two variables in astronomy are flawed. In the worst case they can lead without warning to subtly yet catastrophically wrong results, and even in the best case they require more data than necessary. Unfortunately, there is no single best technique for nonparametric regression. Instead, we provide a guide for how astronomers can choose the best method for their specific problem and provide a python library with both wrappers for the most useful existing algorithms and implementations of two new algorithms developed here.
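
    The paper's own library and algorithms are not reproduced here; as a generic illustration of model-independent regression, the sketch below implements Nadaraya-Watson kernel smoothing, one common nonparametric estimator. The bandwidth and toy data are assumptions made for the example.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth=0.5):
    """Gaussian-kernel Nadaraya-Watson regression: a locally weighted mean.

    The bandwidth controls the bias/variance (and hence data-efficiency) trade-off.
    """
    x_train = np.asarray(x_train)[None, :]     # shape (1, n)
    x_query = np.asarray(x_query)[:, None]     # shape (m, 1)
    weights = np.exp(-0.5 * ((x_query - x_train) / bandwidth) ** 2)
    return (weights @ np.asarray(y_train)) / weights.sum(axis=1)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)
print(nadaraya_watson(x, y, np.array([2.5, 5.0, 7.5])))
```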

  10. The JPL functional requirements tool

    NASA Technical Reports Server (NTRS)

    Giffin, Geoff; Skinner, Judith; Stoller, Richard

    1987-01-01

    Planetary spacecraft are complex vehicles which are built according to many thousands of requirements. Problems encountered in documenting and maintaining these requirements led to the current attempt to reduce or eliminate these problems by a computer automated data base Functional Requirements Tool. The tool developed at JPL and in use on several JPL Projects is described. The organization and functionality of the Tool, together with an explanation of the data base inputs, their relationships, and use are presented. Methods of interfacing with external documents, representation of tables and figures, and methods of approval and change processing are discussed. The options available for disseminating information from the Tool are identified. The implementation of the Requirements Tool is outlined, and the operation is summarized. The conclusions drawn from this work are that the Requirements Tool represents a useful addition to the System Engineer's tool kit, that it is not currently available elsewhere, and that a clear development path exists to expand the capabilities of the Tool to serve larger and more complex projects.

  11. Modified harmonic balance method for the solution of nonlinear jerk equations

    NASA Astrophysics Data System (ADS)

    Rahman, M. Saifur; Hasan, A. S. M. Z.

    2018-03-01

    In this paper, a second approximate solution of nonlinear jerk equations (third-order differential equations) is obtained by using a modified harmonic balance method. The method is simpler and easier to apply to nonlinear differential equations because fewer nonlinear algebraic equations must be solved than in the classical harmonic balance method. The results obtained from this method are compared with those obtained from other existing analytical methods available in the literature and with numerical results. The solution shows good agreement with the numerical solution as well as with the analytical methods in the available literature.
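
    The essence of harmonic balance can be illustrated with a lowest-order sketch: substitute a cosine ansatz into a jerk equation and require the fundamental-harmonic component of the residual to vanish. The cubic jerk equation below is a hypothetical example, not one of the equations treated in the paper, and only the first-order balance is shown (the paper's modified method goes to a second approximation).

```python
import sympy as sp

# Illustrative jerk equation (hypothetical):  x''' + x' + eps * (x')**3 = 0
t, w, A, eps = sp.symbols('t omega A epsilon', positive=True)
x = A * sp.cos(w * t)

residual = sp.diff(x, t, 3) + sp.diff(x, t) + eps * sp.diff(x, t) ** 3

# Project the residual onto sin(omega*t) over one period (first-harmonic balance).
T = 2 * sp.pi / w
c1 = sp.integrate(residual * sp.sin(w * t), (t, 0, T)) * (2 / T)

# Setting the fundamental coefficient to zero gives the amplitude-dependent frequency.
solutions = sp.solve(sp.Eq(c1, 0), w)
print([sp.simplify(s) for s in solutions])
```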

  12. The atmosphere of Mars - Resources for the exploration and settlement of Mars

    NASA Technical Reports Server (NTRS)

    Meyer, T. R.; Mckay, C. P.

    1984-01-01

    This paper describes methods of processing the Mars atmosphere to supply water, oxygen and buffer gas for a Mars base. Existing life support system technology is combined with innovative methods of water extraction and buffer gas processing. The design may also be extended to incorporate an integrated greenhouse to supply food, oxygen and water recycling. It is found that the work required to supply one kilogram of an argon/nitrogen buffer gas is 9.4 kW-hr. Extracting water from the dry Martian atmosphere can require up to 102.8 kW-hr per kilogram of water, depending on the relative humidity of the air.

  13. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    PubMed

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

    There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.
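
    Conceptually, the comparison matrices act as a lookup structure that is filtered by the decision maker's required output and available resources. The sketch below is purely illustrative: the method names, attributes and scores are placeholders, not the paper's published characterisation.

```python
# Toy method-comparison matrix: each entry characterises a modelling/simulation
# method by its output type and the input resources it needs (1 = low, 3 = high).
methods = {
    "discrete event simulation": {"output": "quantitative", "time": 3, "data": 3, "expertise": 3},
    "system dynamics":           {"output": "quantitative", "time": 2, "data": 2, "expertise": 2},
    "soft systems methodology":  {"output": "qualitative",  "time": 1, "data": 1, "expertise": 2},
}

def shortlist(output_needed, max_time, max_data):
    """Return methods producing the required output within the available resources."""
    return [name for name, m in methods.items()
            if m["output"] == output_needed
            and m["time"] <= max_time and m["data"] <= max_data]

print(shortlist("quantitative", max_time=2, max_data=2))
```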

  14. Multi-Atlas Segmentation using Partially Annotated Data: Methods and Annotation Strategies.

    PubMed

    Koch, Lisa M; Rajchl, Martin; Bai, Wenjia; Baumgartner, Christian F; Tong, Tong; Passerat-Palmbach, Jonathan; Aljabar, Paul; Rueckert, Daniel

    2017-08-22

    Multi-atlas segmentation is a widely used tool in medical image analysis, providing robust and accurate results by learning from annotated atlas datasets. However, the availability of fully annotated atlas images for training is limited due to the time required for the labelling task. Segmentation methods requiring only a proportion of each atlas image to be labelled could therefore reduce the workload on expert raters tasked with annotating atlas images. To address this issue, we first re-examine the labelling problem common in many existing approaches and formulate its solution in terms of a Markov Random Field energy minimisation problem on a graph connecting atlases and the target image. This provides a unifying framework for multi-atlas segmentation. We then show how modifications in the graph configuration of the proposed framework enable the use of partially annotated atlas images and investigate different partial annotation strategies. The proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets for hippocampal and cardiac segmentation. Experiments were performed aimed at (1) recreating existing segmentation techniques with the proposed framework and (2) demonstrating the potential of employing sparsely annotated atlas data for multi-atlas segmentation.
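
    Schematically, the labelling problem takes the form of a standard pairwise Markov Random Field energy over a graph connecting atlas and target voxels; the specific unary and pairwise potentials, and the graph configurations that admit partially annotated atlases, are defined in the paper, and only the generic form is shown here:

```latex
E(\mathbf{l}) \;=\; \sum_{i \in \mathcal{V}} \psi_i(l_i)
\;+\; \lambda \sum_{(i,j) \in \mathcal{E}} \psi_{ij}(l_i, l_j),
```

    where V is the set of voxels across the atlases and the target image, E the set of graph edges, psi_i a unary data term and psi_ij a pairwise term penalising label disagreement between connected voxels.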

  15. Highly efficient and exact method for parallelization of grid-based algorithms and its implementation in DelPhi

    PubMed Central

    Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil

    2012-01-01

    The Gauss-Seidel method is a standard iterative numerical method widely used to solve a system of equations and, in general, is more efficient comparing to other iterative methods, such as the Jacobi method. However, standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptions) method to parallelize iterations and to reduce the computational time as a linear/nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable for solving linear and nonlinear equations. This approach is implemented in the DelPhi program, which is a finite difference Poisson-Boltzmann equation solver to model electrostatics in molecular biology. This development makes the iterative procedure on obtaining the electrostatic potential distribution in the parallelized DelPhi several folds faster than that in the serial code. Further we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480
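
    For context, a plain serial Gauss-Seidel sweep is sketched below; the in-place use of components already updated in the current sweep is exactly the data dependence that the parallelization described above has to work around. The example system is arbitrary.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=10_000):
    """Serial Gauss-Seidel iteration for A x = b.

    Each update of x[i] uses components already updated in the *current* sweep,
    which is what makes the classical scheme hard to parallelise directly.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b), np.linalg.solve(A, b))
```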

  16. Theory and computation of optimal low- and medium-thrust transfers

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.

    1994-01-01

    This report presents two numerical methods considered for the computation of fuel-optimal, low-thrust orbit transfers in large numbers of burns. The origins of these methods are observations made with the extremal solutions of transfers in small numbers of burns; there seems to be a trend that the longer the time allowed to perform an optimal transfer, the less fuel is used. These longer transfers are obviously of interest since they require a motor of low thrust; however, we also find a trend that the longer the time allowed to perform the optimal transfer, the more burns are required to satisfy optimality. Unfortunately, this usually increases the difficulty of computation. Both of the methods described use solutions with small numbers of burns to determine solutions in large numbers of burns. One method is a homotopy method that corrects for problems that arise when a solution requires a new burn or coast arc for optimality. The other method is to simply patch together long transfers from smaller ones. An orbit correction problem is solved to develop this method. This method may also lead to a good guidance law for transfer orbits with long transfer times.

  17. The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations

    PubMed Central

    Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka

    2011-01-01

    Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007

  18. BRDF invariant stereo using light transport constancy.

    PubMed

    Wang, Liang; Yang, Ruigang; Davis, James E

    2007-09-01

    Nearly all existing methods for stereo reconstruction assume that scene reflectance is Lambertian and make use of brightness constancy as a matching invariant. We introduce a new invariant for stereo reconstruction called light transport constancy (LTC), which allows completely arbitrary scene reflectance (bidirectional reflectance distribution functions (BRDFs)). This invariant can be used to formulate a rank constraint on multiview stereo matching when the scene is observed by several lighting configurations in which only the lighting intensity varies. In addition, we show that this multiview constraint can be used with as few as two cameras and two lighting configurations. Unlike previous methods for BRDF invariant stereo, LTC does not require precisely configured or calibrated light sources or calibration objects in the scene. Importantly, the new constraint can be used to provide BRDF invariance to any existing stereo method whenever appropriate lighting variation is available.

  19. Decision support systems for ecosystem management: An evaluation of existing systems

    Treesearch

    H. Todd Mowrer; Klaus Barber; Joe Campbell; Nick Crookston; Cathy Dahms; John Day; Jim Laacke; Jim Merzenich; Steve Mighton; Mike Rauscher; Rick Sojda; Joyce Thompson; Peter Trenchi; Mark Twery

    1997-01-01

    This report evaluated 24 computer-aided decision support systems (DSS) that can support management decision-making in forest ecosystems. It compares the scope of each system, spatial capabilities, computational methods, development status, input and output requirements, user support availability, and system performance. Questionnaire responses from the DSS developers (...

  20. Financing Lifelong Learning for All: An International Perspective. Working Paper.

    ERIC Educational Resources Information Center

    Burke, Gerald

    Recent international discussions provide information on various countries' responses to lifelong learning, including the following: (1) existing unmet needs and emerging needs for education and training; (2) funds required compared with what was provided; and (3) methods for acquiring additional funds, among them efficiency measures leading to…

  1. Curing conditions to inactivate Trichinella spiralis muscle larvae in ready-to-eat pork sausage

    USDA-ARS?s Scientific Manuscript database

    Curing processes for ready to eat (RTE) pork products currently require individual validation of methods to demonstrate inactivation of Trichinella spiralis. This is a major undertaking for each process; currently no model of meat chemistry exists that can be correlated with inactivation of Trichin...

  2. The "Protocenter" Concept: A Method for Teaching Stereochemistry

    ERIC Educational Resources Information Center

    Lewis, David E.

    2010-01-01

    The "protocenter", defined as an atom carrying two different attached groups in a nonlinear arrangement, is proposed as a concept useful for the introduction of chirality and geometric isomerism in introductory organic chemistry classes. Two protocenters are the minimum requirement for stereoisomers of a compound to exist. Protocenters may be…

  3. 40 CFR 63.11466 - What are the performance test requirements for new and existing sources?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (Appendix A-1) to select sampling port locations and the number of traverse points in each stack or duct... of the stack gas. (iii) Method 3, 3A, or 3B (Appendix A-2) to determine the dry molecular weight of...

  4. Self-adaptive method for high-frequency dispersion curve determination

    USDA-ARS?s Scientific Manuscript database

    When high-frequency (from 50 to 500 Hz) MASW is conducted to explore soil profile in the vadose zone, existing rules for selecting near offset and receiver spread length cannot satisfy the requirements of planar and dominant Rayleigh waves for all frequencies and will inevitably introduce near and f...

  5. Elicitation Support Requirements of Multi-Expertise Teams

    ERIC Educational Resources Information Center

    Bitter-Rijpkema, Marlies; Martens, Rob; Jochems, Wim

    2005-01-01

    Tools to support knowledge elicitation are used more and more in situations where employees or students collaborate using the computer. Studies indicate that differences exist between experts and novices regarding their methods of work and reasoning. However, the commonly preferred approach tends to deal with team members as a single system with…

  6. Food for thought ... A toxicology ontology roadmap.

    PubMed

    Hardy, Barry; Apic, Gordana; Carthew, Philip; Clark, Dominic; Cook, David; Dix, Ian; Escher, Sylvia; Hastings, Janna; Heard, David J; Jeliazkova, Nina; Judson, Philip; Matis-Mitchell, Sherri; Mitic, Dragana; Myatt, Glenn; Shah, Imran; Spjuth, Ola; Tcheremenskaia, Olga; Toldo, Luca; Watson, David; White, Andrew; Yang, Chihae

    2012-01-01

    Foreign substances can have a dramatic and unpredictable adverse effect on human health. In the development of new therapeutic agents, it is essential that the potential adverse effects of all candidates be identified as early as possible. The field of predictive toxicology strives to profile the potential for adverse effects of novel chemical substances before they occur, both with traditional in vivo experimental approaches and increasingly through the development of in vitro and computational methods which can supplement and reduce the need for animal testing. To be maximally effective, the field needs access to the largest possible knowledge base of previous toxicology findings, and such results need to be made available in such a fashion so as to be interoperable, comparable, and compatible with standard toolkits. This necessitates the development of open, public, computable, and standardized toxicology vocabularies and ontologies so as to support the applications required by in silico, in vitro, and in vivo toxicology methods and related analysis and reporting activities. Such ontology development will support data management, model building, integrated analysis, validation and reporting, including regulatory reporting and alternative testing submission requirements as required by guidelines such as the REACH legislation, leading to new scientific advances in a mechanistically-based predictive toxicology. Numerous existing ontology and standards initiatives can contribute to the creation of a toxicology ontology supporting the needs of predictive toxicology and risk assessment. Additionally, new ontologies are needed to satisfy practical use cases and scenarios where gaps currently exist. Developing and integrating these resources will require a well-coordinated and sustained effort across numerous stakeholders engaged in a public-private partnership. In this communication, we set out a roadmap for the development of an integrated toxicology ontology, harnessing existing resources where applicable. We describe the stakeholders' requirements analysis from the academic and industry perspectives, timelines, and expected benefits of this initiative, with a view to engagement with the wider community.

  7. Probabilistic Multi-Scale, Multi-Level, Multi-Disciplinary Analysis and Optimization of Engine Structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2000-01-01

    Aircraft engines are assemblies of dynamically interacting components. Engine updates to keep present aircraft flying safely and engines for new aircraft are progressively required to operate in more demanding technological and environmental requirements. Designs to effectively meet those requirements are necessarily collections of multi-scale, multi-level, multi-disciplinary analysis and optimization methods and probabilistic methods are necessary to quantify respective uncertainties. These types of methods are the only ones that can formally evaluate advanced composite designs which satisfy those progressively demanding requirements while assuring minimum cost, maximum reliability and maximum durability. Recent research activities at NASA Glenn Research Center have focused on developing multi-scale, multi-level, multidisciplinary analysis and optimization methods. Multi-scale refers to formal methods which describe complex material behavior metal or composite; multi-level refers to integration of participating disciplines to describe a structural response at the scale of interest; multidisciplinary refers to open-ended for various existing and yet to be developed discipline constructs required to formally predict/describe a structural response in engine operating environments. For example, these include but are not limited to: multi-factor models for material behavior, multi-scale composite mechanics, general purpose structural analysis, progressive structural fracture for evaluating durability and integrity, noise and acoustic fatigue, emission requirements, hot fluid mechanics, heat-transfer and probabilistic simulations. Many of these, as well as others, are encompassed in an integrated computer code identified as Engine Structures Technology Benefits Estimator (EST/BEST) or Multi-faceted/Engine Structures Optimization (MP/ESTOP). The discipline modules integrated in MP/ESTOP include: engine cycle (thermodynamics), engine weights, internal fluid mechanics, cost, mission and coupled structural/thermal, various composite property simulators and probabilistic methods to evaluate uncertainty effects (scatter ranges) in all the design parameters. The objective of the proposed paper is to briefly describe a multi-faceted design analysis and optimization capability for coupled multi-discipline engine structures optimization. Results are presented for engine and aircraft type metrics to illustrate the versatility of that capability. Results are also presented for reliability, noise and fatigue to illustrate its inclusiveness. For example, replacing metal rotors with composites reduces the engine weight by 20 percent, 15 percent noise reduction, and an order of magnitude improvement in reliability. Composite designs exist to increase fatigue life by at least two orders of magnitude compared to state-of-the-art metals.

  8. Laser Spot Tracking Based on Modified Circular Hough Transform and Motion Pattern Analysis

    PubMed Central

    Krstinić, Damir; Skelin, Ana Kuzmanić; Milatić, Ivan

    2014-01-01

    Laser pointers are one of the most widely used interactive and pointing devices in different human-computer interaction systems. Existing approaches to vision-based laser spot tracking are designed for controlled indoor environments with the main assumption that the laser spot is very bright, if not the brightest, spot in images. In this work, we are interested in developing a method for an outdoor, open-space environment, which could be implemented on embedded devices with limited computational resources. Under these circumstances, none of the assumptions of existing methods for laser spot tracking can be applied, yet a novel and fast method with robust performance is required. Throughout the paper, we will propose and evaluate an efficient method based on modified circular Hough transform and Lucas–Kanade motion analysis. Encouraging results on a representative dataset demonstrate the potential of our method in an uncontrolled outdoor environment, while achieving maximal accuracy indoors. Our dataset and ground truth data are made publicly available for further development. PMID:25350502
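
    The two standard building blocks named above can be sketched with OpenCV as below; the parameter values are illustrative assumptions, and the sketch omits the paper's modifications to the Hough transform and its motion-pattern analysis.

```python
import cv2
import numpy as np

def detect_spot(gray):
    """Candidate spot centres (x, y, r) via the standard circular Hough transform."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=100, param2=15, minRadius=2, maxRadius=10)
    return None if circles is None else circles[0]

def track_spot(prev_gray, gray, prev_pt):
    """Propagate a previously detected spot with pyramidal Lucas-Kanade optical flow."""
    p0 = np.array([[prev_pt]], dtype=np.float32)            # shape (1, 1, 2)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    return p1[0, 0] if status[0, 0] == 1 else None

# Synthetic demo: a bright blob moving a few pixels between two frames.
frame0 = np.zeros((120, 160), dtype=np.uint8)
frame1 = np.zeros((120, 160), dtype=np.uint8)
cv2.circle(frame0, (60, 50), 4, 255, -1)
cv2.circle(frame1, (64, 52), 4, 255, -1)
spots = detect_spot(frame0)
if spots is not None:
    print("detected:", spots[0][:2], "tracked:", track_spot(frame0, frame1, spots[0][:2]))
```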

  9. Laser spot tracking based on modified circular Hough transform and motion pattern analysis.

    PubMed

    Krstinić, Damir; Skelin, Ana Kuzmanić; Milatić, Ivan

    2014-10-27

    Laser pointers are one of the most widely used interactive and pointing devices in different human-computer interaction systems. Existing approaches to vision-based laser spot tracking are designed for controlled indoor environments with the main assumption that the laser spot is very bright, if not the brightest, spot in images. In this work, we are interested in developing a method for an outdoor, open-space environment, which could be implemented on embedded devices with limited computational resources. Under these circumstances, none of the assumptions of existing methods for laser spot tracking can be applied, yet a novel and fast method with robust performance is required. Throughout the paper, we will propose and evaluate an efficient method based on modified circular Hough transform and Lucas-Kanade motion analysis. Encouraging results on a representative dataset demonstrate the potential of our method in an uncontrolled outdoor environment, while achieving maximal accuracy indoors. Our dataset and ground truth data are made publicly available for further development.

  10. Survey on multisensory feedback virtual reality dental training systems.

    PubMed

    Wang, D; Li, T; Zhang, Y; Hou, J

    2016-11-01

    Compared with traditional dental training methods, virtual reality training systems integrated with multisensory feedback possess potential advantages. However, there exist many technical challenges in developing a satisfactory simulator. In this manuscript, we systematically survey several current dental training systems to identify the gaps between the capabilities of these systems and the clinical training requirements. After briefly summarising the components, functions and unique features of each system, we discuss the technical challenges behind these systems, including the software, hardware and user evaluation methods. Finally, the clinical requirements of an ideal dental training system are proposed. Future research/development areas are identified based on an analysis of the gaps between current systems and clinical training requirements. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Predictive local receptive fields based respiratory motion tracking for motion-adaptive radiotherapy.

    PubMed

    Yubo Wang; Tatinati, Sivanagaraja; Liyu Huang; Kim Jeong Hong; Shafiq, Ghufran; Veluvolu, Kalyana C; Khong, Andy W H

    2017-07-01

    Extracranial robotic radiotherapy employs external markers and a correlation model to trace the tumor motion caused by respiration. Real-time tracking of tumor motion, however, requires a prediction model to compensate for the latencies induced by the software (image data acquisition and processing) and hardware (mechanical and kinematic) limitations of the treatment system. A new prediction algorithm based on local receptive fields extreme learning machines (pLRF-ELM) is proposed for respiratory motion prediction. Existing respiratory motion prediction methods model the non-stationary respiratory motion traces directly to predict future values. Unlike these existing methods, the pLRF-ELM performs prediction by modeling higher-level features obtained by mapping the raw respiratory motion into the random feature space of the ELM, instead of directly modeling the raw respiratory motion. The developed method is evaluated using a dataset acquired from 31 patients for two horizons in line with the latencies of treatment systems like CyberKnife. Results showed that pLRF-ELM is superior to existing prediction methods. Results further highlight that the abstracted higher-level features are suitable for approximating the nonlinear and non-stationary characteristics of respiratory motion for accurate prediction.
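
    As background, a plain extreme learning machine regressor is sketched below: a fixed random hidden layer followed by a ridge-regularised linear readout, applied to one-step-ahead prediction of a toy trace from a sliding window of past samples. This shows only the generic ELM idea; the paper's pLRF-ELM builds its random features from local receptive fields, which is not reproduced here.

```python
import numpy as np

def elm_fit(X, y, n_hidden=100, reg=1e-3, seed=0):
    """Basic extreme learning machine: random hidden layer + ridge readout."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                         # fixed random feature mapping
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# One-step-ahead prediction from a sliding window of past samples (toy trace).
t = np.arange(0, 60, 0.1)
trace = np.sin(0.8 * t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
lag = 20
X = np.stack([trace[i:i + lag] for i in range(len(trace) - lag - 1)])
y = trace[lag + 1:]
W, b, beta = elm_fit(X[:400], y[:400])
print(np.mean((elm_predict(X[400:], W, b, beta) - y[400:]) ** 2))   # test MSE
```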

  12. Foot and mouth disease vaccine strain selection: Current approaches and future perspectives.

    PubMed

    Mahapatra, Mana; Parida, Satya

    2018-06-27

    Lack of cross protection between foot and mouth disease (FMD) virus (FMDV) serotypes, as well as incomplete protection between some subtypes of FMDV, affects the application of vaccines in the field. Further, the periodic emergence of new variant FMD viruses renders existing vaccines ineffective. Consequently, periodic vaccine strain selection, by either in vivo or in vitro methods, becomes an essential requirement to enable the use of appropriate and efficient vaccines. Areas covered: Here we describe the cross reactivity of the existing vaccines with the global pool of circulating viruses and the putative selected vaccine strains for targeting protection against the two major circulating serotype O and A FMD viruses for East Africa, the Middle East, South Asia and South East Asia. Expert Commentary: Although in vivo cross protection studies are more appropriate methods for vaccine matching and selection than in vitro neutralisation tests or ELISA, in the face of an outbreak both in vivo and in vitro methods of vaccine matching are difficult and time consuming. The FMDV capsid contains all the immunogenic epitopes, and therefore vaccine strain prediction models using both capsid sequence and serology data will likely replace existing tools in the future.

  13. Integration of TomoPy and the ASTRA toolbox for advanced processing and reconstruction of tomographic synchrotron data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelt, Daniël M.; Gürsoy, Doğa; Palenstijn, Willem Jan

    2016-04-28

    The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy's standard reconstruction method.
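
    A minimal usage sketch of the integration is given below, assuming a TomoPy installation with the ASTRA toolbox wrapper, a CUDA-capable GPU, and the argument names used by the TomoPy/ASTRA interface described in the paper (they may differ in other versions).

```python
import tomopy

# Simulate projections of a Shepp-Logan phantom, then reconstruct two ways.
obj = tomopy.shepp3d(size=128)
theta = tomopy.angles(180)
proj = tomopy.project(obj, theta)

# TomoPy's built-in gridrec algorithm (CPU).
rec_gridrec = tomopy.recon(proj, theta, algorithm='gridrec')

# Iterative GPU reconstruction routed through the ASTRA toolbox wrapper.
rec_sirt = tomopy.recon(proj, theta, algorithm=tomopy.astra,
                        options={'proj_type': 'cuda', 'method': 'SIRT_CUDA',
                                 'num_iter': 200})
```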

  14. Method and apparatus for measuring irradiated fuel profiles

    DOEpatents

    Lee, David M.

    1982-01-01

    A new apparatus is used to substantially instantaneously obtain a profile of an object, for example a spent fuel assembly, which profile (when normalized) has unexpectedly been found to be substantially identical to the normalized profile of the burnup monitor Cs-137 obtained with a germanium detector. That profile can be used without normalization in a new method of identifying and monitoring in order to determine for example whether any of the fuel has been removed. Alternatively, two other new methods involve calibrating that profile so as to obtain a determination of fuel burnup (which is important for complying with safeguards requirements, for utilizing fuel to an optimal extent, and for storing spent fuel in a minimal amount of space). Using either of these two methods of determining burnup, one can reduce the required measurement time significantly (by more than an order of magnitude) over existing methods, yet retain equal or only slightly reduced accuracy.

  15. Integration of TomoPy and the ASTRA toolbox for advanced processing and reconstruction of tomographic synchrotron data

    PubMed Central

    Pelt, Daniël M.; Gürsoy, Doǧa; Palenstijn, Willem Jan; Sijbers, Jan; De Carlo, Francesco; Batenburg, Kees Joost

    2016-01-01

    The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy’s standard reconstruction method. PMID:27140167

  16. Malaria control in Tanzania

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yhdego, M.; Majura, P.

    A review of the malaria control programs and the problems encountered in the United Republic of Tanzania from 1945 to 1986 is presented. Buguruni, one of the squatter areas in the city of Dar es Salaam, is chosen as a case study in order to evaluate the economic advantage of engineering methods for the control of malaria infection. Although the initial capital cost of engineering methods may be high, their cost effectiveness requires a much lower financial burden of only about Tshs. 3 million, compared with the conventional methods of larviciding and insecticiding, which require more than Tshs. 10 million. Finally, recommendations for the adoption of engineering methods are made concerning the upgrading of existing roads and footpaths in general, with particular emphasis on drainage of large pools of water which serve as breeding sites for mosquitoes.

  17. 46 CFR 154.12 - Existing gas vessel: Endorsements and requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... DANGEROUS CARGOES SAFETY STANDARDS FOR SELF-PROPELLED VESSELS CARRYING BULK LIQUEFIED GASES General § 154.12 Existing gas vessel: Endorsements and requirements. (a) Except an existing gas vessel under paragraph (b... 46 Shipping 5 2012-10-01 2012-10-01 false Existing gas vessel: Endorsements and requirements. 154...

  18. 46 CFR 154.12 - Existing gas vessel: Endorsements and requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... DANGEROUS CARGOES SAFETY STANDARDS FOR SELF-PROPELLED VESSELS CARRYING BULK LIQUEFIED GASES General § 154.12 Existing gas vessel: Endorsements and requirements. (a) Except an existing gas vessel under paragraph (b... 46 Shipping 5 2014-10-01 2014-10-01 false Existing gas vessel: Endorsements and requirements. 154...

  19. High-dose-rate prostate brachytherapy inverse planning on dose-volume criteria by simulated annealing.

    PubMed

    Deist, T M; Gorissen, B L

    2016-02-07

    High-dose-rate brachytherapy is a tumor treatment method where a highly radioactive source is brought in close proximity to the tumor. In this paper we develop a simulated annealing algorithm to optimize the dwell times at preselected dwell positions to maximize tumor coverage under dose-volume constraints on the organs at risk. Compared to existing algorithms, our algorithm has advantages in terms of speed and objective value and does not require an expensive general purpose solver. Its success mainly depends on exploiting the efficiency of matrix multiplication and a careful selection of the neighboring states. In this paper we outline its details and make an in-depth comparison with existing methods using real patient data.
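
    A generic simulated-annealing loop over a dwell-time vector is sketched below to illustrate the ingredients named above (neighbouring states, acceptance rule, cooling schedule). The toy linear dose model, thresholds and penalty are assumptions made for the example, not the paper's clinical objective or constraint handling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dose model: dose at each calculation point is a linear function of the
# n dwell times (rows are per-point dose-rate contributions).
m, n = 200, 40
D_tumor = rng.uniform(0.0, 1.0, size=(m, n))
D_oar = rng.uniform(0.0, 0.4, size=(m, n))

def objective(t, oar_limit=10.0, penalty=50.0):
    """Reward tumour coverage, penalise organ-at-risk overdose (illustrative)."""
    coverage = np.mean(D_tumor @ t >= 8.0)
    violation = np.mean(np.maximum(D_oar @ t - oar_limit, 0.0))
    return coverage - penalty * violation

t = np.full(n, 1.0)                  # initial dwell times (s)
best, best_val = t.copy(), objective(t)
temp = 1.0
for step in range(20_000):
    cand = np.clip(t + rng.normal(scale=0.1, size=n), 0.0, None)  # neighbouring state
    delta = objective(cand) - objective(t)
    if delta > 0 or rng.random() < np.exp(delta / temp):          # Metropolis acceptance
        t = cand
        if objective(t) > best_val:
            best, best_val = t.copy(), objective(t)
    temp *= 0.9997                   # geometric cooling schedule
print(best_val)
```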

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Simonetto, Andrea

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
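
    For orientation, a generic unconstrained prediction-correction update has the form below; the paper's contribution is a constrained, first-order prediction step that avoids the Hessian inverse appearing here:

```latex
\hat{x}_{k+1} = x_k - h\,\bigl[\nabla_{xx} f(x_k; t_k)\bigr]^{-1}\,\nabla_{tx} f(x_k; t_k)
\quad \text{(prediction)},
\qquad
x_{k+1} = \hat{x}_{k+1} - \gamma\,\nabla_{x} f(\hat{x}_{k+1}; t_{k+1})
\quad \text{(correction)},
```

    where f(x; t) is the time-varying cost, h the sampling interval and gamma a step size.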

  1. A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer.

    PubMed

    Pagoulatos, N; Haynor, D R; Kim, Y

    2001-09-01

    We have developed a fast calibration method for computing the position and orientation of 2-D ultrasound (US) images in 3-D space where a position sensor is mounted on the US probe. This calibration is required in the fields of 3-D ultrasound and registration of ultrasound with other imaging modalities. Most of the existing calibration methods require a complex and tedious experimental procedure. Our method is simple and it is based on a custom-built phantom. Thirty N-fiducials (markers in the shape of the letter "N") embedded in the phantom provide the basis for our calibration procedure. We calibrated a 3.5-MHz sector phased-array probe with a magnetic position sensor, and we studied the accuracy and precision of our method. A typical calibration procedure requires approximately 2 min. We conclude that we can achieve accurate and precise calibration using a single US image, provided that a large number (approximately ten) of N-fiducials are captured within the US image, enabling a representative sampling of the imaging plane.

  2. 49 CFR 223.15 - Requirements for existing passenger cars.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Requirements for existing passenger cars. 223.15... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION SAFETY GLAZING STANDARDS-LOCOMOTIVES, PASSENGER CARS AND CABOOSES Specific Requirements § 223.15 Requirements for existing passenger cars. (a) Passenger cars built or...

  3. 49 CFR 223.15 - Requirements for existing passenger cars.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 4 2014-10-01 2014-10-01 false Requirements for existing passenger cars. 223.15... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION SAFETY GLAZING STANDARDS-LOCOMOTIVES, PASSENGER CARS AND CABOOSES Specific Requirements § 223.15 Requirements for existing passenger cars. (a) Passenger cars built or...

  4. 49 CFR 223.15 - Requirements for existing passenger cars.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 4 2011-10-01 2011-10-01 false Requirements for existing passenger cars. 223.15... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION SAFETY GLAZING STANDARDS-LOCOMOTIVES, PASSENGER CARS AND CABOOSES Specific Requirements § 223.15 Requirements for existing passenger cars. (a) Passenger cars built or...

  5. 49 CFR 223.15 - Requirements for existing passenger cars.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 4 2012-10-01 2012-10-01 false Requirements for existing passenger cars. 223.15... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION SAFETY GLAZING STANDARDS-LOCOMOTIVES, PASSENGER CARS AND CABOOSES Specific Requirements § 223.15 Requirements for existing passenger cars. (a) Passenger cars built or...

  6. 49 CFR 223.15 - Requirements for existing passenger cars.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 4 2013-10-01 2013-10-01 false Requirements for existing passenger cars. 223.15... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION SAFETY GLAZING STANDARDS-LOCOMOTIVES, PASSENGER CARS AND CABOOSES Specific Requirements § 223.15 Requirements for existing passenger cars. (a) Passenger cars built or...

  7. Hand-eye calibration for rigid laparoscopes using an invariant point.

    PubMed

    Thompson, Stephen; Stoyanov, Danail; Schneider, Crispin; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J

    2016-06-01

    Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but one current challenge is in accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end. This enables a comparison of tracking performance. We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, and we show RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 and 1.00 mm respectively, using existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. We have proposed a new method of hand-eye calibration, based on a single invariant point. Initial experience has shown that the method provides visual feedback, satisfactory accuracy and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.

  8. On the existence of global solutions of the one-dimensional cubic NLS for initial data in the modulation space M_{p,q}(R)

    NASA Astrophysics Data System (ADS)

    Chaichenets, Leonid; Hundertmark, Dirk; Kunstmann, Peer; Pattakos, Nikolaos

    2017-10-01

    We prove global existence for the one-dimensional cubic nonlinear Schrödinger equation in modulation spaces M_{p,p'} for p sufficiently close to 2. In contrast to known results, [9] and [14], our result requires no smallness condition on the initial data. The proof adapts a splitting method inspired by work of Vargas-Vega, Hyakuna-Tsutsumi and Grünrock to the modulation space setting and exploits polynomial growth of the free Schrödinger group on modulation spaces.
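
    For reference, the equation in question is the one-dimensional cubic nonlinear Schrödinger equation (either sign of the nonlinearity, focusing or defocusing) posed for initial data in a modulation space:

```latex
i\,\partial_t u + \partial_x^2 u \pm |u|^2 u = 0, \qquad u(0,\cdot) = u_0 \in M_{p,q}(\mathbb{R}).
```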

  9. The potential of genetic algorithms for conceptual design of rotor systems

    NASA Technical Reports Server (NTRS)

    Crossley, William A.; Wells, Valana L.; Laananen, David H.

    1993-01-01

    The capabilities of genetic algorithms as a non-calculus based, global search method make them potentially useful in the conceptual design of rotor systems. Coupling reasonably simple analysis tools to the genetic algorithm was accomplished, and the resulting program was used to generate designs for rotor systems to match requirements similar to those of both an existing helicopter and a proposed helicopter design. This provides a comparison with the existing design and also provides insight into the potential of genetic algorithms in design of new rotors.

  10. Assessing Species Diversity Using Metavirome Data: Methods and Challenges.

    PubMed

    Herath, Damayanthi; Jayasundara, Duleepa; Ackland, David; Saeed, Isaam; Tang, Sen-Lin; Halgamuge, Saman

    2017-01-01

    Assessing biodiversity is an important step in the study of microbial ecology associated with a given environment. Multiple indices have been used to quantify species diversity, which is a key biodiversity measure. Measuring species diversity of viruses in different environments remains a challenge relative to measuring the diversity of other microbial communities. Metagenomics has played an important role in elucidating viral diversity by conducting metavirome studies; however, metavirome data are of high complexity requiring robust data preprocessing and analysis methods. In this review, existing bioinformatics methods for measuring species diversity using metavirome data are categorised broadly as either sequence similarity-dependent methods or sequence similarity-independent methods. The former includes a comparison of DNA fragments or assemblies generated in the experiment against reference databases for quantifying species diversity, whereas estimates from the latter are independent of the knowledge of existing sequence data. Current methods and tools are discussed in detail, including their applications and limitations. Drawbacks of the state-of-the-art method are demonstrated through results from a simulation. In addition, alternative approaches are proposed to overcome the challenges in estimating species diversity measures using metavirome data.
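
    Once reads or contigs have been assigned to taxa, the classical diversity indices themselves are straightforward to compute; a minimal sketch of the Shannon and Simpson indices is given below (the abundance vector is an illustrative example, not data from any study).

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity H' = -sum p_i ln p_i over observed abundances."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def simpson_index(counts):
    """Simpson diversity 1 - sum p_i^2 (probability two reads differ in taxon)."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return 1.0 - np.sum(p ** 2)

abundances = [120, 45, 30, 3, 1, 1]   # e.g. reads or contigs assigned per viral taxon
print(shannon_index(abundances), simpson_index(abundances))
```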

  11. Maneuver Planning for Conjunction Risk Mitigation with Ground-track Control Requirements

    NASA Technical Reports Server (NTRS)

    McKinley, David

    2008-01-01

    The planning of conjunction Risk Mitigation Maneuvers (RMM) in the presence of ground-track control requirements is analyzed. Past RMM planning efforts on the Aqua, Aura, and Terra spacecraft have demonstrated that only small maneuvers are available when ground-track control requirements are maintained. Assuming small maneuvers, analytical expressions for the effect of a given maneuver on conjunction geometry are derived. The analytical expressions are used to generate a large trade space for initial RMM design. This trade space represents a significant improvement in initial maneuver planning over existing methods that employ high fidelity maneuver models and propagation.

  12. Control optimization of a lifting body entry problem by an improved and a modified method of perturbation function. Ph.D. Thesis - Houston Univ.

    NASA Technical Reports Server (NTRS)

    Garcia, F., Jr.

    1974-01-01

    The solution of a complex entry optimization problem was studied. The problem was transformed into a two-point boundary value problem by using classical calculus of variations methods. Two perturbation methods were devised. These methods attempt to reduce the sensitivity of the solution of this type of problem to the required initial co-state estimates. Numerical results are also presented for the optimal solutions resulting from a number of different initial co-state estimates. The perturbation methods were compared. It is found that they are an improvement over existing methods.

  13. Knowledge management for the protection of information in electronic medical records.

    PubMed

    Lea, Nathan; Hailes, Stephen; Austin, Tony; Kalra, Dipak

    2008-01-01

    This paper describes foundational work investigating the protection requirements of sensitive medical information, which is being stored more routinely in repository systems for electronic medical records. These systems have increasingly powerful sharing capabilities at the point of clinical care, in medical research and for clinical and managerial audit. The potential for sharing raises concerns about the protection of individual patient privacy and challenges the duty of confidentiality by which medical practitioners are ethically and legally bound. By analysing the protection requirements and discussing the need to apply policy-based controls to discrete items of medical information in a record, this paper suggests that this is a problem for which existing privacy management solutions are not sufficient or appropriate to the protection requirements. It proposes that a knowledge management approach is required and it introduces a new framework based on the knowledge management techniques now being used to manage electronic medical record data. The background, existing work in this area, initial investigation methods, results to date and discussion are presented, and the paper is concluded with the authors' comments on the ramifications of the work.

  14. Antimatter Requirements and Energy Costs for Near-Term Propulsion Applications

    NASA Technical Reports Server (NTRS)

    Schmidt, G. R.; Gerrish, H. P.; Martin, J. J.; Smith, G. A.; Meyer, K. J.

    1999-01-01

    The superior energy density of antimatter annihilation has often been pointed to as the ultimate source of energy for propulsion. However, the limited capacity and very low efficiency of present-day antiproton production methods suggest that antimatter may be too costly to consider for near-term propulsion applications. We address this issue by assessing the antimatter requirements for six different types of propulsion concepts, including two in which antiprotons are used to drive energy release from combined fission/fusion. These requirements are compared against the capacity of both the current antimatter production infrastructure and the improved capabilities that could exist within the early part of next century. Results show that although it may be impractical to consider systems that rely on antimatter as the sole source of propulsive energy, the requirements for propulsion based on antimatter-assisted fission/fusion do fall within projected near-term production capabilities. In fact, a new facility designed solely for antiproton production but based on existing technology could feasibly support interstellar precursor missions and omniplanetary spaceflight with antimatter costs ranging up to $6.4 million per mission.

  15. Software Safety Risk in Legacy Safety-Critical Computer Systems

    NASA Technical Reports Server (NTRS)

    Hill, Janice; Baggs, Rhoda

    2007-01-01

    Safety-critical computer systems must be engineered to meet system and software safety requirements. For legacy safety-critical computer systems, software safety requirements may not have been formally specified during development. When process-oriented software safety requirements are levied on a legacy system after the fact, where software development artifacts don't exist or are incomplete, the question becomes 'how can this be done?' The risks associated with only meeting certain software safety requirements in a legacy safety-critical computer system must be addressed should such systems be selected as candidates for reuse. This paper proposes a method for ascertaining formally, a software safety risk assessment, that provides measurements for software safety for legacy systems which may or may not have a suite of software engineering documentation that is now normally required. It relies upon the NASA Software Safety Standard, risk assessment methods based upon the Taxonomy-Based Questionnaire, and the application of reverse engineering CASE tools to produce original design documents for legacy systems.

  16. Launch team training system

    NASA Technical Reports Server (NTRS)

    Webb, J. T.

    1988-01-01

    A new approach to the training, certification, recertification, and proficiency maintenance of the Shuttle launch team is proposed. Previous training approaches are first reviewed. Short term program goals include expanding current training methods, improving the existing simulation capability, and scheduling training exercises with the same priority as hardware tests. Long-term goals include developing user requirements which would take advantage of state-of-the-art tools and techniques. Training requirements for the different groups of people to be trained are identified, and future goals are outlined.

  17. Automated Simultaneous Assembly of Multistage Testlets for a High-Stakes Licensing Examination

    ERIC Educational Resources Information Center

    Breithaupt, Krista; Hare, Donovan R.

    2007-01-01

    Many challenges exist for high-stakes testing programs offering continuous computerized administration. The automated assembly of test questions to exactly meet content and other requirements, provide uniformity, and control item exposure can be modeled and solved by mixed-integer programming (MIP) methods. A case study of the computerized…

  18. What Facilitates and Impedes Collaborative Work during Higher Education Software Implementation Projects?

    ERIC Educational Resources Information Center

    Cramer, Sharon F.; Tetewsky, Sheldon J.; Marczynski, Kelly S.

    2010-01-01

    Implementations of new or major upgrades of existing student information systems require incorporation of new paradigms and the exchange of familiar routines for new methods. As a result, implementations are almost always time consuming and expensive. Many people in the field of information technology have identified some of the challenges faced…

  19. Treatment of Heroin Dependence: Effectiveness, Costs, and Benefits of Methadone Maintenance

    ERIC Educational Resources Information Center

    Schilling, Robert; Dornig, Katrina; Lungren, Lena

    2006-01-01

    Objectives: Social workers will increasingly be required to attend to the cost-effectiveness of practices, programs, and policies. In the area of substance abuse, there is little evidence to suggest that social workers' decisions are based on evidence of either effectiveness or costs. Method: This article provides an overview of existing evidence…

  20. 40 CFR 63.11162 - What are the standards and compliance requirements for existing sources?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... zinc dust, zinc chips, and/or other materials containing zinc. (3) 0.228 lb/hr from the vent for the... present) prior to any releases to the atmosphere. (ii) Method 2, 2A, 2C, 2D, 2F, or 2G (40 CFR part 60...

  1. 40 CFR 63.11162 - What are the standards and compliance requirements for existing sources?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... zinc dust, zinc chips, and/or other materials containing zinc. (3) 0.228 lb/hr from the vent for the... present) prior to any releases to the atmosphere. (ii) Method 2, 2A, 2C, 2D, 2F, or 2G (40 CFR part 60...

  2. 40 CFR 63.11162 - What are the standards and compliance requirements for existing sources?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... zinc dust, zinc chips, and/or other materials containing zinc. (3) 0.228 lb/hr from the vent for the... present) prior to any releases to the atmosphere. (ii) Method 2, 2A, 2C, 2D, 2F, or 2G (40 CFR part 60...

  3. Demulsification; industrial applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lissant, K.J.

    1983-01-01

    For scientists involved in the problems of selecting or designing demulsification programs. The author shows clearly why no pat formula exists to help out but does point out initial information required to start work. Theory. Testing. Demulsification of oil-in-water emulsions. Demulsification of water-in-oil emulsions. Demulsification of petroleum emulsions. Additional methods and areas in demulsification.

  4. Projecting Enrollment in Rural Schools: A Study of Three Vermont School Districts

    ERIC Educational Resources Information Center

    Grip, Richard S.

    2004-01-01

    Large numbers of rural districts have experienced sharp declines in enrollment, unlike their suburban counterparts. Accurate enrollment projections are required, whether a district needs to build new schools or consolidate existing ones. For school districts having more than 600 students, a quantitative method such as the Cohort-Survival Ratio…

  5. 40 CFR 280.21 - Upgrading of existing UST systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... sound and free of corrosion holes prior to installing the cathodic protection system; or (ii) The tank... for corrosion holes by conducting two (2) tightness tests that meet the requirements of § 280.43(c... operation of the cathodic protection system; or (iv) The tank is assessed for corrosion holes by a method...

  6. 40 CFR 280.21 - Upgrading of existing UST systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sound and free of corrosion holes prior to installing the cathodic protection system; or (ii) The tank... for corrosion holes by conducting two (2) tightness tests that meet the requirements of § 280.43(c... operation of the cathodic protection system; or (iv) The tank is assessed for corrosion holes by a method...

  7. A summary and evaluation of semi-empirical methods for the prediction of helicopter rotor noise

    NASA Technical Reports Server (NTRS)

    Pegg, R. J.

    1979-01-01

    Existing prediction techniques are compiled and described. The descriptions include input and output parameter lists, required equations and graphs, and the range of validity for each part of the prediction procedures. Examples are provided illustrating the analysis procedure and the degree of agreement with experimental results.

  8. Mapped Plot Patch Size Estimates

    Treesearch

    Paul C. Van Deusen

    2005-01-01

    This paper demonstrates that the mapped plot design is relatively easy to analyze and describes existing formulas for mean and variance estimators. New methods are developed for using mapped plots to estimate average patch size of condition classes. The patch size estimators require assumptions about the shape of the condition class, limiting their utility. They may...

  9. Requirements for Programming Languages in Computer-Based Instructional Systems.

    ERIC Educational Resources Information Center

    Zinn, Karl

    The author reviews the instructional programming languages which already exist and describes their methods of presentation, organization, and preparation. He recommends that all research and development projects remain flexible in their choice of programming language for a time yet. He suggests ways to adapt to specific uses and users, to exploit…

  10. Adaptive controller for volumetric display of neuroimaging studies

    NASA Astrophysics Data System (ADS)

    Bleiberg, Ben; Senseney, Justin; Caban, Jesus

    2014-03-01

    Volumetric display of medical images is an increasingly relevant method for examining an imaging acquisition as the prevalence of thin-slice imaging increases in clinical studies. Current mouse and keyboard implementations for volumetric control provide neither the sensitivity nor specificity required to manipulate a volumetric display for efficient reading in a clinical setting. Solutions to efficient volumetric manipulation provide more sensitivity by removing the binary nature of actions controlled by keyboard clicks, but specificity is lost because a single action may change display in several directions. When specificity is then further addressed by re-implementing hardware binary functions through the introduction of mode control, the result is a cumbersome interface that fails to achieve the revolutionary benefit required for adoption of a new technology. We address the specificity versus sensitivity problem of volumetric interfaces by providing adaptive positional awareness to the volumetric control device by manipulating communication between hardware driver and existing software methods for volumetric display of medical images. This creates a tethered effect for volumetric display, providing a smooth interface that improves on existing hardware approaches to volumetric scene manipulation.

  11. Human error mitigation initiative (HEMI) : summary report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Susan M.; Ramos, M. Victoria; Wenner, Caren A.

    2004-11-01

    Despite continuing efforts to apply existing hazard analysis methods and comply with requirements, human errors persist across the nuclear weapons complex. Due to a number of factors, current retroactive and proactive methods to understand and minimize human error are highly subjective, inconsistent in numerous dimensions, and are cumbersome to characterize as thorough. An alternative and proposed method begins with leveraging historical data to understand what the systemic issues are and where resources need to be brought to bear proactively to minimize the risk of future occurrences. An illustrative analysis was performed using existing incident databases specific to Pantex weapons operations, indicating systemic issues associated with operating procedures that undergo notably less development rigor relative to other task elements such as tooling and process flow. Future recommended steps to improve the objectivity, consistency, and thoroughness of hazard analysis and mitigation were delineated.

  12. Batch effects in single-cell RNA-sequencing data are corrected by matching mutual nearest neighbors.

    PubMed

    Haghverdi, Laleh; Lun, Aaron T L; Morgan, Michael D; Marioni, John C

    2018-06-01

    Large-scale single-cell RNA sequencing (scRNA-seq) data sets that are produced in different laboratories and at different times contain batch effects that may compromise the integration and interpretation of the data. Existing scRNA-seq analysis methods incorrectly assume that the composition of cell populations is either known or identical across batches. We present a strategy for batch correction based on the detection of mutual nearest neighbors (MNNs) in the high-dimensional expression space. Our approach does not rely on predefined or equal population compositions across batches; instead, it requires only that a subset of the population be shared between batches. We demonstrate the superiority of our approach compared with existing methods by using both simulated and real scRNA-seq data sets. Using multiple droplet-based scRNA-seq data sets, we demonstrate that our MNN batch-effect-correction method can be scaled to large numbers of cells.
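    To make the idea concrete, a minimal sketch of mutual-nearest-neighbor detection and a crude batch correction is given below (Python/numpy; the function names, the use of Euclidean distance, and the single global correction vector are illustrative assumptions, not the authors' published implementation, which smooths pair-specific correction vectors across cells):

      import numpy as np

      def mutual_nearest_neighbors(batch1, batch2, k=20):
          """Find mutual nearest-neighbor cell pairs between two batches.

          batch1, batch2: arrays of shape (cells, genes), assumed normalized.
          Returns (i, j) index pairs that are within each other's k nearest neighbors.
          """
          # Pairwise distances between cells of the two batches.
          d = np.linalg.norm(batch1[:, None, :] - batch2[None, :, :], axis=2)
          nn12 = np.argsort(d, axis=1)[:, :k]      # k-NN in batch2 for each batch1 cell
          nn21 = np.argsort(d, axis=0)[:k, :].T    # k-NN in batch1 for each batch2 cell
          return [(i, j) for i in range(batch1.shape[0]) for j in nn12[i]
                  if i in nn21[j]]

      def mnn_correct(batch1, batch2, k=20):
          """Shift batch2 toward batch1 using averaged MNN-pair difference vectors."""
          pairs = mutual_nearest_neighbors(batch1, batch2, k)
          if not pairs:
              return batch2
          # Crude global batch-effect vector estimated from the matched pairs.
          vectors = np.array([batch1[i] - batch2[j] for i, j in pairs])
          return batch2 + vectors.mean(axis=0)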

  13. Three dimensional scattering center imaging techniques

    NASA Technical Reports Server (NTRS)

    Younger, P. R.; Burnside, W. D.

    1991-01-01

    Two methods to image scattering centers in 3-D are presented. The first method uses 2-D images generated from Inverse Synthetic Aperture Radar (ISAR) measurements taken by two vertically offset antennas. This technique is shown to provide accurate 3-D imaging capability which can be added to an existing ISAR measurement system, requiring only the addition of a second antenna. The second technique uses target impulse responses generated from wideband radar measurements from three slightly different offset antennas. This technique is shown to identify the dominant scattering centers on a target in nearly real time. The number of measurements required to image a target using this technique is very small relative to traditional imaging techniques.

  14. Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation

    NASA Technical Reports Server (NTRS)

    DePriest, Douglas; Morgan, Carolyn

    2003-01-01

    The cost and safety goals for NASA's next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase is focused on assessing the performance of these models to accurately predict the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.

  15. Empirical gradient threshold technique for automated segmentation across image modalities and cell lines.

    PubMed

    Chalfoun, J; Majurski, M; Peskin, A; Breen, C; Bajcsy, P; Brady, M

    2015-10-01

    New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2), and all failed the requirement for robust parameters that do not require re-adjustment with time (requirement 5). We present a novel and empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set with a 10-fold cross-validation method. EGT segments cells or colonies with resulting Dice accuracy index measurements above 0.92 for all cross-validation data sets. EGT results have also been visually verified on a much larger data set that includes bright field and Differential Interference Contrast (DIC) images, 16 cell lines and 61 time-sequence data sets, for a total of 17,479 images. This method is implemented as an open-source plugin to ImageJ as well as a standalone executable that can be downloaded from the following link: https://isg.nist.gov/.
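    As a rough illustration of the gradient-thresholding family of methods the abstract describes, the sketch below segments an image by thresholding its gradient magnitude (Python with scipy; the fixed percentile and minimum object size are placeholder parameters, whereas the published EGT method derives its threshold empirically from the gradient histogram):

      import numpy as np
      from scipy import ndimage

      def gradient_threshold_segment(image, percentile=90, min_object_size=200):
          """Foreground/background segmentation by thresholding the gradient magnitude.

          A simplified illustration of gradient-based segmentation, not the EGT
          implementation itself.
          """
          gx = ndimage.sobel(image.astype(float), axis=1)
          gy = ndimage.sobel(image.astype(float), axis=0)
          grad = np.hypot(gx, gy)
          mask = grad > np.percentile(grad, percentile)   # keep high-gradient pixels
          mask = ndimage.binary_fill_holes(mask)          # close cell/colony interiors
          # Remove small spurious objects.
          labels, n = ndimage.label(mask)
          sizes = ndimage.sum(mask, labels, range(1, n + 1))
          keep_ids = np.where(sizes >= min_object_size)[0] + 1
          return np.isin(labels, keep_ids)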

  16. A modified sparse reconstruction method for three-dimensional synthetic aperture radar image

    NASA Astrophysics Data System (ADS)

    Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin

    2018-03-01

    There is an increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires long computing times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by the modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses a hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm and uses the Newton direction instead of the steepest descent direction, which speeds up the convergence rate of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in both reconstruction quality and reconstruction time.
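    For orientation, a minimal sketch of the classic SL0 iteration that the paper modifies is shown below (Python/numpy; it keeps the original Gaussian surrogate and gradient step for brevity, whereas the paper substitutes a hyperbolic-tangent surrogate and a Newton direction; parameter values are illustrative):

      import numpy as np

      def sl0_reconstruct(A, y, sigma_min=0.01, sigma_decrease=0.5, inner_iters=3, mu=2.0):
          """Sketch of smoothed-l0 (SL0) sparse recovery for y = A @ x.

          The l0 norm is approximated by sum(1 - exp(-x**2 / (2*sigma**2))); sigma is
          gradually reduced so the surrogate approaches the true l0 norm.
          """
          A_pinv = np.linalg.pinv(A)
          x = A_pinv @ y                      # minimum-l2-norm initial solution
          sigma = 2.0 * np.max(np.abs(x))
          while sigma > sigma_min:
              for _ in range(inner_iters):
                  grad = x * np.exp(-x**2 / (2.0 * sigma**2))   # gradient of the surrogate
                  x = x - mu * grad
                  x = x - A_pinv @ (A @ x - y)                  # project back onto A x = y
              sigma *= sigma_decrease
          return x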

  17. Probabilistic framework for product design optimization and risk management

    NASA Astrophysics Data System (ADS)

    Keski-Rahkonen, J. K.

    2018-05-01

    Probabilistic methods have gradually gained ground within engineering practices, but it is still the industry standard to use deterministic safety-margin approaches to dimension components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes due to the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
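    A minimal sketch of the Monte Carlo load-resistance calculation recommended above is given below (Python/numpy; the load and resistance distributions and their parameters are purely illustrative placeholders, not values from the paper):

      import numpy as np

      def failure_probability(n_samples=1_000_000, seed=0):
          """Estimate P(load > resistance) by Monte Carlo on a load-resistance model.

          The distributions below are illustrative; in practice they would come from
          measured load spectra and material strength data.
          """
          rng = np.random.default_rng(seed)
          load = rng.lognormal(mean=5.0, sigma=0.25, size=n_samples)       # e.g. stress, MPa
          resistance = rng.normal(loc=220.0, scale=15.0, size=n_samples)   # e.g. strength, MPa
          p_fail = np.mean(load > resistance)
          se = np.sqrt(p_fail * (1.0 - p_fail) / n_samples)                # Monte Carlo standard error
          return p_fail, se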

  18. Statistical Methods Applied to Gamma-ray Spectroscopy Algorithms in Nuclear Security Missions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fagan, Deborah K.; Robinson, Sean M.; Runkle, Robert C.

    2012-10-01

    In a wide range of nuclear security missions, gamma-ray spectroscopy is a critical research and development priority. One particularly relevant challenge is the interdiction of special nuclear material, for which gamma-ray spectroscopy supports the goals of detecting and identifying gamma-ray sources. This manuscript examines the existing set of spectroscopy methods, attempts to categorize them by the statistical methods on which they rely, and identifies methods that have yet to be considered. Our examination shows that current methods effectively estimate the effect of counting uncertainty but in many cases do not address larger sources of decision uncertainty, ones that are significantly more complex. We thus explore the premise that significantly improving algorithm performance requires greater coupling between the problem physics that drives data acquisition and the statistical methods that analyze such data. Untapped statistical methods, such as Bayesian model averaging and hierarchical and empirical Bayes methods, have the potential to reduce decision uncertainty by more rigorously and comprehensively incorporating all sources of uncertainty. We expect that application of such methods will demonstrate progress in meeting the needs of nuclear security missions by improving on the existing numerical infrastructure for which these analyses have not been conducted.

  19. Concepts of ‘personalization’ in personalized medicine: implications for economic evaluation

    PubMed Central

    Rogowski, Wolf; Payne, Katherine; Schnell-Inderst, Petra; Manca, Andrea; Rochau, Ursula; Jahn, Beate; Alagoz, Oguzhan; Leidl, Reiner; Siebert, Uwe

    2015-01-01

    Context: This paper assesses whether, and how, existing methods for economic evaluation are applicable to the evaluation of personalized medicine (PM) and, if not, where extensions to those methods may be required. Method: Structured workshop with a pre-defined group of experts (n=47), run using a modified nominal group technique. Workshop findings were recorded using extensive note taking and summarised using thematic data analysis. The workshop was complemented by structured literature searches. Results: The key finding emerging from the workshop, using an economic perspective, was that two distinct, but linked, interpretations of the concept of PM exist (personalization by ‘physiology’ or by ‘preferences’). These interpretations involve specific challenges for the design and conduct of economic evaluations. Existing evaluative (extra-welfarist) frameworks were generally considered appropriate for evaluating PM. When ‘personalization’ is viewed as using physiological biomarkers, challenges include: representing complex care pathways; representing spill-over effects; meeting data requirements such as evidence on heterogeneity; and choosing appropriate time horizons for the value of further research in uncertainty analysis. When viewed as tailoring medicine to patient preferences, further work is needed regarding: revealed preferences, e.g. treatment (non)adherence; stated preferences, e.g. risk interpretation and attitude; consideration of heterogeneity in preferences; and the appropriate framework (welfarism vs. extra-welfarism) to incorporate non-health benefits. Conclusion: Ideally, economic evaluations should take account of both interpretations of PM and consider physiology and preferences. It is important for decision makers to be cognizant of the issues involved with the economic evaluation of PM to appropriately interpret the evidence and target future research funding. PMID:25249200

  20. Project Cyclops: a Design Study of a System for Detecting Extraterrestrial Intelligent Life

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The requirements in hardware, manpower, time and funding to conduct a realistic effort aimed at detecting the existence of extraterrestrial intelligent life are examined. The methods used are limited to present or near term future state-of-the-art techniques. Subjects discussed include: (1) possible methods of contact, (2) communication by electromagnetic waves, (3) antenna array and system facilities, (4) antenna elements, (5) signal processing, (6) search strategy, and (7) radio and radar astronomy.

  1. Periodical capacity setting methods for make-to-order multi-machine production systems

    PubMed Central

    Altendorfer, Klaus; Hübl, Alexander; Jodlbauer, Herbert

    2014-01-01

    The paper presents different periodical capacity setting methods for make-to-order, multi-machine production systems with stochastic customer required lead times and stochastic processing times to improve service level and tardiness. These methods are developed as decision support for cases where capacity flexibility exists, such as a certain range of possible working hours per week. The methods differ in the amount of information used, whereby all are based on the cumulated capacity demand at each machine. In a simulation study the methods’ impact on service level and tardiness is compared to a constant provided capacity for a single- and a multi-machine setting. It is shown that the tested capacity setting methods can lead to an increase in service level and a decrease in average tardiness in comparison to a constant provided capacity. The methods using information on processing time and customer required lead time distributions perform best. The results found in this paper can help practitioners make efficient use of their flexible capacity. PMID:27226649

  2. 40 CFR 165.87 - Design and capacity requirements for existing structures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... existing containment structure: (1) The containment structure must be constructed of steel, reinforced... existing structures. 165.87 Section 165.87 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Structures § 165.87 Design and capacity requirements for existing structures. (a) For all existing...

  3. 40 CFR 165.87 - Design and capacity requirements for existing structures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... existing containment structure: (1) The containment structure must be constructed of steel, reinforced... existing structures. 165.87 Section 165.87 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Structures § 165.87 Design and capacity requirements for existing structures. (a) For all existing...

  4. 40 CFR 165.87 - Design and capacity requirements for existing structures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... existing containment structure: (1) The containment structure must be constructed of steel, reinforced... existing structures. 165.87 Section 165.87 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Structures § 165.87 Design and capacity requirements for existing structures. (a) For all existing...

  5. 40 CFR 165.87 - Design and capacity requirements for existing structures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... existing containment structure: (1) The containment structure must be constructed of steel, reinforced... existing structures. 165.87 Section 165.87 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Structures § 165.87 Design and capacity requirements for existing structures. (a) For all existing...

  6. Preliminary Structural Sensitivity Study of Hypersonic Inflatable Aerodynamic Decelerator Using Probabilistic Methods

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.

    2014-01-01

    Acceptance of new spacecraft structural architectures and concepts requires validated design methods to minimize the expense involved with technology validation via flight testing. This paper explores the implementation of probabilistic methods in the sensitivity analysis of the structural response of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD). HIAD architectures are attractive for spacecraft deceleration because they are lightweight, store compactly, and utilize the atmosphere to decelerate a spacecraft during re-entry. However, designers are hesitant to include these inflatable approaches for large payloads or spacecraft because of the lack of flight validation. In the example presented here, the structural parameters of an existing HIAD model have been varied to illustrate the design approach utilizing uncertainty-based methods. Surrogate models have been used to reduce computational expense by several orders of magnitude. The suitability of the design is based on assessing variation in the resulting cone angle. The acceptable cone angle variation would rely on the aerodynamic requirements.

  7. Sample size determination for logistic regression on a logit-normal distribution.

    PubMed

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.

  8. Using self-organizing maps to infill missing data in hydro-meteorological time series from the Logone catchment, Lake Chad basin.

    PubMed

    Nkiaka, E; Nawaz, N R; Lovett, J C

    2016-07-01

    Hydro-meteorological data is an important asset that can enhance management of water resources. But existing data often contains gaps, leading to uncertainties and so compromising their use. Although many methods exist for infilling data gaps in hydro-meteorological time series, many of these methods require inputs from neighbouring stations, which are often not available, while other methods are computationally demanding. Computing techniques such as artificial intelligence can be used to address this challenge. Self-organizing maps (SOMs), which are a type of artificial neural network, were used for infilling gaps in a hydro-meteorological time series in a Sudano-Sahel catchment. The coefficients of determination obtained were all above 0.75 and 0.65 while the average topographic error was 0.008 and 0.02 for rainfall and river discharge time series, respectively. These results further indicate that SOMs are a robust and efficient method for infilling missing gaps in hydro-meteorological time series.
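    A compact sketch of how a self-organizing map can be used for gap infilling is shown below (Python/numpy; the grid size, learning-rate and neighborhood schedules are illustrative assumptions, and the best-matching unit for an incomplete record is found using only its observed dimensions):

      import numpy as np

      def train_som(data, grid=(6, 6), n_iter=2000, lr0=0.5, sigma0=2.0, seed=0):
          """Train a small self-organizing map on complete (gap-free) records."""
          rng = np.random.default_rng(seed)
          n_rows, n_cols = grid
          weights = rng.random((n_rows, n_cols, data.shape[1]))
          # Grid coordinates used by the neighborhood function.
          coords = np.stack(np.meshgrid(np.arange(n_rows), np.arange(n_cols),
                                        indexing="ij"), axis=-1)
          for t in range(n_iter):
              lr = lr0 * np.exp(-t / n_iter)
              sigma = sigma0 * np.exp(-t / n_iter)
              x = data[rng.integers(len(data))]
              # Best-matching unit for this training sample.
              bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(axis=2)), grid)
              dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
              h = np.exp(-dist2 / (2.0 * sigma ** 2))[..., None]   # neighborhood kernel
              weights += lr * h * (x - weights)
          return weights

      def infill(record, weights):
          """Fill NaN gaps in a record with the BMU's weights, matching on observed dims."""
          obs = ~np.isnan(record)
          d2 = ((weights[..., obs] - record[obs]) ** 2).sum(axis=2)
          bmu = np.unravel_index(np.argmin(d2), d2.shape)
          filled = record.copy()
          filled[~obs] = weights[bmu][~obs]
          return filled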

  9. Trade study: Liquid hydrogen transportation - Kennedy Space Center. [cost and operational effectiveness of shipping methods]

    NASA Technical Reports Server (NTRS)

    Gray, D. J.

    1978-01-01

    Cryogenic transportation methods for providing liquid hydrogen requirements are examined in support of shuttle transportation system launch operations at Kennedy Space Center, Florida, during the time frames 1982-1991 in terms of cost and operational effectiveness. Transportation methods considered included sixteen different options employing mobile semi-trailer tankers, railcars, barges and combinations of each method. The study concludes that the most effective method of delivering liquid hydrogen from the vendor production facility in New Orleans to Kennedy Space Center includes maximum utilization of existing mobile tankers and railcars supplemented by maximum capacity mobile tankers procured incrementally in accordance with shuttle launch rates actually achieved.

  10. Design of a steganographic virtual operating system

    NASA Astrophysics Data System (ADS)

    Ashendorf, Elan; Craver, Scott

    2015-03-01

    A steganographic file system is a secure file system whose very existence on a disk is concealed. Customarily, these systems hide an encrypted volume within unused disk blocks, slack space, or atop conventional encrypted volumes. These file systems are far from undetectable, however: aside from their ciphertext footprint, they require a software or driver installation whose presence can attract attention and then targeted surveillance. We describe a new steganographic operating environment that requires no visible software installation, launching instead from a concealed bootstrap program that can be extracted and invoked with a chain of common Unix commands. Our system conceals its payload within innocuous files that typically contain high-entropy data, producing a footprint that is far less conspicuous than existing methods. The system uses a local web server to provide a file system, user interface and applications through a web architecture.

  11. Variable importance in nonlinear kernels (VINK): classification of digitized histopathology.

    PubMed

    Ginsburg, Shoshana; Ali, Sahirzeeshan; Lee, George; Basavanhally, Ajay; Madabhushi, Anant

    2013-01-01

    Quantitative histomorphometry is the process of modeling appearance of disease morphology on digitized histopathology images via image-based features (e.g., texture, graphs). Due to the curse of dimensionality, building classifiers with large numbers of features requires feature selection (which may require a large training set) or dimensionality reduction (DR). DR methods map the original high-dimensional features in terms of eigenvectors and eigenvalues, which limits the potential for feature transparency or interpretability. Although methods exist for variable selection and ranking on embeddings obtained via linear DR schemes (e.g., principal components analysis (PCA)), similar methods do not yet exist for nonlinear DR (NLDR) methods. In this work we present a simple yet elegant method for approximating the mapping between the data in the original feature space and the transformed data in the kernel PCA (KPCA) embedding space; this mapping provides the basis for quantification of variable importance in nonlinear kernels (VINK). We show how VINK can be implemented in conjunction with the popular Isomap and Laplacian eigenmap algorithms. VINK is evaluated in the contexts of three different problems in digital pathology: (1) predicting five year PSA failure following radical prostatectomy, (2) predicting Oncotype DX recurrence risk scores for ER+ breast cancers, and (3) distinguishing good and poor outcome p16+ oropharyngeal tumors. We demonstrate that subsets of features identified by VINK provide similar or better classification or regression performance compared to the original high dimensional feature sets.

  12. Leveling data in geochemical mapping: scope of application, pros and cons of existing methods

    NASA Astrophysics Data System (ADS)

    Pereira, Benoît; Vandeuren, Aubry; Sonnet, Philippe

    2017-04-01

    Geochemical mapping successfully met a range of needs from mineral exploration to environmental management. In Europe and around the world numerous geochemical datasets already exist. These datasets may originate from geochemical mapping projects or from the collection of sample analyses requested by environmental protection regulatory bodies. Combining datasets can be highly beneficial for establishing geochemical maps with increased resolution and/or coverage area. However this practice requires assessing the equivalence between datasets and, if needed, applying data leveling to remove possible biases between datasets. In the literature, several procedures for assessing dataset equivalence and leveling data are proposed. Daneshfar & Cameron (1998) proposed a method for the leveling of two adjacent datasets while Pereira et al. (2016) proposed two methods for the leveling of datasets that contain records located within the same geographical area. Each discussed method requires its own set of assumptions (underlying populations of data, spatial distribution of data, etc.). Here we propose to discuss the scope of application, pros, cons and practical recommendations for each method. This work is illustrated with several case studies in Wallonia (Southern Belgium) and in Europe involving trace element geochemical datasets. References: Daneshfar, B. & Cameron, E. (1998), Leveling geochemical data between map sheets, Journal of Geochemical Exploration 63(3), 189-201. Pereira, B.; Vandeuren, A.; Govaerts, B. B. & Sonnet, P. (2016), Assessing dataset equivalence and leveling data in geochemical mapping, Journal of Geochemical Exploration 168, 36-48.

  13. High-sensitivity determination of Zn(II) and Cu(II) in vitro by fluorescence polarization

    NASA Astrophysics Data System (ADS)

    Thompson, Richard B.; Maliwal, Badri P.; Feliccia, Vincent; Fierke, Carol A.

    1998-04-01

    Recent work has suggested that free Cu(II) may play a role in syndromes such as Crohn's and Wilson's diseases, as well as being a pollutant toxic at low levels to shellfish and sheep. Similarly, Zn(II) has been implicated in some neural damage in the brain resulting from epilepsy and ischemia. Several high-sensitivity methods exist for determining these ions in solution, including GFAAS, ICP-MS, ICP-ES, and electrochemical techniques. However, these techniques are generally slow and costly, require pretreatment of the sample, require complex instruments and skilled personnel, and are incapable of imaging at the cellular and subcellular level. To address these shortcomings we developed fluorescence polarization (anisotropy) biosensing methods for these ions which are very sensitive, highly selective, require simple instrumentation and little pretreatment, and are inexpensive. Thus free Cu(II) or Zn(II) can be determined at picomolar levels by changes in fluorescence polarization, lifetime, or wavelength ratio using these methods; these techniques may be adapted to microscopy.

  14. 40 CFR 63.11147 - What are the standards and compliance requirements for existing sources not using batch copper...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... requirements for existing sources not using batch copper converters? 63.11147 Section 63.11147 Protection of... Hazardous Air Pollutants for Primary Copper Smelting Area Sources Standards and Compliance Requirements § 63.11147 What are the standards and compliance requirements for existing sources not using batch copper...

  15. 40 CFR 63.11147 - What are the standards and compliance requirements for existing sources not using batch copper...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... requirements for existing sources not using batch copper converters? 63.11147 Section 63.11147 Protection of... Hazardous Air Pollutants for Primary Copper Smelting Area Sources Standards and Compliance Requirements § 63.11147 What are the standards and compliance requirements for existing sources not using batch copper...

  16. 40 CFR 63.11602 - What are the performance test and compliance requirements for new and existing sources?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... compliance requirements for new and existing sources? 63.11602 Section 63.11602 Protection of Environment... Requirements § 63.11602 What are the performance test and compliance requirements for new and existing sources... compounds of cadmium, chromium, lead, or nickel to a process vessel or to the grinding and milling equipment...

  17. 40 CFR 63.11602 - What are the performance test and compliance requirements for new and existing sources?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... compliance requirements for new and existing sources? 63.11602 Section 63.11602 Protection of Environment... Requirements § 63.11602 What are the performance test and compliance requirements for new and existing sources... compounds of cadmium, chromium, lead, or nickel to a process vessel or to the grinding and milling equipment...

  18. 40 CFR 63.11602 - What are the performance test and compliance requirements for new and existing sources?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... compliance requirements for new and existing sources? 63.11602 Section 63.11602 Protection of Environment... Requirements § 63.11602 What are the performance test and compliance requirements for new and existing sources... compounds of cadmium, chromium, lead, or nickel to a process vessel or to the grinding and milling equipment...

  19. 40 CFR 63.11602 - What are the performance test and compliance requirements for new and existing sources?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... compliance requirements for new and existing sources? 63.11602 Section 63.11602 Protection of Environment... Requirements § 63.11602 What are the performance test and compliance requirements for new and existing sources... compounds of cadmium, chromium, lead, or nickel to a process vessel or to the grinding and milling equipment...

  20. 40 CFR 63.11602 - What are the performance test and compliance requirements for new and existing sources?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... compliance requirements for new and existing sources? 63.11602 Section 63.11602 Protection of Environment... Requirements § 63.11602 What are the performance test and compliance requirements for new and existing sources... compounds of cadmium, chromium, lead, or nickel to a process vessel or to the grinding and milling equipment...

  1. Natural Language Processing Methods and Systems for Biomedical Ontology Learning

    PubMed Central

    Liu, Kaihong; Hogan, William R.; Crowley, Rebecca S.

    2010-01-01

    While the biomedical informatics community widely acknowledges the utility of domain ontologies, there remain many barriers to their effective use. One important requirement of domain ontologies is that they must achieve a high degree of coverage of the domain concepts and concept relationships. However, the development of these ontologies is typically a manual, time-consuming, and often error-prone process. Limited resources result in missing concepts and relationships as well as difficulty in updating the ontology as knowledge changes. Methodologies developed in the fields of natural language processing, information extraction, information retrieval and machine learning provide techniques for automating the enrichment of an ontology from free-text documents. In this article, we review existing methodologies and developed systems, and discuss how existing methods can benefit the development of biomedical ontologies. PMID:20647054

  2. A two-dimensionally coincident second difference cosmic ray spike removal method for the fully automated processing of Raman spectra.

    PubMed

    Schulze, H Georg; Turner, Robin F B

    2014-01-01

    Charge-coupled device detectors are vulnerable to cosmic rays that can contaminate Raman spectra with positive going spikes. Because spikes can adversely affect spectral processing and data analyses, they must be removed. Although both hardware-based and software-based spike removal methods exist, they typically require parameter and threshold specification dependent on well-considered user input. Here, we present a fully automated spike removal algorithm that proceeds without requiring user input. It is minimally dependent on sample attributes, and those that are required (e.g., standard deviation of spectral noise) can be determined with other fully automated procedures. At the core of the method is the identification and location of spikes with coincident second derivatives along both the spectral and spatiotemporal dimensions of two-dimensional datasets. The method can be applied to spectra that are relatively inhomogeneous because it provides fairly effective and selective targeting of spikes resulting in minimal distortion of spectra. Relatively effective spike removal obtained with full automation could provide substantial benefits to users where large numbers of spectra must be processed.
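    The core idea of coincident second differences can be sketched as follows (Python/numpy; the fixed multiple of a robust noise estimate used as the cut-off is an illustrative simplification of the paper's fully automated threshold selection):

      import numpy as np

      def remove_spikes(spectra, z=8.0):
          """Flag and repair cosmic-ray spikes in a 2-D array of Raman spectra.

          spectra: array of shape (n_spectra, n_wavenumbers); rows are consecutive
          acquisitions (the spatiotemporal dimension), columns the spectral axis.
          A pixel is treated as a spike when its second difference is large along
          both dimensions.
          """
          d2_spec = np.zeros_like(spectra, dtype=float)
          d2_time = np.zeros_like(spectra, dtype=float)
          d2_spec[:, 1:-1] = spectra[:, 2:] - 2 * spectra[:, 1:-1] + spectra[:, :-2]
          d2_time[1:-1, :] = spectra[2:, :] - 2 * spectra[1:-1, :] + spectra[:-2, :]
          # Robust noise scale from the median absolute deviation of the second differences.
          scale = 1.4826 * np.median(np.abs(d2_spec - np.median(d2_spec)))
          spikes = (d2_spec > z * scale) & (d2_time > z * scale)   # positive-going, coincident
          cleaned = spectra.astype(float).copy()
          # Replace flagged points with the median of their spectral neighbours.
          for i, j in zip(*np.where(spikes)):
              lo, hi = max(j - 3, 0), min(j + 4, spectra.shape[1])
              neighbours = np.delete(cleaned[i, lo:hi], j - lo)
              cleaned[i, j] = np.median(neighbours)
          return cleaned, spikes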

  3. Self-supporting method; an alternative method for steel truss bridge element replacement

    NASA Astrophysics Data System (ADS)

    Arsyad, Muhammad; Sangadji, Senot; As'ad, Sholihin

    2017-11-01

    Steel truss bridges often require replacement of an element due to serious damage caused by traffic accidents. This replacement is normally carried out using a temporary supporting structure, which becomes difficult when the available space for the temporary structure is limited or the work is at a high elevation. The self-supporting method is proposed instead of a temporary supporting structure. This paper discusses an innovative method of bridge rehabilitation that utilizes the existing bridge structure. It requires a temporary connecting structure installed on the existing bridge elements so that the forces arising during the replacement process can be transferred directly to the bridge foundation. Taking as a case study the Jetis Salatiga steel truss bridge, which requires element replacement due to damage to two main diagonals, a model was developed to determine a proper repair method. Structural analysis was conducted for three temporary connecting structure models: an “I,” a “V,” and a triangular model. Stresses and translations occurring in the structure were used as constraints. The bridge bearings were modeled in two different modes, a fixed-fixed system and a fixed-free one, and temperature load was applied in each condition to determine the appropriate time for execution. The triangular model was chosen as the best option. In the fixed-fixed mode, the method can be carried out in a temperature range of 27-28.8 °C, while in the fixed-free mode the allowable range is 27-43.4 °C. Member D4 is dismantled first by cutting it, leaving a web area of 1140.2 mm2 (a 127 mm web length) so that it reaches the plastic condition and eventually fails. As elongation begins, the temporary connecting structure is immediately and slowly jacked so that the force in D4 is gradually transferred to it; D4 and D5 are then set in their place.

  4. Genetic improvement of olive (Olea europaea L.) by conventional and in vitro biotechnology methods.

    PubMed

    Rugini, E; Cristofori, V; Silvestri, C

    2016-01-01

    In olive (Olea europaea L.) traditional methods of genetic improvement have up to now produced limited results. Intensification of olive growing requires appropriate new cultivars for fully mechanized groves, but among the large number of the traditional varieties very few are suitable. High-density and super high-density hedge row orchards require genotypes with reduced size, reduced apical dominance, a semi-erect growth habit, easy to propagate, resistant to abiotic and biotic stresses, with reliably high productivity and quality of both fruits and oil. Innovative strategies supported by molecular and biotechnological techniques are required to speed up novel hybridisation methods. Among traditional approaches the Gene Pool Method seems a reasonable option, but it requires availability of widely diverse germplasm from both cultivated and wild genotypes, supported by a detailed knowledge of their genetic relationships. The practice of "gene therapy" for the most important existing cultivars, combined with conventional methods, could accelerate achievement of the main goals, but efforts to overcome some technical and ideological obstacles are needed. The present review describes the benefits that olive and its products may obtain from genetic improvement using state of the art of conventional and unconventional methods, and includes progress made in the field of in vitro techniques. The uses of both traditional and modern technologies are discussed with recommendations. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Practical interior tomography with radial Hilbert filtering and a priori knowledge in a small round area.

    PubMed

    Tang, Shaojie; Yang, Yi; Tang, Xiangyang

    2012-01-01

    Interior tomography problem can be solved using the so-called differentiated backprojection-projection onto convex sets (DBP-POCS) method, which requires a priori knowledge within a small area interior to the region of interest (ROI) to be imaged. In theory, the small area wherein the a priori knowledge is required can be in any shape, but most of the existing implementations carry out the Hilbert filtering either horizontally or vertically, leading to a vertical or horizontal strip that may be across a large area in the object. In this work, we implement a practical DBP-POCS method with radial Hilbert filtering and thus the small area with the a priori knowledge can be roughly round (e.g., a sinus or ventricles among other anatomic cavities in human or animal body). We also conduct an experimental evaluation to verify the performance of this practical implementation. We specifically re-derive the reconstruction formula in the DBP-POCS fashion with radial Hilbert filtering to assure that only a small round area with the a priori knowledge be needed (namely radial DBP-POCS method henceforth). The performance of the practical DBP-POCS method with radial Hilbert filtering and a priori knowledge in a small round area is evaluated with projection data of the standard and modified Shepp-Logan phantoms simulated by computer, followed by a verification using real projection data acquired by a computed tomography (CT) scanner. The preliminary performance study shows that, if a priori knowledge in a small round area is available, the radial DBP-POCS method can solve the interior tomography problem in a more practical way at high accuracy. In comparison to the implementations of DBP-POCS method demanding the a priori knowledge in horizontal or vertical strip, the radial DBP-POCS method requires the a priori knowledge within a small round area only. Such a relaxed requirement on the availability of a priori knowledge can be readily met in practice, because a variety of small round areas (e.g., air-filled sinuses or fluid-filled ventricles among other anatomic cavities) exist in human or animal body. Therefore, the radial DBP-POCS method with a priori knowledge in a small round area is more feasible in clinical and preclinical practice.

  6. Professional Development Credits in Student Affairs Practice: A Method to Enhance Professionalism

    ERIC Educational Resources Information Center

    Dean, Laura A.; Woodard, Bobby R.; Cooper, Diane L.

    2007-01-01

    Many professions require members to be involved in continuing education in order to maintain skill levels, to remain current on issues, and often to remain licensed or certified in that field. The same level of continuous professional development does not always occur in student affairs, and there exists no system for documenting such activities…

  7. Keep Kids in School: A Collaborative Community Effort to Increase Compliance with State-Mandated Health Requirements

    ERIC Educational Resources Information Center

    Rogers, Valerie; Salzeider, Christine; Holzum, Laura; Milbrandt, Tracy; Zahnd, Whitney; Puczynski, Mark

    2016-01-01

    Background: It is important that collaborative relationships exist in a community to improve access to needed services for children. Such partnerships foster preventive services, such as immunizations, and other services that protect the health and well-being of all children. Methods: A collaborative relationship in Illinois involving an academic…

  8. Flow Boiling Critical Heat Flux in Reduced Gravity

    NASA Technical Reports Server (NTRS)

    Mudawar, Issam; Zhang, Hui; Hasan, Mohammad M.

    2004-01-01

    This study provides a systematic method for reducing power consumption in reduced-gravity systems by adopting the minimum velocity required to provide adequate CHF and preclude the detrimental effects of reduced gravity. This study proves it is possible to use existing 1 ge flow boiling and CHF correlations and models to design reduced-gravity systems provided the minimum velocity criteria are met.

  9. 77 FR 64097 - Supplemental Environmental Impact Statement to the 2011 Final EIS for the Leasing and Underground...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-18

    ... Final EIS for the Leasing and Underground Mining of the Greens Hollow Federal Coal Lease Tract (UTU... Mining of the Greens Hollow Federal Coal Lease Tract UTU-84102. Supplemental analyses are required to... mining methods, with foreseeable access from existing adjacent leases. The Forest Service and BLM have...

  10. Will Courts Shape Value-Added Methods for Teacher Evaluation? ACT Working Paper Series. WP-2014-2

    ERIC Educational Resources Information Center

    Croft, Michelle; Buddin, Richard

    2014-01-01

    As more states begin to adopt teacher evaluation systems based on value-added measures, legal challenges have been filed both seeking to limit the use of value-added measures ("Cook v. Stewart") and others seeking to require more robust evaluation systems ("Vergara v. California"). This study reviews existing teacher evaluation…

  11. Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method

    NASA Technical Reports Server (NTRS)

    Kowal, Michael T.

    1997-01-01

    The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between the variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability-of-failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.

  12. A Self-Directed Method for Cell-Type Identification and Separation of Gene Expression Microarrays

    PubMed Central

    Zuckerman, Neta S.; Noam, Yair; Goldsmith, Andrea J.; Lee, Peter P.

    2013-01-01

    Gene expression analysis is generally performed on heterogeneous tissue samples consisting of multiple cell types. Current methods developed to separate heterogeneous gene expression rely on prior knowledge of the cell-type composition and/or signatures - these are not available in most public datasets. We present a novel method to identify the cell-type composition, signatures and proportions per sample without need for a-priori information. The method was successfully tested on controlled and semi-controlled datasets and performed as accurately as current methods that do require additional information. As such, this method enables the analysis of cell-type specific gene expression using existing large pools of publically available microarray datasets. PMID:23990767

  13. Training Requirements and Curriculum Content for Primary Care Providers Delivering Preventive Oral Health Services to Children Enrolled in Medicaid.

    PubMed

    Sams, Lattice D; Rozier, R Gary; Quinonez, Rocio B

    2016-07-01

    Despite the emphasis on delivery of preventive oral health services in non-dental settings, limited information exists about state Medicaid policies and strategies to educate practicing physicians in the delivery of these services. This study aims to determine: (1) training requirements and policies for reimbursement of oral health services, (2) teaching delivery methods used to train physicians, and (3) curriculum content available to providers among states that reimburse non-dental providers for oral health services. Using Web-based Internet searches as the primary data source, and a supplemental e-mail survey of all states offering in-person training, we assessed training requirements, methods of delivery for training, and curriculum content for states with Medicaid reimbursement to primary care providers delivering preventive oral health services. Results of descriptive analyses are presented for information collected and updated in 2014. Forty-two states provide training sessions or resources to providers, 34 requiring provider training before reimbursement for oral health services. Web-based training is the most common CME delivery method. Only small differences in curricular content were reported by the 11 states that use in-person didactic sessions as the delivery method. Although we found that most states require training and curricular content is similar, training was most often delivered using Web-based courses without any additional delivery methods. Research is needed to evaluate the impact of a mixture of training methods and other quality improvement methods on increased adoption and implementation of preventive oral health services in medical practices.

  14. Online Estimation of Allan Variance Coefficients Based on a Neural-Extended Kalman Filter

    PubMed Central

    Miao, Zhiyong; Shen, Feng; Xu, Dingjie; He, Kunpeng; Tian, Chunmiao

    2015-01-01

    As a noise analysis method for inertial sensors, the traditional Allan variance method requires the storage of a large amount of data and manual analysis of an Allan variance graph. Although the existing online estimation methods avoid the storage of data and the painful procedure of drawing slope lines for estimation, they require complex transformations and can even introduce errors during the modeling of dynamic Allan variance. To solve these problems, first, a new state-space model that directly models the stochastic errors was established for inertial sensors, yielding a nonlinear state-space model. Then, a neural-extended Kalman filter algorithm was used to estimate the Allan variance coefficients. The real noise of an ADIS16405 IMU and fiber-optic gyro sensors was analyzed by the proposed method and the traditional methods. The experimental results show that the proposed method is more suitable for estimating the Allan variance coefficients than the traditional methods. Moreover, the proposed method effectively avoids the storage of data and can be easily implemented using an online processor. PMID:25625903
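    For context, the conventional offline computation that the online method replaces, the overlapping Allan variance of a recorded rate signal, can be sketched as below (Python/numpy; this is the traditional batch calculation, not the neural-extended Kalman filter estimator proposed in the paper):

      import numpy as np

      def overlapping_allan_variance(rate, fs, m_list):
          """Overlapping Allan variance of a gyro/accelerometer rate signal.

          rate: 1-D array of sensor output (e.g. deg/s) sampled at fs (Hz).
          m_list: averaging-window lengths in samples; tau = m / fs.
          """
          theta = np.cumsum(rate) / fs          # integrated signal (e.g. angle)
          n = len(theta)
          taus, avars = [], []
          for m in m_list:
              if 2 * m >= n:
                  break
              tau = m / fs
              d = theta[2 * m:] - 2.0 * theta[m:n - m] + theta[:n - 2 * m]
              avars.append(np.sum(d ** 2) / (2.0 * tau ** 2 * (n - 2 * m)))
              taus.append(tau)
          return np.array(taus), np.array(avars)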

  15. A space-efficient algorithm for local similarities.

    PubMed

    Huang, X Q; Hardison, R C; Miller, W

    1990-10-01

    Existing dynamic-programming algorithms for identifying similar regions of two sequences require time and space proportional to the product of the sequence lengths. Often this space requirement is more limiting than the time requirement. We describe a dynamic-programming local-similarity algorithm that needs only space proportional to the sum of the sequence lengths. The method can also find repeats within a single long sequence. To illustrate the algorithm's potential, we discuss comparison of a 73,360 nucleotide sequence containing the human beta-like globin gene cluster and a corresponding 44,594 nucleotide sequence for rabbit, a problem well beyond the capabilities of other dynamic-programming software.
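    The space saving comes from keeping only one previous row of the dynamic-programming matrix; a score-only sketch is shown below (Python; recovering the alignment itself, as the authors do, additionally requires a divide-and-conquer traceback not shown here, and the scoring parameters are illustrative):

      def local_similarity_score(a, b, match=1, mismatch=-1, gap=-2):
          """Best local-alignment (Smith-Waterman) score in O(len(b)) space.

          Only the previous row of the DP matrix is kept, so memory grows with one
          sequence length rather than with the product of the lengths.
          """
          prev = [0] * (len(b) + 1)
          best = 0
          for ca in a:
              curr = [0] * (len(b) + 1)
              for j, cb in enumerate(b, start=1):
                  score = max(
                      0,
                      prev[j - 1] + (match if ca == cb else mismatch),  # (mis)match
                      prev[j] + gap,                                    # gap in b
                      curr[j - 1] + gap,                                # gap in a
                  )
                  curr[j] = score
                  best = max(best, score)
              prev = curr
          return best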

  16. Ensemble framework based real-time respiratory motion prediction for adaptive radiotherapy applications.

    PubMed

    Tatinati, Sivanagaraja; Nazarpour, Kianoush; Tech Ang, Wei; Veluvolu, Kalyana C

    2016-08-01

    Successful treatment of tumors with motion-adaptive radiotherapy requires accurate prediction of respiratory motion, ideally with a prediction horizon larger than the latency of the radiotherapy system. Accurate prediction of respiratory motion is, however, a non-trivial task due to the presence of irregularities and intra-trace variabilities, such as baseline drift and temporal changes in the fundamental frequency pattern. In this paper, to enhance the accuracy of respiratory motion prediction, we propose a stacked regression ensemble framework that integrates heterogeneous respiratory motion prediction algorithms. We further address two crucial issues for developing a successful ensemble framework: (1) selection of appropriate prediction methods to ensemble (level-0 methods) from among the best existing prediction methods; and (2) finding a suitable generalization approach that can successfully exploit the relative advantages of the chosen level-0 methods. The efficacy of the developed ensemble framework is assessed with real respiratory motion traces acquired from 31 patients undergoing treatment. Results show that the developed ensemble framework improves prediction performance significantly compared to the best existing methods.
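    A generic stacked-regression sketch in the spirit of this framework is given below (Python with scikit-learn; the lagged-window feature construction, the toy sinusoidal trace, and the choice of SVR and random-forest level-0 regressors with a ridge level-1 generalizer are illustrative assumptions, not the motion-specific algorithms evaluated in the paper):

      import numpy as np
      from sklearn.ensemble import StackingRegressor, RandomForestRegressor
      from sklearn.linear_model import Ridge
      from sklearn.svm import SVR

      def make_lagged(trace, lags=20, horizon=5):
          """Turn a 1-D motion trace into (lag-vector, future-sample) training pairs."""
          X = np.array([trace[i:i + lags] for i in range(len(trace) - lags - horizon)])
          y = np.array([trace[i + lags + horizon] for i in range(len(trace) - lags - horizon)])
          return X, y

      # Toy breathing-like trace; a real application would use measured motion data.
      trace = np.sin(np.linspace(0, 60, 3000)) + 0.05 * np.random.randn(3000)
      X, y = make_lagged(trace)
      stack = StackingRegressor(
          estimators=[("svr", SVR(C=10.0)), ("rf", RandomForestRegressor(n_estimators=100))],
          final_estimator=Ridge(),   # level-1 generalizer combining level-0 predictions
      )
      stack.fit(X[:2000], y[:2000])
      print("held-out R^2:", stack.score(X[2000:], y[2000:]))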

  17. Cryopreservation of Human Mesenchymal Stem Cells for Clinical Applications: Current Methods and Challenges.

    PubMed

    Yong, Kar Wey; Wan Safwani, Wan Kamarul Zaman; Xu, Feng; Wan Abas, Wan Abu Bakar; Choi, Jane Ru; Pingguan-Murphy, Belinda

    2015-08-01

    Mesenchymal stem cells (MSCs) hold many advantages over embryonic stem cells (ESCs) and other somatic cells in clinical applications. MSCs are multipotent cells with strong immunosuppressive properties. They can be harvested from various locations in the human body (e.g., bone marrow and adipose tissues). Cryopreservation represents an efficient method for the preservation and pooling of MSCs, to obtain the cell counts required for clinical applications, such as cell-based therapies and regenerative medicine. Upon cryopreservation, it is important to preserve MSCs functional properties including immunomodulatory properties and multilineage differentiation ability. Further, a biosafety evaluation of cryopreserved MSCs is essential prior to their clinical applications. However, the existing cryopreservation methods for MSCs are associated with notable limitations, leading to a need for new or improved methods to be established for a more efficient application of cryopreserved MSCs in stem cell-based therapies. We review the important parameters for cryopreservation of MSCs and the existing cryopreservation methods for MSCs. Further, we also discuss the challenges to be addressed in order to preserve MSCs effectively for clinical applications.

  18. Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method

    NASA Astrophysics Data System (ADS)

    Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao

    2017-03-01

    Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames used in dynamic imaging. The kernel method for image reconstruction has been developed to improve reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study, using a physical phantom scan with synthetic FDG tracer kinetics, demonstrates that the new HYPR kernel-based reconstruction achieves a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than both the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
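
    For context, the post-reconstruction HYPR-LR denoising used as a comparison above can be sketched as weighting a high-count composite image by the ratio of low-pass-filtered frame and composite images; the filter width and the synthetic images below are assumptions for illustration, and this is not the proposed kernelized reconstruction itself.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def hypr_lr(frame, composite, sigma=3.0, eps=1e-8):
        # weight the high-count composite by the low-pass ratio of frame to composite
        ratio = gaussian_filter(frame, sigma) / (gaussian_filter(composite, sigma) + eps)
        return composite * ratio

    composite = gaussian_filter(np.random.poisson(50.0, size=(64, 64)).astype(float), 1.0)
    frame = np.random.poisson(np.maximum(composite / 25.0, 0.0))  # noisy short time-frame
    print(hypr_lr(frame.astype(float), composite).shape)
    ```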

  19. Controlling the Display of Capsule Endoscopy Video for Diagnostic Assistance

    NASA Astrophysics Data System (ADS)

    Vu, Hai; Echigo, Tomio; Sagawa, Ryusuke; Yagi, Keiko; Shiba, Masatsugu; Higuchi, Kazuhide; Arakawa, Tetsuo; Yagi, Yasushi

    Interpretations by physicians of capsule endoscopy image sequences captured over periods of 7-8 hours usually require 45 to 120 minutes of extreme concentration. This paper describes a novel method to reduce diagnostic time by automatically controlling the display frame rate. Unlike existing techniques, this method displays the original images with no skipping of frames. The sequence can be played at a high frame rate in stable regions to save time; then, in regions with abrupt changes, the speed is decreased to more conveniently ascertain suspicious findings. To realize such a system, cue information about the disparity of consecutive frames, including color similarity and motion displacement, is extracted. A decision tree utilizes these features to classify the states of the image acquisitions. For each classified state, the delay time between frames is calculated by parametric functions. A scheme that selects the optimal parameter set, determined from assessments by physicians, is deployed. Experiments involved clinical evaluations to investigate the effectiveness of this method compared with a standard view using an existing system. Results from logged-action-based analysis show that, compared with an existing system, the proposed method reduced the diagnostic time to around 32.5 ± minutes per full sequence while the number of abnormalities found was similar. Physicians also needed less effort because of the system's efficient operability. The results of the evaluations should convince physicians that they can safely use this method and obtain reduced diagnostic times.
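
    A toy sketch of the playback-control idea: a crude per-frame disparity cue drives the delay inserted between displayed frames, so stable regions play quickly and changing regions play slowly. The change measure, threshold, and delay values are invented stand-ins, not the paper's decision-tree classifier or its fitted parametric delay functions.

    ```python
    import numpy as np

    def frame_change(prev, curr):
        # mean absolute colour difference as a crude disparity cue in [0, 1]
        return float(np.mean(np.abs(curr.astype(float) - prev.astype(float))) / 255.0)

    def display_delay_ms(change, fast_ms=15, slow_ms=120, threshold=0.08):
        # short delay (fast playback) in stable regions, long delay where frames differ
        return fast_ms if change < threshold else slow_ms

    frames = [np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8) for _ in range(3)]
    for prev, curr in zip(frames, frames[1:]):
        print(display_delay_ms(frame_change(prev, curr)))
    ```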

  20. Enhancing healthcare process design with human factors engineering and reliability science, part 1: setting the context.

    PubMed

    Boston-Fleischhauer, Carol

    2008-01-01

    The design and implementation of efficient, effective, and safe processes are never-ending challenges in healthcare. Less-than-optimal performance levels and rising concerns about patient safety suggest that traditional process design methods are insufficient to meet design requirements. In this 2-part series, the author presents human factors engineering and reliability science as important knowledge to enhance existing operational and clinical process design methods in healthcare. An examination of these theories, application approaches, and examples is presented.

  1. Developing Best Practices for Capturing As-Built Building Information Models (BIM) for Existing Facilities

    DTIC Science & Technology

    2010-08-01

    students conducting the data capture and data entry, an analytical method known as the Task Load Index (NASA TLX Version 2.0) was used. This method was...published by the NASA Ames Research Center in December 2003. The entire report can be found at: http://humansystems.arc.nasa.gov/groups/TLX The...completion of each task in the survey process, surveyors were required to complete a NASA TLX form to report their assessment of the workload for

  2. Instructional Videos for Unsupervised Harvesting and Learning of Action Examples

    DTIC Science & Technology

    2014-11-03

    collection of image or video annotations has been tackled in different ways, but most existing methods still require a human in the loop. The...the views of ARO and NSF. 7. REFERENCES [1] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. In ACM Transactions on...feature encoding methods. In BMVC, 2011. [3] J. Chen, Y. Cui, G. Ye, D. Liu, and S.-F. Chang. Event-driven semantic concept discovery by exploiting

  3. Simultaneous Local Binary Feature Learning and Encoding for Homogeneous and Heterogeneous Face Recognition.

    PubMed

    Lu, Jiwen; Erin Liong, Venice; Zhou, Jie

    2017-08-09

    In this paper, we propose a simultaneous local binary feature learning and encoding (SLBFLE) approach for both homogeneous and heterogeneous face recognition. Unlike existing hand-crafted face descriptors such as local binary pattern (LBP) and Gabor features which usually require strong prior knowledge, our SLBFLE is an unsupervised feature learning approach which automatically learns face representation from raw pixels. Unlike existing binary face descriptors such as the LBP, discriminant face descriptor (DFD), and compact binary face descriptor (CBFD) which use a two-stage feature extraction procedure, our SLBFLE jointly learns binary codes and the codebook for local face patches so that discriminative information from raw pixels from face images of different identities can be obtained by using a one-stage feature learning and encoding procedure. Moreover, we propose a coupled simultaneous local binary feature learning and encoding (C-SLBFLE) method to make the proposed approach suitable for heterogeneous face matching. Unlike most existing coupled feature learning methods which learn a pair of transformation matrices for each modality, we exploit both the common and specific information from heterogeneous face samples to characterize their underlying correlations. Experimental results on six widely used face datasets are presented to demonstrate the effectiveness of the proposed method.

  4. Structural design using equilibrium programming formulations

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1995-01-01

    Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.

  5. A Radiation Solver for the National Combustion Code

    NASA Technical Reports Server (NTRS)

    Sockol, Peter M.

    2015-01-01

    A methodology is given that converts an existing finite volume radiative transfer method that requires input of local absorption coefficients to one that can treat a mixture of combustion gases and compute the coefficients on the fly from the local mixture properties. The full-spectrum k-distribution method is used to transform the radiative transfer equation (RTE) to an alternate wave number variable, g. The coefficients in the transformed equation are calculated at discrete temperatures and participating species mole fractions that span the values of the problem for each value of g. These results are stored in a table and interpolation is used to find the coefficients at every cell in the field. Finally, the transformed RTE is solved for each g and Gaussian quadrature is used to find the radiant heat flux throughout the field. The present implementation is in an existing Cartesian/cylindrical grid radiative transfer code and the local mixture properties are given by a solution of the National Combustion Code (NCC) on the same grid. Based on this work the intention is to apply this method to an existing unstructured grid radiation code which can then be coupled directly to NCC.

  6. Enhancing requirements engineering for patient registry software systems with evidence-based components.

    PubMed

    Lindoerfer, Doris; Mansmann, Ulrich

    2017-07-01

    Patient registries are instrumental for medical research. Often their structures are complex and their implementations use composite software systems to meet the wide spectrum of challenges. Commercial and open-source systems are available for registry implementation, but many research groups develop their own systems. Methodological approaches in the selection of software as well as the construction of proprietary systems are needed. We propose an evidence-based checklist, summarizing essential items for patient registry software systems (CIPROS), to accelerate the requirements engineering process. Requirements engineering activities for software systems follow traditional software requirements elicitation methods, general software requirements specification (SRS) templates, and standards. We performed a multistep procedure to develop a specific evidence-based CIPROS checklist: (1) a systematic literature review to build a comprehensive collection of technical concepts, (2) a qualitative content analysis to define a catalogue of relevant criteria, and (3) a checklist to construct a minimal appraisal standard. CIPROS is based on 64 publications and covers twelve sections with a total of 72 items. CIPROS also defines software requirements. Comparing CIPROS with traditional software requirements elicitation methods, SRS templates, and standards shows broad consensus but also differences regarding registry-specific aspects. Using an evidence-based approach to requirements engineering for registry software adds aspects to the traditional methods and accelerates the software engineering process for registry software. The method we used to construct CIPROS serves as a potential template for creating evidence-based checklists in other fields. The CIPROS list supports developers in assessing requirements for existing systems and formulating requirements for their own systems, while strengthening the reporting of patient registry software system descriptions. It may be a first step toward creating standards for patient registry software system assessments. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Determination of laser cutting process conditions using the preference selection index method

    NASA Astrophysics Data System (ADS)

    Madić, Miloš; Antucheviciene, Jurgita; Radovanović, Miroslav; Petković, Dušan

    2017-03-01

    Determination of adequate parameter settings for improvement of multiple quality and productivity characteristics at the same time is of great practical importance in laser cutting. This paper discusses the application of the preference selection index (PSI) method for discrete optimization of the CO2 laser cutting of stainless steel. The main motivation for application of the PSI method is that it represents an almost unexplored multi-criteria decision making (MCDM) method, and moreover, this method does not require assessment of the relative significance of the considered criteria. After reviewing and comparing the existing approaches for determination of laser cutting parameter settings, the application of the PSI method is explained in detail. Experiment realization was conducted by using Taguchi's L27 orthogonal array. Roughness of the cut surface, heat affected zone (HAZ), kerf width and material removal rate (MRR) were considered as optimization criteria. The proposed methodology is found to be very useful in a real manufacturing environment since it involves simple calculations which are easy to understand and implement. However, while applying the PSI method it was observed that it is not useful in situations where a large number of alternatives have attribute values (performances) very close to those that are preferred.
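
    A sketch of the PSI calculation as it is commonly described in the MCDM literature: the decision matrix is normalized, preference-variation values yield data-derived criterion weights (so no relative-importance judgements are needed), and an overall preference index ranks the alternatives. The decision matrix below is made up, with only the material removal rate treated as a larger-the-better criterion.

    ```python
    import numpy as np

    def psi_rank(X, beneficial):
        X = np.asarray(X, dtype=float)
        R = np.where(beneficial, X / X.max(axis=0), X.min(axis=0) / X)  # normalization
        phi = ((R - R.mean(axis=0)) ** 2).sum(axis=0)                   # preference variation
        omega = 1.0 - phi                                               # deviation values
        weights = omega / omega.sum()                                   # data-derived criterion weights
        theta = R @ weights                                             # preference selection index
        return theta, np.argsort(-theta)

    # rows = candidate parameter settings, columns = [roughness, HAZ, kerf width, MRR]
    X = [[1.8, 0.12, 0.35, 900],
         [2.4, 0.10, 0.30, 1200],
         [1.5, 0.15, 0.40, 800]]
    beneficial = np.array([False, False, False, True])
    theta, order = psi_rank(X, beneficial)
    print("PSI values:", theta, "best setting:", order[0])
    ```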

  8. A Novel Method for Remote Depth Estimation of Buried Radioactive Contamination.

    PubMed

    Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A

    2018-02-08

    Existing remote radioactive contamination depth estimation methods for buried radioactive wastes are either limited to depths of less than 2 cm or are based on empirical models that require foreknowledge of the maximum penetrable depth of the contamination. These limitations severely restrict their usefulness in some real-life subsurface contamination scenarios. Therefore, this work presents a novel remote depth estimation method based on an approximate three-dimensional linear attenuation model that exploits multiple radiation-detector measurements obtained from the surface of the material in which the contamination is buried. Simulation results showed that the proposed method is able to detect the depth of caesium-137 and cobalt-60 contamination buried up to 40 cm in both sand and concrete. Furthermore, results from experiments show that the method is able to detect the depth of caesium-137 contamination buried up to 12 cm in sand. The lower maximum depth recorded in the experiment is due to limitations of the detector and the low activity of the caesium-137 source used. Nevertheless, both results demonstrate the superior capability of the proposed method compared to existing methods.
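
    An illustrative sketch (not the authors' exact model) of recovering depth from several surface measurements: a buried point source is attenuated along the slant path through the medium and spreads with the inverse square of distance, and the depth is estimated by nonlinear least squares. The attenuation coefficient, geometry, and noise level are assumed values.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    MU = 0.17  # assumed linear attenuation coefficient of the medium (1/cm)

    def surface_counts(x, depth, amplitude):
        r = np.sqrt(depth ** 2 + x ** 2)              # slant distance from source to detector
        return amplitude * np.exp(-MU * r) / r ** 2   # attenuation plus inverse-square spreading

    x = np.linspace(-20, 20, 9)                       # detector offsets along the surface (cm)
    true_depth, true_amp = 12.0, 5.0e5
    counts = surface_counts(x, true_depth, true_amp) * (1 + 0.03 * np.random.randn(x.size))

    popt, _ = curve_fit(surface_counts, x, counts, p0=[5.0, 1.0e5])
    print("estimated depth (cm):", popt[0])
    ```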

  9. A comparison of viscoelastic damping models

    NASA Technical Reports Server (NTRS)

    Slater, Joseph C.; Belvin, W. Keith; Inman, Daniel J.

    1993-01-01

    Modern finite element methods (FEM's) enable the precise modeling of mass and stiffness properties in what were in the past overwhelmingly large and complex structures. These models allow the accurate determination of natural frequencies and mode shapes. However, adequate methods for modeling highly damped and high frequency dependent structures did not exist until recently. The most commonly used method, Modal Strain Energy, does not correctly predict complex mode shapes since it is based on the assumption that the mode shapes of a structure are real. Recently, many techniques have been developed which allow the modeling of frequency dependent damping properties of materials in a finite element compatible form. Two of these methods, the Golla-Hughes-McTavish method and the Lesieutre-Mingori method, model the frequency dependent effects by adding coordinates to the existing system thus maintaining the linearity of the model. The third model, proposed by Bagley and Torvik, is based on the Fractional Calculus method and requires fewer empirical parameters to model the frequency dependence at the expense of linearity of the governing equations. This work examines the Modal Strain Energy, Golla-Hughes-McTavish and Bagley and Torvik models and compares them to determine the plausibility of using them for modeling viscoelastic damping in large structures.

  10. A Novel Method for Remote Depth Estimation of Buried Radioactive Contamination

    PubMed Central

    2018-01-01

    Existing remote radioactive contamination depth estimation methods for buried radioactive wastes are either limited to depths of less than 2 cm or are based on empirical models that require foreknowledge of the maximum penetrable depth of the contamination. These limitations severely restrict their usefulness in some real-life subsurface contamination scenarios. Therefore, this work presents a novel remote depth estimation method based on an approximate three-dimensional linear attenuation model that exploits multiple radiation-detector measurements obtained from the surface of the material in which the contamination is buried. Simulation results showed that the proposed method is able to detect the depth of caesium-137 and cobalt-60 contamination buried up to 40 cm in both sand and concrete. Furthermore, results from experiments show that the method is able to detect the depth of caesium-137 contamination buried up to 12 cm in sand. The lower maximum depth recorded in the experiment is due to limitations of the detector and the low activity of the caesium-137 source used. Nevertheless, both results demonstrate the superior capability of the proposed method compared to existing methods. PMID:29419759

  11. A study of methods to estimate debris flow velocity

    USGS Publications Warehouse

    Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.

    2008-01-01

    Debris flow velocities are commonly back-calculated from superelevation events which require subjective estimates of radii of curvature of bends in the debris flow channel or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated difficulties associated with the use of these conventional velocity estimation methods. Radii of curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among different investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to the channel slope and the flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities. © 2008 Springer-Verlag.
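
    The conventional superelevation back-calculation referred to above follows the forced-vortex relation v = sqrt(R g Δh / b), where R is the bend radius of curvature, Δh is the rise of the flow surface across the channel width b, and g is gravitational acceleration; in practice an empirical correction factor is often applied. The values in this sketch are made up and no correction factor is included.

    ```python
    import math

    def superelevation_velocity(radius_m, superelevation_m, width_m, g=9.81):
        # v = sqrt(R * g * dh / b), without any empirical correction factor
        return math.sqrt(radius_m * g * superelevation_m / width_m)

    print(superelevation_velocity(radius_m=30.0, superelevation_m=1.2, width_m=8.0), "m/s")
    ```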

  12. 75 FR 48743 - Mandatory Reporting of Greenhouse Gases

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-11

    ...EPA is proposing to amend specific provisions in the GHG reporting rule to clarify certain provisions, to correct technical and editorial errors, and to address certain questions and issues that have arisen since promulgation. These proposed changes include providing additional information and clarity on existing requirements, allowing greater flexibility or simplified calculation methods for certain sources in a facility, amending data reporting requirements to provide additional clarity on when different types of GHG emissions need to be calculated and reported, clarifying terms and definitions in certain equations, and technical corrections.

  13. 75 FR 79091 - Mandatory Reporting of Greenhouse Gases

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-17

    ...EPA is amending specific provisions in the greenhouse gas reporting rule to clarify certain provisions, to correct technical and editorial errors, and to address certain questions and issues that have arisen since promulgation. These final changes include generally providing additional information and clarity on existing requirements, allowing greater flexibility or simplified calculation methods for certain sources, amending data reporting requirements to provide additional clarity on when different types of greenhouse gas emissions need to be calculated and reported, clarifying terms and definitions in certain equations and other technical corrections and amendments.

  14. Real-time Avatar Animation from a Single Image.

    PubMed

    Saragih, Jason M; Lucey, Simon; Cohn, Jeffrey F

    2011-01-01

    A real time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames-per-second), and requires only a single image of the avatar and user. The user's facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters.

  15. Real-time Avatar Animation from a Single Image

    PubMed Central

    Saragih, Jason M.; Lucey, Simon; Cohn, Jeffrey F.

    2014-01-01

    A real time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames-per-second), and requires only a single image of the avatar and user. The user’s facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters. PMID:24598812

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edgar, Thomas W.; Hadley, Mark D.; Manz, David O.

    This document provides the methods to secure routable control system communication in the electric sector. The approach of this document yields a long-term vision for a future of secure communication, while also providing near term steps and a roadmap. The requirements for the future secure control system environment were spelled out to provide a final target. Additionally a survey and evaluation of current protocols was used to determine if any existing technology could achieve this goal. In the end a four-step path was described that brought about increasing requirement completion and culminates in the realization of the long term vision.

  17. Electronic dental records: start taking the steps.

    PubMed

    Bergoff, Jana

    2011-01-01

    Converting paper patient record charts into their electronic counterparts (EDRs) not only has many advantages but also could become a legal requirement in the future. Several steps are key to a successful transition, including assessing the needs of the dental team and what they require as part of the implementation. Existing software and hardware must be evaluated for continued use and expansion. Proper protocols for information transfer must be established to ensure complete records while maintaining HIPAA regulations regarding patient privacy. Reduce anxiety by setting realistic deadlines and using trusted back-up methods.

  18. A common-path optical coherence tomography based electrode for structural imaging of nerves and recording of action potentials

    NASA Astrophysics Data System (ADS)

    Islam, M. Shahidul; Haque, Md. Rezuanul; Oh, Christian M.; Wang, Yan; Park, B. Hyle

    2013-03-01

    Current technologies for monitoring neural activity either use a variety of electrodes (electrical recording) or require contrast agents introduced exogenously or through genetic modification (optical imaging). Here we demonstrate an optical method for non-contact, contrast-agent-free detection of nerve activity using phase-resolved optical coherence tomography (pr-OCT). A common-path variation of pr-OCT was recently implemented, and the developed system demonstrated the capability to detect the rapid transient structural changes that accompany neural spike propagation. No averaging over multiple trials was required, indicating the capability for single-shot detection of individual impulses from a functionally stimulated Limulus optic nerve. The strength of this OCT-based optical electrode is that it is a contactless method and does not require any exogenous contrast agent. With further improvements in accuracy and sensitivity, this optical electrode will play a complementary role to existing recording technologies in the future.

  19. Requirements for guidelines systems: implementation challenges and lessons from existing software-engineering efforts

    PubMed Central

    2012-01-01

    Background A large body of work in the clinical guidelines field has identified requirements for guideline systems, but there are formidable challenges in translating such requirements into production-quality systems that can be used in routine patient care. Detailed analysis of requirements from an implementation perspective can be useful in helping define sub-requirements to the point where they are implementable. Further, additional requirements emerge as a result of such analysis. During such an analysis, study of examples of existing, software-engineering efforts in non-biomedical fields can provide useful signposts to the implementer of a clinical guideline system. Methods In addition to requirements described by guideline-system authors, comparative reviews of such systems, and publications discussing information needs for guideline systems and clinical decision support systems in general, we have incorporated additional requirements related to production-system robustness and functionality from publications in the business workflow domain, in addition to drawing on our own experience in the development of the Proteus guideline system (http://proteme.org). Results The sub-requirements are discussed by conveniently grouping them into the categories used by the review of Isern and Moreno 2008. We cite previous work under each category and then provide sub-requirements under each category, and provide example of similar work in software-engineering efforts that have addressed a similar problem in a non-biomedical context. Conclusions When analyzing requirements from the implementation viewpoint, knowledge of successes and failures in related software-engineering efforts can guide implementers in the choice of effective design and development strategies. PMID:22405400

  20. A Fast and Effective Pyridine-Free Method for the Determination of Hydroxyl Value of Hydroxyl-Terminated Polybutadiene and Other Hydroxy Compounds

    NASA Astrophysics Data System (ADS)

    Alex, Ancy Smitha; Kumar, Vijendra; Sekkar, V.; Bandyopadhyay, G. G.

    2017-07-01

    Hydroxyl-terminated polybutadiene (HTPB) is the workhorse propellant binder for launch vehicle and missile applications. Accurate determination of the hydroxyl value (OHV) of HTPB is crucial for tailoring the ultimate mechanical and ballistic properties of the propellant derived. This article describes a fast and effective methodology free of pyridine based on acetic anhydride, N-methyl imidazole, and toluene for the determination of OHV of nonpolar polymers like HTPB and other hydroxyl compounds. This method gives accurate and reproducible results comparable to standard methods and is superior to existing methods in terms of user friendliness, efficiency, and time requirement.
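
    The arithmetic behind an acetylation/back-titration hydroxyl value determination is conventional and is sketched below; the reagent system described in the paper differs, but the calculation of OHV in mg KOH per gram of sample from blank and sample titres is the standard one, and the titres, normality, and sample mass shown are placeholder numbers.

    ```python
    def hydroxyl_value(blank_titre_ml, sample_titre_ml, koh_normality, sample_mass_g):
        # OHV in mg KOH per gram of sample; 56.1 is the molar mass of KOH (g/mol)
        return (blank_titre_ml - sample_titre_ml) * koh_normality * 56.1 / sample_mass_g

    print(hydroxyl_value(blank_titre_ml=25.0, sample_titre_ml=21.4,
                         koh_normality=0.5, sample_mass_g=2.0), "mg KOH/g")
    ```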

  1. Visual and tactile interfaces for bi-directional human robot communication

    NASA Astrophysics Data System (ADS)

    Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin

    2013-05-01

    Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC), using auditory, visual, and tactile modalities, facilitates achieving this goal through redundancy and levels of communication superior to single-mode interaction. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMUs) enable classification of arm and hand gestures for communication with a robot without the line-of-sight requirement of computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots requires that the robots be able to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an otherwise unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used here to deliver equivalents of the visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers, measure the classification accuracy of visual signal interfaces, and provide an integration example involving two robotic platforms.

  2. Object-oriented Approach to High-level Network Monitoring and Management

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    2000-01-01

    An absolute prerequisite for the management of large computer networks is the ability to measure their performance. Unless we monitor a system, we cannot hope to manage and control its performance. In this paper, we describe a network monitoring system that we are currently designing and implementing, investigating methods to build high-level monitoring systems that are built on top of existing monitoring tools. Due to the heterogeneous nature of the underlying systems at NASA Langley Research Center, we use an object-oriented approach for the design: first, we use UML (Unified Modeling Language) to model users' requirements; second, we identify the existing capabilities of the underlying monitoring system; third, we try to map the former with the latter. Keeping in mind the complexity of the task and the required flexibility for future changes, we use an object-oriented design methodology. The system is built using the APIs offered by the HP OpenView system.

  3. Leveraging standards to support patient-centric interdisciplinary plans of care.

    PubMed

    Dykes, Patricia C; DaDamio, Rebecca R; Goldsmith, Denise; Kim, Hyeon-eui; Ohashi, Kumiko; Saba, Virginia K

    2011-01-01

    As health care systems and providers move towards meaningful use of electronic health records, the once distant vision of collaborative patient-centric, interdisciplinary plans of care, generated and updated across organizations and levels of care, may soon become a reality. Effective care planning is included in the proposed Stages 2-3 Meaningful Use quality measures. To facilitate interoperability, standardization of plan of care messaging, content, information and terminology models are needed. This degree of standardization requires local and national coordination. The purpose of this paper is to review some existing standards that may be leveraged to support development of interdisciplinary patient-centric plans of care. Standards are then applied to a use case to demonstrate one method for achieving patient-centric and interoperable interdisciplinary plan of care documentation. Our pilot work suggests that existing standards provide a foundation for adoption and implementation of patient-centric plans of care that are consistent with federal requirements.

  4. Grain formation in astronomical systems: A critical review of condensation processes

    NASA Technical Reports Server (NTRS)

    Donn, B.

    1978-01-01

    An analysis is presented of the assumptions and applicability of three theoretical methods for calculating condensation in cosmic clouds where no pre-existing nuclei exist. The three procedures are: thermodynamic equilibrium calculations, nucleation theory, and a kinetic treatment which takes into account the characteristics of each individual collision. Thermodynamic equilibrium calculations provide detailed results on the temperature and composition of the condensate, provided the system attains equilibrium. Because of the cosmic abundance mixture of elements, large supersaturations in some cases, and low pressures, equilibrium is not expected in astronomical clouds. Nucleation theory, a combination of thermodynamics and kinetics, has the limitations of each scheme. Kinetics, not requiring equilibrium, avoids nearly all of the thermodynamic difficulties but requires detailed knowledge of many reactions that thermodynamics avoids. It appears to be the only valid way to treat grain formation in space. A review of experimental studies is given.

  5. Virtual Design Method for Controlled Failure in Foldcore Sandwich Panels

    NASA Astrophysics Data System (ADS)

    Sturm, Ralf; Fischer, S.

    2015-12-01

    For certification, novel fuselage concepts have to prove equivalent crashworthiness standards compared to the existing metal reference design. Due to the brittle failure behaviour of CFRP, this requirement can only be fulfilled by controlled progressive crash kinematics. Experiments showed that the failure of a twin-walled fuselage panel can be controlled by a local modification of the core through-thickness compression strength. For folded cores, the required change in core properties can be integrated by a modification of the fold pattern. However, the complexity of folded cores requires a virtual design methodology for tailoring the fold pattern according to all static and crash-relevant requirements. In this context, a foldcore micromodel simulation method is presented to identify the structural response of twin-walled fuselage panels with folded cores under crash-relevant loading conditions. The simulations showed that a high degree of correlation is required before simulation can replace expensive testing. In the presented studies, the necessary correlation quality could only be obtained by including imperfections of the core material in the micromodel simulation approach.

  6. 40 CFR 165.87 - Design and capacity requirements for existing structures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Design and capacity requirements for... Structures § 165.87 Design and capacity requirements for existing structures. (a) For all existing... concrete or other rigid material capable of withstanding the full hydrostatic head, load and impact of any...

  7. A unified framework for unraveling the functional interaction structure of a biomolecular network based on stimulus-response experimental data.

    PubMed

    Cho, Kwang-Hyun; Choo, Sang-Mok; Wellstead, Peter; Wolkenhauer, Olaf

    2005-08-15

    We propose a unified framework for the identification of functional interaction structures of biomolecular networks in a way that leads to a new experimental design procedure. In developing our approach, we have built upon previous work. Thus we begin by pointing out some of the restrictions associated with existing structure identification methods and point out how these restrictions may be eased. In particular, existing methods use specific forms of experimental algebraic equations with which to identify the functional interaction structure of a biomolecular network. In our work, we employ an extended form of these experimental algebraic equations which, while retaining their merits, also overcome some of their disadvantages. Experimental data are required in order to estimate the coefficients of the experimental algebraic equation set associated with the structure identification task. However, experimentalists are rarely provided with guidance on which parameters to perturb, and to what extent, to perturb them. When a model of network dynamics is required then there is also the vexed question of sample rate and sample time selection to be resolved. Supplying some answers to these questions is the main motivation of this paper. The approach is based on stationary and/or temporal data obtained from parameter perturbations, and unifies the previous approaches of Kholodenko et al. (PNAS 99 (2002) 12841-12846) and Sontag et al. (Bioinformatics 20 (2004) 1877-1886). By way of demonstration, we apply our unified approach to a network model which cannot be properly identified by existing methods. Finally, we propose an experiment design methodology, which is not limited by the amount of parameter perturbations, and illustrate its use with an in numero example.

  8. An efficient genome-wide association test for multivariate phenotypes based on the Fisher combination function.

    PubMed

    Yang, James J; Li, Jia; Williams, L Keoki; Buu, Anne

    2016-01-05

    In genome-wide association studies (GWAS) for complex diseases, the association between a SNP and each phenotype is usually weak. Combining multiple related phenotypic traits can increase the power of gene search and thus is a practically important area that requires methodology work. This study provides a comprehensive review of existing methods for conducting GWAS on complex diseases with multiple phenotypes including the multivariate analysis of variance (MANOVA), the principal component analysis (PCA), the generalizing estimating equations (GEE), the trait-based association test involving the extended Simes procedure (TATES), and the classical Fisher combination test. We propose a new method that relaxes the unrealistic independence assumption of the classical Fisher combination test and is computationally efficient. To demonstrate applications of the proposed method, we also present the results of statistical analysis on the Study of Addiction: Genetics and Environment (SAGE) data. Our simulation study shows that the proposed method has higher power than existing methods while controlling for the type I error rate. The GEE and the classical Fisher combination test, on the other hand, do not control the type I error rate and thus are not recommended. In general, the power of the competing methods decreases as the correlation between phenotypes increases. All the methods tend to have lower power when the multivariate phenotypes come from long tailed distributions. The real data analysis also demonstrates that the proposed method allows us to compare the marginal results with the multivariate results and specify which SNPs are specific to a particular phenotype or contribute to the common construct. The proposed method outperforms existing methods in most settings and also has great applications in GWAS on complex diseases with multiple phenotypes such as the substance abuse disorders.
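
    The classical Fisher combination test mentioned above combines per-phenotype p-values as X = -2 Σ ln(p_i) and refers X to a chi-squared distribution with 2k degrees of freedom; this assumes the p-values are independent, which is the assumption the proposed method relaxes. A minimal sketch with made-up p-values:

    ```python
    import numpy as np
    from scipy.stats import chi2

    def fisher_combination(p_values):
        p = np.asarray(p_values, dtype=float)
        stat = -2.0 * np.sum(np.log(p))            # Fisher combination statistic
        return stat, chi2.sf(stat, df=2 * p.size)  # combined p-value under independence

    print(fisher_combination([0.04, 0.20, 0.01]))
    ```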

  9. 42 CFR 457.440 - Existing comprehensive State-based coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Existing comprehensive State-based coverage. 457... STATES State Plan Requirements: Coverage and Benefits § 457.440 Existing comprehensive State-based coverage. (a) General requirements. Existing comprehensive State-based health benefits is coverage that— (1...

  10. Localization of small arms fire using acoustic measurements of muzzle blast and/or ballistic shock wave arrivals.

    PubMed

    Lo, Kam W; Ferguson, Brian G

    2012-11-01

    The accurate localization of small arms fire using fixed acoustic sensors is considered. First, the conventional wavefront-curvature passive ranging method, which requires only differential time-of-arrival (DTOA) measurements of the muzzle blast wave to estimate the source position, is modified to account for sensor positions that are not strictly collinear (bowed array). Second, an existing single-sensor-node ballistic model-based localization method, which requires both DTOA and differential angle-of-arrival (DAOA) measurements of the muzzle blast wave and ballistic shock wave, is improved by replacing the basic external ballistics model (which describes the bullet's deceleration along its trajectory) with a more rigorous model and replacing the look-up table ranging procedure with a nonlinear (or polynomial) equation-based ranging procedure. Third, a new multiple-sensor-node ballistic model-based localization method, which requires only DTOA measurements of the ballistic shock wave to localize the point of fire, is formulated. The first method is applicable to situations when only the muzzle blast wave is received, whereas the third method applies when only the ballistic shock wave is received. The effectiveness of each of these methods is verified using an extensive set of real data recorded during a 7 day field experiment.
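
    A generic sketch of passive ranging from differential times of arrival of the muzzle blast at a small fixed array (synthetic geometry and timings; this is not the specific bowed-array correction or the ballistic models developed in the paper): the source position is found by nonlinear least squares.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    C = 343.0  # assumed speed of sound (m/s)
    sensors = np.array([[0.0, 0.0], [1.5, 0.1], [3.0, -0.1], [4.5, 0.2]])  # slightly bowed array

    def dtoa_residuals(p, sensors, dtoas):
        d = np.linalg.norm(sensors - p, axis=1)
        return (d[1:] - d[0]) - C * dtoas  # modelled minus measured DTOA (reference = sensor 0)

    true_source = np.array([60.0, 25.0])
    d = np.linalg.norm(sensors - true_source, axis=1)
    dtoas = (d[1:] - d[0]) / C             # noise-free synthetic measurements

    fit = least_squares(dtoa_residuals, x0=np.array([10.0, 10.0]), args=(sensors, dtoas))
    print("estimated source position:", fit.x)
    ```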

  11. Fast and flexible selection with a single switch.

    PubMed

    Broderick, Tamara; MacKay, David J C

    2009-10-22

    Selection methods that require only a single-switch input, such as a button click or blink, are potentially useful for individuals with motor impairments, mobile technology users, and individuals wishing to transmit information securely. We present a single-switch selection method, "Nomon," that is general and efficient. Existing single-switch selection methods require selectable options to be arranged in ways that limit potential applications. By contrast, traditional operating systems, web browsers, and free-form applications (such as drawing) place options at arbitrary points on the screen. Nomon, however, has the flexibility to select any point on a screen. Nomon adapts automatically to an individual's clicking ability; it allows a person who clicks precisely to make a selection quickly and allows a person who clicks imprecisely more time to make a selection without error. Nomon reaps gains in information rate by allowing the specification of beliefs (priors) about option selection probabilities and by avoiding tree-based selection schemes in favor of direct (posterior) inference. We have developed both a Nomon-based writing application and a drawing application. To evaluate Nomon's performance, we compared the writing application with a popular existing method for single-switch writing (row-column scanning). Novice users wrote 35% faster with the Nomon interface than with the scanning interface. An experienced user (author TB, with 10 hours practice) wrote at speeds of 9.3 words per minute with Nomon, using 1.2 clicks per character and making no errors in the final text.

  12. Determining a carbohydrate profile for Hansenula polymorpha

    NASA Technical Reports Server (NTRS)

    Petersen, G. R.

    1985-01-01

    The determination of the levels of carbohydrates in the yeast Hansenula polymorpha required the development of new analytical procedures. Existing fractionation and analytical methods were adapted to deal with the problems involved with the lysis of whole cells. Using these new procedures, the complete carbohydrate profiles of H. polymorpha and selected mutant strains were determined and shown to correlate favourably with previously published results.

  13. Some Current Trends Toward Solving Curriculum Problems in English Education--K-12.

    ERIC Educational Resources Information Center

    Armstrong, Leon R.; And Others

    This guide is intended to assist curriculum committees and instructors by defining some existing problems in language arts curricula (K-12) and by offering insights into a few major curricula in use around the country. Sections deal with (1) reasons for and methods of effecting curriculum change, (2) requirements to be made of students, (3) the…

  14. Cooling devices and methods for use with electric submersible pumps

    DOEpatents

    Jankowski, Todd A; Hill, Dallas D

    2014-12-02

    Cooling devices for use with electric submersible pump motors include a refrigerator attached to the end of the electric submersible pump motor with the evaporator heat exchanger accepting all or a portion of the heat load from the motor. The cooling device can be a self-contained bolt-on unit, so that minimal design changes to existing motors are required.

  15. Cooling devices and methods for use with electric submersible pumps

    DOEpatents

    Jankowski, Todd A.; Hill, Dallas D.

    2016-07-19

    Cooling devices for use with electric submersible pump motors include a refrigerator attached to the end of the electric submersible pump motor with the evaporator heat exchanger accepting all or a portion of the heat load from the motor. The cooling device can be a self-contained bolt-on unit, so that minimal design changes to existing motors are required.

  16. A Product Analysis Method and Its Staging to Develop Redesign Competences

    ERIC Educational Resources Information Center

    Hansen, Claus Thorp; Lenau, Torben Anker

    2013-01-01

    Most product development work in industrial practice is incremental, i.e., the company has had a product in production and on the market for some time, and now time has come to design an upgraded variant. This type of redesign project requires that the engineering designers have competences to carry through an analysis of the existing product…

  17. Ready or Not? Assessing Change Readiness for Implementation of the Geospatial Technology Competency Model[c

    ERIC Educational Resources Information Center

    Annulis, Heather M.; Gaudet, Cyndi H.

    2007-01-01

    A shortage of a qualified and skilled workforce exists to meet the demands of the geospatial industry (NASA, 2002). Solving today's workforce issues requires new and innovative methods and techniques for this high growth, high technology industry. One tool to support workforce development is a competency model which can be used to build a…

  18. Predicting internal red oak (Quercus rubra) log defect features using surface defect measurements

    Treesearch

    R. Edward Thomas

    2013-01-01

    Determining the defects located within a log is crucial to understanding the tree/log resource for efficient processing. However, existing means of doing this non-destructively requires the use of expensive x-ray/CT (computerized tomography), MRI (magnetic resonance imaging), or microwave technology. These methods do not lend themselves to fast, efficient, and cost-...

  19. Predicting internal yellow-poplar log defect features using surface indicators

    Treesearch

    R. Edward Thomas

    2008-01-01

    Determining the defects that are located within the log is crucial to understanding the tree/log resource for efficient processing. However, existing means of doing this non-destructively requires the use of expensive X-ray/CT, MRI, or microwave technology. These methods do not lend themselves to fast, efficient, and cost-effective analysis of logs and tree stems in...

  20. 40 CFR 63.344 - Performance test requirements and test methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... pressure as follows: (i) Locate a velocity traverse port in a section of straight duct that connects the hooding on the plating tank or tanks with the control device. The port shall be located as close to the..., appendix A). If 2.5 diameters of straight duct work does not exist, locate the port 0.8 of the duct...

  1. 40 CFR 63.344 - Performance test requirements and test methods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... pressure as follows: (i) Locate a velocity traverse port in a section of straight duct that connects the hooding on the plating tank or tanks with the control device. The port shall be located as close to the..., appendix A). If 2.5 diameters of straight duct work does not exist, locate the port 0.8 of the duct...

  2. Minimal-Approximation-Based Decentralized Backstepping Control of Interconnected Time-Delay Systems.

    PubMed

    Choi, Yun Ho; Yoo, Sung Jin

    2016-12-01

    A decentralized adaptive backstepping control design using minimal function approximators is proposed for nonlinear large-scale systems with unknown unmatched time-varying delayed interactions and unknown backlash-like hysteresis nonlinearities. Compared with existing decentralized backstepping methods, the contribution of this paper is to design a simple local control law for each subsystem, consisting of an actual control with one adaptive function approximator, without requiring the use of multiple function approximators and regardless of the order of each subsystem. The virtual controllers for each subsystem are used as intermediate signals for designing a local actual control at the last step. For each subsystem, a lumped unknown function including the unknown nonlinear terms and the hysteresis nonlinearities is derived at the last step and is estimated by one function approximator. Thus, the proposed approach only uses one function approximator to implement each local controller, while existing decentralized backstepping control methods require the number of function approximators equal to the order of each subsystem and a calculation of virtual controllers to implement each local actual controller. The stability of the total controlled closed-loop system is analyzed using the Lyapunov stability theorem.

  3. Improving Quality and Reducing Waste in Allied Health Workplace Education Programs: A Pragmatic Operational Education Framework Approach.

    PubMed

    Golder, Janet; Farlie, Melanie K; Sevenhuysen, Samantha

    2016-01-01

    Efficient utilisation of education resources is required for the delivery of effective learning opportunities for allied health professionals. This study aimed to develop an education framework to support delivery of high-quality education within existing education resources. This study was conducted in a large metropolitan health service. Homogeneous and purposive sampling methods were utilised in Phase 1 (n=43) and 2 (n=14) consultation stages. Participants included 25 allied health professionals, 22 managers, 1 educator, and 3 executives. Field notes taken during 43 semi-structured interviews and 4 focus groups were member-checked, and semantic thematic analysis methods were utilised. Framework design was informed by existing published framework development guides. The framework model contains governance, planning, delivery, and evaluation and research elements and identifies performance indicators, practice examples, and support tools for a range of stakeholders. Themes integrated into framework content include improving quality of education and training provided and delivery efficiency, greater understanding of education role requirements, and workforce support for education-specific knowledge and skill development. This framework supports efficient delivery of allied health workforce education and training to the highest standard, whilst pragmatically considering current allied health education workforce demands.

  4. Locally refined block-centred finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are: (a) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (b) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for the sensitivities are compared for the three methods, and the effect of the accuracy of the sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  5. Locally refined block-centered finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling and predictions

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are (1) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed and (2) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for the sensitivities are compared for the three methods, and the effect of the accuracy of the sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  6. Predicting Human Protein Subcellular Locations by the Ensemble of Multiple Predictors via Protein-Protein Interaction Network with Edge Clustering Coefficients

    PubMed Central

    Du, Pufeng; Wang, Lusheng

    2014-01-01

    One of the fundamental tasks in biology is to identify the functions of all proteins to reveal the primary machinery of a cell. Knowledge of the subcellular locations of proteins provides key hints to reveal their functions and to understand the intricate pathways that regulate biological processes at the cellular level. Protein subcellular location prediction has been extensively studied in the past two decades, and many methods have been developed based on protein primary sequences as well as on protein-protein interaction networks. In this paper, we propose to use the protein-protein interaction network as an infrastructure to integrate existing sequence-based predictors. When predicting the subcellular locations of a given protein, not only the protein itself but also all its interacting partners are considered. Unlike existing methods, our method requires neither comprehensive knowledge of the protein-protein interaction network nor experimentally annotated subcellular locations for most proteins in the network. Moreover, our method can be used as a framework to integrate multiple predictors. Our method achieved an absolute-true rate of 56% on the human proteome, which is higher than that of state-of-the-art methods. PMID:24466278

  7. Use of lidar point cloud data to support estimation of residual trace metals stored in mine chat piles in the Old Lead Belt of southeastern, Missouri

    USGS Publications Warehouse

    Witt, Emitt C.

    2016-01-01

    Historic lead and zinc (Pb-Zn) mining in southeast Missouri's "Old Lead Belt" has left large chat piles dominating the landscape where prior to 1972 mining was the major industry of the region. As a result of variable beneficiation methods over the history of mining activity, these piles remain with large quantities of unrecovered Pb and Zn and to a lesser extent cadmium (Cd). Quantifying the residual content of trace metals in chat piles is problematic because of the extensive field effort that must go into collecting elevation points for volumetric analysis. This investigation demonstrates that publicly available lidar point data from the U.S. Geological Survey 3D Elevation Program (3DEP) can be used to effectively calculate chat pile volumes as a method of more accurately estimating the total residual trace metal content in these mining wastes. Five chat piles located in St. Francois County, Missouri, were quantified for residual trace metal content. Utilizing lidar point cloud data collected in 2011 and existing trace metal concentration data obtained during remedial investigations, residual content of these chat piles ranged from 9247 to 88,579 metric tons Pb, 1925 to 52,306 metric tons Zn, and 51 to 1107 metric tons Cd. Development of new beneficiation methods for recovering these constituents from chat piles would need to achieve current Federal soil screening standards. To achieve this for the five chat piles investigated, 42 to 72% of residual Pb would require mitigation to the 1200 mg/kg Federal non-playground standard, 88 to 98% of residual Zn would require mitigation to the Ecological Soil Screening level (ESSL) for plant life, and 70% to 98% of Cd would require mitigation to achieve the ESSL. Achieving these goals through an existing or future beneficiation method(s) would remediate chat to a trace metal concentration level that would support its use as a safe agricultural soil amendment.
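
    The tonnage arithmetic implied above is straightforward: a lidar-derived pile volume multiplied by an assumed bulk density gives the pile mass, and the measured metal concentration in mg/kg (equivalently, grams per metric ton) gives the residual metal mass. All numbers in this sketch are placeholders rather than values from the study.

    ```python
    def residual_metal_tonnes(volume_m3, bulk_density_t_per_m3, concentration_mg_per_kg):
        pile_mass_t = volume_m3 * bulk_density_t_per_m3
        return pile_mass_t * concentration_mg_per_kg * 1e-6   # mg/kg == g/tonne -> tonnes

    print(residual_metal_tonnes(volume_m3=2.5e6, bulk_density_t_per_m3=1.6,
                                concentration_mg_per_kg=3500), "t of metal")
    ```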

  8. Spatial modelling of disease using data- and knowledge-driven approaches.

    PubMed

    Stevens, Kim B; Pfeiffer, Dirk U

    2011-09-01

    The purpose of spatial modelling in animal and public health is three-fold: describing existing spatial patterns of risk, attempting to understand the biological mechanisms that lead to disease occurrence and predicting what will happen in the medium to long-term future (temporal prediction) or in different geographical areas (spatial prediction). Traditional methods for temporal and spatial predictions include general and generalized linear models (GLM), generalized additive models (GAM) and Bayesian estimation methods. However, such models require both disease presence and absence data which are not always easy to obtain. Novel spatial modelling methods such as maximum entropy (MAXENT) and the genetic algorithm for rule set production (GARP) require only disease presence data and have been used extensively in the fields of ecology and conservation, to model species distribution and habitat suitability. Other methods, such as multicriteria decision analysis (MCDA), use knowledge of the causal factors of disease occurrence to identify areas potentially suitable for disease. In addition to their less restrictive data requirements, some of these novel methods have been shown to outperform traditional statistical methods in predictive ability (Elith et al., 2006). This review paper provides details of some of these novel methods for mapping disease distribution, highlights their advantages and limitations, and identifies studies which have used the methods to model various aspects of disease distribution. Copyright © 2011. Published by Elsevier Ltd.

  9. Interactive searching of facial image databases

    NASA Astrophysics Data System (ADS)

    Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean

    1995-09-01

    A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness' verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm which can predict human descriptive values. Alternatives to human derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human derived descriptors, a search method which does not require the entry of human descriptors is required. A genetic search algorithm is being tested for such a purpose.

  10. Formal methods demonstration project for space applications

    NASA Technical Reports Server (NTRS)

    Divito, Ben L.

    1995-01-01

    The Space Shuttle program is cooperating in a pilot project to apply formal methods to live requirements analysis activities. As one of the larger ongoing shuttle Change Requests (CR's), the Global Positioning System (GPS) CR involves a significant upgrade to the Shuttle's navigation capability. Shuttles are to be outfitted with GPS receivers and the primary avionics software will be enhanced to accept GPS-provided positions and integrate them into navigation calculations. Prior to implementing the CR, requirements analysts at Loral Space Information Systems, the Shuttle software contractor, must scrutinize the CR to identify and resolve any requirements issues. We describe an ongoing task of the Formal Methods Demonstration Project for Space Applications whose goal is to find an effective way to use formal methods in the GPS CR requirements analysis phase. This phase is currently under way and a small team from NASA Langley, ViGYAN Inc. and Loral is now engaged in this task. Background on the GPS CR is provided and an overview of the hardware/software architecture is presented. We outline the approach being taken to formalize the requirements, only a subset of which is being attempted. The approach features the use of the PVS specification language to model 'principal functions', which are major units of Shuttle software. Conventional state machine techniques form the basis of our approach. Given this background, we present interim results based on a snapshot of work in progress. Samples of requirements specifications rendered in PVS are offered as illustration. We walk through a specification sketch for the principal function known as GPS Receiver State processing. Results to date are summarized and feedback from Loral requirements analysts is highlighted. Preliminary data is shown comparing issues detected by the formal methods team versus those detected using existing requirements analysis methods. We conclude by discussing our plan to complete the remaining activities of this task.

  11. An evaluation of computerized adaptive testing for general psychological distress: combining GHQ-12 and Affectometer-2 in an item bank for public mental health research.

    PubMed

    Stochl, Jan; Böhnke, Jan R; Pickett, Kate E; Croudace, Tim J

    2016-05-20

    Recent developments in psychometric modeling and technology allow pooling well-validated items from existing instruments into larger item banks and their deployment through methods of computerized adaptive testing (CAT). Use of item response theory-based bifactor methods and integrative data analysis overcomes barriers in cross-instrument comparison. This paper presents the joint calibration of an item bank for researchers keen to investigate population variations in general psychological distress (GPD). Multidimensional item response theory was used on existing health survey data from the Scottish Health Education Population Survey (n = 766) to calibrate an item bank consisting of pooled items from the short common mental disorder screen (GHQ-12) and the Affectometer-2 (a measure of "general happiness"). Computer simulation was used to evaluate usefulness and efficacy of its adaptive administration. A bifactor model capturing variation across a continuum of population distress (while controlling for artefacts due to item wording) was supported. The numbers of items for different required reliabilities in adaptive administration demonstrated promising efficacy of the proposed item bank. Psychometric modeling of the common dimension captured by more than one instrument offers the potential of adaptive testing for GPD using individually sequenced combinations of existing survey items. The potential for linking other item sets with alternative candidate measures of positive mental health is discussed since an optimal item bank may require even more items than these.
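
    Adaptive administration of an item bank typically selects, at each step, the unanswered item that is most informative at the respondent's current ability estimate. The sketch below uses a plain 2PL item response model as a simplified stand-in for the bifactor model calibrated in the paper; the item parameters and function names are invented for illustration.

    ```python
    import math

    def p_correct(theta, a, b):
        """2PL response probability (simplified stand-in for the bifactor model)."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    def item_information(theta, a, b):
        p = p_correct(theta, a, b)
        return a * a * p * (1.0 - p)

    def next_item(theta, item_bank, administered):
        """Pick the unadministered item with maximum Fisher information at theta."""
        candidates = [i for i in range(len(item_bank)) if i not in administered]
        return max(candidates, key=lambda i: item_information(theta, *item_bank[i]))

    # item bank entries are (discrimination a, difficulty b); values are made up
    bank = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7), (1.0, 1.2)]
    print(next_item(theta=0.4, item_bank=bank, administered={0}))  # -> 2
    ```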

  12. Shrinkage regression-based methods for microarray missing value imputation.

    PubMed

    Wang, Hsiuying; Chiu, Chia-Chun; Wu, Yi-Ching; Wu, Wei-Sheng

    2013-01-01

    Missing values commonly occur in microarray data, which usually contain more than 5% missing values, with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods in many testing microarray datasets. To further improve the performance of the regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. Our methods then incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation in six testing microarray datasets than the existing regression-based methods do. Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods.
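
    A minimal sketch of the regression-based imputation idea described above: select the genes most correlated (Pearson) with the target gene, fit least squares on the observed samples, shrink the coefficients, and predict the missing entries. The fixed `shrink` factor is an ad hoc stand-in for the paper's shrinkage estimator, and all names and parameters are assumptions for illustration.

    ```python
    import numpy as np

    def impute_missing(expr, gene, missing_cols, k=10, shrink=0.9):
        """Impute entries of `expr[gene, missing_cols]` from similar genes.

        expr : genes x samples matrix (reference rows assumed complete)
        k    : number of most-correlated reference genes to use
        """
        observed = [c for c in range(expr.shape[1]) if c not in missing_cols]
        target = expr[gene, observed]

        # rank candidate genes by absolute Pearson correlation with the target
        corrs = [(abs(np.corrcoef(expr[g, observed], target)[0, 1]), g)
                 for g in range(expr.shape[0]) if g != gene]
        refs = [g for _, g in sorted(corrs, reverse=True)[:k]]

        # ordinary least squares on observed columns, then shrink the coefficients
        X = expr[refs][:, observed].T
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        beta *= shrink  # simple stand-in for the paper's shrinkage adjustment

        return expr[refs][:, missing_cols].T @ beta
    ```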

  13. A Least-Squares Finite Element Method for Electromagnetic Scattering Problems

    NASA Technical Reports Server (NTRS)

    Wu, Jie; Jiang, Bo-nan

    1996-01-01

    The least-squares finite element method (LSFEM) is applied to electromagnetic scattering and radar cross section (RCS) calculations. In contrast to most existing numerical approaches, in which divergence-free constraints are omitted, the LSFEM directly incorporates two divergence equations in the discretization process. The importance of including the divergence equations is demonstrated by showing that otherwise spurious solutions with large divergence occur near the scatterers. The LSFEM is based on unstructured grids and possesses full flexibility in handling complex geometry and local refinement. Moreover, the LSFEM does not require any special handling, such as upwinding, staggered grids, artificial dissipation, flux-differencing, etc. Implicit time discretization is used and the scheme is unconditionally stable. By using a matrix-free iterative method, the computational cost and memory requirement for the present scheme are competitive with other approaches. The accuracy of the LSFEM is verified by several benchmark test problems.

  14. Characterization of a Laser-Generated Perturbation in High-Speed Flow for Receptivity Studies

    NASA Technical Reports Server (NTRS)

    Chou, Amanda; Schneider, Steven P.; Kegerise, Michael A.

    2014-01-01

    A better understanding of receptivity can contribute to the development of an amplitude-based method of transition prediction. This type of prediction model would incorporate more physics than the semi-empirical methods, which are widely used. The experimental study of receptivity requires a characterization of the external disturbances and a study of their effect on the boundary layer instabilities. Characterization measurements for a laser-generated perturbation were made in two different wind tunnels. These measurements were made with hot-wire probes, optical techniques, and pressure transducer probes. Existing methods all have their limitations, so better measurements will require the development of new instrumentation. Nevertheless, the freestream laser-generated perturbation has been shown to be about 6 mm in diameter at a static density of about 0.045 kg/cubic m. The amplitude of the perturbation is large, which may be unsuitable for the study of linear growth.

  15. Analytical Method to Evaluate Failure Potential During High-Risk Component Development

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Stone, Robert B.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Communicating failure mode information during design and manufacturing is a crucial task for failure prevention. Most processes use Failure Modes and Effects types of analyses, as well as prior knowledge and experience, to determine the potential modes of failures a product might encounter during its lifetime. When new products are being considered and designed, this knowledge and information is expanded upon to help designers extrapolate based on their similarity with existing products and the potential design tradeoffs. This paper makes use of similarities and tradeoffs that exist between different failure modes based on the functionality of each component/product. In this light, a function-failure method is developed to help the design of new products with solutions for functions that eliminate or reduce the potential of a failure mode. The method is applied to a simplified rotating machinery example in this paper, and is proposed as a means to account for helicopter failure modes during design and production, addressing stringent safety and performance requirements for NASA applications.

  16. Design optimization of piezoresistive cantilevers for force sensing in air and water

    PubMed Central

    Doll, Joseph C.; Park, Sung-Jin; Pruitt, Beth L.

    2009-01-01

    Piezoresistive cantilevers fabricated from doped silicon or metal films are commonly used for force, topography, and chemical sensing at the micro- and macroscales. Proper design is required to optimize the achievable resolution by maximizing sensitivity while simultaneously minimizing the integrated noise over the bandwidth of interest. Existing analytical design methods are insufficient for modeling complex dopant profiles, design constraints, and nonlinear phenomena such as damping in fluid. Here we present an optimization method based on an analytical piezoresistive cantilever model. We use an existing iterative optimizer to minimize a performance goal, such as minimum detectable force. The design tool is available as open source software. Optimal cantilever design and performance are found to strongly depend on the measurement bandwidth and the constraints applied. We discuss results for silicon piezoresistors fabricated by epitaxy and diffusion, but the method can be applied to any dopant profile or material which can be modeled in a similar fashion or extended to other microelectromechanical systems. PMID:19865512

  17. Image inpainting and super-resolution using non-local recursive deep convolutional network with skip connections

    NASA Astrophysics Data System (ADS)

    Liu, Miaofeng

    2017-07-01

    In recent years, deep convolutional neural networks have come into use for image inpainting and super-resolution in many fields. Unlike most former methods, which require knowing the local information of corrupted pixels beforehand, we propose a 20-layer fully convolutional network that learns an end-to-end mapping from a dataset of damaged/ground-truth subimage pairs, realizing non-local blind inpainting and super-resolution. Because images with severe corruption, or inpainting applied to a low-resolution image, often cause existing approaches to perform poorly, we also share parameters within local groups of layers to achieve spatial recursion and enlarge the receptive field. To ease the difficulty of training this deep neural network, skip connections between symmetric convolutional layers are designed. Experimental results show that the proposed method outperforms state-of-the-art methods under diverse corruption and low-resolution conditions, and it works well when performing super-resolution and image inpainting simultaneously.
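
    A minimal sketch of the structural idea: a fully convolutional network with skip connections between symmetric layers plus a global residual toward the input. It is written in PyTorch (an assumption; the paper does not state a framework) and is far shallower than the 20-layer network described above.

    ```python
    import torch
    import torch.nn as nn

    class SkipConvNet(nn.Module):
        """Tiny encoder/decoder with symmetric skip connections (illustrative)."""
        def __init__(self, channels=64):
            super().__init__()
            self.enc1 = nn.Conv2d(3, channels, 3, padding=1)
            self.enc2 = nn.Conv2d(channels, channels, 3, padding=1)
            self.dec2 = nn.Conv2d(channels, channels, 3, padding=1)
            self.dec1 = nn.Conv2d(channels, 3, 3, padding=1)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            e1 = self.relu(self.enc1(x))
            e2 = self.relu(self.enc2(e1))
            d2 = self.relu(self.dec2(e2) + e1)   # skip between symmetric layers
            return self.dec1(d2) + x             # global residual toward the input

    net = SkipConvNet()
    restored = net(torch.randn(1, 3, 64, 64))    # corrupted/low-resolution input
    print(restored.shape)                         # torch.Size([1, 3, 64, 64])
    ```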

  18. Stability basin estimates fall risk from observed kinematics, demonstrated on the Sit-to-Stand task.

    PubMed

    Shia, Victor; Moore, Talia Yuki; Holmes, Patrick; Bajcsy, Ruzena; Vasudevan, Ram

    2018-04-27

    The ability to quantitatively measure stability is essential to ensuring the safety of locomoting systems. While the response to perturbation directly reflects the stability of a motion, this experimental method puts human subjects at risk. Unfortunately, existing indirect methods for estimating stability from unperturbed motion have been shown to have limited predictive power. This paper leverages recent advances in dynamical systems theory to accurately estimate the stability of human motion without requiring perturbation. This approach relies on kinematic observations of a nominal Sit-to-Stand motion to construct an individual-specific dynamic model, input bounds, and feedback control that are then used to compute the set of perturbations from which the model can recover. This set, referred to as the stability basin, was computed for 14 individuals, and was able to successfully differentiate between less and more stable Sit-to-Stand strategies for each individual with greater accuracy than existing methods. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Reevaluation of air surveillance station siting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, K.; Jannik, T.

    2016-07-06

    DOE Technical Standard HDBK-1216-2015 (DOE 2015) recommends evaluating air-monitoring station placement using the analytical method developed by Waite. The technique utilizes wind rose and population distribution data in order to determine a weighting factor for each directional sector surrounding a nuclear facility. Based on the available resources (number of stations) and a scaling factor, this weighting factor is used to determine the number of stations recommended to be placed in each sector considered. An assessment utilizing this method was performed in 2003 to evaluate the effectiveness of the existing SRS air-monitoring program. The resulting recommended distribution of air-monitoring stations was then compared to that of the existing site perimeter surveillance program. The assessment demonstrated that the distribution of air-monitoring stations at the time generally agreed with the results obtained using the Waite method; however, at the time new stations were established in Barnwell and in Williston in order to meet requirements of DOE guidance document EH-0173T.
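
    The Waite-style weighting described above multiplies, for each directional sector, the wind-rose frequency by the population, then allocates the available stations in proportion to the resulting weights. The sketch below illustrates that allocation with invented numbers; it is not drawn from the SRS assessment and the function name is hypothetical.

    ```python
    def allocate_stations(wind_freq, population, n_stations):
        """Distribute stations across sectors by wind-rose frequency x population,
        using a largest-remainder rule for the leftover stations."""
        weights = [w * p for w, p in zip(wind_freq, population)]
        total = sum(weights)
        shares = [n_stations * w / total for w in weights]
        alloc = [int(s) for s in shares]
        remainders = sorted(range(len(shares)),
                            key=lambda i: shares[i] - alloc[i], reverse=True)
        for i in remainders[: n_stations - sum(alloc)]:
            alloc[i] += 1
        return alloc

    wind_freq  = [0.10, 0.25, 0.05, 0.20, 0.15, 0.10, 0.05, 0.10]  # assumed per-sector frequencies
    population = [1200, 300, 5000, 800, 2500, 100, 4000, 600]       # assumed residents per sector
    print(allocate_stations(wind_freq, population, n_stations=12))  # -> [1, 1, 2, 1, 4, 0, 2, 1]
    ```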

  20. Dereplication, Aggregation and Scoring Tool (DAS Tool) v1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SIEBER, CHRISTIAN

    Communities of uncultivated microbes are critical to ecosystem function and microorganism health, and a key objective of metagenomic studies is to analyze organism-specific metabolic pathways and reconstruct community interaction networks. This requires accurate assignment of genes to genomes, yet existing binning methods often fail to predict a reasonable number of genomes and report many bins of low quality and completeness. Furthermore, the performance of existing algorithms varies between samples and biotypes. Here, we present a dereplication, aggregation and scoring strategy, DAS Tool, that combines the strengths of a flexible set of established binning algorithms. DAS Tool applied to a constructed community generated more accurate bins than any automated method. Further, when applied to samples of different complexity, including soil, natural oil seeps, and the human gut, DAS Tool recovered substantially more near-complete genomes than any single binning method alone. Included were three genomes from a novel lineage. The ability to reconstruct many near-complete genomes from metagenomics data will greatly advance genome-centric analyses of ecosystems.

  1. Electro-hydrodynamic printing of drugs onto edible substrates

    NASA Astrophysics Data System (ADS)

    Shen, Yueyang; Elele, Ezinwa; Palle, Prashanth; Khusid, Boris; Basaran, Osman; McGough, Patrick T.; Collins, Robert T.

    2009-11-01

    While most existing drugs are manufactured as tablets using powder processing techniques, there is growing interest in printing drops containing pharmaceutical actives on edible substrates. We have developed a drop-on-demand (DOD) printing method appropriate for either replacing existing manufacturing platforms or enabling personalized medicine that overcomes the various critical challenges facing current DOD technologies. To eliminate adverse effects of electro-chemical reactions at the fluid-electrode interface, the fluid is infused into an electrically insulating nozzle to form a pendant drop that serves as a floating electrode capacitively coupled to external electrodes. A liquid bridge is formed and broken as the voltage applied at the electrode is varied in time. This gentle method for drop deposition has been demonstrated to operate with fluids spanning over three orders of magnitude in viscosity and conductivity. The proposed method has the potential for the evolving field of pharmaceutical and biomedical applications requiring the deposition of fluids at the exact locations with high volume accuracy.

  2. Segmentation of Image Ensembles via Latent Atlases

    PubMed Central

    Van Leemput, Koen; Menze, Bjoern H.; Wells, William M.; Golland, Polina

    2010-01-01

    Spatial priors, such as probabilistic atlases, play an important role in MRI segmentation. However, the availability of comprehensive, reliable and suitable manual segmentations for atlas construction is limited. We therefore propose a method for joint segmentation of corresponding regions of interest in a collection of aligned images that does not require labeled training data. Instead, a latent atlas, initialized by at most a single manual segmentation, is inferred from the evolving segmentations of the ensemble. The algorithm is based on probabilistic principles but is solved using partial differential equations (PDEs) and energy minimization criteria. We evaluate the method on two datasets, segmenting subcortical and cortical structures in a multi-subject study and extracting brain tumors in a single-subject multi-modal longitudinal experiment. We compare the segmentation results to manual segmentations, when those exist, and to the results of a state-of-the-art atlas-based segmentation method. The quality of the results supports the latent atlas as a promising alternative when existing atlases are not compatible with the images to be segmented. PMID:20580305

  3. Information Theory for Gabor Feature Selection for Face Recognition

    NASA Astrophysics Data System (ADS)

    Shen, Linlin; Bai, Li

    2006-12-01

    A discriminative and robust feature—kernel enhanced informative Gabor feature—is proposed in this paper for face recognition. Mutual information is applied to select a set of informative and nonredundant Gabor features, which are then further enhanced by kernel methods for recognition. Compared with one of the top performing methods in the 2004 Face Verification Competition (FVC2004), our methods demonstrate a clear advantage over existing methods in accuracy, computation efficiency, and memory cost. The proposed method has been fully tested on the FERET database using the FERET evaluation protocol. Significant improvements on three of the test data sets are observed. Compared with the classical Gabor wavelet-based approaches using a huge number of features, our method requires less than 4 milliseconds to retrieve a few hundreds of features. Due to the substantially reduced feature dimension, only 4 seconds are required to recognize 200 face images. The paper also unified different Gabor filter definitions and proposed a training sample generation algorithm to reduce the effects caused by unbalanced number of samples available in different classes.
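
    A simplified stand-in for the informative-feature selection step: score candidate features (e.g. Gabor responses) by mutual information with the class labels and keep the highest-scoring ones. Unlike the paper's procedure, this sketch does not remove redundant features and does not apply the kernel enhancement; the synthetic data are for illustration only.

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    def select_informative_features(X, y, n_features=200):
        """Rank features by mutual information with the labels and keep the top ones."""
        mi = mutual_info_classif(X, y, random_state=0)
        return np.argsort(mi)[::-1][:n_features]

    # X: samples x feature matrix, y: subject identities (synthetic here)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 60))
    y = rng.integers(0, 10, size=120)
    print(select_informative_features(X, y, n_features=5))
    ```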

  4. An Equation-Free Reduced-Order Modeling Approach to Tropical Pacific Simulation

    NASA Astrophysics Data System (ADS)

    Wang, Ruiwen; Zhu, Jiang; Luo, Zhendong; Navon, I. M.

    2009-03-01

    The “equation-free” (EF) method is often used in complex, multi-scale problems. In such cases it is necessary to know the closed form of the required evolution equations for the macroscopic variables in the fields of interest. Conceptually such equations exist; however, they are not available in closed form. The EF method can bypass this difficulty: it obtains macroscopic information by running models at the microscopic level. Given an initial macroscopic variable, lifting yields the associated microscopic variable, which may be evolved using Direct Numerical Simulations (DNS); restriction then recovers the necessary macroscopic information, and projective integration yields the desired quantities. In this paper we apply the EF POD-assisted method to the reduced modeling of a large-scale upper ocean circulation in the tropical Pacific domain. The computational cost is reduced dramatically. Compared with the POD method, the method provided more accurate results and it did not require the availability of any explicit equations or the right-hand side (RHS) of the evolution equation.
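
    A schematic of one coarse step of the lift/evolve/restrict/project cycle described above. The lifting, microscopic stepping and restriction operators are passed in as callables; the coarse-derivative estimate from the last two restricted states is the simplest possible choice and an illustrative assumption, not the paper's exact estimator.

    ```python
    def projective_step(macro_state, lift, micro_step, restrict,
                        n_micro=20, dt_micro=1e-3, dt_projective=0.1):
        """One coarse step of equation-free projective integration (schematic).

        lift       : macro state -> consistent micro state
        micro_step : advance the micro model by dt_micro
        restrict   : micro state -> macro observables (as a NumPy-like array)
        """
        micro = lift(macro_state)
        history = []
        for _ in range(n_micro):                 # short burst of micro simulation
            micro = micro_step(micro, dt_micro)
            history.append(restrict(micro))
        # estimate the coarse time derivative from the last two restricted states
        dmacro_dt = (history[-1] - history[-2]) / dt_micro
        # project the macro state forward over the long coarse step
        return history[-1] + dt_projective * dmacro_dt
    ```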

  5. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    PubMed

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state of the art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.

  6. Some lemma on spectrum of eigen value regarding power method

    NASA Astrophysics Data System (ADS)

    Jamali, A. R. M. Jalal Uddin; Alam, Md. Sah

    2017-04-01

    Eigen value problems arise in almost all science and engineering fields. Some smart methods exist in the literature, but most of them are able to find only Eigen values and cannot find the corresponding Eigen vectors. In many engineering and scientific fields, both the largest and the smallest Eigen-pairs are required. The Power method is a very simple but powerful tool for finding the largest Eigen value and the corresponding Eigen vector (Eigen-pair). The Inverse Power method, in turn, is applied to find the smallest Eigen-pair and/or other desired Eigen-pairs, but it is known to be computationally very costly. On the other hand, by using the shifting property, the Power method can find further Eigen-pairs; however, the position of such an Eigen value within the spectrum of Eigen values is not identified. In this regard, we propose four lemmas associated with a Modified Power method. Each lemma is proved in detail. The Modified Power method is implemented and illustrated with an example to verify the lemmas. Using the lemmas, the modified power algorithm is able to find both the largest and smallest Eigen-pairs successfully and efficiently in some cases. Moreover, with the help of the lemmas, the algorithm is able to detect the sign (positive or negative) of the Eigen values.
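
    A minimal sketch of the power iteration discussed above, together with the shifting idea: once the largest eigenvalue is known, applying power iteration to the shifted matrix A - lambda_max*I makes the smallest eigenvalue of A dominant. The starting vector and convergence test are simplistic choices for illustration, not the proposed Modified Power method itself.

    ```python
    import numpy as np

    def power_method(A, x0, tol=1e-10, max_iter=10_000):
        """Largest-magnitude eigenpair of A by power iteration (illustrative)."""
        x = np.asarray(x0, dtype=float)   # must not be orthogonal to the dominant eigenvector
        lam = 0.0
        for _ in range(max_iter):
            y = A @ x
            lam_new = np.linalg.norm(y, np.inf)
            x = y / lam_new
            if abs(lam_new - lam) < tol:
                break
            lam = lam_new
        lam = (x @ A @ x) / (x @ x)       # Rayleigh quotient recovers the signed eigenvalue
        return lam, x / np.linalg.norm(x)

    A = np.array([[4.0, 1.0], [2.0, 3.0]])               # eigenvalues 5 and 2
    lam_max, _ = power_method(A, x0=[1.0, 0.0])
    # shift: the smallest eigenvalue of A becomes dominant in A - lam_max*I
    lam_shifted, _ = power_method(A - lam_max * np.eye(2), x0=[1.0, 0.0])
    print(round(lam_max, 6), round(lam_shifted + lam_max, 6))   # 5.0 2.0
    ```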

  7. Assessing biodiversity on the farm scale as basis for ecosystem service payments.

    PubMed

    von Haaren, Christina; Kempa, Daniela; Vogel, Katrin; Rüter, Stefan

    2012-12-30

    Ecosystem services payments must be based on a standardised transparent assessment of the goods and services provided. This is especially relevant in the context of EU agri-environmental programs, but also for organic-food companies that foster environmental services on their contractor farms. Addressing the farm scale is important because land users/owners are major recipients of payments and they could be more involved in data generation and conservation management. A standardised system for measuring on-farm biodiversity does not yet exist that concentrates on performance indicators and includes farmers in generating information. A method is required that produces ordinal or metric scaled assessment results as well as management measures. Another requirement is the ease of application, which includes the ease of gathering input data and understandability. In order to respond to this need, we developed a method which is designed for automated application in an open source farm assessment system named MANUELA. The method produces an ordinal scale assessment of biodiversity that includes biotopes, species, biotope connectivity and the influence of land use. In addition, specific measures for biotope types are proposed. The open source geographical information system OpenJump is used for the implementation of MANUELA. The results of the trial applications and robustness tests show that the assessment can be implemented, for the most part, using existing information as well as data available from farmers or advisors. The results are more sensitive for showing on-farm achievements and changes than existing biotope-type classifications. Such a differentiated classification is needed as a basis for ecosystem service payments and for designing effective measures. The robustness of the results with respect to biotope connectivity is comparable to that of complex models, but it should be further improved. Interviews with the test farmers substantiate that the assessment methods can be implemented on farms and they are understood by farmers. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. What methods are used to apply positive deviance within healthcare organisations? A systematic review

    PubMed Central

    Baxter, Ruth; Taylor, Natalie; Kellar, Ian; Lawton, Rebecca

    2016-01-01

    Background The positive deviance approach focuses on those who demonstrate exceptional performance, despite facing the same constraints as others. ‘Positive deviants’ are identified and hypotheses about how they succeed are generated. These hypotheses are tested and then disseminated within the wider community. The positive deviance approach is being increasingly applied within healthcare organisations, although limited guidance exists and different methods, of varying quality, are used. This paper systematically reviews healthcare applications of the positive deviance approach to explore how positive deviance is defined, the quality of existing applications and the methods used within them, including the extent to which staff and patients are involved. Methods Peer-reviewed articles, published prior to September 2014, reporting empirical research on the use of the positive deviance approach within healthcare, were identified from seven electronic databases. A previously defined four-stage process for positive deviance in healthcare was used as the basis for data extraction. Quality assessments were conducted using a validated tool, and a narrative synthesis approach was followed. Results 37 of 818 articles met the inclusion criteria. The positive deviance approach was most frequently applied within North America, in secondary care, and to address healthcare-associated infections. Research predominantly identified positive deviants and generated hypotheses about how they succeeded. The approach and processes followed were poorly defined. Research quality was low, articles lacked detail and comparison groups were rarely included. Applications of positive deviance typically lacked staff and/or patient involvement, and the methods used often required extensive resources. Conclusion Further research is required to develop high quality yet practical methods which involve staff and patients in all stages of the positive deviance approach. The efficacy and efficiency of positive deviance must be assessed and compared with other quality improvement approaches. PROSPERO registration number CRD42014009365. PMID:26590198

  9. A review of parameters and heuristics for guiding metabolic pathfinding.

    PubMed

    Kim, Sarah M; Peña, Matthew I; Moll, Mark; Bennett, George N; Kavraki, Lydia E

    2017-09-15

    Recent developments in metabolic engineering have led to the successful biosynthesis of valuable products, such as the precursor of the antimalarial compound, artemisinin, and opioid precursor, thebaine. Synthesizing these traditionally plant-derived compounds in genetically modified yeast cells introduces the possibility of significantly reducing the total time and resources required for their production, and in turn, allows these valuable compounds to become cheaper and more readily available. Most biosynthesis pathways used in metabolic engineering applications have been discovered manually, requiring a tedious search of existing literature and metabolic databases. However, the recent rapid development of available metabolic information has enabled the development of automated approaches for identifying novel pathways. Computer-assisted pathfinding has the potential to save biochemists time in the initial discovery steps of metabolic engineering. In this paper, we review the parameters and heuristics used to guide the search in recent pathfinding algorithms. These parameters and heuristics capture information on the metabolic network structure, compound structures, reaction features, and organism-specificity of pathways. No one metabolic pathfinding algorithm or search parameter stands out as the best to use broadly for solving the pathfinding problem, as each method and parameter has its own strengths and shortcomings. As assisted pathfinding approaches continue to become more sophisticated, the development of better methods for visualizing pathway results and integrating these results into existing metabolic engineering practices is also important for encouraging wider use of these pathfinding methods.

  10. Arctic lead detection using a waveform mixture algorithm from CryoSat-2 data

    NASA Astrophysics Data System (ADS)

    Lee, Sanggyun; Kim, Hyun-cheol; Im, Jungho

    2018-05-01

    We propose a waveform mixture algorithm to detect leads from CryoSat-2 data, which is novel and different from the existing threshold-based lead detection methods. The waveform mixture algorithm adopts the concept of spectral mixture analysis, which is widely used in the field of hyperspectral image analysis. This lead detection method was evaluated with high-resolution (250 m) MODIS images and showed comparable and promising performance in detecting leads when compared to the previous methods. The robustness of the proposed approach also lies in the fact that it does not require the rescaling of parameters (i.e., stack standard deviation, stack skewness, stack kurtosis, pulse peakiness, and backscatter σ0), as it directly uses L1B waveform data, unlike the existing threshold-based methods. Monthly lead fraction maps were produced by the waveform mixture algorithm, which shows interannual variability of recent sea ice cover during 2011-2016, excluding the summer season (i.e., June to September). We also compared the lead fraction maps to other lead fraction maps generated from previously published data sets, resulting in similar spatiotemporal patterns.
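
    The waveform mixture idea treats an observed return as a non-negative combination of reference ("endmember") waveforms. The sketch below unmixes a synthetic waveform with SciPy's non-negative least squares; the Gaussian-shaped endmembers are invented stand-ins, and the published algorithm's endmember construction and lead/ice decision rule are not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def unmix_waveform(waveform, endmembers):
        """Estimate fractional abundances of reference waveforms in an observation."""
        A = np.column_stack(endmembers)
        abundances, residual = nnls(A, waveform)
        return abundances / abundances.sum(), residual

    # toy endmembers (narrow "lead-like" peak vs broad "ice-like" return) and a 60/40 mixture
    t = np.arange(128)
    lead_like = np.exp(-0.5 * ((t - 40) / 2.0) ** 2)
    ice_like = np.exp(-0.5 * ((t - 40) / 12.0) ** 2)
    mixed = 0.6 * lead_like + 0.4 * ice_like
    print(unmix_waveform(mixed, [lead_like, ice_like])[0])  # ~[0.6, 0.4]
    ```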

  11. Normal response function method for mass and stiffness matrix updating using complex FRFs

    NASA Astrophysics Data System (ADS)

    Pradhan, S.; Modak, S. V.

    2012-10-01

    Quite often a structural dynamic finite element model is required to be updated so as to accurately predict the dynamic characteristics like natural frequencies and the mode shapes. Since in many situations undamped natural frequencies and mode shapes need to be predicted, it has generally been the practice in these situations to seek updating of only mass and stiffness matrix so as to obtain a reliable prediction model. Updating using frequency response functions (FRFs) has been one of the widely used approaches for updating, including updating of mass and stiffness matrices. However, the problem with FRF based methods, for updating mass and stiffness matrices, is that these methods are based on use of complex FRFs. Use of complex FRFs to update mass and stiffness matrices is not theoretically correct as complex FRFs are not only affected by these two matrices but also by the damping matrix. Therefore, in situations where updating of only mass and stiffness matrices using FRFs is required, the use of complex FRFs based updating formulation is not fully justified and would lead to inaccurate updated models. This paper addresses this difficulty and proposes an improved FRF based finite element model updating procedure using the concept of normal FRFs. The proposed method is a modified version of the existing response function method that is based on the complex FRFs. The effectiveness of the proposed method is validated through a numerical study of a simple but representative beam structure. The effect of coordinate incompleteness and robustness of method under presence of noise is investigated. The results of updating obtained by the improved method are compared with the existing response function method. The performance of the two approaches is compared for cases of light, medium and heavily damped structures. It is found that the proposed improved method is effective in updating of mass and stiffness matrices in all the cases of complete and incomplete data and with all levels and types of damping.

  12. Quasivariational Solutions for First Order Quasilinear Equations with Gradient Constraint

    NASA Astrophysics Data System (ADS)

    Rodrigues, José Francisco; Santos, Lisa

    2012-08-01

    We prove the existence of solutions for a quasi-variational inequality of evolution with a first order quasilinear operator and a variable convex set which is characterized by a constraint on the absolute value of the gradient that depends on the solution itself. The only required assumption on the nonlinearity of this constraint is its continuity and positivity. The method relies on an appropriate parabolic regularization and suitable a priori estimates. We also obtain the existence of stationary solutions by studying the asymptotic behaviour in time. In the variational case, corresponding to a constraint independent of the solution, we also give uniqueness results.

  13. Toward Dietary Assessment via Mobile Phone Video Cameras.

    PubMed

    Chen, Nicholas; Lee, Yun Young; Rabb, Maurice; Schatz, Bruce

    2010-11-13

    Reliable dietary assessment is a challenging yet essential task for determining general health. Existing efforts are manual, require considerable effort, and are prone to underestimation and misrepresentation of food intake. We propose leveraging mobile phones to make this process faster, easier and automatic. Using mobile phones with built-in video cameras, individuals capture short videos of their meals; our software then automatically analyzes the videos to recognize dishes and estimate calories. Preliminary experiments on 20 typical dishes from a local cafeteria show promising results. Our approach complements existing dietary assessment methods to help individuals better manage their diet to prevent obesity and other diet-related diseases.

  14. III: Use of biomarkers as Risk Indicators in Environmental Risk Assessment of oil based discharges offshore.

    PubMed

    Sanni, Steinar; Lyng, Emily; Pampanin, Daniela M

    2017-06-01

    Offshore oil and gas activities are required not to cause adverse environmental effects, and risk based management has been established to meet environmental standards. In some risk assessment schemes, Risk Indicators (RIs) are parameters to monitor the development of risk affecting factors. RIs have not yet been established in the Environmental Risk Assessment procedures for management of oil based discharges offshore. This paper evaluates the usefulness of biomarkers as RIs, based on their properties, existing laboratory biomarker data and assessment methods. Data show several correlations between oil concentrations and biomarker responses, and assessment principles exist that qualify biomarkers for integration into risk procedures. Different ways that these existing biomarkers and methods can be applied as RIs in a probabilistic risk assessment system when linked with whole organism responses are discussed. This can be a useful approach to integrate biomarkers into probabilistic risk assessment related to oil based discharges, representing a potential supplement to the information that biomarkers already provide about environmental impact and risk related to these kinds of discharges. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sudheer, C. D.; Krishnan, S.; Srinivasan, A.

    Diffusion Monte Carlo is the most accurate widely used Quantum Monte Carlo method for the electronic structure of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency and avoid accumulation of systematic errors on parallel machines. The load balancing step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many nearly identical computational tasks that requires load balancing.
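
    The key property quoted above, perfect balance with each process receiving at most one message, can be illustrated with an alias-style pairing of under-loaded and over-loaded ranks. The sketch below is my own simplified illustration of that property under the stated assumptions; it is not the paper's Alias Method implementation and ignores its message-size optimality and network-contention optimisations.

    ```python
    def balance_plan(loads):
        """Build (sender, receiver, amount) transfers that equalise integer loads
        while every receiver gets at most one incoming message. Illustrative only."""
        n, total = len(loads), sum(loads)
        base = total // n
        goal = [base + 1 if r < total - base * n else base for r in range(n)]

        cur = list(loads)
        small = [r for r in range(n) if cur[r] < goal[r]]   # need work
        large = [r for r in range(n) if cur[r] > goal[r]]   # have surplus
        plan = []

        while small:
            recv = small.pop()
            snd = large[-1]
            amount = goal[recv] - cur[recv]        # top the receiver up in one message
            plan.append((snd, recv, amount))
            cur[recv] = goal[recv]
            cur[snd] -= amount
            if cur[snd] <= goal[snd]:
                large.pop()
                if cur[snd] < goal[snd]:           # donor over-gave; it will receive once later
                    small.append(snd)
        return plan

    print(balance_plan([7, 2, 5, 2]))   # -> [(2, 3, 2), (0, 2, 1), (0, 1, 2)]
    ```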

  16. Applying formal methods and object-oriented analysis to existing flight software

    NASA Technical Reports Server (NTRS)

    Cheng, Betty H. C.; Auernheimer, Brent

    1993-01-01

    Correctness is paramount for safety-critical software control systems. Critical software failures in medical radiation treatment, communications, and defense are familiar to the public. The significant quantity of software malfunctions regularly reported to the software engineering community, the laws concerning liability, and a recent NRC Aeronautics and Space Engineering Board report additionally motivate the use of error-reducing and defect detection software development techniques. The benefits of formal methods in requirements driven software development ('forward engineering') is well documented. One advantage of rigorously engineering software is that formal notations are precise, verifiable, and facilitate automated processing. This paper describes the application of formal methods to reverse engineering, where formal specifications are developed for a portion of the shuttle on-orbit digital autopilot (DAP). Three objectives of the project were to: demonstrate the use of formal methods on a shuttle application, facilitate the incorporation and validation of new requirements for the system, and verify the safety-critical properties to be exhibited by the software.

  17. Advancements in IR spectroscopic approaches for the determination of fungal derived contaminations in food crops.

    PubMed

    McMullin, David; Mizaikoff, Boris; Krska, Rudolf

    2015-01-01

    Infrared spectroscopy is a rapid, nondestructive analytical technique that can be applied to the authentication and characterization of food samples in high throughput. In particular, near infrared spectroscopy is commonly utilized in the food quality control industry to monitor the physical attributes of numerous cereal grains for protein, carbohydrate, and lipid content. IR-based methods require little sample preparation, labor, or technical competence if multivariate data mining techniques are implemented; however, they do require extensive calibration. Economically important crops are infected by fungi that can severely reduce crop yields and quality and, in addition, produce mycotoxins. Owing to the health risks associated with mycotoxins in the food chain, regulatory limits have been set by both national and international institutions for specific mycotoxins and mycotoxin classes. This article discusses the progress and potential of IR-based methods as an alternative to existing chemical methods for the determination of fungal contamination in crops, as well as emerging spectroscopic methods.

  18. A review on methods of regeneration of spent pickling solutions from steel processing.

    PubMed

    Regel-Rosocka, Magdalena

    2010-05-15

    The review presents various techniques for the regeneration of spent pickling solutions, including methods with acid recovery, such as diffusion dialysis, electrodialysis, membrane electrolysis and membrane distillation, evaporation, precipitation and spray roasting, as well as those with acid and metal recovery: ion exchange, retardation, crystallization, and solvent and membrane extraction. Advantages and disadvantages of the techniques are presented, discussed and confronted with the best available techniques requirements. Most of the methods presented meet the BAT requirements. The best available techniques are electrodialysis, diffusion dialysis and crystallization; however, in practice spray roasting and retardation/ion-exchange are applied most frequently for spent pickling solution regeneration. Solvent extraction, non-dispersive solvent extraction and membrane distillation should be indicated as "waiting for their chance" because they are well investigated and developed. Environmental and economic benefits of the methods presented in the review depend on the cost of chemicals and wastewater treatment, legislative regulations and the cost of modernization of existing technologies or implementation of new ones. Copyright (c) 2009 Elsevier B.V. All rights reserved.

  19. An implicit spatial and high-order temporal finite difference scheme for 2D acoustic modelling

    NASA Astrophysics Data System (ADS)

    Wang, Enjiang; Liu, Yang

    2018-01-01

    The finite difference (FD) method exhibits great superiority over other numerical methods due to its easy implementation and small computational requirements. We propose an effective FD method, characterised by implicit spatial and high-order temporal schemes, to reduce both the temporal and spatial dispersions simultaneously. For the temporal derivative, apart from the conventional second-order FD approximation, a special rhombus FD scheme is included to reach high-order accuracy in time. Compared with the Lax-Wendroff FD scheme, this scheme can achieve nearly the same temporal accuracy but requires fewer floating-point operations and thus lower computational cost when the same operator length is adopted. For the spatial derivatives, we adopt the implicit FD scheme to improve the spatial accuracy. Apart from the existing Taylor series expansion-based FD coefficients, we derive least-squares optimisation-based implicit spatial FD coefficients. Dispersion analysis and modelling examples demonstrate that our proposed method can effectively decrease both the temporal and spatial dispersions, and thus can provide more accurate wavefields.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com; Yusof, Fadhilah, E-mail: fadhilahy@utm.my; Daud, Zalina Mohd, E-mail: zalina@ic.utm.my

    Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific understanding. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used the combination of a geostatistics method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.
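
    A generic simulated-annealing loop of the kind used in such a redesign: propose a small change to the set of gauge sites, accept it if it lowers the objective (here a placeholder standing in for the geostatistical estimation variance), and occasionally accept worse moves with a temperature-controlled probability. All function and parameter names are assumptions for illustration, not the study's implementation.

    ```python
    import math, random

    def anneal_gauge_sites(n_candidates, n_gauges, objective,
                           t0=1.0, cooling=0.995, n_iter=5000, seed=0):
        """Search for the set of gauge sites minimising `objective`, a callable on
        a list of candidate-site indices (e.g. an estimation-variance surrogate)."""
        rng = random.Random(seed)
        current = rng.sample(range(n_candidates), n_gauges)
        cost = objective(current)
        best, best_cost, t = list(current), cost, t0
        for _ in range(n_iter):
            trial = list(current)
            # move one gauge to a candidate site not already in use
            pos = rng.randrange(n_gauges)
            trial[pos] = rng.choice([c for c in range(n_candidates) if c not in trial])
            trial_cost = objective(trial)
            if trial_cost < cost or rng.random() < math.exp((cost - trial_cost) / t):
                current, cost = trial, trial_cost
                if cost < best_cost:
                    best, best_cost = list(current), cost
            t *= cooling
        return best, best_cost
    ```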

  1. Water Mapping Using Multispectral Airborne LIDAR Data

    NASA Astrophysics Data System (ADS)

    Yan, W. Y.; Shaker, A.; LaRocque, P. E.

    2018-04-01

    This study investigates the use of the world's first multispectral airborne LiDAR sensor, Optech Titan, manufactured by Teledyne Optech, for automatic land-water classification with a particular focus on near shore regions and river environments. Although recent studies have utilized airborne LiDAR data for shoreline detection and water surface mapping, the majority of them only perform experimental testing on clipped data subsets or rely on data fusion with aerial/satellite images. In addition, most of the existing approaches require manual intervention or existing tidal/datum data for sample collection of training data. To tackle the drawbacks of previous approaches, we propose and develop an automatic data processing workflow for land-water classification using multispectral airborne LiDAR data. Depending on the nature of the study scene, two methods are proposed for automatic training data selection. The first method utilizes the elevation/intensity histogram fitted with a Gaussian mixture model (GMM) to preliminarily split the land and water bodies. The second method mainly relies on the use of a newly developed scan line elevation intensity ratio (SLIER) to estimate the water surface data points. Regardless of the training method being used, feature spaces can be constructed using the multispectral LiDAR intensity, elevation and other features derived from these parameters. The comprehensive workflow was tested with two datasets collected over different near shore regions and river environments, where the overall accuracy was better than 96 %.
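
    A compact version of the first training-data selection idea: fit a two-component Gaussian mixture to a one-dimensional attribute (elevation or intensity) and take the lower-mean component as water. The synthetic data and the assumption that water forms the lower mode are illustrative only; the published workflow adds the SLIER-based method and multispectral features.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def split_land_water(values):
        """Label each point as water (True) or land (False) via a 2-component GMM."""
        X = values.reshape(-1, 1)
        gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
        labels = gmm.predict(X)
        water_component = int(np.argmin(gmm.means_.ravel()))   # assume water is the lower mode
        return labels == water_component

    rng = np.random.default_rng(0)
    elev = np.concatenate([rng.normal(0.2, 0.05, 5000),    # synthetic water-surface returns
                           rng.normal(3.0, 1.00, 5000)])   # synthetic land returns
    print(split_land_water(elev).sum())                     # roughly 5000 points flagged as water
    ```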

  2. Fuzzy Logic Controller Stability Analysis Using a Satisfiability Modulo Theories Approach

    NASA Technical Reports Server (NTRS)

    Arnett, Timothy; Cook, Brandon; Clark, Matthew A.; Rattan, Kuldip

    2017-01-01

    While many widely accepted methods and techniques exist for validation and verification of traditional controllers, at this time no solutions have been accepted for Fuzzy Logic Controllers (FLCs). Due to the highly nonlinear nature of such systems, and the fact that developing a valid FLC does not require a mathematical model of the system, it is quite difficult to use conventional techniques to prove controller stability. Since safety-critical systems must be tested and verified to work as expected for all possible circumstances, the fact that FLC controllers cannot be tested to achieve such requirements poses limitations on the applications for such technology. Therefore, alternative methods for verification and validation of FLCs need to be explored. In this study, a novel approach using formal verification methods to ensure the stability of a FLC is proposed. Main research challenges include specification of requirements for a complex system, conversion of a traditional FLC to a piecewise polynomial representation, and using a formal verification tool in a nonlinear solution space. Using the proposed architecture, the Fuzzy Logic Controller was found to always generate negative feedback, but the analysis was inconclusive for Lyapunov stability.

  3. Tank 241-AX-104 upper vadose zone cone penetrometer demonstration sampling and analysis plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FIELD, J.G.

    1999-02-02

    This sampling and analysis plan (SAP) is the primary document describing field and laboratory activities and requirements for the tank 241-AX-104 upper vadose zone cone penetrometer (CP) demonstration. It is written in accordance with Hanford Tank Initiative Tank 241-AX-104 Upper Vadose Zone Demonstration Data Quality Objective (Banning 1999). This technology demonstration, to be conducted at tank 241-AX-104, is being performed by the Hanford Tanks Initiative (HTI) Project as a part of Tank Waste Remediation System (TWRS) Retrieval Program (EM-30) and the Office of Science and Technology (EM-50) Tanks Focus Area. Sample results obtained as part of this demonstration will provide additional information for subsequent revisions to the Retrieval Performance Evaluation (RPE) report (Jacobs 1998). The RPE Report is the result of an evaluation of a single tank farm (AX Tank Farm) used as the basis for demonstrating a methodology for developing the data and analyses necessary to support making tank waste retrieval decisions within the context of tank farm closure requirements. The RPE includes a study of vadose zone contaminant transport mechanisms, including analysis of projected tank leak characteristics, hydrogeologic characteristics of tank farm soils, and the observed distribution of contaminants in the vadose zone in the tank farms. With limited characterization information available, large uncertainties exist as to the nature and extent of contaminants that may exist in the upper vadose zone in the AX Tank Farm. Traditionally, data has been collected from soils in the vadose zone through the installation of boreholes and wells. Soil samples are collected as the borehole is advanced and samples are screened on site and/or sent to a laboratory for analysis. Some in-situ geophysical methods of contaminant analysis can be used to evaluate radionuclide levels in the soils adjacent to an existing borehole. However, geophysical methods require compensation for well casing interference and soil moisture content and may not be successful in some conditions. In some cases the level of interference must be estimated due to uncertainties regarding the materials used in well construction and soil conditions. Well casing deployment used for many in-situ geophysical methods is relatively expensive, and geophysical methods do not generally provide real time values for contaminants. In addition, some of these methods are not practical within the boundaries of the tank farm due to physical constraints, such as underground piping and other hardware. The CP technologies could facilitate future characterization of vadose zone soils by providing vadose zone data in near real-time, reducing the number of soil samples and boreholes required, and reducing characterization costs.

  4. Interpolation of orientation distribution functions in diffusion weighted imaging using multi-tensor model.

    PubMed

    Afzali, Maryam; Fatemizadeh, Emad; Soltanian-Zadeh, Hamid

    2015-09-30

    Diffusion weighted imaging (DWI) is a non-invasive method for investigating the brain white matter structure and can be used to evaluate fiber bundles. However, due to practical constraints, DWI data acquired in clinics are low resolution. This paper proposes a method for interpolation of orientation distribution functions (ODFs). To this end, fuzzy clustering is applied to segment ODFs based on the principal diffusion directions (PDDs). Next, a cluster is modeled by a tensor so that an ODF is represented by a mixture of tensors. For interpolation, each tensor is rotated separately. The method is applied on synthetic and real DWI data of control and epileptic subjects. Both experiments illustrate the capability of the method to properly increase the spatial resolution of the data in the ODF field. The real dataset shows that the method is capable of reliable identification of differences between temporal lobe epilepsy (TLE) patients and normal subjects. The method is compared to existing methods. Comparison studies show that the proposed method generates smaller angular errors relative to the existing methods. Another advantage of the method is that it does not require an iterative algorithm to find the tensors. The proposed method is appropriate for increasing resolution in the ODF field and can be applied to clinical data to improve evaluation of white matter fibers in the brain. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Demineralization of drinking water: Is it prudent?

    PubMed

    Verma, K C; Kushwaha, A S

    2014-10-01

    Water is the elixir of life. The requirement of water for the very existence of life and the preservation of health has driven man to devise methods for maintaining its purity and wholesomeness. Water can become contaminated and polluted and thereby a potential hazard to human health. Water in its purest form, devoid of natural minerals, can also be the other end of the spectrum, where health could be adversely affected. Limited availability of fresh water and increased requirements have led to increased use of personal, domestic and commercial methods of water purification. Desalination of saline water where fresh water is in limited supply has led to the development of reverse osmosis, the latest such technology, but whether it is safe to use such demineralized water over a long duration needs to be debated and discussed.

  6. RTO Technical Publications: A Quarterly Listing

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This is a listing of recent unclassified RTO technical publications processed by the NASA Center for AeroSpace Information covering the period from July 1, 2005 to September 30, 2005; and available in the NASA Aeronautics and Space Database. Contents include: Aeroelastic Deformation: Adaptation of Wind Tunnel Measurement Concepts to Full-Scale Vehicle Flight Testing; Actively Controlling Buffet-Induced Excitations; Modelling and Simulation to Address NATO's New and Existing Military Requirements; Latency in Visionic Systems: Test Methods and Requirements; Personal Hearing Protection including Active Noise Reduction; Virtual Laboratory Enabling Collaborative Research in Applied Vehicle Technologies; A Method to Analyze Tail Buffet Loads of Aircraft; Particle Image Velocimetry Measurements to Evaluate the Effectiveness of Deck-Edge Columnar Vortex Generators on Aircraft Carriers; Introduction to Flight Test Engineering, Volume 14; Pathological Aspects and Associated Biodynamics in Aircraft Accident Investigation;

  7. Comments on settling chamber design for quiet, blowdown wind tunnels

    NASA Technical Reports Server (NTRS)

    Beckwith, I. E.

    1981-01-01

    Transfer of an existing continuous-circuit supersonic wind tunnel to Langley and its operation there as a blowdown tunnel is planned. Flow disturbance requirements in the supply section and methods for reducing the high-level broadband acoustic disturbances present in typical blowdown tunnels are reviewed. Based on recent data and the analysis of two blowdown facilities at Langley, methods for reducing the total turbulence levels in the settling chamber, including both acoustic and vorticity modes, to less than one percent are recommended. The pertinent design details of the damping screens and honeycomb and the recommended minimum pressure drop across the porous components providing the required two orders of magnitude attenuation of acoustic noise levels are given. A suggestion for the support structure of these high pressure drop porous components is offered.

  8. XDesign: an open-source software package for designing X-ray imaging phantoms and experiments.

    PubMed

    Ching, Daniel J; Gürsoy, Doğa

    2017-03-01

    The development of new methods or utilization of current X-ray computed tomography methods is impeded by the substantial amount of expertise required to design an X-ray computed tomography experiment from beginning to end. In an attempt to make material models, data acquisition schemes and reconstruction algorithms more accessible to researchers lacking expertise in some of these areas, a software package is described here which can generate complex simulated phantoms and quantitatively evaluate new or existing data acquisition schemes and image reconstruction algorithms for targeted applications.

  9. XDesign: An open-source software package for designing X-ray imaging phantoms and experiments

    DOE PAGES

    Ching, Daniel J.; Gürsoy, Doğa

    2017-02-21

    Here, the development of new methods or utilization of current X-ray computed tomography methods is impeded by the substantial amount of expertise required to design an X-ray computed tomography experiment from beginning to end. In an attempt to make material models, data acquisition schemes and reconstruction algorithms more accessible to researchers lacking expertise in some of these areas, a software package is described here which can generate complex simulated phantoms and quantitatively evaluate new or existing data acquisition schemes and image reconstruction algorithms for targeted applications.

  10. Sparse Matrix for ECG Identification with Two-Lead Features.

    PubMed

    Tseng, Kuo-Kun; Luo, Jiao; Hegarty, Robert; Wang, Wenmin; Haiting, Dong

    2015-01-01

    Electrocardiograph (ECG) human identification has the potential to improve biometric security. However, improvements in ECG identification and feature extraction are required. Previous work has focused on single-lead ECG signals. Our work proposes a new algorithm for human identification by mapping two-lead ECG signals onto a two-dimensional matrix and then employing a sparse matrix method to process the matrix. This is the first application of sparse matrix techniques to ECG identification. Moreover, the results of our experiments demonstrate the benefits of our approach over existing methods.
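
    The abstract does not spell out how the two leads form the two-dimensional matrix, so the following is a hedged sketch of one plausible reading: quantize the two lead amplitudes and accumulate a joint-occurrence matrix, stored in sparse form. The bin count, normalization, and the synthetic signals are illustrative assumptions, not the authors' algorithm.

```python
# Hedged sketch of the general mapping idea only; the later matching or
# classification stage is not shown.
import numpy as np
from scipy.sparse import coo_matrix

def two_lead_feature_matrix(lead1, lead2, bins=64):
    """Map a two-lead ECG segment onto a sparse 2-D occurrence matrix."""
    def quantize(x):
        x = np.asarray(x, dtype=float)
        x = (x - x.min()) / (np.ptp(x) + 1e-12)
        return np.minimum((x * bins).astype(int), bins - 1)
    i, j = quantize(lead1), quantize(lead2)
    data = np.ones_like(i, dtype=float)
    m = coo_matrix((data, (i, j)), shape=(bins, bins)).tocsr()
    return m / m.sum()   # normalize so segments of different length compare

# Toy usage with synthetic signals standing in for real ECG leads.
t = np.linspace(0, 1, 500)
m = two_lead_feature_matrix(np.sin(2 * np.pi * 5 * t), np.cos(2 * np.pi * 5 * t))
print(m.nnz, "nonzero cells out of", m.shape[0] * m.shape[1])
```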

  11. Results of a Formal Methods Demonstration Project

    NASA Technical Reports Server (NTRS)

    Kelly, J.; Covington, R.; Hamilton, D.

    1994-01-01

    This paper describes the results of a cooperative study conducted by a team of researchers in formal methods at three NASA Centers to demonstrate FM techniques and to tailor them to critical NASA software systems. This pilot project applied FM to an existing critical software subsystem, the Shuttle's Jet Select subsystem (Phase I of an ongoing study). The present study shows that FM can be used successfully to uncover hidden issues, which are very difficult to discover by traditional means, in a highly critical and mature Functional Subsystem Software Requirements (FSSR) specification.

  12. Efficient full-chip SRAF placement using machine learning for best accuracy and improved consistency

    NASA Astrophysics Data System (ADS)

    Wang, Shibing; Baron, Stanislas; Kachwala, Nishrin; Kallingal, Chidam; Sun, Dezheng; Shu, Vincent; Fong, Weichun; Li, Zero; Elsaid, Ahmad; Gao, Jin-Wei; Su, Jing; Ser, Jung-Hoon; Zhang, Quan; Chen, Been-Der; Howell, Rafael; Hsu, Stephen; Luo, Larry; Zou, Yi; Zhang, Gary; Lu, Yen-Wen; Cao, Yu

    2018-03-01

    Various computational approaches, from rule-based to model-based methods, exist to place Sub-Resolution Assist Features (SRAF) in order to increase the process window for lithography. Each method has its advantages and drawbacks, and typically requires the user to make a trade-off between time of development, accuracy, consistency and cycle time. Rule-based methods, used since the 90 nm node, require long development time and struggle to achieve good process window performance for complex patterns. Heuristically driven, their development is often iterative and involves significant engineering time from multiple disciplines (Litho, OPC and DTCO). Model-based approaches have been widely adopted since the 20 nm node. While the development of model-driven placement methods is relatively straightforward, they often become computationally expensive when high accuracy is required. Furthermore, these methods tend to yield less consistent SRAFs due to the nature of the approach: they rely on a model which is sensitive to the pattern placement on the native simulation grid, and can be impacted by such grid dependency effects. Those undesirable effects tend to become stronger when more iterations or complexity are needed in the algorithm to achieve the required accuracy. ASML Brion has developed a new SRAF placement technique on the Tachyon platform that is assisted by machine learning and significantly improves the accuracy of full-chip SRAF placement while keeping consistency and runtime under control. A Deep Convolutional Neural Network (DCNN) is trained using the target wafer layout and corresponding Continuous Transmission Mask (CTM) images. These CTM images have been fully optimized using the Tachyon inverse mask optimization engine. The neural network generated SRAF guidance map is then used to place SRAFs on the full chip. This is different from our existing full-chip MB-SRAF approach, which utilizes an SRAF guidance map (SGM) of mask sensitivity to improve the contrast of the optical image at the target pattern edges. In this paper, we demonstrate that machine learning assisted SRAF placement can achieve a superior process window compared to the SGM model-based SRAF method, while keeping the full-chip runtime affordable and maintaining consistency of SRAF placement. We describe the current status of this machine learning assisted SRAF technique, demonstrate its application to full-chip mask synthesis, and discuss how it can extend the computational lithography roadmap.
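
    As a rough illustration of the training setup the abstract outlines (a convolutional network regressing a CTM-like guidance map from the rasterized target layout), the sketch below shows a minimal fully convolutional regressor. The architecture, tensor sizes and loss are assumptions for illustration only, not ASML's production model or the Tachyon workflow.

```python
# Hedged, minimal sketch of layout -> guidance-map regression with a small CNN.
import torch
import torch.nn as nn

class GuidanceMapNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 5, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, padding=2), nn.ReLU(),
            nn.Conv2d(channels, 1, 5, padding=2), nn.Sigmoid(),  # map in [0, 1]
        )

    def forward(self, layout):
        return self.net(layout)

model = GuidanceMapNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy training step: random tensors stand in for layout clips and their
# inverse-lithography (CTM) targets.
layout = torch.rand(8, 1, 128, 128)
ctm_target = torch.rand(8, 1, 128, 128)
loss = loss_fn(model(layout), ctm_target)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```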

  13. Using Geographic Information Systems to Determine Site Suitability for a Low-Level Radioactive Waste Storage Facility.

    PubMed

    Wilson, Charles A; Matthews, Kennith; Pulsipher, Allan; Wang, Wei-Hsung

    2016-02-01

    Radioactive waste is an inevitable product of using radioactive material in education and research activities, medical applications, energy generation, and weapons production. Low-level radioactive waste (LLW) makes up a majority of the radioactive waste produced in the United States. In 2010, over two million cubic feet of LLW were shipped to disposal sites. Despite efforts from several states and compacts as well as from private industry, the options for proper disposal of LLW remain limited. New methods for quickly identifying potential storage locations could alleviate current challenges and eventually provide additional sites and allow for adequate regional disposal of LLW. Furthermore, these methods need to be designed so that they are easily communicated to the public. A Geographic Information Systems (GIS) based method was developed to determine the suitability of potential LLW disposal (or storage) sites. Criteria and other parameters of suitability were based on Code of Federal Regulations (CFR) requirements as well as supporting literature and reports. The resultant method was used to assess areas suitable for further evaluation as prospective disposal sites in Louisiana. Criteria were derived from the 10 minimum requirements in 10 CFR Part 61.50, the Nuclear Regulatory Commission's Regulatory Guide 0902, and studies at existing disposal sites. A suitability formula was developed permitting the use of weighting factors and normalization of all criteria. Data were compiled into GIS data sets and analyzed on a cell grid of approximately 14,000 cells (covering 181,300 square kilometers) using the suitability formula. Requirements were analyzed for each cell using multiple criteria/sub-criteria as well as surrogates for unavailable datasets. Additional criteria were also added when appropriate. The method designed in this project proved sufficient for initial screening to determine the most suitable areas for prospective disposal (or storage) sites. The grid contains 404, 88, and 4 cells above 90%, 95%, and 99% suitability, respectively, which are suitable for further analysis. With these areas identified, the next step in siting a LLW storage facility would be on-site analysis using additional requirements as specified by relevant regulatory guidelines. The GIS-based method provides an easy, economical, efficient and effective means of evaluating potential sites for LLW storage facilities where sufficient GIS data exist.
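
    The abstract mentions a suitability formula with weighting factors and normalized criteria applied over a cell grid, but does not give it explicitly. The sketch below is a generic weighted-overlay calculation of that kind; the layer names, weights and grid size are placeholders, not the criteria from 10 CFR 61.50 used in the study.

```python
# Hedged sketch of a weighted, min-max normalized suitability score per cell.
import numpy as np

def suitability(criteria, weights):
    """criteria: dict of name -> 2-D array (higher = more suitable).
    Each layer is normalized to [0, 1], then combined as a weighted
    average so the result is a 0-100 % suitability score per cell."""
    w = np.array([weights[k] for k in criteria], dtype=float)
    w = w / w.sum()
    score = np.zeros_like(next(iter(criteria.values())), dtype=float)
    for wk, layer in zip(w, criteria.values()):
        lo, hi = np.nanmin(layer), np.nanmax(layer)
        score += wk * (layer - lo) / (hi - lo + 1e-12)
    return 100.0 * score

# Toy layers standing in for the study's ~14,000-cell state grid.
rng = np.random.default_rng(0)
grid = (120, 120)
layers = {"depth_to_water_table": rng.random(grid),
          "distance_to_population": rng.random(grid),
          "slope_stability": rng.random(grid)}
score = suitability(layers, {"depth_to_water_table": 3,
                             "distance_to_population": 2,
                             "slope_stability": 1})
print((score > 90).sum(), "cells above 90 % suitability")
```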

  14. Summary of methods for calculating dynamic lateral stability and response and for estimating aerodynamic stability derivatives

    NASA Technical Reports Server (NTRS)

    Campbell, John P; Mckinney, Marion O

    1952-01-01

    A summary of methods for making dynamic lateral stability and response calculations and for estimating the aerodynamic stability derivatives required for use in these calculations is presented. The processes of performing calculations of the time histories of lateral motions, of the period and damping of these motions, and of the lateral stability boundaries are presented as a series of simple straightforward steps. Existing methods for estimating the stability derivatives are summarized and, in some cases, simple new empirical formulas are presented. Detailed estimation methods are presented for low-subsonic-speed conditions but only a brief discussion and a list of references are given for transonic and supersonic speed conditions.

  15. Metal artifact reduction for CT-based luggage screening.

    PubMed

    Karimi, Seemeen; Martz, Harry; Cosman, Pamela

    2015-01-01

    In aviation security, checked luggage is screened by computed tomography scanning. Metal objects in the bags create artifacts that degrade image quality. Although metal artifact reduction (MAR) methods exist, mainly in the medical imaging literature, they require knowledge of the materials in the scan or are outlier-rejection methods. Our aim is to improve and evaluate a MAR method we previously introduced that does not require knowledge of the materials in the scan and gives good results on data with large quantities and different kinds of metal. We describe in detail an optimization which de-emphasizes metal projections and has a constraint for beam hardening and scatter. This method isolates and reduces artifacts in an intermediate image, which is then fed to a previously published sinogram replacement method. We evaluate the algorithm for luggage data containing multiple and large metal objects. We define measures of artifact reduction, and compare this method against others in the MAR literature. Metal artifacts were reduced in our test images, even for multiple and large metal objects, without much loss of structure or resolution. Our MAR method outperforms the methods with which we compared it. Our approach does not make assumptions about image content, nor does it discard metal projections.
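
    The abstract refers to an optimization that de-emphasizes metal projections without giving its form. As a hedged sketch of that general idea (not the authors' exact objective or their beam-hardening/scatter constraint), a down-weighted data-fidelity reconstruction can be written as:

```latex
% Illustrative only: A is the system matrix, p the measured sinogram,
% W a diagonal weight matrix, and R a generic regularizer standing in
% for the beam-hardening/scatter constraint mentioned in the abstract.
\hat{x} \;=\; \arg\min_{x \ge 0}\;
  \bigl\| W^{1/2}\,(A x - p) \bigr\|_2^2 \;+\; \lambda\, R(x),
\qquad
W_{ii} \ll 1 \ \text{for rays through metal}, \quad W_{ii} = 1 \ \text{otherwise.}
```

    The intermediate image obtained this way would then feed a sinogram-replacement step, as the abstract describes.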

  16. A Bayesian taxonomic classification method for 16S rRNA gene sequences with improved species-level accuracy.

    PubMed

    Gao, Xiang; Lin, Huaiying; Revanna, Kashi; Dong, Qunfeng

    2017-05-10

    Species-level classification for 16S rRNA gene sequences remains a serious challenge for microbiome researchers, because existing taxonomic classification tools for 16S rRNA gene sequences either do not provide species-level classification, or their classification results are unreliable. The unreliable results are due to the limitations in the existing methods which either lack solid probabilistic-based criteria to evaluate the confidence of their taxonomic assignments, or use nucleotide k-mer frequency as the proxy for sequence similarity measurement. We have developed a method that shows significantly improved species-level classification results over existing methods. Our method calculates true sequence similarity between query sequences and database hits using pairwise sequence alignment. Taxonomic classifications are assigned from the species to the phylum levels based on the lowest common ancestors of multiple database hits for each query sequence, and further classification reliabilities are evaluated by bootstrap confidence scores. The novelty of our method is that the contribution of each database hit to the taxonomic assignment of the query sequence is weighted by a Bayesian posterior probability based upon the degree of sequence similarity of the database hit to the query sequence. Our method does not need any training datasets specific for different taxonomic groups. Instead only a reference database is required for aligning to the query sequences, making our method easily applicable for different regions of the 16S rRNA gene or other phylogenetic marker genes. Reliable species-level classification for 16S rRNA or other phylogenetic marker genes is critical for microbiome research. Our software shows significantly higher classification accuracy than the existing tools and we provide probabilistic-based confidence scores to evaluate the reliability of our taxonomic classification assignments based on multiple database matches to query sequences. Despite its higher computational costs, our method is still suitable for analyzing large-scale microbiome datasets for practical purposes. Furthermore, our method can be applied for taxonomic classification of any phylogenetic marker gene sequences. Our software, called BLCA, is freely available at https://github.com/qunfengdong/BLCA .
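
    The abstract gives the ingredients of the classifier (alignment-based similarity, posterior-weighted contributions of database hits, bootstrap confidence) but not the formulas. The sketch below is a rough stand-in for the posterior-weighted voting and bootstrap steps only; the weighting kernel is an assumption, and none of this reproduces the actual BLCA implementation or its alignment stage.

```python
# Hedged sketch of similarity-weighted voting with a bootstrap confidence score.
import numpy as np

def weighted_vote(hits, similarities, n_boot=100, rng=None):
    """hits: taxon labels of the top database matches; similarities:
    alignment identities in [0, 1]. Each hit votes with a weight that grows
    with its similarity; a bootstrap over hits yields per-taxon confidence."""
    rng = rng or np.random.default_rng(0)
    sims = np.asarray(similarities, dtype=float)
    weights = np.exp(10.0 * (sims - sims.max()))   # assumed weighting kernel
    weights /= weights.sum()
    taxa = sorted(set(hits))
    support = {t: 0 for t in taxa}
    for _ in range(n_boot):
        idx = rng.choice(len(hits), size=len(hits), replace=True)
        tally = {t: 0.0 for t in taxa}
        for i in idx:
            tally[hits[i]] += weights[i]
        support[max(tally, key=tally.get)] += 1
    return {t: support[t] / n_boot for t in taxa}

print(weighted_vote(["E. coli", "E. coli", "S. flexneri"], [0.99, 0.98, 0.93]))
```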

  17. Local tolerance testing under REACH: Accepted non-animal methods are not on equal footing with animal tests.

    PubMed

    Sauer, Ursula G; Hill, Erin H; Curren, Rodger D; Raabe, Hans A; Kolle, Susanne N; Teubner, Wera; Mehling, Annette; Landsiedel, Robert

    2016-07-01

    In general, no single non-animal method can cover the complexity of any given animal test. Therefore, fixed sets of in vitro (and in chemico) methods have been combined into testing strategies for skin and eye irritation and skin sensitisation testing, with pre-defined prediction models for substance classification. Many of these methods have been adopted as OECD test guidelines. Various testing strategies have been successfully validated in extensive in-house and inter-laboratory studies, but they have not yet received formal acceptance for substance classification. Therefore, under the European REACH Regulation, data from testing strategies can, in general, only be used in so-called weight-of-evidence approaches. While animal testing data generated under the specific REACH information requirements are per se sufficient, the sufficiency of weight-of-evidence approaches can be questioned under the REACH system, and further animal testing can be required. This constitutes an imbalance between the regulatory acceptance of data from approved non-animal methods and animal tests that is not justified on scientific grounds. To ensure that testing strategies for local tolerance testing truly serve to replace animal testing for the REACH registration 2018 deadline (when the majority of existing chemicals have to be registered), clarity on their regulatory acceptance as complete replacements is urgently required. 2016 FRAME.

  18. Space Suit Joint Torque Testing

    NASA Technical Reports Server (NTRS)

    Valish, Dana J.

    2011-01-01

    In 2009 and early 2010, a test was performed to quantify the torque required to manipulate joints in several existing operational and prototype space suits in an effort to develop joint torque requirements appropriate for a new Constellation Program space suit system. The same test method was levied on the Constellation space suit contractors to verify that their suit design meets the requirements. However, because the original test was set up and conducted by a single test operator there was some question as to whether this method was repeatable enough to be considered a standard verification method for Constellation or other future space suits. In order to validate the method itself, a representative subset of the previous test was repeated, using the same information that would be available to space suit contractors, but set up and conducted by someone not familiar with the previous test. The resultant data was compared using graphical and statistical analysis and a variance in torque values for some of the tested joints was apparent. Potential variables that could have affected the data were identified and re-testing was conducted in an attempt to eliminate these variables. The results of the retest will be used to determine if further testing and modification is necessary before the method can be validated.

  19. Patent disclosure requirements for therapeutic antibody patents.

    PubMed

    De Luca, Carmela; Trifonova, Anastassia

    2017-08-01

    Therapeutic antibodies have grown to become an important product class within the biopharmaceutical market. A prerequisite to their commercialization is adequate patent protection. Disclosure requirements and the types of claims available in different jurisdictions can impact the scope of protection available for antibodies. Areas covered: A comparative review of statutory bases, patent office practices and selected decisions in Canada, the United States and the United Kingdom related to disclosure requirements is provided. Expert opinion: Differences in disclosure requirements exist in different jurisdictions which can impact the type of claims obtained and their survival when attacked in litigation. Including a wide variety of claim types is a key strategy to ensuring therapeutic antibodies are adequately protected. Method of use claims may provide advantages and broader protection in some circumstances and should also be considered.

  20. Design considerations for divers' breathing gas systems

    NASA Technical Reports Server (NTRS)

    Hansen, O. R.

    1972-01-01

    Some of the design methods used to establish the gas storage, mixing, and transfer requirements for existing deep dive systems are discussed. Gas mixing systems appear essential to provide the low oxygen concentration mixtures within the converging tolerance range dictated by applications to increasing depths. Time-related use of gas, together with the performance of the gas transfer system, ensures a reasonable time frame for systems application.

  1. Thermal discharges and their role in pending power plant regulatory decisions

    NASA Technical Reports Server (NTRS)

    Miller, M. H.

    1978-01-01

    Federal and state laws require the imminent retrofit of offstream condenser cooling to the newer steam electric stations. A waiver can be granted based on sound experimental data demonstrating that existing once-through cooling will not adversely affect aquatic ecosystems. Conventional methods for monitoring thermal plumes, and some remote sensing alternatives, are reviewed, using ongoing work at one Maryland power plant for illustration.

  2. Technology Overview for Advanced Aircraft Armament System Program.

    DTIC Science & Technology

    1981-05-01

    availability of methods or systems for improving stores and armament safety. Of particular importance are aspects of safety involving hazards analysis ...flutter virtually insensitive to inertia and center-of-gravity location of store - simplifies and reduces analysis and testing required to flutter-clear...status. Nearly every existing reliability analysis and discipline that promised a positive return on reliability performance was drawn out, dusted

  3. Vaporous Decontamination Methods: Potential Uses and Research Priorities for Chemical and Biological Contamination Control

    DTIC Science & Technology

    2006-06-01

    Decontamination assessment of Bacillus anthracis, Bacillus subtilis, and Geobacillus stearothermophilus spores on indoor surfaces using a hydrogen...resistant to commonly used disinfectants and require the use of chemical sterilants to effectively decontaminate exposed areas. Since anthrax...spores can aerosolise, the use of vaporous sterilants in the remediation of contaminated areas is desirable. A number of vaporous sterilants exist which

  4. Best practices for assessing culvert health and determining appropriate rehabilitation methods : a research project in support of operational requirements for the South Carolina Department of Transportation : final report.

    DOT National Transportation Integrated Search

    2016-11-01

    Due to the invisibility of buried culverts from the surface, they often get ignored until a problem such as road settlement or flooding arises. Many of the existing culverts in the US are in a deteriorated state, having reached the end of their us...

  5. Molecular opacities for exoplanets.

    PubMed

    Bernath, Peter F

    2014-04-28

    Spectroscopic observations of exoplanets are now possible by transit methods and direct emission. Spectroscopic requirements for exoplanets are reviewed based on existing measurements and model predictions for hot Jupiters and super-Earths. Molecular opacities needed to simulate astronomical observations can be obtained from laboratory measurements, ab initio calculations or a combination of the two approaches. This discussion article focuses mainly on laboratory measurements of hot molecules as needed for exoplanet spectroscopy.

  6. Density estimation in wildlife surveys

    USGS Publications Warehouse

    Bart, Jonathan; Droege, Sam; Geissler, Paul E.; Peterjohn, Bruce G.; Ralph, C. John

    2004-01-01

    Several authors have recently discussed the problems with using index methods to estimate trends in population size. Some have expressed the view that index methods should virtually never be used. Others have responded by defending index methods and questioning whether better alternatives exist. We suggest that index methods are often a cost-effective component of valid wildlife monitoring but that double-sampling or another procedure that corrects for bias or establishes bounds on bias is essential. The common assertion that index methods require constant detection rates for trend estimation is mathematically incorrect; the requirement is no long-term trend in detection "ratios" (index result/parameter of interest), a requirement that is probably approximately met by many well-designed index surveys. We urge that more attention be given to defining bird density rigorously and in ways useful to managers. Once this is done, 4 sources of bias in density estimates may be distinguished: coverage, closure, surplus birds, and detection rates. Distance, double-observer, and removal methods do not reduce bias due to coverage, closure, or surplus birds. These methods may yield unbiased estimates of the number of birds present at the time of the survey, but only if their required assumptions are met, which we doubt occurs very often in practice. Double-sampling, in contrast, produces unbiased density estimates if the plots are randomly selected and estimates on the intensive surveys are unbiased. More work is needed, however, to determine the feasibility of double-sampling in different populations and habitats. We believe the tension that has developed over appropriate survey methods can best be resolved through increased appreciation of the mathematical aspects of indices, especially the effects of bias, and through studies in which candidate methods are evaluated against known numbers determined through intensive surveys.
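
    To make the abstract's central point concrete, the toy calculation below illustrates the double-sampling ratio correction it advocates: index counts from many plots are rescaled by the ratio of true to index density measured on a randomly chosen subset that also receives intensive surveys. All numbers are invented for illustration.

```python
# Hedged numeric illustration of a double-sampling ratio estimator.
import numpy as np

rng = np.random.default_rng(1)
n_plots = 200
true_density = rng.poisson(12, size=n_plots)        # birds per plot (unknown in practice)
index_count = rng.binomial(true_density, 0.6)       # index detects ~60 % of birds

intensive = rng.choice(n_plots, size=30, replace=False)   # double-sampled subset
ratio = true_density[intensive].mean() / index_count[intensive].mean()

estimate = index_count.mean() * ratio                # bias-corrected density estimate
print(f"naive index mean: {index_count.mean():.2f}, "
      f"corrected: {estimate:.2f}, truth: {true_density.mean():.2f}")
```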

  7. 3D Reconstruction from Multi-View Medical X-Ray Images - Review and Evaluation of Existing Methods

    NASA Astrophysics Data System (ADS)

    Hosseinian, S.; Arefi, H.

    2015-12-01

    The 3D concept is extremely important in clinical studies of the human body. Accurate 3D models of bony structures are currently required in clinical routine for diagnosis, patient follow-up, surgical planning, computer assisted surgery and biomechanical applications. However, conventional 3D medical imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) have serious limitations, such as use in non-weight-bearing positions, cost, and high radiation dose (for CT). Therefore, 3D reconstruction methods from biplanar X-ray images have been taken into consideration as reliable alternative methods to achieve accurate 3D models with low-dose radiation in weight-bearing positions. Different methods have been offered for 3D reconstruction from X-ray images using photogrammetry, and these should be assessed. In this paper, after demonstrating the principles of 3D reconstruction from X-ray images, different existing methods of 3D reconstruction of bony structures from radiographs are classified and evaluated with various metrics, and their advantages and disadvantages are mentioned. Finally, a comparison has been made of the presented methods with respect to several metrics such as accuracy, reconstruction time and application. Each method has advantages and disadvantages that should be considered for a specific application.

  8. Evaluating Approaches to a Coupled Model for Arctic Coastal Erosion, Infrastructure Risk, and Associated Coastal Hazards

    NASA Astrophysics Data System (ADS)

    Frederick, J. M.; Bull, D. L.; Jones, C.; Roberts, J.; Thomas, M. A.

    2016-12-01

    Arctic coastlines are receding at accelerated rates, putting existing and future activities in the developing coastal Arctic environment at extreme risk. For example, at Oliktok Long Range Radar Site, erosion that was not expected until 2040 was reached as of 2014 (Alaska Public Media). As the Arctic Ocean becomes increasingly ice-free, rates of coastal erosion will likely continue to increase as (a) increased ice-free waters generate larger waves, (b) sea levels rise, and (c) coastal permafrost soils warm and lose strength/cohesion. Due to the complex and rapidly varying nature of the Arctic region, little is known about the increasing waves, changing circulation, permafrost soil degradation, and the response of the coastline to changes in these combined conditions. However, as scientific focus has been shifting towards the polar regions, Arctic science is rapidly advancing, increasing our understanding of complex Arctic processes. Our present understanding allows us to begin to develop and evaluate the coupled models necessary for the prediction of coastal erosion in support of Arctic risk assessments. What are the best steps towards the development of a coupled model for Arctic coastal erosion? This work focuses on our current understanding of Arctic conditions and identifying the tools and methods required to develop an integrated framework capable of accurately predicting Arctic coastline erosion and assessing coastal risk and hazards. We will present a summary of the state-of-the-science, and identify existing tools and methods required to develop an integrated diagnostic and monitoring framework capable of accurately predicting and assessing Arctic coastline erosion, infrastructure risk, and coastal hazards. The summary will describe the key coastal processes to simulate, appropriate models to use, effective methods to couple existing models, and identify gaps in knowledge that require further attention to make progress in our understanding of Arctic coastal erosion. * Co-authors listed in alphabetical order. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  9. Report of Workshop on Methodology for Evaluating Potential Lunar Resources Sites

    NASA Technical Reports Server (NTRS)

    Williams, R. J.; Hubbard, N.

    1981-01-01

    The type and quantity of lunar materials needed to support a space power satellite program were used to define the type and quality of geological information required to certify a site for exploitation. The existing geological, geochemical, and geophysical data are summarized. The difference between these data and the data required for exploitation is used to define program requirements. Most of these requirements involve linear extensions of existing capabilities, fuller utilization of existing data, or expanded use of automated systems.

  10. Development of thermal control methods for specialized components and scientific instruments at very low temperatures (follow-on)

    NASA Technical Reports Server (NTRS)

    Wright, J. P.; Wilson, D. E.

    1976-01-01

    Many payloads currently proposed to be flown by the space shuttle system require long-duration cooling in the 3 to 200 K temperature range. Common requirements also exist for certain DOD payloads. Parametric design and optimization studies are reported for multistage and diode heat pipe radiator systems designed to operate in this temperature range. Also optimized are ground test systems for two long-life passive thermal control concepts operating under specified space environmental conditions. The ground test systems evaluated are ultimately intended to evolve into flight test qualification prototypes for early shuttle flights.

  11. Workmanship Challenges for NASA Mission Hardware

    NASA Technical Reports Server (NTRS)

    Plante, Jeannette

    2010-01-01

    This slide presentation reviews several challenges in workmanship for NASA mission hardware development. Several standards for NASA workmanship exist that are required for all programs, projects, contracts and subcontracts. These standards contain our best known methods for avoiding past assembly problems and defects. These best practices may not be available if suppliers are used who are not compliant with them. Compliance includes having certified operators and inspectors. Some examples of problems that have occurred from the lack of requirements flow-down to contractors are reviewed. The presentation contains a detailed example of the challenge in regard to the Packaging "Design" Dilemma.

  12. Diabat Interpolation for Polymorph Free-Energy Differences.

    PubMed

    Kamat, Kartik; Peters, Baron

    2017-02-02

    Existing methods to compute free-energy differences between polymorphs use harmonic approximations, advanced non-Boltzmann bias sampling techniques, and/or multistage free-energy perturbations. This work demonstrates how Bennett's diabat interpolation method (J. Comput. Phys. 1976, 22, 245) can be combined with energy gaps from lattice-switch Monte Carlo techniques (Phys. Rev. E 2000, 61, 906) to swiftly estimate polymorph free-energy differences. The new method requires only two unbiased molecular dynamics simulations, one for each polymorph. To illustrate the new method, we compute the free-energy difference between face-centered cubic and body-centered cubic polymorphs for a Gaussian core solid. We discuss the justification for parabolic models of the free-energy diabats and similarities to methods that have been used in studies of electron transfer.
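
    For orientation, the following is a hedged sketch of what the parabolic-diabat assumption implies, not the paper's own derivation. If the lattice-switch energy gap ΔE(x) = E_B(x) - E_A(x) is sampled in unbiased simulations of each polymorph and its distributions are Gaussian with a common width, a linear-response estimate of the free-energy difference follows from the two mean gaps:

```latex
% Sketch under the equal-curvature parabolic (Gaussian energy-gap) assumption.
\Delta F_{A\to B} \;\approx\; \tfrac{1}{2}\bigl(\langle \Delta E \rangle_A + \langle \Delta E \rangle_B\bigr),
\qquad
\sigma_A^2 \;\approx\; \sigma_B^2 \;\approx\; k_B T \bigl(\langle \Delta E \rangle_A - \langle \Delta E \rangle_B\bigr),
```

    where the second relation provides an internal consistency check on the Gaussian assumption from the sampled gap fluctuations.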

  13. MFAHP: A novel method on the performance evaluation of the industrial wireless networked control system

    NASA Astrophysics Data System (ADS)

    Wu, Linqin; Xu, Sheng; Jiang, Dezhi

    2015-12-01

    Industrial wireless networked control systems are widely used, and evaluating the performance of the wireless network is of great significance. In this paper, considering the shortcomings of existing performance evaluation methods, a comprehensive network performance evaluation method, multi-index fuzzy analytic hierarchy process (MFAHP), which combines fuzzy mathematics with the traditional analytic hierarchy process (AHP), is presented. The method overcomes the incompleteness and subjectivity of existing performance evaluations. Experiments show that the method reflects network performance under real conditions. It provides direct guidance on protocol selection, network cabling, and node placement, and can meet the requirements of different scenarios by modifying the underlying parameters.
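
    As context for the AHP part of the method (the fuzzy-membership scoring of each network index is not reproduced here), the sketch below shows the classical step of deriving criterion weights from a reciprocal pairwise-comparison matrix via its principal eigenvector. The example comparison values for three network indexes are invented for illustration.

```python
# Hedged sketch of the classical AHP weighting step that MFAHP builds on.
import numpy as np

def ahp_weights(pairwise):
    """Principal eigenvector of a reciprocal comparison matrix,
    normalized to sum to one."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    w = np.abs(w)
    return w / w.sum()

# Illustrative comparison of three network indexes: delay, packet loss,
# throughput (delay judged 3x as important as loss, 5x as throughput, ...).
P = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(np.round(ahp_weights(P), 3))
```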

  14. Laboratory test methods for combustion stability properties of solid propellants

    NASA Technical Reports Server (NTRS)

    Strand, L. D.; Brown, R. S.

    1992-01-01

    An overview is presented of experimental methods for determining the combustion-stability properties of solid propellants. The methods are generally based either on the temporal response to an initial disturbance or on external means of generating the required oscillations. The size distribution of condensed-phase combustion products is characterized by means of the experimental approaches. The 'T-burner' approach is shown to assist in the derivation of pressure-coupled driving contributions and particle damping in solid-propellant rocket motors. Other techniques examined include the rotating-valve apparatus, the impedance tube, the modulated throat-acoustic damping burner, and the magnetic flowmeter. The paper shows that experimental methods do not exist for measuring the interactions between acoustic velocity oscillations and burning propellant.

  15. Prediction of anthropometric accommodation in aircraft cockpits

    NASA Astrophysics Data System (ADS)

    Zehner, Gregory Franklin

    Designing aircraft cockpits to accommodate the wide range of body sizes existing in the U.S. population has always been a difficult problem for Crewstation Engineers. The approach taken in the design of military aircraft has been to restrict the range of body sizes allowed into flight training, and then to develop standards and specifications to ensure that the majority of the pilots are accommodated. Accommodation in this instance is defined as the ability to: (1) Adequately see, reach, and actuate controls; (2) Have external visual fields so that the pilot can see to land, clear for other aircraft, and perform a wide variety of missions (ground support/attack or air to air combat); and (3) Finally, if problems arise, the pilot has to be able to escape safely. Each of these areas is directly affected by the body size of the pilot. Unfortunately, accommodation problems persist and may get worse. Currently the USAF is considering relaxing body size entrance requirements so that smaller and larger people could become pilots. This will make existing accommodation problems much worse. This dissertation describes a methodology for correcting this problem and demonstrates the method by predicting pilot fit and performance in the USAF T-38A aircraft based on anthropometric data. The methods described can be applied to a variety of design applications where fitting the human operator into a system is a major concern. A systematic approach is described which includes: defining the user population, setting functional requirements that operators must be able to perform, testing the ability of the user population to perform the functional requirements, and developing predictive equations for selecting future users of the system. Also described is a process for the development of new anthropometric design criteria and cockpit design methods that assure body size accommodation is improved in the future.

  16. Combining existing numerical models with data assimilation using weighted least-squares finite element methods.

    PubMed

    Rajaraman, Prathish K; Manteuffel, T A; Belohlavek, M; Heys, Jeffrey J

    2017-01-01

    A new approach has been developed for combining and enhancing the results from an existing computational fluid dynamics model with experimental data using the weighted least-squares finite element method (WLSFEM). Development of the approach was motivated by the existence of both limited experimental blood velocity in the left ventricle and inexact numerical models of the same flow. Limitations of the experimental data include measurement noise and having data only along a two-dimensional plane. Most numerical modeling approaches do not provide the flexibility to assimilate noisy experimental data. We previously developed an approach that could assimilate experimental data into the process of numerically solving the Navier-Stokes equations, but the approach was limited because it required the use of specific finite element methods for solving all model equations and did not support alternative numerical approximation methods. The new approach presented here allows virtually any numerical method to be used for approximately solving the Navier-Stokes equations, and then the WLSFEM is used to combine the experimental data with the numerical solution of the model equations in a final step. The approach dynamically adjusts the influence of the experimental data on the numerical solution so that more accurate data are more closely matched by the final solution and less accurate data are not closely matched. The new approach is demonstrated on different test problems and provides significantly reduced computational costs compared with many previous methods for data assimilation. Copyright © 2016 John Wiley & Sons, Ltd.
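
    The abstract describes the weighting idea verbally; the display below is a hedged, generic sketch of the kind of functional such a weighted least-squares assimilation step minimizes, with symbols chosen here rather than taken from the paper: u is the assimilated field, u_h the existing numerical solution, N(u) the Navier-Stokes residual, d the measured velocities on the imaging plane Γ, and w(x) a weight that grows with local confidence in the data.

```latex
% Generic weighted least-squares assimilation functional (illustrative only).
\min_{u \in V_h}\;
  \bigl\| \mathcal{N}(u) \bigr\|_{0,\Omega}^{2}
  \;+\; \alpha \,\bigl\| u - u_h \bigr\|_{0,\Omega}^{2}
  \;+\; \int_{\Gamma} w(x)\,\bigl| u(x) - d(x) \bigr|^{2}\,dx
```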

  17. Evaluating business value of IT towards optimisation of the application portfolio

    NASA Astrophysics Data System (ADS)

    Sun, Lily; Liu, Kecheng; Indrayani Jambari, Dian; Michell, Vaughan

    2016-05-01

    Information technology has become heavily embedded in business operations. As business needs change over time, IT applications are expected to continue providing required support. Whether the existing IT applications are still fit for the business purpose they were intended or new IT applications should be introduced is a strategic decision for business, IT and business-aligned IT. In this article, we present a method that aims to analyse business functions and IT roles and to evaluate business-aligned IT from both social and technical perspectives. The method introduces a set of techniques that systematically supports the evaluation of the existing IT applications in relation to their technical capabilities for maximising business value. Furthermore, we discuss the evaluation process and results that are illustrated and validated through a real-life case study of a UK borough council and followed by discussion on implications for researchers and practitioners.

  18. Benzene construction via organocatalytic formal [3+3] cycloaddition reaction.

    PubMed

    Zhu, Tingshun; Zheng, Pengcheng; Mou, Chengli; Yang, Song; Song, Bao-An; Chi, Yonggui Robin

    2014-09-25

    The benzene unit, in its substituted forms, is a most common scaffold in natural products, bioactive molecules and polymer materials. Nearly 80% of the 200 best-selling small molecule drugs contain at least one benzene moiety. Not surprisingly, the synthesis of substituted benzenes receives constant attention. At present, the dominant methods use a pre-existing benzene framework to install substituents by using conventional functional group manipulations or transition metal-catalyzed carbon-hydrogen bond activations. These otherwise impressive approaches require multiple synthetic steps and are ineffective from both economic and environmental perspectives. Here we report an efficient method for the synthesis of substituted benzene molecules. Instead of relying on pre-existing aromatic rings, we construct the benzene core through a carbene-catalyzed formal [3+3] reaction. Given the simplicity and high efficiency, we expect this strategy to be of wide use, especially for large-scale preparation of biomedicals and functional materials.

  19. Certification of medical librarians, 1949--1977 statistical analysis.

    PubMed

    Schmidt, D

    1979-01-01

    The Medical Library Association's Code for Training and Certification of Medical Librarians was in effect from 1949 to August 1977, a period during which 3,216 individuals were certified. Statistics on each type of certificate granted each year are provided. Because 54.5% of those granted certification were awarded it in the last three-year, two-month period of the code's existence, these applications are reviewed in greater detail. Statistics on MLA membership, sex, residence, library school, and method of meeting requirements are detailed. Questions relating to certification under the code now in existence are raised.

  20. Certification of medical librarians, 1949--1977 statistical analysis.

    PubMed Central

    Schmidt, D

    1979-01-01

    The Medical Library Association's Code for Training and Certification of Medical Librarians was in effect from 1949 to August 1977, a period during which 3,216 individuals were certified. Statistics on each type of certificate granted each year are provided. Because 54.5% of those granted certification were awarded it in the last three-year, two-month period of the code's existence, these applications are reviewed in greater detail. Statistics on MLA membership, sex, residence, library school, and method of meeting requirements are detailed. Questions relating to certification under the code now in existence are raised. PMID:427287

  1. Methods of 14CO2, 13CO2 and 12CO2 detection in gaseous media in real time

    NASA Astrophysics Data System (ADS)

    Kireev, S. V.; Kondrashov, A. A.; Shnyrev, S. L.; Simanovsky, I. G.

    2017-10-01

    A comparative analytical review of the existing methods and techniques for measuring 13CO2 and 14CO2 mixed with 12CO2 in gases is provided. It shows that one of the most promising approaches is infrared laser spectroscopy using a frequency-tunable diode laser operating near a wavelength of 4.3 or 2 µm. Measuring near the 4.3 µm wavelength provides the most accurate results for 13CO2 and 14CO2, but requires more expensive equipment and is more complex to operate.

  2. Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding

    PubMed Central

    2013-01-01

    Background In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. Results The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. Conclusions The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. Methods 5 and 7 were the fastest and produced the least biased, the most precise, robust and stable estimates of predictive accuracy. These properties argue for routinely using Methods 5 and 7 to assess predictive accuracy in genomic selection studies. PMID:24314298

  3. Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding.

    PubMed

    Ould Estaghvirou, Sidi Boubacar; Ogutu, Joseph O; Schulz-Streeck, Torben; Knaak, Carsten; Ouzunova, Milena; Gordillo, Andres; Piepho, Hans-Peter

    2013-12-06

    In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. Methods 5 and 7 were the fastest and produced the least biased, the most precise, robust and stable estimates of predictive accuracy. These properties argue for routinely using Methods 5 and 7 to assess predictive accuracy in genomic selection studies.
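
    As a concrete illustration of the indirect estimator both versions of this abstract describe (predictive ability from cross-validation divided by the square root of heritability), the sketch below runs a ridge-type genomic prediction on simulated marker data. It is a generic stand-in, not any of the paper's specific Methods 1-7, and the shrinkage parameter, data sizes and assumed heritability are arbitrary.

```python
# Hedged sketch: indirect predictive accuracy = predictive ability / sqrt(h2).
import numpy as np

def predictive_accuracy(y, X, h2, k=5, rng=None):
    """Ridge-type genomic prediction with k-fold cross-validation."""
    rng = rng or np.random.default_rng(0)
    n = len(y)
    folds = np.array_split(rng.permutation(n), k)
    y_pred = np.empty(n)
    lam = 1.0                                   # assumed shrinkage parameter
    for test in folds:
        train = np.setdiff1d(np.arange(n), test)
        Xt, yt = X[train], y[train]
        beta = np.linalg.solve(Xt.T @ Xt + lam * np.eye(X.shape[1]), Xt.T @ yt)
        y_pred[test] = X[test] @ beta
    ability = np.corrcoef(y_pred, y)[0, 1]      # predictive ability
    return ability / np.sqrt(h2)                # indirect accuracy estimate

# Toy data: 200 genotypes, 500 markers, heritability assumed known (0.5).
rng = np.random.default_rng(42)
X = rng.choice([0.0, 1.0, 2.0], size=(200, 500))
b = rng.normal(0, 0.05, 500)
y = X @ b + rng.normal(0, np.std(X @ b), 200)   # ~50 % genetic variance
print(round(predictive_accuracy(y, X, h2=0.5), 2))
```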

  4. Ancillary study management systems: a review of needs

    PubMed Central

    2013-01-01

    Background The valuable clinical data, specimens, and assay results collected during a primary clinical trial or observational study can enable researchers to answer additional, pressing questions with relatively small investments in new measurements. However, management of such follow-on, “ancillary” studies is complex. It requires coordinating across institutions, sites, repositories, and approval boards, as well as distributing, integrating, and analyzing diverse data types. General-purpose software systems that simplify the management of ancillary studies have not yet been explored in the research literature. Methods We have identified requirements for ancillary study management primarily as part of our ongoing work with a number of large research consortia. These organizations include the Center for HIV/AIDS Vaccine Immunology (CHAVI), the Immune Tolerance Network (ITN), the HIV Vaccine Trials Network (HVTN), the U.S. Military HIV Research Program (MHRP), and the Network for Pancreatic Organ Donors with Diabetes (nPOD). We also consulted with researchers at a range of other disease research organizations regarding their workflows and data management strategies. Lastly, to enhance breadth, we reviewed process documents for ancillary study management from other organizations. Results By exploring characteristics of ancillary studies, we identify differentiating requirements and scenarios for ancillary study management systems (ASMSs). Distinguishing characteristics of ancillary studies may include the collection of additional measurements (particularly new analyses of existing specimens); the initiation of studies by investigators unaffiliated with the original study; cross-protocol data pooling and analysis; pre-existing participant consent; and pre-existing data context and provenance. For an ASMS to address these characteristics, it would need to address both operational requirements (e.g., allocating existing specimens) and data management requirements (e.g., securely distributing and integrating primary and ancillary data). Conclusions The scenarios and requirements we describe can help guide the development of systems that make conducting ancillary studies easier, less expensive, and less error-prone. Given the relatively consistent characteristics and challenges of ancillary study management, general-purpose ASMSs are likely to be useful to a wide range of organizations. Using the requirements identified in this paper, we are currently developing an open-source, general-purpose ASMS based on LabKey Server (http://www.labkey.org) in collaboration with CHAVI, the ITN and nPOD. PMID:23294514

  5. Faraday rotation data analysis with least-squares elliptical fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Adam D.; McHale, G. Brent; Goerz, David A.

    2010-10-15

    A method of analyzing Faraday rotation data from pulsed magnetic field measurements is described. The method uses direct least-squares elliptical fitting to measured data. The least-squares fit conic parameters are used to rotate, translate, and rescale the measured data. Interpretation of the transformed data provides improved accuracy and time-resolution characteristics compared with many existing methods of analyzing Faraday rotation data. The method is especially useful when linear birefringence is present at the input or output of the sensing medium, or when the relative angle of the polarizers used in analysis is not aligned with precision; under these circumstances the method is shown to return the analytically correct input signal. The method may be pertinent to other applications where analysis of Lissajous figures is required, such as the velocity interferometer system for any reflector (VISAR) diagnostics. The entire algorithm is fully automated and requires no user interaction. An example of algorithm execution is shown, using data from a fiber-based Faraday rotation sensor on a capacitive discharge experiment.
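
    The abstract names direct least-squares elliptical fitting followed by rotation, translation and rescaling of the measured data. The sketch below illustrates only the first, generic step: an algebraic least-squares conic fit via SVD and extraction of the ellipse tilt angle. It is a simplified stand-in, not the ellipse-specific constrained fit or the interpretation step of the paper, and the synthetic Lissajous-like data are invented.

```python
# Hedged sketch of an algebraic least-squares conic fit to Lissajous-like data.
import numpy as np

def fit_conic(x, y):
    """Fit a x^2 + b xy + c y^2 + d x + e y + f = 0 by taking the singular
    vector of the design matrix with the smallest singular value."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    return Vt[-1]                       # conic coefficients, up to scale

def ellipse_tilt(conic):
    a, b, c = conic[:3]
    return 0.5 * np.arctan2(b, a - c)   # axis rotation angle (modulo 90 deg)

# Synthetic tilted, noisy ellipse such as crossed-polarizer signals with
# residual birefringence might produce.
t = np.linspace(0, 2 * np.pi, 400)
x = 1.0 * np.cos(t)
y = 0.4 * np.sin(t + 0.3)
rng = np.random.default_rng(3)
x += rng.normal(0, 0.01, t.size)
y += rng.normal(0, 0.01, t.size)
print("fitted tilt (deg):", np.degrees(ellipse_tilt(fit_conic(x, y))))
```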

  6. Laser tracker orientation in confined space using on-board targets

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Kyle, Stephen; Lin, Jiarui; Yang, Linghui; Ren, Yu; Zhu, Jigui

    2016-08-01

    This paper presents a novel orientation method for two laser trackers using on-board targets attached to the tracker head and rotating with it. The technique extends an existing method developed for theodolite intersection systems which are now rarely used. This method requires only a very narrow space along the baseline between the instrument heads, in order to establish the orientation relationship. This has potential application in environments where space is restricted. The orientation parameters can be calculated by means of two-face reciprocal measurements to the on-board targets, and measurements to a common point close to the baseline. An accurate model is then applied which can be solved through nonlinear optimization. Experimental comparison has been made with the conventional orientation method, which is based on measurements to common intersection points located off the baseline. This requires more space and the comparison has demonstrated the feasibility of the more compact technique presented here. Physical setup and testing suggest that the method is practical. Uncertainties estimated by simulation indicate good performance in terms of measurement quality.

  7. Data requirements for valuing externalities: The role of existing permitting processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, A.D.; Baechler, M.C.; Callaway, J.M.

    1990-08-01

    While the assessment of externalities, or residual impacts, will place new demands on regulators, utilities, and developers, existing processes already require certain data and information that may fulfill some of the data needs for externality valuation. This paper examines existing siting, permitting, and other processes and highlights similarities and differences between their data requirements and the data required to value environmental externalities. It specifically considers existing requirements for siting new electricity resources in Oregon and compares them with the information and data needed to value externalities for such resources. This paper also presents several observations about how states can take advantage of data acquired through processes already in place as they move into an era when externalities are considered in utility decision-making. It presents other observations on the similarities and differences between the data requirements under existing processes and those for valuing externalities. This paper also briefly discusses the special case of cumulative impacts. Finally, it presents recommendations on what steps to take in future efforts to value externalities. 35 refs., 2 tabs.

  8. Estimation of crop water requirements using remote sensing for operational water resources management

    NASA Astrophysics Data System (ADS)

    Vasiliades, Lampros; Spiliotopoulos, Marios; Tzabiras, John; Loukas, Athanasios; Mylopoulos, Nikitas

    2015-06-01

    An integrated modeling system, developed in the framework of "Hydromentor" research project, is applied to evaluate crop water requirements for operational water resources management at Lake Karla watershed, Greece. The framework includes coupled components for operation of hydrotechnical projects (reservoir operation and irrigation works) and estimation of agricultural water demands at several spatial scales using remote sensing. The study area was sub-divided into irrigation zones based on land use maps derived from Landsat 5 TM images for the year 2007. Satellite-based energy balance for mapping evapotranspiration with internalized calibration (METRIC) was used to derive actual evapotranspiration (ET) and crop coefficient (ETrF) values from Landsat TM imagery. Agricultural water needs were estimated using the FAO method for each zone and each control node of the system for a number of water resources management strategies. Two operational strategies of hydro-technical project development (present situation without operation of the reservoir and future situation with the operation of the reservoir) are coupled with three water demand strategies. In total, eight (8) water management strategies are evaluated and compared. The results show that, under the existing operational water resources management strategies, the crop water requirements are quite large. However, the operation of the proposed hydro-technical projects in Lake Karla watershed coupled with water demand management measures, like improvement of existing water distribution systems, change of irrigation methods, and changes of crop cultivation could alleviate the problem and lead to sustainable and ecological use of water resources in the study area.
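    As a rough illustration of the per-zone bookkeeping behind such estimates (not the project's actual FAO implementation), net irrigation demand can be approximated as remotely sensed crop evapotranspiration minus effective rainfall, accumulated over an irrigation zone. All names and numbers below are hypothetical.

```python
import numpy as np

# Hypothetical monthly values for one irrigation zone
etrf    = np.array([0.35, 0.55, 0.80, 0.95, 0.70])   # ETrF from METRIC, per month
etr     = np.array([120., 150., 180., 190., 160.])   # reference ET, mm/month
rain_e  = np.array([40., 25., 10., 5., 15.])         # effective rainfall, mm/month
area_ha = 850.0                                       # zone area, hectares

etc = etrf * etr                                      # actual crop ET, mm/month
net_irrigation_mm = np.clip(etc - rain_e, 0.0, None)  # unmet demand per month
volume_m3 = net_irrigation_mm.sum() / 1000.0 * area_ha * 10_000
print(f"Seasonal crop water requirement: {volume_m3:,.0f} m^3")
```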

  9. 13 CFR 120.200 - What bonding requirements exist during construction?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false What bonding requirements exist during construction? 120.200 Section 120.200 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Policies Specific to 7(a) Loans Bonding Requirements § 120.200 What bonding requirements...

  10. 13 CFR 120.200 - What bonding requirements exist during construction?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false What bonding requirements exist during construction? 120.200 Section 120.200 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Policies Specific to 7(a) Loans Bonding Requirements § 120.200 What bonding requirements...

  11. 13 CFR 120.200 - What bonding requirements exist during construction?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false What bonding requirements exist during construction? 120.200 Section 120.200 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Policies Specific to 7(a) Loans Bonding Requirements § 120.200 What bonding requirements...

  12. 13 CFR 120.200 - What bonding requirements exist during construction?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false What bonding requirements exist during construction? 120.200 Section 120.200 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Policies Specific to 7(a) Loans Bonding Requirements § 120.200 What bonding requirements...

  13. 78 FR 69606 - Record Requirements in the Mechanical Power Presses Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-20

    ... press is implicit in the requirement in existing paragraph (e)(1)(i), which specifies that the employer... believes that adding an explicit requirement to perform necessary maintenance and repair will ensure that... weekly inspections and tests required by existing paragraph (e)(1)(ii) serve the following functions: (i...

  14. A Tool for the Automated Collection of Space Utilization Data: Three Dimensional Space Utilization Monitor

    NASA Technical Reports Server (NTRS)

    Vos, Gordon A.; Fink, Patrick; Ngo, Phong H.; Morency, Richard; Simon, Cory; Williams, Robert E.; Perez, Lance C.

    2017-01-01

    Space Human Factors and Habitability (SHFH) Element within the Human Research Program (HRP) and the Behavioral Health and Performance (BHP) Element are conducting research regarding Net Habitable Volume (NHV), the internal volume within a spacecraft or habitat that is available to crew for required activities, as well as layout and accommodations within the volume. NASA needs methods to unobtrusively collect NHV data without impacting crew time. Data required includes metrics such as location and orientation of crew, volume used to complete tasks, internal translation paths, flow of work, and task completion times. In less constrained environments methods exist, yet many are obtrusive and require significant post-processing. Examples used in terrestrial settings include infrared (IR) retro-reflective marker based motion capture, GPS sensor tracking, inertial tracking, and multi-camera methods. Due to the constraints of space operations, many such methods are infeasible: inertial tracking systems typically rely upon a gravity vector to normalize sensor readings, and traditional IR systems are large and require extensive calibration. However, multiple technologies have not been applied to space operations for these purposes. Two of these are 3D Radio Frequency Identification Real-Time Localization Systems (3D RFID-RTLS) and depth imaging systems which allow for 3D motion capture and volumetric scanning (such as those using IR-depth cameras like the Microsoft Kinect or Light Detection and Ranging / Light-Radar systems, referred to as LIDAR).

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buttner, William; Rivkin, Carl; Burgess, Robert

    The United Nations Global Technical Regulation (GTR) Number 13 (Global Technical Regulation on Hydrogen and Fuel Cell Vehicles) is the defining document regulating safety requirements in hydrogen vehicles, and in particular fuel cell electric vehicles (FCEV). GTR Number 13 has been formally implemented and will serve as the basis for the national regulatory standards for FCEV safety in North America (Canada, United States), Japan, Korea, and the European Union. The GTR defines safety requirements for these vehicles, including specifications on the allowable hydrogen levels in vehicle enclosures during in-use and post-crash conditions and on the allowable hydrogen emissions levels in vehicle exhaust during certain modes of normal operation. However, in order to be incorporated into national regulations, that is, in order to be binding, methods to verify compliance to the specific requirements must exist. In a collaborative program, the Sensor Laboratories at the National Renewable Energy Laboratory in the United States and the Joint Research Centre, Institute for Energy and Transport in the Netherlands have been evaluating and developing analytical methods that can be used to verify compliance to the hydrogen release requirement as specified in the GTR.

  16. A Novel Calibration-Minimum Method for Prediction of Mole Fraction in Non-Ideal Mixture.

    PubMed

    Shibayama, Shojiro; Kaneko, Hiromasa; Funatsu, Kimito

    2017-04-01

    This article proposes a novel concentration prediction model that requires little training data and is useful for rapid process understanding. Process analytical technology is currently popular, especially in the pharmaceutical industry, for enhancement of process understanding and process control. A calibration-free method, iterative optimization technology (IOT), was proposed to predict pure component concentrations, because calibration methods, such as partial least squares, require a large number of training samples, leading to high costs. However, IOT cannot be applied to concentration prediction in non-ideal mixtures because its basic equation is derived from the Beer-Lambert law, which cannot be applied to non-ideal mixtures. We propose a novel method that realizes prediction of pure component concentrations in mixtures from a small number of training samples, assuming that spectral changes arising from molecular interactions can be expressed as a function of concentration. The proposed method is named IOT with virtual molecular interaction spectra (IOT-VIS) because the method takes the spectral change into account as a virtual spectrum x_nonlin,i. It was confirmed through two case studies that the predictive accuracy of IOT-VIS was the highest among the existing IOT methods.
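    The published IOT-VIS equations are not reproduced here; the sketch below only illustrates the generic idea shared by the IOT family, expressing a measured mixture spectrum as a closure-constrained combination of pure-component spectra and solving for the mole fractions by optimization. The spectra are synthetic and the virtual interaction term is deliberately omitted.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
pure = rng.random((2, 200))                  # two synthetic pure-component spectra
x_true = np.array([0.3, 0.7])                # true mole fractions
mix = x_true @ pure + 0.002 * rng.standard_normal(200)

def residual(x):
    # Squared misfit between measured mixture spectrum and weighted pure spectra
    return np.sum((mix - x @ pure) ** 2)

res = minimize(residual, x0=[0.5, 0.5],
               bounds=[(0.0, 1.0), (0.0, 1.0)],
               constraints={"type": "eq", "fun": lambda x: x.sum() - 1.0})
print(res.x)   # estimated mole fractions, close to [0.3, 0.7]
```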

  17. SimBA: simulation algorithm to fit extant-population distributions.

    PubMed

    Parida, Laxmi; Haiminen, Niina

    2015-03-14

    Simulation of populations with specified characteristics, such as allele frequencies and linkage disequilibrium, is an integral component of many studies, including in-silico breeding optimization. Since the accuracy and sensitivity of population simulation are critical to the quality of the output of the applications that use them, accurate algorithms are required to provide a strong foundation to the methods in these studies. In this paper we present SimBA (Simulation using Best-fit Algorithm), a non-generative approach based on a combination of stochastic techniques and discrete methods. We optimize a hill climbing algorithm and extend the framework to include multiple subpopulation structures. Additionally, we show that SimBA is very sensitive to the input specifications, i.e., very similar but distinct input characteristics result in distinct outputs with high fidelity to the specified distributions. This property of the simulation is not explicitly modeled or studied by previous methods. We show that SimBA outperforms the existing population simulation methods, both in terms of accuracy and time-efficiency. Not only does it construct populations that meet the input specifications more stringently than other published methods, SimBA is also easy to use. It does not require explicit parameter adaptations or calibrations. Also, it can work with input specified as distributions, without an exemplar matrix or population as required by some methods. SimBA is available at http://researcher.ibm.com/project/5669 .
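    SimBA's actual algorithm and data structures are not described in the abstract; the toy sketch below only illustrates the hill-climbing ingredient of such best-fit simulators, nudging a binary haplotype matrix toward target allele frequencies. Targets, sizes, and the cost function are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ind, n_loci = 200, 50
target_freq = rng.uniform(0.05, 0.95, n_loci)           # desired allele frequencies
pop = (rng.random((n_ind, n_loci)) < 0.5).astype(int)   # random starting population

def cost(p):
    # Squared distance between realised and target allele frequencies
    return np.sum((p.mean(axis=0) - target_freq) ** 2)

best = cost(pop)
for _ in range(20000):                                   # simple hill climbing
    i, j = rng.integers(n_ind), rng.integers(n_loci)
    pop[i, j] ^= 1                                       # propose flipping one allele
    c = cost(pop)
    if c <= best:
        best = c
    else:
        pop[i, j] ^= 1                                   # revert worsening moves
print(best)
```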

  18. Space Suit Joint Torque Measurement Method Validation

    NASA Technical Reports Server (NTRS)

    Valish, Dana; Eversley, Karina

    2012-01-01

    In 2009 and early 2010, a test method was developed and performed to quantify the torque required to manipulate joints in several existing operational and prototype space suits. This was done in an effort to develop joint torque requirements appropriate for a new Constellation Program space suit system. The same test method was levied on the Constellation space suit contractors to verify that their suit design met the requirements. However, because the original test was set up and conducted by a single test operator there was some question as to whether this method was repeatable enough to be considered a standard verification method for Constellation or other future development programs. In order to validate the method itself, a representative subset of the previous test was repeated, using the same information that would be available to space suit contractors, but set up and conducted by someone not familiar with the previous test. The resultant data was compared using graphical and statistical analysis; the results indicated a significant variance in values reported for a subset of the re-tested joints. Potential variables that could have affected the data were identified and a third round of testing was conducted in an attempt to eliminate and/or quantify the effects of these variables. The results of the third test effort will be used to determine whether or not the proposed joint torque methodology can be applied to future space suit development contracts.

  19. Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series

    NASA Astrophysics Data System (ADS)

    Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth

    2017-12-01

    The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large data sets. Gaussian processes (GPs) are a popular class of models used for this purpose, but since the computational cost scales, in general, as the cube of the number of data points, their application has been limited to small data sets. In this paper, we present a novel method for GP modeling in one dimension where the computational requirements scale linearly with the size of the data set. We demonstrate the method by applying it to simulated and real astronomical time series data sets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically driven damped harmonic oscillators—providing a physical motivation for and interpretation of this choice—but we also demonstrate that it can be a useful effective model in some other cases. We present a mathematical description of the method and compare it to existing scalable GP methods. The method is fast and interpretable, with a range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
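    The open-source implementations mentioned in the abstract were released as the celerite packages; assuming the Python celerite interface as documented, a minimal usage sketch with a stochastically driven damped harmonic oscillator term might look like the following. Parameter values and the synthetic data are arbitrary.

```python
import numpy as np
import celerite
from celerite import terms

# Irregularly sampled synthetic time series
rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0.0, 10.0, 200))
yerr = 0.1 * np.ones_like(t)
y = np.sin(2.0 * np.pi * t / 3.0) + yerr * rng.standard_normal(t.size)

# Stochastically driven damped harmonic oscillator kernel
kernel = terms.SHOTerm(log_S0=0.0, log_Q=np.log(10.0),
                       log_omega0=np.log(2.0 * np.pi / 3.0))
gp = celerite.GP(kernel, mean=0.0)
gp.compute(t, yerr)                              # linear-cost factorization
print("log likelihood:", gp.log_likelihood(y))

mu, var = gp.predict(y, t, return_var=True)      # fast conditional prediction
```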

  20. Building large area CZT imaging detectors for a wide-field hard X-ray telescope—ProtoEXIST1

    NASA Astrophysics Data System (ADS)

    Hong, J.; Allen, B.; Grindlay, J.; Chammas, N.; Barthelemy, S.; Baker, R.; Gehrels, N.; Nelson, K. E.; Labov, S.; Collins, J.; Cook, W. R.; McLean, R.; Harrison, F.

    2009-07-01

    We have constructed a moderately large area (32 cm²), fine pixel (2.5 mm pixel, 5 mm thick) CZT imaging detector which constitutes the first section of a detector module (256 cm²) developed for a balloon-borne wide-field hard X-ray telescope, ProtoEXIST1. ProtoEXIST1 is a prototype for the High Energy Telescope (HET) in the Energetic X-ray imaging Survey Telescope (EXIST), a next generation space-borne multi-wavelength telescope. We have constructed a large (nearly gapless) detector plane through a modularization scheme by tiling a large number of 2 cm × 2 cm CZT crystals. Our innovative packaging method is ideal for many applications such as coded-aperture imaging, where a large, continuous detector plane is desirable for optimal performance. Currently we have been able to achieve an energy resolution of 3.2 keV (FWHM) at 59.6 keV on average, which is exceptional considering the moderate pixel size and the number of detectors in simultaneous operation. We expect to complete two modules (512 cm²) within the next few months as more CZT becomes available. We plan to test the performance of these detectors in a near space environment in a series of high altitude balloon flights, the first of which is scheduled for Fall 2009. These detector modules are the first in a series of progressively more sophisticated detector units and packaging schemes planned for ProtoEXIST2 & 3, which will demonstrate the technology required for the advanced CZT imaging detectors (0.6 mm pixel, 4.5 m² area) required in EXIST/HET.

  1. 40 CFR Table 2d to Subpart Zzzz of... - Requirements for Existing Stationary RICE Located at Area Sources of HAP Emissions

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... RICE Located at Area Sources of HAP Emissions 2d Table 2d to Subpart ZZZZ of Part 63 Protection of... 2d Table 2d to Subpart ZZZZ of Part 63—Requirements for Existing Stationary RICE Located at Area... requirements for existing stationary RICE located at area sources of HAP emissions: For each . . . You must...

  2. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery.

    PubMed

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-07-19

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of large area threshold prohibits detection of small buildings and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation. However, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics.
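    The full GBE pipeline is considerably richer than can be shown here; the fragment below is only a schematic of the gradient idea, rasterising LiDAR heights to a grid without interpolation and keeping cells whose local height gradient is small, as roof planes tend to be. The grid size, threshold, and point cloud are placeholders.

```python
import numpy as np

def height_image(points, cell=1.0):
    """Rasterise (x, y, z) LiDAR points to a max-height grid, no interpolation."""
    xy = points[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    H = np.full(tuple(ij.max(axis=0) + 1), np.nan)
    for (i, j), z in zip(ij, points[:, 2]):
        if np.isnan(H[i, j]) or z > H[i, j]:
            H[i, j] = z
    return H

def roof_candidates(H, grad_max=0.3):
    """Cells with a small, consistent height gradient (plane-like behaviour)."""
    gy, gx = np.gradient(np.nan_to_num(H, nan=0.0))
    grad_mag = np.hypot(gx, gy)
    return (grad_mag < grad_max) & ~np.isnan(H)

pts = np.random.default_rng(5).uniform(0, 50, (5000, 3))   # placeholder point cloud
mask = roof_candidates(height_image(pts))
print(mask.sum(), "candidate roof cells")
```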

  3. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    PubMed Central

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-01-01

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of large area threshold prohibits detection of small buildings and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation. However, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics. PMID:27447631

  4. A system level model for preliminary design of a space propulsion solid rocket motor

    NASA Astrophysics Data System (ADS)

    Schumacher, Daniel M.

    Preliminary design of space propulsion solid rocket motors entails a combination of components and subsystems. Expert design tools exist to find near optimal performance of subsystems and components. Conversely, there is no system level preliminary design process for space propulsion solid rocket motors that is capable of synthesizing customer requirements into a high utility design for the customer. The preliminary design process for space propulsion solid rocket motors typically builds on existing designs and pursues feasible rather than the most favorable design. Classical optimization is an extremely challenging method when dealing with the complex behavior of an integrated system. The complexity and the number of possible system configurations make the set of design parameters to be traded off unmanageable when manual techniques are used. Existing multi-disciplinary optimization approaches generally address estimating ratios and correlations rather than utilizing mathematical models. The developed system level model utilizes the Genetic Algorithm to perform the necessary population searches to efficiently replace the human iterations required during a typical solid rocket motor preliminary design. This research augments, automates, and increases the fidelity of the existing preliminary design process for space propulsion solid rocket motors. The system level aspect of this preliminary design process, and the ability to synthesize space propulsion solid rocket motor requirements into a near optimal design, is achievable. The process of developing the motor performance estimate and the system level model of a space propulsion solid rocket motor is described in detail. The results of this research indicate that the model is valid for use and able to manage a very large number of variable inputs and constraints towards the pursuit of the best possible design.
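    The author's motor model is not available here; the fragment below is only a stripped-down evolutionary search (selection and mutation, no crossover) over a made-up design vector, standing in for the genetic-algorithm population search described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(6)
n_pop, n_var, n_gen = 60, 6, 200

def utility(design):
    # Placeholder for the system-level motor performance estimate
    return -np.sum((design - 0.3) ** 2)

pop = rng.random((n_pop, n_var))                          # initial candidate designs
for _ in range(n_gen):
    scores = np.array([utility(d) for d in pop])
    parents = pop[np.argsort(scores)[-n_pop // 2:]]       # keep the fitter half
    children = parents[rng.integers(len(parents), size=n_pop - len(parents))].copy()
    children += 0.05 * rng.standard_normal(children.shape)   # mutation
    np.clip(children, 0.0, 1.0, out=children)
    pop = np.vstack([parents, children])

best = pop[np.argmax([utility(d) for d in pop])]
print(best.round(3))
```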

  5. Human-Automation Allocations for Current Robotic Space Operations

    NASA Technical Reports Server (NTRS)

    Marquez, Jessica J.; Chang, Mai L.; Beard, Bettina L.; Kim, Yun Kyung; Karasinski, John A.

    2018-01-01

    Within the Human Research Program, one risk delineates the uncertainty surrounding crew working with automation and robotics in spaceflight. The Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is concerned with the detrimental effects on crew performance due to ineffective user interfaces, system designs and/or functional task allocation, potentially compromising mission success and safety. Risk arises because we have limited experience with complex automation and robotics. One key gap within HARI relates to functional allocation. The gap states: We need to evaluate, develop, and validate methods and guidelines for identifying human-automation/robot task information needs, function allocation, and team composition for future long duration, long distance space missions. Allocations determine human-system performance, as they identify the functions and performance levels required by the automation/robotic system and, in turn, what work the crew is expected to perform and the necessary human performance requirements. Allocations must take into account each of the human, automation, and robotic systems' capabilities and limitations. Some functions may be intuitively assigned to the human versus the robot, but to optimize efficiency and effectiveness, purposeful role assignments will be required. The role of automation and robotics will significantly change in future exploration missions, particularly as crew becomes more autonomous from ground controllers. Thus, we must understand the suitability of existing function allocation methods within NASA as well as the existing allocations established by the few robotic systems that are operational in spaceflight. In order to evaluate future methods of robotic allocations, we must first benchmark the allocations and allocation methods that have been used. We will present 1) documentation of human-automation-robotic allocations in existing, operational spaceflight systems; and 2) existing lessons learned and best practices in these role assignments, gathered from the spaceflight operational experience of crew and ground teams, that may be used to guide development of future systems. NASA and other space agencies have operational spaceflight experience with two key Human-Automation-Robotic (HAR) systems: heavy lift robotic arms and planetary robotic explorers. Additionally, NASA has invested in high-fidelity rover systems that can carry crew, building beyond Apollo's lunar rover. The heavy lift robotic arms reviewed are: Space Station Remote Manipulator System (SSRMS), Japanese Remote Manipulator System (JEMRMS), and the European Robotic Arm (ERA, designed but not deployed in space). The robotic rover systems reviewed are: Mars Exploration Rovers, Mars Science Laboratory rover, and the high-fidelity K10 rovers. Much of the design and operational feedback for these systems has been communicated to flight controllers and robotic design teams. As part of mitigating the HARI risk for future human spaceflight operations, we must document function allocations between robots and humans that have worked well in practice.

  6. Anatomically-Aided PET Reconstruction Using the Kernel Method

    PubMed Central

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-01-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
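    For readers unfamiliar with the kernelized formulation, the toy sketch below shows the basic idea: the image is parameterized as x = Kα, with a kernel matrix K built from prior features (here random placeholders), and the coefficients α are updated with the familiar ML-EM multiplicative rule. The system matrix, kernel, and counts are all synthetic; this is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(7)
n_pix, n_bins = 64, 200
A = rng.random((n_bins, n_pix)) * 0.05            # toy system (projection) matrix
K = np.exp(-rng.random((n_pix, n_pix)))           # toy kernel from "anatomical" features
K /= K.sum(axis=1, keepdims=True)                 # row-normalised kernel
x_true = rng.random(n_pix) * 10.0
y = rng.poisson(A @ x_true)                       # simulated sinogram counts

alpha = np.ones(n_pix)                            # kernel coefficients
sens = K.T @ (A.T @ np.ones(n_bins))              # sensitivity term K^T A^T 1
for _ in range(50):                               # kernelised ML-EM iterations
    ybar = A @ (K @ alpha) + 1e-12
    alpha *= (K.T @ (A.T @ (y / ybar))) / sens
x_rec = K @ alpha                                 # reconstructed image
print(float(np.corrcoef(x_rec, x_true)[0, 1]))
```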

  7. Anatomically-aided PET reconstruction using the kernel method.

    PubMed

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  8. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  9. Numerical simulation on dimension decrease for annular casing of one centrifugal boiler circulation pump

    NASA Astrophysics Data System (ADS)

    Fan, Y. Z.; Zuo, Z. G.; Liu, S. H.; Wu, Y. L.; Sha, Y. J.

    2012-11-01

    Primary formulation derivation indicates that the dimension of one existing centrifugal boiler circulation pump casing is too large. Because considerable manufacturing cost can be saved by reducing the dimension, a numerical simulation study on dimension decrease for the annular casing of this pump, which has a specific speed of 189, is presented in this paper. The study aims at finding an appropriately smaller casing dimension while keeping hydraulic performance and strength performance essentially unchanged, in accordance with the requirements of the cooperating company. The research object is one existing centrifugal pump with a diffuser and a semi-spherical annular casing, working as the boiler circulation pump for (ultra) supercritical units in power plants. Dimension decrease, the modification method, is achieved by decreasing the existing casing's internal radius (marked as "Ri0") while keeping the wall thickness. The research analysis is based on primary formulation derivation, CFD (Computational Fluid Dynamics) simulation and FEM (Finite Element Method) simulation. Primary formulation derivation estimates that a design casing's internal radius should be less than 0.75 Ri0. CFD analysis indicates that the smaller casing with 0.75 Ri0 has worse hydraulic performance when working at large flow rates and better hydraulic performance when working at small flow rates. In consideration of hydraulic performance and dimension decrease, an appropriate casing internal radius is determined, equal to 0.875 Ri0. FEM analysis then confirms that the modified pump casing has nearly the same strength performance as the existing pump casing. It is concluded that dimension decrease can be an economical as well as a practical method for large pumps in engineering fields.

  10. Integrated calibration of multiview phase-measuring profilometry

    NASA Astrophysics Data System (ADS)

    Lee, Yeong Beum; Kim, Min H.

    2017-11-01

    Phase-measuring profilometry (PMP) measures per-pixel height information of a surface with high accuracy. Height information captured by a camera in PMP relies on its screen coordinates. Therefore, a PMP measurement from one view cannot be integrated directly with measurements from different views due to the intrinsic difference of the screen coordinates. In order to integrate multiple PMP scans, an auxiliary calibration of each camera's intrinsic and extrinsic properties is required, in addition to principal PMP calibration. This is cumbersome and often requires physical constraints in the system setup, and multiview PMP is consequently rarely practiced. In this work, we present a novel multiview PMP method that yields three-dimensional global coordinates directly so that three-dimensional measurements can be integrated easily. Our PMP calibration parameterizes the intrinsic and extrinsic properties of the configuration of both a camera and a projector simultaneously. It also does not require any geometric constraints on the setup. In addition, we propose a novel calibration target that can remain static, without requiring any mechanical operation, while conducting multiview calibrations, whereas existing calibration methods require manually changing the target's position and orientation. Our results validate the accuracy of measurements and demonstrate the advantages of our multiview PMP.

  11. Hydrogen monitoring requirements in the global technical regulation on hydrogen and fuel cell vehicles

    DOE PAGES

    Buttner, William; Rivkin, C.; Burgess, R.; ...

    2017-02-04

    Here, the United Nations Economic Commission for Europe Global Technical Regulation (GTR) Number 13 (Global Technical Regulation on Hydrogen and Fuel Cell Vehicles) is the defining document regulating safety requirements in hydrogen vehicles, and in particular, fuel cell electric vehicles (FCEVs). GTR Number 13 has been formally adopted and will serve as the basis for the national regulatory standards for FCEV safety in North America (led by the United States), Japan, Korea, and the European Union. The GTR defines safety requirements for these vehicles, including specifications on the allowable hydrogen levels in vehicle enclosures during in-use and post-crash conditions and on the allowable hydrogen emissions levels in vehicle exhaust during certain modes of normal operation. However, in order to be incorporated into national regulations, that is, to be legally binding, methods to verify compliance with the specific requirements must exist. In a collaborative program, the Sensor Laboratories at the National Renewable Energy Laboratory in the United States and the Joint Research Centre, Institute for Energy and Transport in the Netherlands have been evaluating and developing analytical methods that can be used to verify compliance with the hydrogen release requirements as specified in the GTR.

  12. NCI Workshop Report: Clinical and Computational Requirements for Correlating Imaging Phenotypes with Genomics Signatures.

    PubMed

    Colen, Rivka; Foster, Ian; Gatenby, Robert; Giger, Mary Ellen; Gillies, Robert; Gutman, David; Heller, Matthew; Jain, Rajan; Madabhushi, Anant; Madhavan, Subha; Napel, Sandy; Rao, Arvind; Saltz, Joel; Tatum, James; Verhaak, Roeland; Whitman, Gary

    2014-10-01

    The National Cancer Institute (NCI) Cancer Imaging Program organized two related workshops on June 26-27, 2013, entitled "Correlating Imaging Phenotypes with Genomics Signatures Research" and "Scalable Computational Resources as Required for Imaging-Genomics Decision Support Systems." The first workshop focused on clinical and scientific requirements, exploring our knowledge of phenotypic characteristics of cancer biological properties to determine whether the field is sufficiently advanced to correlate with imaging phenotypes that underpin genomics and clinical outcomes, and exploring new scientific methods to extract phenotypic features from medical images and relate them to genomics analyses. The second workshop focused on computational methods that explore informatics and computational requirements to extract phenotypic features from medical images and relate them to genomics analyses and improve the accessibility and speed of dissemination of existing NIH resources. These workshops linked clinical and scientific requirements of currently known phenotypic and genotypic cancer biology characteristics with imaging phenotypes that underpin genomics and clinical outcomes. The group generated a set of recommendations to NCI leadership and the research community that encourage and support development of the emerging radiogenomics research field to address short- and longer-term goals in cancer research.

  13. Improving Medical Device Regulation: The United States and Europe in Perspective

    PubMed Central

    SORENSON, CORINNA; DRUMMOND, MICHAEL

    2014-01-01

    Context: Recent debates and events have brought into question the effectiveness of existing regulatory frameworks for medical devices in the United States and Europe to ensure their performance, safety, and quality. This article provides a comparative analysis of medical device regulation in the two jurisdictions, explores current reforms to improve the existing systems, and discusses additional actions that should be considered to fully meet this aim. Medical device regulation must be improved to safeguard public health and ensure that high-quality and effective technologies reach patients. Methods: We explored and analyzed medical device regulatory systems in the United States and Europe in accordance with the available gray and peer-reviewed literature and legislative documents. Findings: The two regulatory systems differ in their mandate and orientation, organization, pre- and postmarket evidence requirements, and transparency of process. Despite these differences, both jurisdictions face similar challenges for ensuring that only safe and effective devices reach the market, monitoring real-world use, and exchanging pertinent information on devices with key users such as clinicians and patients. To address these issues, reforms have recently been introduced or debated in the United States and Europe that are principally focused on strengthening regulatory processes, enhancing postmarket regulation through more robust surveillance systems, and improving the traceability and monitoring of devices. Some changes in premarket requirements for devices are being considered. Conclusions: Although the current reforms address some of the outstanding challenges in device regulation, additional steps are needed to improve existing policy. We examine a number of actions to be considered, such as requiring high-quality evidence of benefit for medium- and high-risk devices; moving toward greater centralization and coordination of regulatory approval in Europe; creating links between device identifier systems and existing data collection tools, such as electronic health records; and fostering increased and more effective use of registries to ensure safe postmarket use of new and existing devices. PMID:24597558

  14. Hierarchical matrices implemented into the boundary integral approaches for gravity field modelling

    NASA Astrophysics Data System (ADS)

    Čunderlík, Róbert; Vipiana, Francesca

    2017-04-01

    Boundary integral approaches applied for gravity field modelling have been recently developed to solve the geodetic boundary value problems numerically, or to process satellite observations, e.g. from the GOCE satellite mission. In order to obtain numerical solutions of "cm-level" accuracy, such approaches require very refined level of the disretization or resolution. This leads to enormous memory requirements that need to be reduced. An implementation of the Hierarchical Matrices (H-matrices) can significantly reduce a numerical complexity of these approaches. A main idea of the H-matrices is based on an approximation of the entire system matrix that is split into a family of submatrices. Large submatrices are stored in factorized representation, while small submatrices are stored in standard representation. This allows reducing memory requirements significantly while improving the efficiency. The poster presents our preliminary results of implementations of the H-matrices into the existing boundary integral approaches based on the boundary element method or the method of fundamental solution.
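    As a small, self-contained illustration of the memory argument (independent of the authors' solver), an admissible off-diagonal block of a smooth kernel matrix can be stored in factorized low-rank form obtained from a truncated SVD. The kernel and tolerance below are arbitrary.

```python
import numpy as np

# Smooth kernel evaluated between two well-separated point clusters
rng = np.random.default_rng(8)
src = rng.uniform(0.0, 1.0, 300)
tgt = rng.uniform(5.0, 6.0, 300)
B = 1.0 / np.abs(tgt[:, None] - src[None, :])      # admissible off-diagonal block

U, s, Vt = np.linalg.svd(B, full_matrices=False)
rank = int(np.sum(s / s[0] > 1e-8))                # numerical rank at tolerance 1e-8
U_r = U[:, :rank] * s[:rank]                       # factorized (low-rank) storage
V_r = Vt[:rank]

print("rank:", rank)
print("entries stored (full vs. factorized):", B.size, "vs.", U_r.size + V_r.size)
print("reconstruction OK:", np.allclose(B, U_r @ V_r, atol=1e-6))
```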

  15. A Method for Constructing a New Extensible Nomenclature for Clinical Coding Practices in Sub-Saharan Africa.

    PubMed

    Van Laere, Sven; Nyssen, Marc; Verbeke, Frank

    2017-01-01

    Clinical coding is a requirement to provide valuable data for billing, epidemiology and health care resource allocation. In sub-Saharan Africa, we observe a growing awareness of the need for coding of clinical data, not only in health insurances, but also in governments and the hospitals. Presently, coding systems in sub-Saharan Africa are often used for billing purposes. In this paper we consider the use of a nomenclature to also have a clinical impact. Often coding systems are assumed to be complex and too extensive to be used in daily practice. Here, we present a method for constructing a new nomenclature based on existing coding systems by considering a minimal subset in the sub-Saharan region. Evaluation of completeness will be done nationally using the requirements of national registries. The nomenclature requires an extension character for dealing with codes that have to be used for multiple registries. Hospitals will benefit most by using this extension character.

  16. On Modeling Research Work for Describing and Filtering Scientific Information

    NASA Astrophysics Data System (ADS)

    Sicilia, Miguel-Ángel

    Existing models for Research Information Systems (RIS) properly address the description of people and organizations, projects, facilities and their outcomes, e.g. papers, reports or patents. While this is adequate for the recording and accountability of research investments, helping researchers in finding relevant people, organizations or results requires considering both the content of research work and also its context. The content is not only related to the domain area, but it requires modeling methodological issues as variables, instruments or scientific methods that can then be used as search criteria. The context of research work is determined by the ongoing projects or scientific interests of an individual or a group, and can be expressed using the same methodological concepts. However, modeling methodological issues is notably complex and dependent on the scientific discipline and research area. This paper sketches the main requirements for those models, providing some motivating examples that could serve as a point of departure for future attempts in developing an upper ontology for research methods and tools.

  17. Study on Battery Capacity for Grid-connection Power Planning with Forecasts in Clustered Photovoltaic Systems

    NASA Astrophysics Data System (ADS)

    Shimada, Takae; Kawasaki, Norihiro; Ueda, Yuzuru; Sugihara, Hiroyuki; Kurokawa, Kosuke

    This paper aims to clarify the battery capacity required by a residential area with densely grid-connected photovoltaic (PV) systems. The paper proposes a planning method for the next day's grid-connection power from/to the external electric power system, using demand power forecasting and insolation forecasting for PV power predictions, and defines an operation method of the electricity storage device to control the grid-connection power as planned. A residential area consisting of 389 houses consuming 2390 MWh/year of electricity with 2390 kW of PV systems is simulated based on measured data and actual forecasts. The simulation results show that 8.3 MWh of battery capacity is required under the conditions of half-hour planning and a planning error ratio and PV output limiting loss ratio of 1% or less. The results also show that existing forecasting technologies reduce the required battery capacity to 49%, and increase the allowable installed PV amount to 210%.
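    A deliberately simplified version of the capacity question (not the paper's planning and operation method) asks: given a planned half-hourly grid-exchange profile and the realised net load, how much energy must the battery shift to keep the exchange as planned? The profiles below are invented.

```python
import numpy as np

rng = np.random.default_rng(9)
steps = 48                                          # half-hour steps in one day
planned_grid = rng.normal(0.0, 300.0, steps)        # planned exchange, kW (forecast-based)
actual_net   = planned_grid + rng.normal(0.0, 80.0, steps)   # realised net load, kW

# The battery must absorb the mismatch so the grid sees the planned profile
mismatch_kw = actual_net - planned_grid
soc_kwh = np.cumsum(mismatch_kw) * 0.5              # state-of-charge trajectory, kWh
required_capacity = soc_kwh.max() - soc_kwh.min()
print(f"Battery capacity needed for this day: {required_capacity:.0f} kWh")
```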

  18. Cross validation of gas chromatography-flame photometric detection and gas chromatography-mass spectrometry methods for measuring dialkylphosphate metabolites of organophosphate pesticides in human urine.

    PubMed

    Prapamontol, Tippawan; Sutan, Kunrunya; Laoyang, Sompong; Hongsibsong, Surat; Lee, Grace; Yano, Yukiko; Hunter, Ronald Elton; Ryan, P Barry; Barr, Dana Boyd; Panuwet, Parinya

    2014-01-01

    We report two analytical methods for the measurement of dialkylphosphate (DAP) metabolites of organophosphate pesticides in human urine. These methods were independently developed/modified and implemented in two separate laboratories and cross validated. The aim was to develop simple, cost effective, and reliable methods that could use available resources and sample matrices in Thailand and the United States. While several methods already exist, we found that direct application of these methods required modification of sample preparation and chromatographic conditions to render accurate, reliable data. The problems encountered with existing methods were attributable to urinary matrix interferences, and differences in the pH of urine samples and reagents used during the extraction and derivatization processes. Thus, we provide information on key parameters that require attention during method modification and execution that affect the ruggedness of the methods. The methods presented here employ gas chromatography (GC) coupled with either flame photometric detection (FPD) or electron impact ionization-mass spectrometry (EI-MS) with isotopic dilution quantification. The limits of detection were reported from 0.10ng/mL urine to 2.5ng/mL urine (for GC-FPD), while the limits of quantification were reported from 0.25ng/mL urine to 2.5ng/mL urine (for GC-MS), for all six common DAP metabolites (i.e., dimethylphosphate, dimethylthiophosphate, dimethyldithiophosphate, diethylphosphate, diethylthiophosphate, and diethyldithiophosphate). Each method showed a relative recovery range of 94-119% (for GC-FPD) and 92-103% (for GC-MS), and relative standard deviations (RSD) of less than 20%. Cross-validation was performed on the same set of urine samples (n=46) collected from pregnant women residing in the agricultural areas of northern Thailand. The results from split sample analysis from both laboratories agreed well for each metabolite, suggesting that each method can produce comparable data. In addition, results from analyses of specimens from the German External Quality Assessment Scheme (G-EQUAS) suggested that the GC-FPD method produced accurate results that can be reasonably compared to other studies. Copyright © 2013 Elsevier GmbH. All rights reserved.

  19. A high resolution spatial population database of Somalia for disease risk mapping.

    PubMed

    Linard, Catherine; Alegana, Victor A; Noor, Abdisalan M; Snow, Robert W; Tatem, Andrew J

    2010-09-14

    Millions of Somali have been deprived of basic health services due to the unstable political situation of their country. Attempts are being made to reconstruct the health sector, in particular to estimate the extent of infectious disease burden. However, any approach that requires the use of modelled disease rates requires reasonable information on population distribution. In a low-income country such as Somalia, population data are lacking, are of poor quality, or become outdated rapidly. Modelling methods are therefore needed for the production of contemporary and spatially detailed population data. Here land cover information derived from satellite imagery and existing settlement point datasets were used for the spatial reallocation of populations within census units. We used simple and semi-automated methods that can be implemented with free image processing software to produce an easily updatable gridded population dataset at 100 × 100 meters spatial resolution. The 2010 population dataset was matched to administrative population totals projected by the UN. Comparison tests between the new dataset and existing population datasets revealed important differences in population size distributions, and in population at risk of malaria estimates. These differences are particularly important in more densely populated areas and strongly depend on the settlement data used in the modelling approach. The results show that it is possible to produce detailed, contemporary and easily updatable settlement and population distribution datasets of Somalia using existing data. The 2010 population dataset produced is freely available as a product of the AfriPop Project and can be downloaded from: http://www.afripop.org.
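    The AfriPop modelling chain involves several data sources, but its core reallocation step can be sketched as weighting each grid cell of a census unit by its land-cover class and distributing the unit's adjusted total proportionally. The class weights and population total below are placeholders, not AfriPop values.

```python
import numpy as np

# One census unit rasterised to 100 m cells, each cell holding a land-cover class code
landcover = np.array([[1, 1, 2, 3],
                      [1, 2, 2, 3],
                      [3, 3, 2, 1]])
weights_by_class = {1: 5.0, 2: 1.0, 3: 0.1}   # e.g. built-up, cropland, sparse
unit_total = 12_000                            # adjusted population of the census unit

w = np.vectorize(weights_by_class.get)(landcover).astype(float)
population = unit_total * w / w.sum()          # per-cell population estimates
print(population.round(0))
print(population.sum())                         # preserves the unit total
```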

  20. A high resolution spatial population database of Somalia for disease risk mapping

    PubMed Central

    2010-01-01

    Background Millions of Somali have been deprived of basic health services due to the unstable political situation of their country. Attempts are being made to reconstruct the health sector, in particular to estimate the extent of infectious disease burden. However, any approach that requires the use of modelled disease rates requires reasonable information on population distribution. In a low-income country such as Somalia, population data are lacking, are of poor quality, or become outdated rapidly. Modelling methods are therefore needed for the production of contemporary and spatially detailed population data. Results Here land cover information derived from satellite imagery and existing settlement point datasets were used for the spatial reallocation of populations within census units. We used simple and semi-automated methods that can be implemented with free image processing software to produce an easily updatable gridded population dataset at 100 × 100 meters spatial resolution. The 2010 population dataset was matched to administrative population totals projected by the UN. Comparison tests between the new dataset and existing population datasets revealed important differences in population size distributions, and in population at risk of malaria estimates. These differences are particularly important in more densely populated areas and strongly depend on the settlement data used in the modelling approach. Conclusions The results show that it is possible to produce detailed, contemporary and easily updatable settlement and population distribution datasets of Somalia using existing data. The 2010 population dataset produced is freely available as a product of the AfriPop Project and can be downloaded from: http://www.afripop.org. PMID:20840751

  1. Carbon Dioxide Observational Platform System (CO-OPS), feasibility study

    NASA Technical Reports Server (NTRS)

    Bouquet, D. L.; Hall, D. W.; Mcelveen, R. P.

    1987-01-01

    The Carbon Dioxide Observational Platform System (CO-OPS) is a near-space, geostationary, multi-user, unmanned, microwave-powered monitoring platform system. This systems engineering feasibility study addressed identified existing requirements such as carbon dioxide observational data requirements, communications requirements, and eye-in-the-sky requirements of other groups like the Defense Department, the Forestry Service, and the Coast Guard. In addition, potential applications in earth system science, space system sciences, and test and verification (satellite sensors and data management techniques) were considered. The eleven-month effort is summarized. Past work and methods of gathering the required observational data were assessed, and rough-order-of-magnitude cost estimates have shown the CO-OPS system to be most cost effective (less than $30 million within a 10-year lifetime). It was also concluded that there are no technical or schedule obstacles that would prevent achieving the objectives of the total 5-year CO-OPS program.

  2. First experiences with an accelerated CMV antigenemia test: CMV Brite Turbo assay.

    PubMed

    Visser, C E; van Zeijl, C J; de Klerk, E P; Schillizi, B M; Beersma, M F; Kroes, A C

    2000-06-01

    Cytomegalovirus disease is still a major problem in immunocompromised patients, such as bone marrow or kidney transplantation patients. The detection of viral antigen in leukocytes (antigenemia) has proven to be a clinically relevant marker of CMV activity and has found widespread application. Because most existing assays are rather time-consuming and laborious, an accelerated version (Brite Turbo) of an existing method (Brite) has been developed. The major modification is in the direct lysis of erythrocytes instead of separation by sedimentation. In this study the Brite Turbo method has been compared with the conventional Brite method to detect CMV antigen pp65 in peripheral blood leukocytes of 107 consecutive immunocompromised patients. Both tests produced similar results. Discrepancies were limited to the lowest positive range and sensitivity and specificity were comparable for both tests. Two major advantages of the Brite Turbo method could be observed in comparison to the original method: assay-time was reduced by more than 50% and only 2 ml of blood was required. An additional advantage was the higher number of positive nuclei in the Brite Turbo method attributable to the increased number of granulocytes in the assay. Early detection of CMV infection or reactivation has become faster and easier with this modified assay.

  3. A Tool for Multiple Targeted Genome Deletions that Is Precise, Scar-Free, and Suitable for Automation.

    PubMed

    Aubrey, Wayne; Riley, Michael C; Young, Michael; King, Ross D; Oliver, Stephen G; Clare, Amanda

    2015-01-01

    Many advances in synthetic biology require the removal of a large number of genomic elements from a genome. Most existing deletion methods leave behind markers, and as there are a limited number of markers, such methods can only be applied a fixed number of times. Deletion methods that recycle markers generally are either imprecise (remove untargeted sequences), or leave scar sequences which can cause genome instability and rearrangements. No existing marker recycling method is automation-friendly. We have developed a novel openly available deletion tool that consists of: 1) a method for deleting genomic elements that can be repeatedly used without limit, is precise, scar-free, and suitable for automation; and 2) software to design the method's primers. Our tool is sequence agnostic and could be used to delete large numbers of coding sequences, promoter regions, transcription factor binding sites, terminators, etc in a single genome. We have validated our tool on the deletion of non-essential open reading frames (ORFs) from S. cerevisiae. The tool is applicable to arbitrary genomes, and we provide primer sequences for the deletion of: 90% of the ORFs from the S. cerevisiae genome, 88% of the ORFs from S. pombe genome, and 85% of the ORFs from the L. lactis genome.

  4. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.

    PubMed

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-06-22

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy of a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.
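    The 51-state calibration filter is far beyond a short example, but the generic Kalman measurement update that such systematic calibration schemes iterate can be recalled in a few lines. The two-state model below (a bias and a scale-factor error observed through one measurement) is hypothetical.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One linear Kalman measurement update: state x, covariance P, measurement z."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Hypothetical two-state example: [bias, scale-factor error]
x = np.zeros(2)
P = np.eye(2)
H = np.array([[1.0, 0.5]])                  # how the states map into the observation
R = np.array([[0.01]])
x, P = kalman_update(x, P, np.array([0.12]), H, R)
print(x, np.diag(P))
```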

  5. Detection and visualization of storm hydrograph changes under urbanization: an impulse response approach.

    PubMed

    Farahmand, Touraj; Fleming, Sean W; Quilty, Edward J

    2007-10-01

    Urbanization often alters catchment storm responses, with a broad range of potentially significant environmental and engineering consequences. At a practical, site-specific management level, efficient and effective assessment and control of such downstream impacts requires a technical capability to rapidly identify development-induced storm hydrograph changes. The method should also speak specifically to alteration of internal watershed dynamics, require few resources to implement, and provide results that are intuitively accessible to all watershed stakeholders. In this short paper, we propose a potential method which might satisfy these criteria. Our emphasis lies upon the integration of existing concepts to provide tools for pragmatic, relatively low-cost environmental monitoring and management. The procedure involves calibration of rainfall-runoff time-series models in each of several successive time windows, which sample varying degrees of watershed urbanization. As implemented here, only precipitation and stream discharge or stage data are required. The readily generated unit impulse response functions of these time-series models might then provide a mathematically formal, yet visually based and intuitive, representation of changes in watershed storm response. Nominally, the empirical response functions capture such changes as soon as they occur, and the assessments of storm hydrograph alteration are independent of variability in meteorological forcing. We provide a preliminary example of how the technique may be applied using a low-order linear ARX model. The technique may offer a fresh perspective on such watershed management issues, and potentially also several advantages over existing approaches. Substantial further testing is required before attempting to apply the concept as a practical environmental management technique; some possible directions for additional work are suggested.
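    As a rough illustration of the procedure described above (not the authors' implementation; the model orders, variable names and least-squares fitting choice are assumptions), a low-order linear ARX model can be fitted to precipitation and discharge series in each time window and its unit impulse response simulated:

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares fit of y[t] = sum a_i*y[t-i] + sum b_j*u[t-j]
    (y: discharge or stage series, u: precipitation series)."""
    n = max(na, nb)
    rows, targets = [], []
    for t in range(n, len(y)):
        past_y = [y[t - i] for i in range(1, na + 1)]
        past_u = [u[t - j] for j in range(1, nb + 1)]
        rows.append(past_y + past_u)
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]          # AR and exogenous coefficients

def impulse_response(a, b, length=50):
    """Simulate the response to a unit pulse of precipitation."""
    u = np.zeros(length); u[0] = 1.0
    y = np.zeros(length)
    for t in range(length):
        y[t] = sum(a[i] * y[t - 1 - i] for i in range(len(a)) if t - 1 - i >= 0) \
             + sum(b[j] * u[t - 1 - j] for j in range(len(b)) if t - 1 - j >= 0)
    return y
```

    Comparing the impulse responses obtained from successive calibration windows then gives the visual, intuitive representation of storm-response change described in the abstract.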

  6. "Media-On-Demand" multimedia electronic mail: A tool for collaboration on the web

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsoi, Kei Nam; Rahman, S.M.

    1996-12-31

    Undoubtedly, multimedia electronic mail has many advantages for exchanging information electronically in collaborative work. The existing design of e-mail system architectures is inefficient for exchanging multimedia messages, which have much larger volume and require more bandwidth and storage space than text-only messages. This paper presents an innovative method for exchanging multimedia mail messages in a heterogeneous environment to support collaborative work over the WWW on the Internet. We propose a "Parcel Collection" approach for exchanging multimedia electronic mail messages, which integrates current WWW technologies with existing electronic mail systems.

  7. 40 CFR 63.6604 - What fuel requirements must I meet if I own or operate an existing stationary CI RICE?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... I own or operate an existing stationary CI RICE? 63.6604 Section 63.6604 Protection of Environment....6604 What fuel requirements must I meet if I own or operate an existing stationary CI RICE? If you own or operate an existing non-emergency, non-black start CI stationary RICE with a site rating of more...

  8. How to feed environmental studies with soil information to address SDG 'Zero hunger'

    NASA Astrophysics Data System (ADS)

    Hendriks, Chantal; Stoorvogel, Jetse; Claessens, Lieven

    2017-04-01

    As pledged by UN Sustainable Development Goal (SDG) 2, there should be zero hunger, food security, improved food nutrition and sustainable agriculture by 2030. Environmental studies are essential to reach SDG 2. Soils play a crucial role, especially in addressing 'Zero hunger'. This study aims to discuss the connection between the supply and demand of soil data for environmental studies and how it can be improved, illustrating different methods. As many studies are resource constrained, the options to collect new soil data are limited. Therefore, it is essential to use existing soil information, auxiliary data and collected field data efficiently. Existing soil data are criticised in the literature as i) being dominantly qualitative, ii) being often outdated, iii) being not spatially exhaustive, iv) being only available at general scales, v) being inconsistent, and vi) lacking quality assessments. Additional field data can help to overcome some of these problems. Outdated maps can, for example, be improved by collecting additional soil data in areas where changes in soil properties are expected. Existing soil data can also provide insight into the expected soil variability and, as such, these data can be used for the design of sampling schemes. Existing soil data are also crucial input for studies on digital soil mapping because they give information on parent material and the relative age of soils. Digital soil mapping is commonly applied as an efficient method to quantitatively predict the spatial variation of soil properties. However, the efficiency of digital soil mapping may increase if we look at functional soil properties (e.g. nutrient availability, available water capacity) for the soil profile that vary in a two-dimensional space rather than at basic soil properties of individual soil layers (e.g. texture, organic matter content, nitrogen content) that vary in a three-dimensional space. Digital soil mapping techniques are based on statistical relations between soil properties and environmental variables. However, in some cases a more mechanistic approach, based on pedological knowledge, might be more convincing to predict soil properties. This study showed that the soil science community is able to provide the required soil information for environmental studies. However, there is not a single solution that provides the required soil data. Case studies are needed to prove that certain methods meet the data requirements, after which these case studies can serve as a lighthouse for other studies. We illustrate data availability and methodological innovations for a case study in Kenya, where the CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS) aims to contribute to SDG 2.
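    As a minimal sketch of the statistical (non-mechanistic) digital soil mapping approach mentioned above, assuming scikit-learn is available and using hypothetical file names and a random forest as one commonly used learner (not necessarily the method used in the Kenya case study):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# X: environmental covariates at sampled locations (elevation, slope, NDVI, ...)
# y: measured soil property at those locations (e.g., available water capacity)
X = np.loadtxt("covariates_at_samples.csv", delimiter=",")   # hypothetical files
y = np.loadtxt("soil_property_at_samples.csv", delimiter=",")

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X, y)

# Predict the soil property on a regular grid of covariates to produce the map
X_grid = np.loadtxt("covariates_on_grid.csv", delimiter=",")
soil_map = model.predict(X_grid)
```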

  9. Molecular opacities for exoplanets

    PubMed Central

    Bernath, Peter F.

    2014-01-01

    Spectroscopic observations of exoplanets are now possible by transit methods and direct emission. Spectroscopic requirements for exoplanets are reviewed based on existing measurements and model predictions for hot Jupiters and super-Earths. Molecular opacities needed to simulate astronomical observations can be obtained from laboratory measurements, ab initio calculations or a combination of the two approaches. This discussion article focuses mainly on laboratory measurements of hot molecules as needed for exoplanet spectroscopy. PMID:24664921

  10. Recycling used lubricating oil at the deep space stations

    NASA Technical Reports Server (NTRS)

    Koh, J. L.

    1981-01-01

    A comparison is made of the lubricating oil recycling methods used in the Deep Space Station 43 test and the basic requirements which could favor recycling of oil for continuous reuse. The basic conditions for successful recycling are compared to the conditions that exist in the Deep Space Network (DSN). This comparison shows that to recycle used oil in the DSN would not only be expensive but also nonproductive.

  11. Succession planning and leadership development: critical business strategies for healthcare organizations.

    PubMed

    Collins, Sandra K; Collins, Kevin S

    2007-01-01

    As labor shortages intensify, succession planning and leadership development have become strategic initiatives requiring rigorous consideration. Traditional methods of replacing personnel will not accommodate the vacancies expected to plague healthcare organizations. Managers should focus on identifying potential gaps of key personnel and adapting programs to accommodate organizational need. Attention should be placed on capturing the intellectual capital existent in the organization and developing diverse groups of leadership candidates.

  12. Systematic Refinement of a Health Information Technology Time and Motion Workflow Instrument for Inpatient Nursing Care using a Standardized Interface Terminology

    PubMed Central

    Zhang, Yi; Monsen, Karen A; Adam, Terrence J; Pieczkiewicz, David S; Daman, Megan; Melton, Genevieve B

    2011-01-01

    Time and motion (T&M) studies provide an objective method to measure the expenditure of time by clinicians. While some instruments for T&M studies have been designed to evaluate health information technology (HIT), these instruments have not been designed for nursing workflow. We took an existing open source HIT T&M study application designed to evaluate physicians in the ambulatory setting, rationally adapted it through empiric observations to record nursing activities in the inpatient setting, and linked this instrument to an existing interface terminology, the Omaha System. Nursing activities involved several dimensions and could include multiple activities occurring simultaneously, requiring significant instrument redesign. 94% of the activities from the study instrument mapped adequately to the Omaha System. T&M study instruments require customization in design to optimize them for different environments, such as inpatient nursing, to enable optimal data collection. Interface terminologies show promise as a framework for recording and analyzing T&M study data. PMID:22195228

  13. Applications of automatic differentiation in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Carle, A.; Bischof, C.; Haigler, Kara J.; Newman, Perry A.

    1994-01-01

    Automatic differentiation (AD) is a powerful computational method that provides for computing exact sensitivity derivatives (SD) from existing computer programs for multidisciplinary design optimization (MDO) or in sensitivity analysis. A pre-compiler AD tool for FORTRAN programs called ADIFOR has been developed. The ADIFOR tool has been easily and quickly applied by NASA Langley researchers to assess the feasibility and computational impact of AD in MDO with several different FORTRAN programs. These include a state-of-the-art three dimensional multigrid Navier-Stokes flow solver for wings or aircraft configurations in transonic turbulent flow. With ADIFOR the user specifies sets of independent and dependent variables within an existing computer code. ADIFOR then traces the dependency path throughout the code, applies the chain rule to formulate derivative expressions, and generates new code to compute the required SD matrix. The resulting codes have been verified to compute exact non-geometric and geometric SD for a variety of cases, in less time than is required to compute the SD matrix using centered divided differences.
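    ADIFOR itself is a source-transformation tool that generates new Fortran derivative code. As a compact illustration of the chain-rule propagation it automates (not ADIFOR's actual mechanism; the class and function names below are hypothetical), the following sketch implements forward-mode automatic differentiation with dual numbers in Python:

```python
import math

class Dual:
    """Value together with its derivative with respect to one chosen input."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def sin(x):
    # chain rule: d/dx sin(u) = cos(u) * du/dx
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# d/dx [x*sin(x) + 3x] at x = 2, seeded with derivative 1
x = Dual(2.0, 1.0)
f = x * sin(x) + 3 * x
print(f.val, f.der)   # function value and exact sensitivity derivative
```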

  14. A practical model for pressure probe system response estimation (with review of existing models)

    NASA Astrophysics Data System (ADS)

    Hall, B. F.; Povey, T.

    2018-04-01

    The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.

  15. Inter-Coder Agreement in One-to-Many Classification: Fuzzy Kappa.

    PubMed

    Kirilenko, Andrei P; Stepchenkova, Svetlana

    2016-01-01

    Content analysis involves classification of textual, visual, or audio data. The inter-coder agreement is estimated by having two or more coders classify the same data units, with subsequent comparison of their results. The existing methods of agreement estimation, e.g., Cohen's kappa, require that coders place each unit of content into one and only one category (one-to-one coding) from the pre-established set of categories. However, in certain data domains (e.g., maps, photographs, databases of texts and images), this requirement seems overly restrictive. The restriction could be lifted, provided that there is a measure to calculate the inter-coder agreement in the one-to-many protocol. Building on the existing approaches to one-to-many coding in geography and biomedicine, such a measure, fuzzy kappa, an extension of Cohen's kappa, is proposed. It is argued that the measure is especially compatible with data from certain domains, when holistic reasoning of human coders is utilized in order to describe the data and assess the meaning of communication.
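    For orientation, a minimal sketch of standard Cohen's kappa for one-to-one coding is shown below, together with an illustrative partial-agreement score for one-to-many coding based on set overlap. The Jaccard overlap used here is a stand-in assumption, not the authors' fuzzy kappa definition, which additionally corrects the fuzzy agreement for chance:

```python
from collections import Counter

def cohens_kappa(codes1, codes2):
    """Cohen's kappa for one-to-one coding (one category per unit)."""
    n = len(codes1)
    observed = sum(a == b for a, b in zip(codes1, codes2)) / n
    p1, p2 = Counter(codes1), Counter(codes2)
    expected = sum(p1[c] / n * p2[c] / n for c in set(p1) | set(p2))
    return (observed - expected) / (1 - expected)

def mean_set_agreement(sets1, sets2):
    """Illustrative partial agreement for one-to-many coding:
    mean Jaccard overlap of the category sets assigned by the two coders."""
    overlaps = [len(a & b) / len(a | b) for a, b in zip(sets1, sets2)]
    return sum(overlaps) / len(overlaps)
```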

  16. SEQADAPT: an adaptable system for the tracking, storage and analysis of high throughput sequencing experiments.

    PubMed

    Burdick, David B; Cavnor, Chris C; Handcock, Jeremy; Killcoyne, Sarah; Lin, Jake; Marzolf, Bruz; Ramsey, Stephen A; Rovira, Hector; Bressler, Ryan; Shmulevich, Ilya; Boyle, John

    2010-07-14

    High throughput sequencing has become an increasingly important tool for biological research. However, the existing software systems for managing and processing these data have not provided the flexible infrastructure that research requires. Existing software solutions provide static and well-established algorithms in a restrictive package. However as high throughput sequencing is a rapidly evolving field, such static approaches lack the ability to readily adopt the latest advances and techniques which are often required by researchers. We have used a loosely coupled, service-oriented infrastructure to develop SeqAdapt. This system streamlines data management and allows for rapid integration of novel algorithms. Our approach also allows computational biologists to focus on developing and applying new methods instead of writing boilerplate infrastructure code. The system is based around the Addama service architecture and is available at our website as a demonstration web application, an installable single download and as a collection of individual customizable services.

  17. SEQADAPT: an adaptable system for the tracking, storage and analysis of high throughput sequencing experiments

    PubMed Central

    2010-01-01

    Background High throughput sequencing has become an increasingly important tool for biological research. However, the existing software systems for managing and processing these data have not provided the flexible infrastructure that research requires. Results Existing software solutions provide static and well-established algorithms in a restrictive package. However as high throughput sequencing is a rapidly evolving field, such static approaches lack the ability to readily adopt the latest advances and techniques which are often required by researchers. We have used a loosely coupled, service-oriented infrastructure to develop SeqAdapt. This system streamlines data management and allows for rapid integration of novel algorithms. Our approach also allows computational biologists to focus on developing and applying new methods instead of writing boilerplate infrastructure code. Conclusion The system is based around the Addama service architecture and is available at our website as a demonstration web application, an installable single download and as a collection of individual customizable services. PMID:20630057

  18. Modern contact investigation methods for enhancing tuberculosis control in aboriginal communities.

    PubMed

    Cook, Victoria J; Shah, Lena; Gardy, Jennifer

    2012-05-25

    The Aboriginal communities in Canada are challenged by a disproportionate burden of TB infection and disease. Contact investigation (CI) guidelines exist, but these strategies do not take into account the unique social structure of different populations. Because of the limitations of traditional CI, new approaches are under investigation and include the use of social network analysis, geographic information systems and genomics, in addition to the widespread use of genotyping to better understand TB transmission. Guidelines for the routine use of network methods and other novel methodologies for TB CI and outbreak investigation do not exist despite the gathering evidence that these approaches can positively impact TB control efforts, even in Aboriginal communities. The feasibility and efficacy of these novel approaches to CI in Aboriginal communities require further investigation. The successful integration of these novel methodologies will require community involvement, capacity building and ongoing support at every level. The outcome will not only be the systematic collection, analysis, and interpretation of CI data in high-burden communities to assess transmission but the prioritization of contacts who are candidates for treatment of LTBI which will break the cycle of transmission. Ultimately, the measure of success will be a clear and sustained decline in TB incidence in Aboriginal communities.

  19. Costs of Limiting Route Optimization to Published Waypoints in the Traffic Aware Planner

    NASA Technical Reports Server (NTRS)

    Karr, David A.; Vivona, Robert A.; Wing, David J.

    2013-01-01

    The Traffic Aware Planner (TAP) is an airborne advisory tool that generates optimized, traffic-avoiding routes to support the aircraft crew in making strategic reroute requests to Air Traffic Control (ATC). TAP is derived from a research-prototype self-separation tool, the Autonomous Operations Planner (AOP), in which optimized route modifications that avoid conflicts with traffic and weather, using waypoints at explicit latitudes and longitudes (a technique supported by self-separation concepts), are generated by maneuver patterns applied to the existing route. For use in current-day operations in which trajectory changes must be requested from ATC via voice communication, TAP produces optimized routes described by advisories that use only published waypoints prior to a reconnection waypoint on the existing route. We describe how the relevant algorithms of AOP have been modified to implement this requirement. The modifications include techniques for finding appropriate published waypoints in a maneuver pattern and a method for combining the genetic algorithm of AOP with an exhaustive search of certain types of advisory. We demonstrate methods to investigate the increased computation required by these techniques and to estimate other costs (measured in terms such as time to destination and fuel burned) that may be incurred when only published waypoints are used.

  20. Spectral unmixing of urban land cover using a generic library approach

    NASA Astrophysics Data System (ADS)

    Degerickx, Jeroen; Iordache, Marian-Daniel; Okujeni, Akpona; Hermy, Martin; van der Linden, Sebastian; Somers, Ben

    2016-10-01

    Remote sensing based land cover classification in urban areas generally requires the use of subpixel classification algorithms to take into account the high spatial heterogeneity. These spectral unmixing techniques often rely on spectral libraries, i.e. collections of pure material spectra (endmembers, EM), which ideally cover the large EM variability typically present in urban scenes. Despite the advent of several (semi-) automated EM detection algorithms, the collection of such image-specific libraries remains a tedious and time-consuming task. As an alternative, we suggest the use of a generic urban EM library, containing material spectra under varying conditions, acquired from different locations and sensors. This approach requires an efficient EM selection technique, capable of only selecting those spectra relevant for a specific image. In this paper, we evaluate and compare the potential of different existing library pruning algorithms (Iterative Endmember Selection and MUSIC) using simulated hyperspectral (APEX) data of the Brussels metropolitan area. In addition, we develop a new hybrid EM selection method which is shown to be highly efficient in dealing with both image-specific and generic libraries, subsequently yielding more robust land cover classification results compared to existing methods. Future research will include further optimization of the proposed algorithm and additional tests on both simulated and real hyperspectral data.
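    As a rough sketch of the library pruning idea (a greedy selection loop in the spirit of Iterative Endmember Selection, not the hybrid method proposed in the paper; the array shapes and the non-negative least-squares unmixing step are assumptions):

```python
import numpy as np
from scipy.optimize import nnls

def greedy_prune(library, pixels, n_select=10):
    """library: (bands, n_spectra) generic endmember library
    pixels:  (bands, n_pixels)  image spectra to be explained."""
    selected = []
    remaining = list(range(library.shape[1]))
    for _ in range(n_select):
        best, best_err = None, np.inf
        for j in remaining:
            E = library[:, selected + [j]]
            # total non-negative least-squares reconstruction error over the image
            err = sum(nnls(E, pixels[:, p])[1] for p in range(pixels.shape[1]))
            if err < best_err:
                best, best_err = j, err
        selected.append(best)
        remaining.remove(best)
    return selected   # indices of the pruned, image-relevant library
```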

  1. Conceptual design of industrial process displays.

    PubMed

    Pedersen, C R; Lind, M

    1999-11-01

    Today, process displays used in industry are often designed on the basis of piping and instrumentation diagrams without any method of ensuring that the needs of the operators are fulfilled. Therefore, a method for a systematic approach to the design of process displays is needed. This paper discusses aspects of process display design taking into account both the designer's and the operator's points of view. Three aspects are emphasized: the operator tasks, the display content and the display form. The distinction between these three aspects is the basis for proposing an outline for a display design method that matches the industrial practice of modular plant design and satisfies the needs of reusability of display design solutions. The main considerations in display design in the industry are to specify the operator's activities in detail, to extract the information the operators need from the plant design specification and documentation, and finally to present this information. The form of the display is selected from existing standardized display elements such as trend curves, mimic diagrams, ecological interfaces, etc. Further knowledge is required to invent new display elements. That is, knowledge about basic visual means of presenting information and how humans perceive and interpret these means and combinations. This knowledge is required in the systematic selection of graphical items for a given display content. The industrial part of the method is first illustrated in the paper by a simple example from a plant with batch processes. Later the method is applied to develop a supervisory display for a condenser system in a nuclear power plant. The differences between the continuous plant domain of power production and the batch processes from the example are analysed and broad categories of display types are proposed. The problems involved in specification and invention of a supervisory display are analysed and conclusions from these problems are made. It is concluded that the design method proposed provides a framework for the progress of the display design and is useful in pin-pointing the actual problems. The method was useful in reducing the number of existing displays that could fulfil the requirements of the supervision task. The method provided at the same time a framework for dealing with the problems involved in inventing new displays based on structured analysis. However the problems in a systematic approach to display invention still need consideration.

  2. Experimental Verification of a Dynamic Voltage Restorer Capable of Significantly Reducing an Energy-Storage Element

    NASA Astrophysics Data System (ADS)

    Jimichi, Takushi; Fujita, Hideaki; Akagi, Hirofumi

    This paper deals with a dynamic voltage restorer (DVR) characterized by installing the shunt converter at the load side. The DVR can compensate for the load voltage when a voltage sag appears in the supply voltage. An existing DVR requires a large capacitor bank or other energy-storage elements such as double-layer capacitors or batteries. The DVR presented in this paper requires only a small dc capacitor intended for smoothing the dc-link voltage. Moreover, three control methods for the series converter are compared and discussed to reduce the series-converter rating, paying attention to the zero-sequence voltages included in the supply voltage and the compensating voltage. Experimental results obtained from a 200-V, 5-kW laboratory system are shown to verify the viability of the system configuration and the control methods.

  3. PDB_REDO: automated re-refinement of X-ray structure models in the PDB.

    PubMed

    Joosten, Robbie P; Salzemann, Jean; Bloch, Vincent; Stockinger, Heinz; Berglund, Ann-Charlott; Blanchet, Christophe; Bongcam-Rudloff, Erik; Combet, Christophe; Da Costa, Ana L; Deleage, Gilbert; Diarena, Matteo; Fabbretti, Roberto; Fettahi, Géraldine; Flegel, Volker; Gisel, Andreas; Kasam, Vinod; Kervinen, Timo; Korpelainen, Eija; Mattila, Kimmo; Pagni, Marco; Reichstadt, Matthieu; Breton, Vincent; Tickle, Ian J; Vriend, Gert

    2009-06-01

    Structural biology, homology modelling and rational drug design require accurate three-dimensional macromolecular coordinates. However, the coordinates in the Protein Data Bank (PDB) have not all been obtained using the latest experimental and computational methods. In this study a method is presented for automated re-refinement of existing structure models in the PDB. A large-scale benchmark with 16 807 PDB entries showed that they can be improved in terms of fit to the deposited experimental X-ray data as well as in terms of geometric quality. The re-refinement protocol uses TLS models to describe concerted atom movement. The resulting structure models are made available through the PDB_REDO databank (http://www.cmbi.ru.nl/pdb_redo/). Grid computing techniques were used to overcome the computational requirements of this endeavour.

  4. Diet optimization methods can help translate dietary guidelines into a cancer prevention food plan.

    PubMed

    Masset, Gabriel; Monsivais, Pablo; Maillot, Matthieu; Darmon, Nicole; Drewnowski, Adam

    2009-08-01

    Mathematical diet optimization models are used to create food plans that best resemble current eating habits while meeting prespecified nutrition and cost constraints. This study used linear programming to generate food plans meeting the key 2007 dietary recommendations issued by the World Cancer Research Fund/American Institute of Cancer Research (WCRF/AICR). The models were constructed to minimize deviations in food intake between the observed and the WCRF/AICR-recommended diets. Consumption constraints were imposed to prevent food plans from including unreasonable amounts of food from a single group. Consumption norms for nutrients and food groups were taken from dietary intake data for a sample of adult men and women (n = 161) in the Pacific Northwest. Food plans meeting the WCRF/AICR dietary guidelines numbers 3-5 and 7 were lower in refined grains and higher in vegetables and fruits than the existing diets. For this group, achieving cancer prevention goals required little modification of existing diets and had minimal impact on diet quality and cost. By contrast, the need to meet all nutritional needs through diet alone (guideline no. 8) required a large food volume increase and dramatic shifts from the observed food intake patterns. Putting dietary guidelines into practice may require the creation of detailed food plans that are sensitive to existing consumption patterns and food costs. Optimization models provide an elegant mathematical solution that can help determine whether sets of dietary guidelines are achievable by diverse U.S. population subgroups.
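    A minimal sketch of such a linear program, using scipy.optimize.linprog with a purely illustrative nutrient matrix and bounds (not the WCRF/AICR constraint set), minimizes the total absolute deviation from observed intakes subject to nutrient lower bounds and per-food consumption limits:

```python
import numpy as np
from scipy.optimize import linprog

n_foods = 4
x_obs = np.array([150., 80., 200., 30.])        # observed intakes (g/day), illustrative
N = np.array([[0.02, 0.10, 0.01, 0.15],         # nutrient content per gram, illustrative
              [0.30, 0.05, 0.20, 0.00]])
nutrient_min = np.array([20., 60.])             # required daily nutrient amounts
food_max = np.array([400., 400., 400., 400.])   # consumption constraints per food

# Variables: [x (food amounts), d (deviations)]; minimize the sum of deviations
c = np.concatenate([np.zeros(n_foods), np.ones(n_foods)])
I = np.eye(n_foods)
A_ub = np.block([[ I, -I],                      #  x - d <= x_obs
                 [-I, -I],                      # -x - d <= -x_obs  (so |x - x_obs| <= d)
                 [-N, np.zeros_like(N)]])       # N x >= nutrient_min
b_ub = np.concatenate([x_obs, -x_obs, -nutrient_min])
bounds = [(0, m) for m in food_max] + [(0, None)] * n_foods

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x[:n_foods])   # optimized food plan closest to the current diet
```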

  5. Compliance with the Aerospace MACT Standard at Lockheed Martin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurucz, K.L.; Vicars, S.; Fetter, S.

    1997-12-31

    Actions taken and planned at four Lockheed Martin Corporation (LMC) facilities to comply with the Aerospace MACT Standard are reviewed. Many LMC sites have taken proactive steps to reduce emissions and implement low VOC coating technology. Significant administrative, facility, and material challenges remain to achieve compliance with the upcoming NESHAP and Control Technology Guideline (CTG) standards. The facilities discussed herein set up programs to develop and implement compliance strategies. These facilities manufacture military aircraft, missiles, satellites, rockets, and electronic guidance and communications systems. Some of the facilities are gearing up for new production lines subject to new source MACT standards. At this time the facilities are reviewing compliance status of all primers, topcoats, maskants and solvents subject to the standard. Facility personnel are searching for the most efficient methods of satisfying the recordkeeping, reporting and monitoring sections of the standards while simultaneously preparing or reviewing their Title V permit applications. Facility decisions on paint booths are the next highest priority. Existing dry filter paint booths will be subject to the filtration standard for existing paint booths which requires the use of two-stage filters. Planned paint booths for the F-22 program, and other new booths, must comply with the standard for new and rebuilt booths which requires three stage or HEPA filters. Facilities looking to replace existing water wash paint booths, and those required to retrofit the air handling equipment to accommodate the two-stage filters, are reviewing issues surrounding the rebuilt source definition.

  6. 48 CFR 27.405-4 - Other existing data.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... is required to be included in contracts substantially for on-line data base services in the same form... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Other existing data. 27... CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Data and Copyrights 27.405-4 Other existing...

  7. 48 CFR 27.405-4 - Other existing data.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... is required to be included in contracts substantially for on-line data base services in the same form... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Other existing data. 27... CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Data and Copyrights 27.405-4 Other existing...

  8. 48 CFR 27.405-4 - Other existing data.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... is required to be included in contracts substantially for on-line data base services in the same form... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Other existing data. 27... CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Data and Copyrights 27.405-4 Other existing...

  9. 48 CFR 27.405-4 - Other existing data.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... is required to be included in contracts substantially for on-line data base services in the same form... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Other existing data. 27... CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Data and Copyrights 27.405-4 Other existing...

  10. 48 CFR 27.405-4 - Other existing data.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... is required to be included in contracts substantially for on-line data base services in the same form... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Other existing data. 27... CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Data and Copyrights 27.405-4 Other existing...

  11. Estimating Capacity Requirements for Mental Health Services After a Disaster Has Occurred: A Call for New Data

    PubMed Central

    Siegel, Carole E.; Laska, Eugene; Meisner, Morris

    2004-01-01

    Objectives. We sought to estimate the extended mental health service capacity requirements of persons affected by the September 11, 2001, terrorist attacks. Methods. We developed a formula to estimate the extended mental health service capacity requirements following disaster situations and assessed availability of the information required by the formula. Results. Sparse data exist on current services and supports used by people with mental health problems outside of the formal mental health specialty sector. There also are few systematically collected data on mental health sequelae of disasters. Conclusions. We recommend research-based surveys to understand service usage in non–mental health settings and suggest that federal guidelines be established to promote uniform data collection of a core set of items in studies carried out after disasters. PMID:15054009

  12. Interior noise control prediction study for high-speed propeller-driven aircraft

    NASA Technical Reports Server (NTRS)

    Rennison, D. C.; Wilby, J. F.; Marsh, A. H.; Wilby, E. G.

    1979-01-01

    An analytical model was developed to predict the noise levels inside propeller-driven aircraft during cruise at M = 0.8. The model was applied to three study aircraft with fuselages of different size (wide body, narrow body and small diameter) in order to determine the noise reductions required to achieve the goal of an A-weighted sound level which does not exceed 80 dB. The model was then used to determine noise control methods which could achieve the required noise reductions. Two classes of noise control treatments were investigated: add-on treatments which can be added to existing structures, and advanced concepts which would require changes to the fuselage primary structure. Only one treatment, a double wall with limp panel, provided the required noise reductions. Weight penalties associated with the treatment were estimated for the three study aircraft.

  13. High-contrast imaging in multi-star systems: progress in technology development and lab results

    NASA Astrophysics Data System (ADS)

    Belikov, Ruslan; Pluzhnik, Eugene; Bendek, Eduardo; Sirbu, Dan

    2017-09-01

    We present the continued progress and laboratory results advancing the technology readiness of Multi-Star Wavefront Control (MSWC), a method to directly image planets and disks in multi-star systems such as Alpha Centauri. This method works with almost any coronagraph (or external occulter with a DM) and requires little or no change to existing and mature hardware. In particular, it works with single-star coronagraphs and does not require the off-axis star(s) to be coronagraphically suppressed. Because of the ubiquity of multistar systems, this method increases the science yield of many missions and concepts such as WFIRST, Exo-C/S, HabEx, LUVOIR, and potentially enables the detection of Earthlike planets (if they exist) around our nearest neighbor star, Alpha Centauri, with a small and low-cost space telescope such as ACESat. Our lab demonstrations were conducted at the Ames Coronagraph Experiment (ACE) laboratory and show both the feasibility as well as the trade-offs involved in using MSWC. We show several simulations and laboratory tests at roughly TRL-3 corresponding to representative targets and missions, including Alpha Centauri with WFIRST. In particular, we demonstrate MSWC in Super-Nyquist mode, where the distance between the desired dark zone and the off-axis star is larger than the conventional (sub-Nyquist) control range of the DM. Our laboratory tests did not yet include a coronagraph, but did demonstrate significant speckle suppression from two independent light sources at sub- as well as super-Nyquist separations.

  14. A sequential test for assessing observed agreement between raters.

    PubMed

    Bersimis, Sotiris; Sachlas, Athanasios; Chakraborti, Subha

    2018-01-01

    Assessing the agreement between two or more raters is an important topic in medical practice. Existing techniques, which deal with categorical data, are based on contingency tables. This is often an obstacle in practice as we have to wait for a long time to collect the appropriate sample size of subjects to construct the contingency table. In this paper, we introduce a nonparametric sequential test for assessing agreement, which can be applied as data accrues, does not require a contingency table, facilitating a rapid assessment of the agreement. The proposed test is based on the cumulative sum of the number of disagreements between the two raters and a suitable statistic representing the waiting time until the cumulative sum exceeds a predefined threshold. We treat the cases of testing two raters' agreement with respect to one or more characteristics and using two or more classification categories, the case where the two raters extremely disagree, and finally the case of testing more than two raters' agreement. The numerical investigation shows that the proposed test has excellent performance. Compared to the existing methods, the proposed method appears to require significantly smaller sample size with equivalent power. Moreover, the proposed method is easily generalizable and brings the problem of assessing the agreement between two or more raters and one or more characteristics under a unified framework, thus providing an easy to use tool to medical practitioners. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
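    The basic bookkeeping of the test can be sketched as follows; the threshold and the decision based on the waiting-time distribution are placeholders here, since the actual critical values come from the distribution derived in the paper:

```python
def sequential_disagreement_monitor(ratings, threshold=5):
    """ratings: iterable of (rater1_code, rater2_code) pairs, processed as data accrue.
    Returns the waiting time (number of subjects) until the cumulative number of
    disagreements exceeds `threshold`, or None if the threshold is never crossed."""
    disagreements = 0
    for n, (a, b) in enumerate(ratings, start=1):
        if a != b:
            disagreements += 1
        if disagreements > threshold:
            return n        # a short waiting time is evidence of poor agreement
    return None

# Illustrative use: compare the observed waiting time against the distribution
# expected under an acceptable agreement level (obtained analytically or by simulation).
```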

  15. Single-Image Distance Measurement by a Smart Mobile Device.

    PubMed

    Chen, Shangwen; Fang, Xianyong; Shen, Jianbing; Wang, Linbo; Shao, Ling

    2017-12-01

    Existing distance measurement methods either require multiple images and special photographing poses or only measure the height with a special view configuration. We propose a novel image-based method that can measure various types of distance from a single image captured by a smart mobile device. The embedded accelerometer is used to determine the view orientation of the device. Consequently, pixels can be back-projected to the ground, thanks to the efficient calibration method using two known distances. The distance in pixels is then transformed to a real distance in centimeters with a linear model parameterized by the magnification ratio. Various types of distance specified in the image can be computed accordingly. Experimental results demonstrate the effectiveness of the proposed method.
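    A minimal sketch of the back-projection geometry, assuming a pinhole camera whose pitch is read from the accelerometer and whose height above the ground is known; the function and symbol names are illustrative, and the two-known-distance calibration is reduced to a simple linear correction:

```python
import math

def ground_distance(v_pixel, v0, focal_px, pitch_rad, camera_height_m):
    """Back-project an image row onto the ground plane.

    v_pixel         : image row of the ground point (pixels, down positive)
    v0              : principal point row
    focal_px        : focal length in pixels
    pitch_rad       : camera pitch below the horizontal, from the accelerometer
    camera_height_m : height of the camera above the ground
    """
    # angle of the pixel ray below the horizon
    angle = pitch_rad + math.atan2(v_pixel - v0, focal_px)
    return camera_height_m / math.tan(angle)

# Two reference points with known real distances give a linear correction
# (a simple stand-in for the two-distance calibration described above):
#   d_real = k * d_estimated + b, with k and b solved from the two references.
```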

  16. An efficient temporal database design method based on EER

    NASA Astrophysics Data System (ADS)

    Liu, Zhi; Huang, Jiping; Miao, Hua

    2007-12-01

    Many existing methods of modeling temporal information are based on the logical model, which makes relational schema optimization more difficult and more complicated. In this paper, based on the conventional EER model, the authors attempt to analyse and abstract temporal information in the conceptual modelling phase according to the concrete requirements for historical information. A temporal data model named BTEER is then presented. BTEER not only retains all the design ideas and methods of EER, which gives BTEER good upward compatibility, but also effectively supports the modelling of valid time and transaction time. In addition, BTEER can be transformed to EER easily and automatically. Practice shows that this method models temporal information well.

  17. Photogrammetry and Videogrammetry Methods Development for Solar Sail Structures. Masters Thesis awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S. (Technical Monitor); Black, Jonathan T.

    2003-01-01

    This report discusses the development and application of metrology methods called photogrammetry and videogrammetry that make accurate measurements from photographs. These methods have been adapted for the static and dynamic characterization of gossamer structures, as four specific solar sail applications demonstrate. The applications prove that high-resolution, full-field, non-contact static measurements of solar sails using dot projection photogrammetry are possible as well as full-field, non-contact, dynamic characterization using dot projection videogrammetry. The accuracy of the measurement of the resonant frequencies and operating deflection shapes that were extracted surpassed expectations. While other non-contact measurement methods exist, they are not full-field and require significantly more time to take data.

  18. Method of Testing Oxygen Regulators

    NASA Technical Reports Server (NTRS)

    Sontag, Harcourt; Borlik, E L

    1935-01-01

    Oxygen regulators are used in aircraft to regulate automatically the flow of oxygen to the pilot from a cylinder at pressures ranging up to 150 atmospheres. The instruments are adjusted to open at an altitude of about 15,000 ft. and thereafter to deliver oxygen at a rate which increases with the altitude. The instruments are tested to determine the rate of flow of oxygen delivered at various altitudes and to detect any mechanical defects which may exist. A method of testing oxygen regulators was desired in which the rate of flow could be determined more accurately than by the test method previously used (reference 1) and by which instruments defective mechanically could be detected. The new method of test fulfills these requirements.

  19. Why are Formal Methods Not Used More Widely?

    NASA Technical Reports Server (NTRS)

    Knight, John C.; DeJong, Colleen L.; Gibble, Matthew S.; Nakano, Luis G.

    1997-01-01

    Despite extensive development over many years and significant demonstrated benefits, formal methods remain poorly accepted by industrial practitioners. Many reasons have been suggested for this situation, such as claims that they lengthen the development cycle, that they require difficult mathematics, that existing tools are inadequate, and that they are incompatible with other software packages. There is little empirical evidence that any of these reasons is valid. The research presented here addresses the question of why formal methods are not used more widely. The approach used was to develop a formal specification for a safety-critical application using several specification notations and assess the results in a comprehensive evaluation framework. The results of the experiment suggest that there remain many impediments to the routine use of formal methods.

  20. Bound-preserving Legendre-WENO finite volume schemes using nonlinear mapping

    NASA Astrophysics Data System (ADS)

    Smith, Timothy; Pantano, Carlos

    2017-11-01

    We present a new method to enforce field bounds in high-order Legendre-WENO finite volume schemes. The strategy consists of reconstructing each field through an intermediate mapping, which by design satisfies realizability constraints. Determination of the coefficients of the polynomial reconstruction involves nonlinear equations that are solved using Newton's method. The selection between the original or mapped reconstruction is implemented dynamically to minimize computational cost. The method has also been generalized to fields that exhibit interdependencies, requiring multi-dimensional mappings. Further, the method does not depend on the existence of a numerical flux function. We will discuss details of the proposed scheme and show results for systems in conservation and non-conservation form. This work was funded by the NSF under Grant DMS 1318161.

  1. A Spatiotemporal Aggregation Query Method Using Multi-Thread Parallel Technique Based on Regional Division

    NASA Astrophysics Data System (ADS)

    Liao, S.; Chen, L.; Li, J.; Xiong, W.; Wu, Q.

    2015-07-01

    Existing spatiotemporal databases support spatiotemporal aggregation queries over massive moving-object datasets. Due to the large amounts of data and the single-thread processing method, the query speed cannot meet application requirements. On the other hand, query efficiency is more sensitive to spatial variation than to temporal variation. In this paper, we propose a spatiotemporal aggregation query method using a multi-thread parallel technique based on regional division and implement it on the server. Concretely, we divide the spatiotemporal domain into several spatiotemporal cubes, compute the spatiotemporal aggregation on all cubes using multi-thread parallel processing, and then integrate the query results. Testing and analysis on real datasets show that this method improves query speed significantly.
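    A minimal sketch of the divide/parallelize/merge pattern described above, using Python's concurrent.futures (the real system operates inside a spatiotemporal database on a server; the cube size, aggregation function and names below are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

def cube_of(point, cell=1.0):
    """Assign an (x, y, t) sample to a spatiotemporal cube."""
    x, y, t = point[:3]
    return (int(x // cell), int(y // cell), int(t // cell))

def aggregate_cube(points):
    """Aggregate one cube; here we simply count the samples falling in it."""
    return len(points)

def parallel_aggregate(points, workers=4):
    cubes = defaultdict(list)
    for p in points:                      # regional division step
        cubes[cube_of(p)].append(p)
    # For CPU-bound aggregation in CPython, a process pool avoids the GIL;
    # a thread pool is used here to keep the sketch self-contained.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(aggregate_cube, cubes.values()))
    # integrate the per-cube results into the final answer
    return dict(zip(cubes.keys(), partials))
```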

  2. Susceptibility Testing of Medically Important Parasites.

    PubMed

    Genetu Bayih, Abebe; Debnath, Anjan; Mitre, Edward; Huston, Christopher D; Laleu, Benoît; Leroy, Didier; Blasco, Benjamin; Campo, Brice; Wells, Timothy N C; Willis, Paul A; Sjö, Peter; Van Voorhis, Wesley C; Pillai, Dylan R

    2017-07-01

    In the last 2 decades, renewed attention to neglected tropical diseases (NTDs) has spurred the development of antiparasitic agents, especially in light of emerging drug resistance. The need for new drugs has required in vitro screening methods using parasite culture. Furthermore, clinical laboratories sought to correlate in vitro susceptibility methods with treatment outcomes, most notably with malaria. Parasites with their various life cycles present greater complexity than bacteria, for which standardized susceptibility methods exist. This review catalogs the state-of-the-art methodologies used to evaluate the effects of drugs on key human parasites from the point of view of drug discovery as well as the need for laboratory methods that correlate with clinical outcomes. Copyright © 2017 American Society for Microbiology.

  3. THE DYNAMICS OF MERGING CLUSTERS: A MONTE CARLO SOLUTION APPLIED TO THE BULLET AND MUSKET BALL CLUSTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, William A., E-mail: wadawson@ucdavis.edu

    2013-08-01

    Merging galaxy clusters have become one of the most important probes of dark matter, providing evidence for dark matter over modified gravity and even constraints on the dark matter self-interaction cross-section. To properly constrain the dark matter cross-section it is necessary to understand the dynamics of the merger, as the inferred cross-section is a function of both the velocity of the collision and the observed time since collision. While the best understanding of merging system dynamics comes from N-body simulations, these are computationally intensive and often explore only a limited volume of the merger phase space allowed by observed parameter uncertainty. Simple analytic models exist but the assumptions of these methods invalidate their results near the collision time, plus error propagation of the highly correlated merger parameters is infeasible. To address these weaknesses I develop a Monte Carlo method to discern the properties of dissociative mergers and propagate the uncertainty of the measured cluster parameters in an accurate and Bayesian manner. I introduce this method, verify it against an existing hydrodynamic N-body simulation, and apply it to two known dissociative mergers: 1ES 0657-558 (Bullet Cluster) and DLSCL J0916.2+2951 (Musket Ball Cluster). I find that this method surpasses existing analytic models, providing accurate (10% level) dynamic parameter and uncertainty estimates throughout the merger history. This, coupled with minimal required a priori information (subcluster mass, redshift, and projected separation) and relatively fast computation (~6 CPU hours), makes this method ideal for large samples of dissociative merging clusters.
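    The essence of the approach is Monte Carlo propagation of the measured inputs through a dynamical model of the merger. The sketch below uses a crude free-fall relation as a placeholder for the actual two-body model, and purely illustrative input distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
G = 4.301e-9   # gravitational constant in Mpc (km/s)^2 / Msun

# Draw the observed inputs from their (illustrative) measurement distributions
m1 = rng.normal(1.5e15, 0.3e15, N)   # main subcluster mass   [Msun]
m2 = rng.normal(2.0e14, 0.5e14, N)   # second subcluster mass [Msun]
d  = rng.normal(0.7, 0.1, N)         # projected separation   [Mpc]
d0 = 3.0                             # assumed initial separation [Mpc], placeholder

# Crude stand-in dynamical model: free-fall speed from rest at d0 down to d
v = np.sqrt(2.0 * G * (m1 + m2) * (1.0 / d - 1.0 / d0))

lo, med, hi = np.percentile(v, [16, 50, 84])
print(f"collision speed ~ {med:.0f} (+{hi - med:.0f}/-{med - lo:.0f}) km/s")
```

    Reading percentiles off the Monte Carlo sample is what propagates the correlated measurement uncertainties into the dynamical quantities of interest.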

  4. Addressing unmeasured confounding in comparative observational research.

    PubMed

    Zhang, Xiang; Faries, Douglas E; Li, Hu; Stamey, James D; Imbens, Guido W

    2018-04-01

    Observational pharmacoepidemiological studies can provide valuable information on the effectiveness or safety of interventions in the real world, but one major challenge is the existence of unmeasured confounder(s). While many analytical methods have been developed for dealing with this challenge, they appear under-utilized, perhaps due to the complexity and varied requirements for implementation. Thus, there is an unmet need to improve understanding the appropriate course of action to address unmeasured confounding under a variety of research scenarios. We implemented a stepwise search strategy to find articles discussing the assessment of unmeasured confounding in electronic literature databases. Identified publications were reviewed and characterized by the applicable research settings and information requirements required for implementing each method. We further used this information to develop a best practice recommendation to help guide the selection of appropriate analytical methods for assessing the potential impact of unmeasured confounding. Over 100 papers were reviewed, and 15 methods were identified. We used a flowchart to illustrate the best practice recommendation which was driven by 2 critical components: (1) availability of information on the unmeasured confounders; and (2) goals of the unmeasured confounding assessment. Key factors for implementation of each method were summarized in a checklist to provide further assistance to researchers for implementing these methods. When assessing comparative effectiveness or safety in observational research, the impact of unmeasured confounding should not be ignored. Instead, we suggest quantitatively evaluating the impact of unmeasured confounding and provided a best practice recommendation for selecting appropriate analytical methods. Copyright © 2018 John Wiley & Sons, Ltd.

  5. Earthquake mechanisms from linear-programming inversion of seismic-wave amplitude ratios

    USGS Publications Warehouse

    Julian, B.R.; Foulger, G.R.

    1996-01-01

    The amplitudes of radiated seismic waves contain far more information about earthquake source mechanisms than do first-motion polarities, but amplitudes are severely distorted by the effects of heterogeneity in the Earth. This distortion can be reduced greatly by using the ratios of amplitudes of appropriately chosen seismic phases, rather than simple amplitudes, but existing methods for inverting amplitude ratios are severely nonlinear and require computationally intensive searching methods to ensure that solutions are globally optimal. Searching methods are particularly costly if general (moment tensor) mechanisms are allowed. Efficient linear-programming methods, which do not suffer from these problems, have previously been applied to inverting polarities and wave amplitudes. We extend these methods to amplitude ratios, in which formulation an inequality constraint for an amplitude ratio takes the same mathematical form as a polarity observation. Three-component digital data for an earthquake at the Hengill-Grensdalur geothermal area in southwestern Iceland illustrate the power of the method. Polarities of P, SH, and SV waves, unusually well distributed on the focal sphere, cannot distinguish between diverse mechanisms, including a double couple. Amplitude ratios, on the other hand, clearly rule out the double-couple solution and require a large explosive isotropic component.
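    The key observation is that, because each modeled amplitude is linear in the moment tensor m, a bounded amplitude ratio yields linear inequality constraints of the same form as polarity constraints, so a feasible mechanism can be sought by linear programming. The sketch below, with randomly generated hypothetical excitation coefficients and ratio bounds (not the paper's data or exact formulation), maximizes the common constraint margin:

```python
import numpy as np
from scipy.optimize import linprog

# Each observation i has excitation coefficients a_i with amplitude_i = a_i @ m,
# m being the 6-component moment tensor.
# Polarity (sign known):           +/- a_i @ m >= margin
# Amplitude ratio r_lo <= A1/A2 <= r_hi (with A2 > 0):
#   (a1 - r_lo*a2) @ m >= margin   and   (r_hi*a2 - a1) @ m >= margin

rng = np.random.default_rng(1)
rows = [rng.normal(size=6) for _ in range(8)]   # hypothetical polarity constraints
a1, a2 = rng.normal(size=6), rng.normal(size=6)
r_lo, r_hi = 0.8, 1.2                           # hypothetical observed ratio bounds
rows.append(a1 - r_lo * a2)
rows.append(r_hi * a2 - a1)
G = np.array(rows)

# Variables: [m (6 components), margin]; maximize the margin subject to G m >= margin,
# with box bounds on m fixing the scale (a mechanism is only defined up to a scalar).
c = np.zeros(7); c[6] = -1.0
A_ub = np.hstack([-G, np.ones((len(G), 1))])    # -G m + margin <= 0
b_ub = np.zeros(len(G))
bounds = [(-1.0, 1.0)] * 6 + [(0.0, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
m_hat, margin = res.x[:6], res.x[6]             # a zero margin flags inconsistent data
```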

  6. A Hidden Markov Model Approach for Simultaneously Estimating Local Ancestry and Admixture Time Using Next Generation Sequence Data in Samples of Arbitrary Ploidy

    PubMed Central

    Nielsen, Rasmus

    2017-01-01

    Admixture—the mixing of genomes from divergent populations—is increasingly appreciated as a central process in evolution. To characterize and quantify patterns of admixture across the genome, a number of methods have been developed for local ancestry inference. However, existing approaches have a number of shortcomings. First, all local ancestry inference methods require some prior assumption about the expected ancestry tract lengths. Second, existing methods generally require genotypes, which is not feasible to obtain for many next-generation sequencing projects. Third, many methods assume samples are diploid; however, a wide variety of sequencing applications will fail to meet this assumption. To address these issues, we introduce a novel hidden Markov model for estimating local ancestry that models the read pileup data, rather than genotypes, is generalized to arbitrary ploidy, and can estimate the time since admixture during local ancestry inference. We demonstrate that our method can simultaneously estimate the time since admixture and local ancestry with good accuracy, and that it performs well on samples of high ploidy—i.e. 100 or more chromosomes. As this method is very general, we expect it will be useful for local ancestry inference in a wider variety of populations than what previously has been possible. We then applied our method to pooled sequencing data derived from populations of Drosophila melanogaster on an ancestry cline on the east coast of North America. We find that local recombination rates are negatively correlated with the proportion of African ancestry, suggesting that selection against foreign ancestry is the least efficient in low recombination regions. Finally, we show that clinal outlier loci are enriched for genes associated with gene regulatory functions, consistent with a role of regulatory evolution in ecological adaptation of admixed D. melanogaster populations. Our results illustrate the potential of local ancestry inference for elucidating fundamental evolutionary processes. PMID:28045893
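    As a simplified illustration of the model class, the sketch below runs a scaled forward algorithm for a two-ancestry HMM with binomial read-count emissions and a switch probability tied to the time since admixture. This is an assumption-laden reduction of the paper's arbitrary-ploidy emission model and equal-proportion transition structure, intended only to show how the likelihood over the time since admixture is computed:

```python
import numpy as np
from scipy.stats import binom

def forward_loglik(alt, depth, freqs, t_admix, recomb, init=None):
    """Scaled forward algorithm for a simple two-ancestry HMM.

    alt, depth : (T,) arrays of alternate-read counts and total depths per site
    freqs      : (T, 2) ancestry-specific alternate-allele frequencies
    t_admix    : generations since admixture (sets expected tract lengths)
    recomb     : (T-1,) recombination fractions between adjacent sites
    """
    T = len(alt)
    emit = binom.pmf(alt[:, None], depth[:, None], freqs)   # (T, 2) likelihoods
    alpha = (init if init is not None else np.array([0.5, 0.5])) * emit[0]
    s = alpha.sum(); loglik = np.log(s); alpha /= s
    for t in range(1, T):
        switch = 1.0 - np.exp(-t_admix * recomb[t - 1])     # P(ancestry switch)
        trans = np.array([[1 - switch, switch],
                          [switch, 1 - switch]])
        alpha = (alpha @ trans) * emit[t]
        s = alpha.sum(); loglik += np.log(s); alpha /= s
    return loglik   # maximize over t_admix to estimate the time since admixture
```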

  7. A rapid method for preparation of the cerebrospinal fluid proteome.

    PubMed

    Larssen, Eivind; Brede, Cato; Hjelle, Anne Bjørnstad; Øysaed, Kjell Birger; Tjensvoll, Anne Bolette; Omdal, Roald; Ruoff, Peter

    2015-01-01

    The cerebrospinal fluid (CSF) proteome is of great interest for investigation of diseases and conditions involving the CNS. However, the presence of high-abundance proteins (HAPs) can interfere with the detection of low-abundance proteins, potentially hindering the discovery of new biomarkers. Therefore, an assessment of the CSF subproteome composition requires depletion strategies. Existing methods are time consuming, often involving multistep protocols. Here, we present a rapid, accurate, and reproducible method for preparing the CSF proteome, which allows the identification of a high number of proteins. This method involves acetonitrile (ACN) precipitation for depleting HAPs, followed by immediate trypsination. As an example, we demonstrate that this method allows discrimination between multiple sclerosis patients and healthy subjects. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. 40 CFR 63.6604 - What fuel requirements must I meet if I own or operate an existing stationary CI RICE?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... I own or operate an existing stationary CI RICE? 63.6604 Section 63.6604 Protection of Environment....6604 What fuel requirements must I meet if I own or operate an existing stationary CI RICE? If you own or operate an existing non-emergency CI stationary RICE with a site rating of more than 300 brake HP...

  9. Analysis of high aspect ratio jet flap wings of arbitrary geometry.

    NASA Technical Reports Server (NTRS)

    Lissaman, P. B. S.

    1973-01-01

    Paper presents a design technique for rapidly computing lift, induced drag, and spanwise loading of unswept jet flap wings of arbitrary thickness, chord, twist, blowing, and jet angle, including discontinuities. Linear theory is used, extending Spence's method for elliptically loaded jet flap wings. Curves for uniformly blown rectangular wings are presented for direct performance estimation. Arbitrary planforms require a simple computer program. Method of reducing wing to equivalent stretched, twisted, unblown planform for hand calculation is also given. Results correlate with limited existing data, and show lifting line theory is reasonable down to aspect ratios of 5.

  10. In-memory integration of existing software components for parallel adaptive unstructured mesh workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Cameron W.; Granzow, Brian; Diamond, Gerrett

    Unstructured mesh methods, like finite elements and finite volumes, support the effective analysis of complex physical behaviors modeled by partial differential equations over general three-dimensional domains. The most reliable and efficient methods apply adaptive procedures with a-posteriori error estimators that indicate where and how the mesh is to be modified. Although adaptive meshes can have two to three orders of magnitude fewer elements than a more uniform mesh for the same level of accuracy, there are many complex simulations where the meshes required are so large that they can only be solved on massively parallel systems.
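
    The refine-where-the-estimator-is-large loop that adaptive procedures rely on can be shown in one dimension. The sketch below is a generic illustration of estimator-driven refinement, where the indicator is simply the deviation of each interval's midpoint value from linear interpolation; it is not the in-memory parallel workflow this record describes.

        def refine_1d(nodes, f, frac=0.5, passes=3):
            """Split every interval whose error indicator exceeds `frac`
            of the current maximum indicator."""
            for _ in range(passes):
                mids = [(a + b) / 2 for a, b in zip(nodes, nodes[1:])]
                ind = [abs(f(m) - 0.5 * (f(a) + f(b)))      # midpoint vs. linear interpolation
                       for m, a, b in zip(mids, nodes, nodes[1:])]
                cut = frac * max(ind)
                new_nodes = [nodes[0]]
                for b, m, e in zip(nodes[1:], mids, ind):
                    if e > cut:
                        new_nodes.append(m)                  # refine this interval
                    new_nodes.append(b)
                nodes = new_nodes
            return nodes

        # Example: the mesh concentrates around the kink of |x|
        print(refine_1d([-1.0, -0.4, 0.6, 1.0], abs))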

  11. In-memory integration of existing software components for parallel adaptive unstructured mesh workflows

    DOE PAGES

    Smith, Cameron W.; Granzow, Brian; Diamond, Gerrett; ...

    2017-01-01

    Unstructured mesh methods, like finite elements and finite volumes, support the effective analysis of complex physical behaviors modeled by partial differential equations over general three-dimensional domains. The most reliable and efficient methods apply adaptive procedures with a-posteriori error estimators that indicate where and how the mesh is to be modified. Although adaptive meshes can have two to three orders of magnitude fewer elements than a more uniform mesh for the same level of accuracy, there are many complex simulations where the meshes required are so large that they can only be solved on massively parallel systems.

  12. An Electron/Photon/Relaxation Data Library for MCNP6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, III, H. Grady

    The capabilities of the MCNP6 Monte Carlo code in simulation of electron transport, photon transport, and atomic relaxation have recently been significantly expanded. The enhancements include not only the extension of existing data and methods to lower energies, but also the introduction of new categories of data and methods. Support of these new capabilities has required major additions to and redesign of the associated data tables. In this paper we present the first complete documentation of the contents and format of the new electron-photon-relaxation data library now available with the initial production release of MCNP6.

  13. Program for the solution of multipoint boundary value problems of quasilinear differential equations

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Linear equations are solved by a method of superposition of solutions of a sequence of initial value problems. For nonlinear equations and/or boundary conditions, the solution is iterative, and in each iteration a problem like the linear case is solved. A simple Taylor series expansion is used for the linearization of both nonlinear equations and nonlinear boundary conditions. The perturbation method of solution is used in preference to quasilinearization because of programming ease and smaller storage requirements; experiments indicate that the desired convergence properties exist, although no proof of convergence is given.
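
    For the linear case, the superposition idea amounts to solving one particular and one homogeneous initial value problem and combining them so that the far boundary condition is satisfied. Below is a minimal sketch for a two-point problem y'' = p(x) y' + q(x) y + r(x), y(a) = A, y(b) = B, using SciPy's initial value solver; the function names and interface are illustrative only, not the program described in this record.

        import numpy as np
        from scipy.integrate import solve_ivp

        def solve_linear_bvp(p, q, r, a, b, A, B, n=200):
            """Two-point BVP y'' = p(x)*y' + q(x)*y + r(x), y(a)=A, y(b)=B,
            by superposition: y = y_part + c*y_hom, with c chosen so y(b)=B."""
            xs = np.linspace(a, b, n)
            def rhs(x, y, homogeneous):
                forcing = 0.0 if homogeneous else r(x)
                return [y[1], p(x) * y[1] + q(x) * y[0] + forcing]
            y_part = solve_ivp(rhs, (a, b), [A, 0.0], t_eval=xs, args=(False,)).y[0]
            y_hom = solve_ivp(rhs, (a, b), [0.0, 1.0], t_eval=xs, args=(True,)).y[0]
            c = (B - y_part[-1]) / y_hom[-1]        # enforce the far boundary condition
            return xs, y_part + c * y_hom

        # Example: y'' = -y, y(0) = 0, y(pi/2) = 1  (exact solution: sin x)
        xs, y = solve_linear_bvp(lambda x: 0.0, lambda x: -1.0, lambda x: 0.0,
                                 0.0, np.pi / 2, 0.0, 1.0)
        print(abs(y[100] - np.sin(xs[100])))        # small integration error

    Nonlinear equations or boundary conditions would wrap this step in an outer iteration, re-linearizing about the current iterate with a Taylor expansion, as the abstract describes.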

  14. Trend of telomerase activity change during human iPSC self-renewal and differentiation revealed by a quartz crystal microbalance based assay

    NASA Astrophysics Data System (ADS)

    Zhou, Yitian; Zhou, Ping; Xin, Yinqiang; Wang, Jie; Zhu, Zhiqiang; Hu, Ji; Wei, Shicheng; Ma, Hongwei

    2014-11-01

    Telomerase plays an important role in governing the life span of cells through its capacity to extend telomeres. Because high telomerase activity has been found specifically in stem cells and cancer cells, various methods have been developed for the evaluation of telomerase activity. To overcome the time-consuming procedures and complicated manipulations of existing methods, we developed a novel method named Telomeric Repeat Elongation Assay based on Quartz crystal microbalance (TREAQ) to monitor telomerase activity during the self-renewal and differentiation of human induced pluripotent stem cells (hiPSCs). TREAQ results indicated that hiPSCs maintain unchanged telomerase activity for 11 passages on Matrigel and show a steady decline in telomerase activity when differentiated for increasing periods, which was confirmed with the existing gold-standard method. The pluripotency of hiPSCs during differentiation could be estimated by monitoring telomerase activity and comparing it with the expression levels of pluripotency marker genes measured via quantitative real-time PCR. Regular assessment of factors associated with pluripotency or stemness is expensive and consumes excessive sample, so TREAQ could be a promising alternative technology for routine monitoring of telomerase activity and estimation of stem cell pluripotency.

  15. Automated Detection of Electroencephalography Artifacts in Human, Rodent and Canine Subjects using Machine Learning.

    PubMed

    Levitt, Joshua; Nitenson, Adam; Koyama, Suguru; Heijmans, Lonne; Curry, James; Ross, Jason T; Kamerling, Steven; Saab, Carl Y

    2018-06-23

    Electroencephalography (EEG) invariably contains extra-cranial artifacts that are commonly dealt with based on qualitative and subjective criteria. Failure to account for EEG artifacts compromises data interpretation. We have developed a quantitative and automated support vector machine (SVM)-based algorithm to accurately classify artifactual EEG epochs in awake rodent, canine, and human subjects. An embodiment of this method also enables the determination of 'eyes open/closed' states in human subjects. The levels of SVM accuracy for artifact classification in humans, Sprague Dawley rats, and beagle dogs were 94.17%, 83.68%, and 85.37%, respectively, whereas 'eyes open/closed' states in humans were labeled with 88.60% accuracy. Each of these results was significantly higher than chance. Comparison with Existing Methods: other existing methods, such as those dependent on Independent Component Analysis, have not been tested in non-human subjects and require full EEG montages, whereas this method requires only single channels. We conclude that our EEG artifact detection algorithm provides a valid and practical solution to a common problem in the quantitative analysis and assessment of EEG in pre-clinical research settings across evolutionary spectra. Copyright © 2018. Published by Elsevier B.V.
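
    The classification step can be made concrete with a small scikit-learn pipeline: extract a few per-epoch features from a single channel and train an SVM. The feature set, sampling rate, and parameters below are placeholder assumptions for illustration; they are not the features or settings of the published algorithm.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def epoch_features(epoch, fs=250):
            """Simple single-channel features: variance, line length, and the
            fraction of spectral power above 30 Hz (placeholder choices)."""
            spectrum = np.abs(np.fft.rfft(epoch)) ** 2
            freqs = np.fft.rfftfreq(epoch.size, 1 / fs)
            high = spectrum[freqs > 30].sum() / spectrum.sum()
            return [epoch.var(), np.abs(np.diff(epoch)).sum(), high]

        def artifact_classifier(epochs, labels):
            """epochs: (n_epochs, n_samples) array; labels: 1 = artifact, 0 = clean."""
            X = np.array([epoch_features(e) for e in epochs])
            clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
            print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
            return clf.fit(X, labels)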

  16. De novo protein structure prediction by dynamic fragment assembly and conformational space annealing.

    PubMed

    Lee, Juyong; Lee, Jinhyuk; Sasaki, Takeshi N; Sasai, Masaki; Seok, Chaok; Lee, Jooyoung

    2011-08-01

    Ab initio protein structure prediction is a challenging problem that requires both an accurate energetic representation of a protein structure and an efficient conformational sampling method for successful protein modeling. In this article, we present an ab initio structure prediction method which combines a recently suggested novel mode of fragment assembly, dynamic fragment assembly (DFA), with the conformational space annealing (CSA) algorithm. In DFA, model structures are scored by continuous functions constructed based on short- and long-range structural restraint information from a fragment library. Here, DFA is represented by the full-atom model of CHARMM with the addition of the empirical potential of DFIRE. The relative contributions between various energy terms are optimized using linear programming. The conformational sampling was carried out with the CSA algorithm, which can find low-energy conformations more efficiently than the simulated annealing used in the existing DFA study. The newly introduced DFA energy function and CSA sampling algorithm are implemented into CHARMM. Test results on 30 small single-domain proteins and 13 template-free modeling targets of the 8th Critical Assessment of protein Structure Prediction show that the current method provides prediction results comparable and complementary to those of existing top methods. Copyright © 2011 Wiley-Liss, Inc.
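
    The record contrasts conformational space annealing with the simulated annealing used in earlier DFA work. The sketch below shows only the simpler simulated-annealing baseline over a weighted sum of energy terms, to make the idea of stochastic sampling of a composite score concrete; CSA itself maintains a bank of conformations and anneals a distance cutoff between them, which is not reproduced here, and the toy energy terms are invented.

        import math, random

        def total_energy(conf, terms, weights):
            """Weighted sum of energy terms (e.g. restraint, full-atom, and
            empirical contributions combined with fitted weights)."""
            return sum(w * t(conf) for w, t in zip(weights, terms))

        def anneal(conf, terms, weights, steps=5000, t0=2.0):
            """Plain simulated-annealing baseline over a torsion-like vector."""
            e = total_energy(conf, terms, weights)
            for i in range(steps):
                temp = t0 * (1 - i / steps) + 1e-3
                trial = [x + random.gauss(0, 0.1) for x in conf]   # small perturbation
                e_trial = total_energy(trial, terms, weights)
                if e_trial < e or random.random() < math.exp((e - e_trial) / temp):
                    conf, e = trial, e_trial                       # Metropolis acceptance
            return conf, e

        # Toy example: two invented "energy terms" on a 10-dimensional vector
        terms = [lambda c: sum(x**2 for x in c), lambda c: sum(math.cos(x) for x in c)]
        print(anneal([1.0] * 10, terms, [1.0, 0.5])[1])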

  17. Adapting Preclinical Benchmarks for First-in-Human Trials of Human Embryonic Stem Cell-Based Therapies.

    PubMed

    Barazzetti, Gaia; Hurst, Samia A; Mauron, Alexandre

    2016-08-01

    As research on human embryonic stem cell (hESC)-based therapies is moving from the laboratory to the clinic, there is an urgent need to assess when it can be ethically justified to make the step from preclinical studies to the first protocols involving human subjects. We examined existing regulatory frameworks stating preclinical requirements relevant to the move to first-in-human (FIH) trials and assessed how they may be applied in the context of hESC-based interventions to best protect research participants. Our findings show that some preclinical benchmarks require rethinking (i.e., identity, purity), while others need to be specified (i.e., potency, viability), owing to the distinctive dynamic heterogeneity of hESC-based products, which increases uncertainty and persistence of safety risks and allows for limited predictions of effects in vivo. Rethinking or adaptation of how to apply preclinical benchmarks in specific cases will be required repeatedly for different hESC-based products. This process would benefit from mutual learning if researchers included these components in the description of their methods in publications. To design translational research with an eye to protecting human participants in early trials, researchers and regulators need to start their efforts at the preclinical stage. Existing regulatory frameworks for preclinical research, however, are not really adapted to this in the case of stem cell translational medicine. This article reviews existing regulatory frameworks for preclinical requirements and assesses how their underlying principles may best be applied in the context of human embryonic stem cell-based interventions for the therapy of Parkinson's disease. This research will help to address the question of when it is ethically justified to start first-in-human trials in stem cell translational medicine. ©AlphaMed Press.

  18. SU-E-T-649: Quality Assurances for Proton Therapy Delivery Equipment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arjomandy, B; Kase, Y; Flanz, J

    2015-06-15

    Purpose: The number of proton therapy centers has increased dramatically over the past decade. Currently, there is no comprehensive set of guidelines that addresses quality assurance (QA) procedures for the different technologies used for proton therapy. The AAPM has charged Task Group 224 (TG-224) to provide recommendations for the QA required for accurate and safe dose delivery using existing and next-generation proton therapy delivery equipment. Methods: A database of QA procedures and tolerance limits was compiled from many existing proton therapy centers inside and outside the US. These centers use double-scattering, uniform-scanning, and pencil-beam delivery systems. The diversity in beam delivery systems, as well as in the existing devices for performing QA checks of different beam parameters, is the main subject of TG-224. Based on current practice at the clinically active proton centers participating in this task group, consensus QA recommendations were developed. The methodologies and requirements of the parameters that must be verified for consistent performance of proton beam delivery systems are discussed. Results: TG-224 provides procedures and QA checks covering the mechanical, imaging, safety, and dosimetry requirements of different proton equipment. These procedures are categorized by their importance and the frequency required to deliver a safe and consistent dose. The task group provides daily, weekly, monthly, and annual QA check procedures with their tolerance limits. Conclusions: The procedures outlined in this protocol provide sufficient information for qualified medical physicists to perform QA checks on any proton delivery system. Execution of these procedures should provide confidence that proton therapy equipment is functioning as commissioned for patient treatment and delivers dose safely and accurately within the established tolerance limits. The report will be published in late 2015.
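
    One practical way to operationalize such a protocol is to keep the QA schedule as a machine-readable table of checks with category, frequency, and tolerance. The entries and tolerance values below are invented placeholders for illustration, not TG-224 recommendations.

        from dataclasses import dataclass

        @dataclass
        class QACheck:
            name: str
            category: str      # mechanical, imaging, safety, or dosimetry
            frequency: str     # daily, weekly, monthly, or annual
            tolerance: str     # placeholder values, not TG-224 recommendations

        checks = [
            QACheck("Output constancy", "dosimetry", "daily", "±3%"),
            QACheck("Laser/isocenter coincidence", "mechanical", "daily", "±1 mm"),
            QACheck("Imaging-to-beam coincidence", "imaging", "monthly", "±1 mm"),
            QACheck("Door interlock", "safety", "daily", "functional"),
        ]

        # List the checks due on a daily schedule
        print([c.name for c in checks if c.frequency == "daily"])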

  19. Breakthrough Propulsion Physics Project: Project Management Methods

    NASA Technical Reports Server (NTRS)

    Millis, Marc G.

    2004-01-01

    To leap past the limitations of existing propulsion, the NASA Breakthrough Propulsion Physics (BPP) Project seeks further advancements in physics from which new propulsion methods can eventually be derived. Three visionary breakthroughs are sought: (1) propulsion that requires no propellant, (2) propulsion that circumvents existing speed limits, and (3) breakthrough methods of energy production to power such devices. Because these propulsion goals are presumably far from fruition, special emphasis is placed on identifying credible research that will make measurable progress toward these goals in the near term. The management techniques to address this challenge are presented, with particular attention to the process used to review, prioritize, and select research tasks. This selection process includes these key features: (a) research tasks are constrained to address only the immediate unknowns, curious effects, or critical issues; (b) the reliability of assertions is weighted more heavily than their implications, meaning that reviewers judge credibility rather than feasibility; and (c) total scores are obtained by multiplying, rather than adding, the criteria scores. Lessons learned and revisions planned are discussed.
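
    The multiplicative-scoring choice in item (c) has a concrete consequence: a proposal that fails badly on any one criterion scores near zero overall, whereas additive scoring lets strong criteria mask a weak one. A two-line comparison with invented criterion values:

        from math import prod

        # Invented 1-5 scores on three criteria for a hypothetical proposal
        scores = {"relevance": 4, "credibility": 1, "near_term_progress": 5}
        print(sum(scores.values()))    # additive: 10 of 15, looks acceptable
        print(prod(scores.values()))   # multiplicative: 20 of 125, the weak criterion dominates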

  20. Algicidal bacteria in the sea and their impact on algal blooms.

    PubMed

    Mayali, Xavier; Azam, Farooq

    2004-01-01

    Over the past two decades, many reports have revealed the existence of bacteria capable of killing phytoplankton. These algicidal bacteria sometimes increase in abundance concurrently with the decline of algal blooms, suggesting that they may affect algal bloom dynamics. Here, we synthesize the existing knowledge on algicidal bacteria interactions with marine eukaryotic microalgae. We discuss the effectiveness of the current methods to characterize the algicidal phenotype in an ecosystem context. We briefly consider the literature on the phylogenetic identification of algicidal bacteria, their interaction with their algal prey, the characterization of algicidal molecules, and the enumeration of algicidal bacteria during algal blooms. We conclude that, due to limitations of current methods, the evidence for algicidal bacteria causing algal bloom decline is circumstantial. New methods and an ecosystem approach are needed to test hypotheses on the impact of algicidal bacteria in algal bloom dynamics. This will require enlarging the scope of inquiry from its current focus on the potential utility of algicidal bacteria in the control of harmful algal blooms. We suggest conceptualizing bacterial algicidy within the general problem of bacterial regulation of algal community structure in the ocean.
