Sample records for robust physical methods

  1. 2016 KIVA-hpFE Development: A Robust and Accurate Engine Modeling Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrington, David Bradley; Waters, Jiajia

    Los Alamos National Laboratory and its collaborators are facilitating engine modeling by improving the accuracy and robustness of the modeling and of the software itself. We also continue to improve the physical modeling methods. We are developing and implementing new mathematical algorithms that represent the physics within an engine. We provide software that others may use directly or extend with their own models, e.g., sophisticated chemical kinetics, different turbulence closure methods, or alternative fuel injection and spray systems.

  2. Multiple methods integration for structural mechanics analysis and design

    NASA Technical Reports Server (NTRS)

    Housner, J. M.; Aminpour, M. A.

    1991-01-01

    A new research area of multiple methods integration is proposed for joining diverse methods of structural mechanics analysis which interact with one another. Three categories of multiple methods are defined: those in which a physical interface is well defined; those in which a physical interface is not well defined, but selected; and those in which the interface is a mathematical transformation. Two fundamental integration procedures are presented that can be extended to integrate various methods (e.g., finite elements, Rayleigh-Ritz, Galerkin, and integral methods) with one another. Since the finite element method will likely be the major method to be integrated, its enhanced robustness under element distortion is also examined and a new robust shell element is demonstrated.

  3. Methods for compressible multiphase flows and their applications

    NASA Astrophysics Data System (ADS)

    Kim, H.; Choe, Y.; Kim, H.; Min, D.; Kim, C.

    2018-06-01

    This paper presents an efficient and robust numerical framework for multiphase real-fluid flows and their broad spectrum of engineering applications. A homogeneous mixture model incorporating a real-fluid equation of state and a phase change model is considered to calculate complex multiphase problems. As robust and accurate numerical methods to handle multiphase shocks and phase interfaces over a wide range of flow speeds, the AUSMPW+_N and RoeM_N schemes with a system preconditioning method are presented. These methods are assessed by extensive validation problems with various equations of state and phase change models. Representative realistic multiphase phenomena, including the flow inside a thermal vapor compressor, pressurization in a cryogenic tank, and unsteady cavitating flow around a wedge, are then investigated as application problems. With appropriate physical modeling followed by robust and accurate numerical treatments, compressible multiphase flow physics such as phase changes, shock discontinuities, and their interactions are well captured, confirming the suitability of the proposed numerical framework for a wide range of engineering applications.

  4. What Did They Learn in School Today? A Method for Exploring Aspects of Learning in Physical Education

    ERIC Educational Resources Information Center

    Quennerstedt, Mikael; Annerstedt, Claes; Barker, Dean; Karlefors, Inger; Larsson, Håkan; Redelius, Karin; Öhman, Marie

    2014-01-01

    This paper outlines a method for exploring learning in educational practice. The suggested method combines an explicit learning theory with robust methodological steps in order to explore aspects of learning in school physical education. The design of the study is based on sociocultural learning theory, and the approach adds to previous research…

  5. Recovering Galaxy Properties Using Gaussian Process SED Fitting

    NASA Astrophysics Data System (ADS)

    Iyer, Kartheik; Awan, Humna

    2018-01-01

    Information about physical quantities such as stellar masses, star formation rates, and ages of distant galaxies is contained in their spectral energy distributions (SEDs), obtained through photometric surveys such as SDSS, CANDELS, and LSST. However, noise in the photometric observations is often a problem, and using naive machine learning methods to estimate physical quantities can result in overfitting the noise, or converging on solutions that lie outside the physical regime of parameter space. We use Gaussian Process regression trained on a sample of SEDs corresponding to galaxies from a Semi-Analytic model (Somerville+15a) to estimate their stellar masses, and compare its performance to a variety of other methods, including simple linear regression, Random Forests, and k-Nearest Neighbours. We find that the Gaussian Process method is robust to noise and predicts not only stellar masses but also their uncertainties. The method is also robust in cases where the distribution of the training data is not identical to that of the target data, which can be extremely useful when generalized to more subtle galaxy properties.
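
The core idea, Gaussian Process regression that returns both a point estimate and an uncertainty, can be sketched in a few lines of NumPy. This is a toy stand-in rather than the authors' pipeline: the "photometry" is a synthetic smooth function of a latent mass parameter, and the RBF kernel, length scale, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(m):
    """Toy 3-band 'photometry' as a smooth function of log stellar mass m."""
    return np.stack([np.sin(m), np.sin(m / 2.0), m / 10.0], axis=1)

# Training set: latent masses and their noisy synthetic "SEDs".
m_train = rng.uniform(8.0, 12.0, 200)
X_train = features(m_train) + rng.normal(scale=0.01, size=(200, 3))

def rbf(A, B, ell=0.5):
    """Squared-exponential kernel between two sets of feature vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

noise = 1e-4  # observation noise variance (matches the injected noise)
K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
alpha = np.linalg.solve(K, m_train)

def predict(X):
    """GP posterior mean and standard deviation at new inputs X."""
    Ks = rbf(X, X_train)
    mean = Ks @ alpha
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, np.sqrt(np.maximum(var + noise, 0.0))

m_test = rng.uniform(8.0, 12.0, 50)
mean, std = predict(features(m_test))
```

The posterior variance is what gives the per-object uncertainty the abstract highlights; a point estimator like k-NN or a random forest returns only `mean`.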

  6. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    NASA Astrophysics Data System (ADS)

    Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo

    2015-11-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on the REFPROP database for an accurate estimation of the non-linear behavior of thermodynamic and fluid transport properties at transcritical conditions. Based on the look-up table method, we propose a numerical method that achieves high-order spatial accuracy, a spurious-oscillation-free property, and the capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically consistent manner in order to capture the steep transcritical thermodynamic variations robustly while keeping the velocity field free of spurious oscillations. The pressure evolution equation is derived from the full compressible Navier-Stokes equations and solved instead of the total energy equation, to achieve freedom from spurious pressure oscillations with an arbitrary equation of state, including the present look-up table method. Flow problems with and without physical diffusion are employed as numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.
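
The tabulated look-up idea can be illustrated with a minimal bilinear interpolation over a (p, T) grid. The stored "property" here is just the ideal-gas density, an assumption for the sketch; a production table would be filled from a real-fluid database such as REFPROP and would cover many more properties.

```python
import numpy as np

R = 287.0  # gas constant for the toy property, J/(kg K)
p_grid = np.linspace(1e5, 1e7, 64)
T_grid = np.linspace(200.0, 400.0, 64)
# Precompute the property on the grid (offline step).
table = p_grid[:, None] / (R * T_grid[None, :])

def lookup(p, T):
    """Bilinear interpolation of the precomputed table at (p, T)."""
    i = np.clip(np.searchsorted(p_grid, p) - 1, 0, len(p_grid) - 2)
    j = np.clip(np.searchsorted(T_grid, T) - 1, 0, len(T_grid) - 2)
    fp = (p - p_grid[i]) / (p_grid[i + 1] - p_grid[i])
    ft = (T - T_grid[j]) / (T_grid[j + 1] - T_grid[j])
    return ((1 - fp) * (1 - ft) * table[i, j]
            + fp * (1 - ft) * table[i + 1, j]
            + (1 - fp) * ft * table[i, j + 1]
            + fp * ft * table[i + 1, j + 1])
```

The run-time cost is two searches and four multiplies per query, independent of how expensive the underlying property model is, which is the motivation for tabulation.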

  7. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawai, Soshi, E-mail: kawai@cfd.mech.tohoku.ac.jp; Terashima, Hiroshi; Negishi, Hideyo

    2015-11-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on the REFPROP database for an accurate estimation of the non-linear behavior of thermodynamic and fluid transport properties at transcritical conditions. Based on the look-up table method, we propose a numerical method that achieves high-order spatial accuracy, a spurious-oscillation-free property, and the capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically consistent manner in order to capture the steep transcritical thermodynamic variations robustly while keeping the velocity field free of spurious oscillations. The pressure evolution equation is derived from the full compressible Navier–Stokes equations and solved instead of the total energy equation, to achieve freedom from spurious pressure oscillations with an arbitrary equation of state, including the present look-up table method. Flow problems with and without physical diffusion are employed as numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.

  8. Robust control of combustion instabilities

    NASA Astrophysics Data System (ADS)

    Hong, Boe-Shong

    Several interacting dynamical subsystems, each with its own time scale and physical significance, are decomposed to build a feedback-controlled, robust combustion-fluid dynamics. On the fast time scale, the phenomenon of combustion instability corresponds to the internal feedback of two subsystems, acoustic dynamics and flame dynamics, which depend parametrically on the slow-time-scale mean-flow dynamics, controlled for global performance by a mean-flow controller. This dissertation constructs such a control system, through modeling, analysis and synthesis, to deal with model uncertainties, environmental noise and time-varying mean-flow operation. The conservation laws are decomposed into fast-time acoustic dynamics and slow-time mean-flow dynamics, which serve to synthesize an LPV (linear parameter varying) L2-gain robust control law, in which a robust observer is embedded for estimating and controlling the internal state, while achieving trade-offs among robustness, performance and operation. The robust controller is formulated as two LPV-type Linear Matrix Inequalities (LMIs), whose numerical solver is developed with the finite-element method. Some important issues related to physical understanding and engineering application are discussed through simulated results of the control system.

  9. Robust optimization with transiently chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Sumi, R.; Molnár, B.; Ercsey-Ravasz, M.

    2014-05-01

    Efficiently solving hard optimization problems has been a strong motivation for progress in analog computing. In a recent study we presented a continuous-time dynamical system for solving the NP-complete Boolean satisfiability (SAT) problem, with a one-to-one correspondence between its stable attractors and the SAT solutions. While physical implementations could offer great efficiency, the transiently chaotic dynamics raises the question of operability in the presence of noise, unavoidable on analog devices. Here we show that the probability of finding solutions is robust to noise intensities well above those present on real hardware. We also developed a cellular neural network model realizable with analog circuits, which tolerates even larger noise intensities. These methods represent an opportunity for robust and efficient physical implementations.

  10. The Space-Time Conservation Element and Solution Element Method: A New High-Resolution and Genuinely Multidimensional Paradigm for Solving Conservation Laws. 1; The Two Dimensional Time Marching Schemes

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Wang, Xiao-Yen; Chow, Chuen-Yen

    1998-01-01

    A new high-resolution and genuinely multidimensional numerical method for solving conservation laws is being developed. It was designed to avoid the limitations of the traditional methods, and was built from ground zero with extensive physics considerations. Nevertheless, its foundation is mathematically simple enough that one can build from it a coherent, robust, efficient and accurate numerical framework. Two basic beliefs that set the new method apart from the established methods are at the core of its development. The first belief is that, in order to capture physics more efficiently and realistically, the modeling focus should be placed on the original integral form of the physical conservation laws, rather than the differential form. The latter form follows from the integral form under the additional assumption that the physical solution is smooth, an assumption that is difficult to realize numerically in a region of rapid change, such as a boundary layer or a shock. The second belief is that, with proper modeling of the integral and differential forms themselves, the resulting numerical solution should automatically be consistent with the properties derived from the integral and differential forms, e.g., the jump conditions across a shock and the properties of characteristics. Therefore a much simpler and more robust method can be developed by not using the above derived properties explicitly.
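
The first belief, working from the integral form, is what any finite-volume update does discretely: each cell average changes only by the fluxes through its faces, so the total conserved quantity cannot drift. A minimal sketch (first-order upwind, 1-D linear advection, periodic boundaries; this illustrates the integral-form principle, not the CE/SE scheme itself):

```python
import numpy as np

n = 100
dx = 1.0 / n
a, dt = 1.0, 0.005                      # CFL number a*dt/dx = 0.5
x = (np.arange(n) + 0.5) * dx
u = np.exp(-100.0 * (x - 0.5) ** 2)     # initial cell averages
total0 = u.sum() * dx                   # total conserved quantity

for _ in range(50):
    # Integral form per cell: d(cell average) = (flux in - flux out) / dx.
    # For a > 0 the upwind flux at the left face of cell i is a*u[i-1]
    # (periodic wrap via np.roll), so each face flux is shared by exactly
    # two cells and the scheme is conservative by construction.
    u = u + (a * dt / dx) * (np.roll(u, 1) - u)
```

Because every interior face flux appears once with each sign, the sum over cells telescopes and `u.sum()*dx` is preserved to round-off, mirroring the jump-condition consistency argued for in the abstract.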

  11. Analysis of entropy extraction efficiencies in random number generation systems

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Shuang; Chen, Wei; Yin, Zhen-Qiang; Han, Zheng-Fu

    2016-05-01

    Random numbers (RNs) have applications in many areas: lottery games, gambling, computer simulation, and, most importantly, cryptography [N. Gisin et al., Rev. Mod. Phys. 74 (2002) 145]. In cryptography theory, the theoretical security of the system calls for high quality RNs. Therefore, developing methods for producing unpredictable RNs with adequate speed is an attractive topic. Early on, despite the lack of theoretical support, pseudo RNs generated by algorithmic methods performed well and satisfied reasonable statistical requirements. However, as implemented, those pseudorandom sequences were completely determined by mathematical formulas and initial seeds, which cannot introduce extra entropy or information. In these cases, “random” bits are generated that are not at all random. Physical random number generators (RNGs), which, in contrast to algorithmic methods, are based on unpredictable physical random phenomena, have attracted considerable research interest. However, the way that we extract random bits from those physical entropy sources has a large influence on the efficiency and performance of the system. In this manuscript, we will review and discuss several randomness extraction schemes that are based on radiation or photon arrival times. We analyze the robustness, post-processing requirements and, in particular, the extraction efficiency of those methods to aid in the construction of efficient, compact and robust physical RNG systems.
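
One of the simplest arrival-time extraction schemes can be sketched directly: for an ideal Poisson source, inter-arrival times are i.i.d. exponential, so comparing successive non-overlapping intervals yields exactly unbiased bits. The simulation below stands in for a real detector and ignores dead time, afterpulsing and finite timing resolution, which real generators must correct for.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated inter-arrival times of a Poisson photon source.
intervals = rng.exponential(scale=1.0, size=4000)

# Pair up non-overlapping intervals (t1, t2); for continuous i.i.d. samples
# P(t1 < t2) = 1/2, so each comparison gives one unbiased bit.
t1, t2 = intervals[0::2], intervals[1::2]
keep = t1 != t2                 # discard ties (probability ~0 for floats)
bits = (t1[keep] < t2[keep]).astype(int)
```

This scheme extracts only 0.5 bits per photon pair; the efficiency analyses surveyed in the record compare such comparison-based extractors against schemes that digitize the arrival time itself to get several bits per event.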

  12. Robust Control Design for Systems With Probabilistic Uncertainty

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

    This paper presents a reliability- and robustness-based formulation for robust control synthesis for systems with probabilistic uncertainty. In a reliability-based formulation, the probability of violating design requirements prescribed by inequality constraints is minimized. In a robustness-based formulation, a metric which measures the tendency of a random variable/process to cluster close to a target scalar/function is minimized. A multi-objective optimization procedure, which combines stability and performance requirements in the time and frequency domains, is used to search for robustly optimal compensators. Some of the fundamental differences between the proposed strategy and conventional robust control methods are: (i) unnecessary conservatism is eliminated since there is no need for convex supports, (ii) the most likely plants are favored during synthesis, allowing for probabilistic robust optimality, (iii) the tradeoff between robust stability and robust performance can be explored numerically, (iv) the uncertainty set is closely related to parameters with clear physical meaning, and (v) compensators with improved robustness characteristics for a given control structure can be synthesized.
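
The reliability-based quantity in this formulation, the probability of violating a design requirement, can be estimated by plain Monte Carlo over the probabilistic uncertainty model. The example below is invented, not from the paper: a first-order plant dx/dt = a*x + u with uncertain pole a ~ N(1, 0.2) under proportional feedback u = -k*x, so the closed loop is stable iff a < k.

```python
import numpy as np

rng = np.random.default_rng(2)

def violation_probability(k, n=100_000):
    """Monte Carlo estimate of P(closed loop unstable) for gain k."""
    a = rng.normal(loc=1.0, scale=0.2, size=n)   # sampled plant poles
    return np.mean(a - k >= 0.0)                 # fraction violating a < k

p_weak = violation_probability(k=1.0)    # gain at the mean pole: ~50% fail
p_robust = violation_probability(k=2.0)  # 5-sigma margin: essentially never
```

A synthesis loop in this spirit would wrap such an estimator in an optimizer over the controller parameters, trading the violation probability against performance metrics, as items (ii) and (iii) of the abstract describe.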

  13. Optimal design of loudspeaker arrays for robust cross-talk cancellation using the Taguchi method and the genetic algorithm.

    PubMed

    Bai, Mingsian R; Tung, Chih-Wei; Lee, Chih-Chung

    2005-05-01

    An optimal design technique for loudspeaker arrays for cross-talk cancellation, with application to three-dimensional audio, is presented. An array focusing scheme is formulated on the basis of inverse propagation that relates the transducers to a set of chosen control points. Tikhonov regularization is employed in designing the inverse cancellation filters. An extensive analysis is conducted to explore the cancellation performance and robustness issues. To best compromise between the performance and robustness of the cross-talk cancellation system, optimal configurations are obtained with the aid of the Taguchi method and the genetic algorithm (GA). The proposed systems are further justified by physical as well as subjective experiments. The results reveal that a large number of loudspeakers, a closely spaced configuration, and an optimal control point design all contribute to the robustness of cross-talk cancellation systems (CCS) against head misalignment.
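
The Tikhonov-regularized inverse filter has a compact closed form at each frequency: H = (C^H C + beta*I)^{-1} C^H, where C is the plant matrix from loudspeakers to control points and beta the regularization weight. A single-frequency sketch with an invented 2x2 plant; increasing beta lowers the filter gain, which is exactly the performance/robustness trade-off the paper optimizes.

```python
import numpy as np
import numpy.linalg as la

# Hypothetical single-frequency plant: speaker-to-ear transfer matrix.
C = np.array([[1.00 + 0.20j, 0.40 - 0.10j],
              [0.35 + 0.05j, 0.90 - 0.30j]])

def tikhonov_inverse(C, beta):
    """Regularized inverse H = (C^H C + beta I)^{-1} C^H."""
    n = C.shape[1]
    return la.solve(C.conj().T @ C + beta * np.eye(n), C.conj().T)

H_sharp = tikhonov_inverse(C, beta=1e-8)   # near-exact inversion, deep nulls
H_robust = tikhonov_inverse(C, beta=1e-1)  # lower gain, robust to plant error
```

A full design repeats this per frequency bin and inverse-FFTs the result into FIR cancellation filters; each singular value s of C maps to a filter gain s/(s^2 + beta), so beta caps the amplification of poorly conditioned directions.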

  14. Integrated direct/indirect adaptive robust motion trajectory tracking control of pneumatic cylinders

    NASA Astrophysics Data System (ADS)

    Meng, Deyuan; Tao, Guoliang; Zhu, Xiaocong

    2013-09-01

    This paper studies the precision motion trajectory tracking control of a pneumatic cylinder driven by a proportional-directional control valve. An integrated direct/indirect adaptive robust controller is proposed. The controller employs a physical model based indirect-type parameter estimation to obtain reliable estimates of unknown model parameters, and utilises a robust control method with dynamic compensation type fast adaptation to attenuate the effects of parameter estimation errors, unmodelled dynamics and disturbances. Due to the use of projection mapping, the robust control law and the parameter adaption algorithm can be designed separately. Since the system model uncertainties are unmatched, the recursive backstepping technology is adopted to design the robust control law. Extensive comparative experimental results are presented to illustrate the effectiveness of the proposed controller and its performance robustness to parameter variations and sudden disturbances.

  15. Efficient and robust relaxation procedures for multi-component mixtures including phase transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Ee, E-mail: eehan@math.uni-bremen.de; Hantke, Maren, E-mail: maren.hantke@ovgu.de; Müller, Siegfried, E-mail: mueller@igpm.rwth-aachen.de

    We consider a thermodynamically consistent multi-component model in multiple dimensions that is a generalization of the classical two-phase flow model of Baer and Nunziato. The exchange of mass, momentum and energy between the phases is described by additional source terms. Typically these terms are handled by relaxation procedures. Available relaxation procedures suffer from poor efficiency and robustness, resulting in very costly computations that in general only allow for one-dimensional simulations. Therefore we focus on the development of new efficient and robust numerical methods for relaxation processes. We derive exact procedures to determine mechanical and thermal equilibrium states. Further, we introduce a novel iterative method to treat the mass transfer for a three-component mixture. All new procedures can be extended to an arbitrary number of inert ideal gases. We prove existence, uniqueness and physical admissibility of the resulting states and convergence of our new procedures. Efficiency and robustness of the procedures are verified by means of numerical computations in one and two space dimensions. - Highlights: • We develop novel relaxation procedures for a generalized, thermodynamically consistent Baer–Nunziato type model. • Exact procedures for mechanical and thermal relaxation avoid artificial parameters. • Existence, uniqueness and physical admissibility of the equilibrium states are proven for special mixtures. • A novel iterative method for mass transfer is introduced for a three-component mixture, providing a unique and admissible equilibrium state.
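
For the special case of inert ideal gases, an exact thermal-relaxation step reduces to a closed-form energy-weighted average: each component's internal energy is m_i * cv_i * T_i and the total is conserved, so the common temperature follows directly. A two-component sketch with invented numbers (the paper's exact procedures handle the general coupled mechanical/thermal case):

```python
def equilibrium_temperature(m, cv, T):
    """Common temperature of inert ideal-gas components after thermal
    relaxation: total internal energy sum(m_i*cv_i*T_i) is conserved and
    redistributes over the total heat capacity sum(m_i*cv_i)."""
    energy = sum(mi * cvi * Ti for mi, cvi, Ti in zip(m, cv, T))
    heat_capacity = sum(mi * cvi for mi, cvi in zip(m, cv))
    return energy / heat_capacity

# Invented two-component state: 1 kg of air-like gas at 300 K,
# 2 kg of a higher-cv gas at 500 K.
T_eq = equilibrium_temperature(m=[1.0, 2.0], cv=[718.0, 1500.0],
                               T=[300.0, 500.0])
```

This is the kind of "exact procedure without artificial parameters" the highlights refer to: no iteration, no relaxation-rate tuning, and the result is provably between the initial temperatures.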

  16. Nonlinearly preconditioned semismooth Newton methods for variational inequality solution of two-phase flow in porous media

    NASA Astrophysics Data System (ADS)

    Yang, Haijian; Sun, Shuyu; Yang, Chao

    2017-03-01

    Most existing methods for solving two-phase flow problems in porous media do not take the physically feasible saturation fractions between 0 and 1 into account, which often destroys the numerical accuracy and physical interpretability of the simulation. To calculate the solution without the loss of this basic requirement, we introduce a variational inequality formulation of the saturation equilibrium with a box inequality constraint, and use a conservative finite element method for the spatial discretization and a backward differentiation formula with adaptive time stepping for the temporal integration. The resulting variational inequality system at each time step is solved by using a semismooth Newton algorithm. To accelerate the Newton convergence and improve the robustness, we employ a family of adaptive nonlinear elimination methods as a nonlinear preconditioner. Some numerical results are presented to demonstrate the robustness and efficiency of the proposed algorithm. A comparison is also included to show the superiority of the proposed fully implicit approach over the classical IMplicit Pressure-Explicit Saturation (IMPES) method in terms of the time step size and the total execution time measured on a parallel computer.
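
The variational-inequality solve can be sketched via the natural residual: Phi(x) = x - clip(x - F(x), lo, hi) vanishes exactly at the VI solution, and a semismooth Newton method uses a generalized Jacobian whose rows equal F'(x) where the clip is inactive and the identity where it is active. Toy sketch with an affine F on the box [0,1]^2, not the discretized two-phase flow residual of the paper:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # invented SPD operator
b = np.array([5.0, -2.0])
lo, hi = np.zeros(2), np.ones(2)
F = lambda x: A @ x - b                  # residual map of the VI
Fp = lambda x: A                         # its (constant) Jacobian

x = np.full(2, 0.5)
for _ in range(20):
    inner = x - F(x)
    phi = x - np.clip(inner, lo, hi)     # natural residual Phi(x)
    # Generalized Jacobian: identity rows where the projection is active
    # (component pinned to a bound), F'(x) rows where it is inactive.
    active = (inner <= lo) | (inner >= hi)
    J = np.where(active[:, None], np.eye(2), Fp(x))
    x = x - np.linalg.solve(J, phi)      # semismooth Newton step
```

For this instance the unconstrained root of F lies outside the box, and the iteration lands on the constrained solution x = (1, 0), where F_1 <= 0 at the upper bound and F_2 >= 0 at the lower bound, i.e. the complementarity conditions hold.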

  17. A novel method for accurate needle-tip identification in trans-rectal ultrasound-based high-dose-rate prostate brachytherapy.

    PubMed

    Zheng, Dandan; Todor, Dorin A

    2011-01-01

    In real-time trans-rectal ultrasound (TRUS)-based high-dose-rate prostate brachytherapy, the accurate identification of needle-tip position is critical for treatment planning and delivery. Currently, needle-tip identification on ultrasound images can be subject to large uncertainty and errors because of ultrasound image quality and imaging artifacts. To address this problem, we developed a method based on physical measurements, with simple and practical implementation, to improve the accuracy and robustness of needle-tip identification. Our method uses measurements of the residual needle length and an off-line pre-established coordinate transformation factor to calculate the needle-tip position on the TRUS images. The transformation factor was established through a one-time systematic set of measurements of the probe and template holder positions, applicable to all patients. To compare the accuracy and robustness of the proposed method and the conventional method (ultrasound detection) against gold-standard X-ray fluoroscopy, extensive measurements were conducted in water and gel phantoms. In the water phantom, our method showed an average tip-detection accuracy of 0.7 mm compared with 1.6 mm for the conventional method. In the gel phantom (more realistic and tissue-like), our method maintained its level of accuracy, while the uncertainty of the conventional method was 3.4 mm on average, with maximum values of over 10 mm, because of imaging artifacts. A novel method based on simple physical measurements was developed to accurately detect the needle-tip position for TRUS-based high-dose-rate prostate brachytherapy. The method demonstrated much improved accuracy and robustness over the conventional method.
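
The arithmetic behind the measurement-based identification is simple: the insertion depth follows from the known total needle length minus the measured residual length, and a pre-calibrated transformation maps that depth into TRUS image coordinates. All numbers and the linear calibration form below are hypothetical illustrations, not the paper's actual calibration:

```python
def tip_depth_mm(total_length_mm, residual_length_mm, template_offset_mm):
    """Insertion depth of the needle tip beyond the template face:
    what is not sticking out (residual) must be inside, less the fixed
    template/guide offset."""
    return total_length_mm - residual_length_mm - template_offset_mm

def tip_image_coord(depth_mm, calib_scale, calib_offset_mm):
    """Map physical depth to a TRUS image coordinate via a hypothetical
    linear transformation factor established in a one-time calibration."""
    return calib_scale * depth_mm + calib_offset_mm

depth = tip_depth_mm(total_length_mm=200.0, residual_length_mm=55.0,
                     template_offset_mm=10.0)
z_img = tip_image_coord(depth, calib_scale=1.0, calib_offset_mm=-3.2)
```

The appeal of the method is visible in the sketch: the only per-needle input is a ruler measurement of the residual length, so image quality and artifacts never enter the tip localization.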

  18. Robust bidirectional links for photonic quantum networks

    PubMed Central

    Xu, Jin-Shi; Yung, Man-Hong; Xu, Xiao-Ye; Tang, Jian-Shun; Li, Chuan-Feng; Guo, Guang-Can

    2016-01-01

    Optical fibers are widely used as one of the main tools for transmitting not only classical but also quantum information. We propose and report an experimental realization of a promising method for creating robust bidirectional quantum communication links through paired optical polarization-maintaining fibers. Many limitations of existing protocols can be avoided with the proposed method. In particular, the path and polarization degrees of freedom are combined to deterministically create a photonic decoherence-free subspace without the need for any ancillary photon. This method is input state–independent, robust against dephasing noise, postselection-free, and applicable bidirectionally. To rigorously quantify the amount of quantum information transferred, the optical fibers are analyzed with the tools developed in quantum communication theory. These results not only suggest a practical means for protecting quantum information sent through optical quantum networks but also potentially provide a new physical platform for enriching the structure of the quantum communication theory. PMID:26824069

  19. GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data.

    PubMed

    Chen, Li; Reeve, James; Zhang, Lujun; Huang, Shengbing; Wang, Xuefeng; Chen, Jun

    2018-01-01

    Normalization is the first critical step in microbiome sequencing data analysis, used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose the geometric mean of pairwise ratios (GMPR), a simple but effective normalization method for zero-inflated sequencing data such as microbiome data. Simulation studies and real dataset analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
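
A simplified sketch of the geometric-mean-of-pairwise-ratios idea: for each pair of samples, take the median count ratio over taxa observed in both samples (so zeros never enter a ratio), then set each sample's size factor to the geometric mean of its pairwise ratios. The trivial self-ratio of 1 is included here for symmetry, and refinements of the published method (e.g. minimum-overlap handling) are omitted:

```python
import numpy as np

def gmpr_size_factors(counts):
    """counts: (n_samples, n_taxa) non-negative matrix with many zeros.
    Returns one size factor per sample."""
    n = counts.shape[0]
    logsf = np.zeros(n)
    for i in range(n):
        logs = []
        for j in range(n):
            # Median ratio over taxa present in BOTH samples,
            # so zero counts never contribute a 0 or infinite ratio.
            shared = (counts[i] > 0) & (counts[j] > 0)
            ratios = counts[i, shared] / counts[j, shared]
            logs.append(np.log(np.median(ratios)))
        logsf[i] = np.mean(logs)          # geometric mean via log average
    return np.exp(logsf)

# Three samples that are pure 1:2:4 rescalings of each other,
# with one taxon absent everywhere.
sf = gmpr_size_factors(np.array([[2, 4, 0, 6],
                                 [4, 8, 0, 12],
                                 [8, 16, 0, 24]]))
```

Dividing each sample's counts by its size factor then puts all three samples on the same scale; the median makes each pairwise ratio robust to a few differentially abundant taxa.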

  20. Analysis of Anderson Acceleration on a Simplified Neutronics/Thermal Hydraulics System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toth, Alex; Kelley, C. T.; Slattery, Stuart R

    A standard method for solving coupled multiphysics problems in light water reactors is Picard iteration, which sequentially alternates between solving single-physics applications. This solution approach is appealing due to its simplicity of implementation and the ability to leverage existing software packages to accurately solve single-physics applications. However, there are several drawbacks in the convergence behavior of this method, namely slow convergence and the necessity of heuristically chosen damping factors to achieve convergence in many cases. Anderson acceleration is a method that has been seen to be more robust and faster converging than Picard iteration for many problems, without significantly higher cost per iteration or complexity of implementation, though its effectiveness in the context of multiphysics coupling is not well explored. In this work, we develop a one-dimensional model simulating the coupling between the neutron distribution and fuel and coolant properties in a single fuel pin. We show that this model generally captures the convergence issues noted in Picard iterations which couple high-fidelity physics codes. We then use this model to gauge potential improvements with regard to rate of convergence and robustness from utilizing Anderson acceleration as an alternative to Picard iteration.
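
Anderson acceleration itself is short enough to sketch: keep a window of recent iterates and residuals of the fixed-point map g, and take a step that combines them so the linearized residual is minimized in a least-squares sense. The toy fixed point x = cos(x) below stands in for the coupled neutronics/thermal-hydraulics map; the depth m and the unregularized least-squares solve are illustrative choices.

```python
import numpy as np

def anderson(g, x0, m=3, tol=1e-10, maxit=100):
    """Anderson acceleration (depth m) of the fixed-point iteration x=g(x).
    Returns (solution, iterations). m=0 would reduce to Picard iteration."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    X, F = [x], [g(x) - x]                 # iterate and residual histories
    for k in range(maxit):
        if np.linalg.norm(F[-1]) < tol:
            return X[-1], k
        mk = min(m, k)
        if mk == 0:
            x_new = X[-1] + F[-1]          # first step: plain Picard
        else:
            # Differences of recent iterates/residuals; least-squares
            # coefficients gamma minimize the combined residual.
            dX = np.stack([X[-i] - X[-i-1] for i in range(1, mk + 1)], axis=1)
            dF = np.stack([F[-i] - F[-i-1] for i in range(1, mk + 1)], axis=1)
            gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
            x_new = X[-1] + F[-1] - (dX + dF) @ gamma
        X.append(x_new)
        F.append(g(x_new) - x_new)
    return X[-1], maxit

x_fix, iters = anderson(np.cos, x0=0.0)
```

Plain Picard iteration on x = cos(x) contracts at rate ~0.67 and needs roughly 60 iterations for this tolerance; the accelerated version converges in far fewer, which mirrors the speedup the report investigates for the coupled reactor problem.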

  1. Kids in the city study: research design and methodology

    PubMed Central

    2011-01-01

    Background Physical activity is essential for optimal physical and psychological health, but substantial declines in children's activity levels have occurred in New Zealand and internationally. Children's independent mobility (i.e., outdoor play and traveling to destinations unsupervised), an integral component of physical activity in childhood, has also declined radically in recent decades. Safety-conscious parenting practices, car reliance and auto-centric urban design have converged to produce children living increasingly sedentary lives. This research investigates how urban neighborhood environments can enable or restrict children's independent mobility, thereby influencing physical activity accumulation and participation in daily life. Methods/Design The study is located in six Auckland, New Zealand neighborhoods, diverse in terms of urban design attributes, particularly residential density. Participants comprise 160 children aged 9-11 years and their parents/caregivers. Objective measures (global positioning systems, accelerometers, geographical information systems, observational audits) assessed children's independent mobility and physical activity, neighborhood infrastructure, and streetscape attributes. Parent and child neighborhood perceptions and experiences were assessed using qualitative research methods. Discussion This study is one of the first internationally to examine the association of specific urban design attributes with child independent mobility. Using appropriate, best-practice objective measures, it provides robust epidemiological information regarding the relationships between the built environment and health outcomes for this population. PMID:21781341

  2. Robust Statistical Detection of Power-Law Cross-Correlation.

    PubMed

    Blythe, Duncan A J; Nikulin, Vadim V; Müller, Klaus-Robert

    2016-06-02

    We show that widely used approaches in statistical physics incorrectly indicate the existence of power-law cross-correlations between financial stock market fluctuations measured over several years and the neuronal activity of the human brain lasting for only a few minutes. While such cross-correlations are nonsensical, no current methodology allows them to be reliably discarded, leaving researchers at greater risk when the spurious nature of cross-correlations is not clear from the unrelated origin of the time series and rather requires careful statistical estimation. Here we propose a theory and method (PLCC-test) which allows us to rigorously and robustly test for power-law cross-correlations, correctly detecting genuine and discarding spurious cross-correlations, thus establishing meaningful relationships between processes in complex physical systems. Our method reveals for the first time the presence of power-law cross-correlations between amplitudes of the alpha and beta frequency ranges of the human electroencephalogram.

  3. Robust Statistical Detection of Power-Law Cross-Correlation

    PubMed Central

    Blythe, Duncan A. J.; Nikulin, Vadim V.; Müller, Klaus-Robert

    2016-01-01

    We show that widely used approaches in statistical physics incorrectly indicate the existence of power-law cross-correlations between financial stock market fluctuations measured over several years and the neuronal activity of the human brain lasting for only a few minutes. While such cross-correlations are nonsensical, no current methodology allows them to be reliably discarded, leaving researchers at greater risk when the spurious nature of cross-correlations is not clear from the unrelated origin of the time series and rather requires careful statistical estimation. Here we propose a theory and method (PLCC-test) which allows us to rigorously and robustly test for power-law cross-correlations, correctly detecting genuine and discarding spurious cross-correlations, thus establishing meaningful relationships between processes in complex physical systems. Our method reveals for the first time the presence of power-law cross-correlations between amplitudes of the alpha and beta frequency ranges of the human electroencephalogram. PMID:27250630

  4. Rendering-based video-CT registration with physical constraints for image-guided endoscopic sinus surgery

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Leonard, S.; Reiter, A.; Rajan, P.; Siewerdsen, J. H.; Ishii, M.; Taylor, R. H.; Hager, G. D.

    2015-03-01

    We present a system for registering the coordinate frame of an endoscope to pre- or intraoperatively acquired CT data based on optimizing the similarity metric between an endoscopic image and an image predicted via rendering of CT. Our method is robust and semi-automatic because it takes into account physical constraints, specifically, collisions between the endoscope and the anatomy, to initialize and constrain the search. The proposed optimization method is based on a stochastic optimization algorithm that evaluates a large number of similarity metric functions in parallel on a graphics processing unit. Images from a cadaver and a patient were used for evaluation. The registration error was 0.83 mm for the cadaver images and 1.97 mm for the patient images. The average registration time for 60 trials was 4.4 seconds. The patient study demonstrated the robustness of the proposed algorithm against moderate anatomical deformation.

  5. Robust control of accelerators

    NASA Astrophysics Data System (ADS)

    Johnson, W. Joel D.; Abdallah, Chaouki T.

    1991-07-01

    The problem of controlling the variations in the rf power system can be effectively cast as an application of modern control theory. Two components of this theory are obtaining a model and a feedback structure. The model inaccuracies influence the choice of a particular controller structure. Because of the modelling uncertainty, one has to design either a variable, adaptive controller or a fixed, robust controller to achieve the desired objective. The adaptive control scheme usually results in very complex hardware and therefore is not pursued in this research. In contrast, the robust control method leads to simpler hardware. However, robust control requires a more accurate mathematical model of the physical process than is required by adaptive control. Our research at the Los Alamos National Laboratory (LANL) and the University of New Mexico (UNM) has led to the development and implementation of a new robust rf power feedback system. In this article, we report on our research progress. In section 1, the robust control problem for the rf power system and the philosophy adopted for the beginning phase of our research are presented. In section 2, the results of our proof-of-principle experiments are presented. In section 3, we describe the actual controller configuration that is used in LANL FEL physics experiments. The novelty of our approach is that the control hardware is implemented directly in rf, without demodulating, compensating, and then remodulating.

  6. Topology of foreign exchange markets using hierarchical structure methods

    NASA Astrophysics Data System (ADS)

    Naylor, Michael J.; Rose, Lawrence C.; Moyle, Brendan J.

    2007-08-01

    This paper uses two physics-derived hierarchical techniques, a minimal spanning tree and an ultrametric hierarchical tree, to extract a topological influence map for major currencies from the ultrametric distance matrix for 1995-2001. We find that these two techniques generate a well-defined and robust scale-free network with meaningful taxonomy. The topology is shown to be robust with respect to method and time horizon, and is stable during market crises. This topology, appropriately used, gives a useful guide to determining the underlying economic or regional causal relationships for individual currencies and to understanding the dynamics of exchange rate price determination as part of a complex network.
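
The hierarchical-structure pipeline the paper applies can be sketched in a few lines: map correlations rho_ij to the distance d_ij = sqrt(2(1 - rho_ij)) and take the minimum spanning tree. A minimal sketch with an invented four-currency correlation matrix (values hypothetical, for illustration only):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# hypothetical correlations between four currencies
rho = np.array([[1.0, 0.7, 0.2, 0.1],
                [0.7, 1.0, 0.3, 0.2],
                [0.2, 0.3, 1.0, 0.6],
                [0.1, 0.2, 0.6, 1.0]])
d = np.sqrt(2.0 * (1.0 - rho))           # distance satisfying the metric axioms
mst = minimum_spanning_tree(d).toarray()  # keeps the n-1 strongest links
edges = sorted((min(i, j), max(i, j)) for i, j in zip(*np.nonzero(mst)))
```

Repeating the construction over rolling time windows is what allows the topology's robustness with respect to time horizon to be assessed.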

  7. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    PubMed

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with other modalities), one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, as well as by planar trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. Published by Elsevier B.V.
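
The robust trend-leveling idea, fitting a smooth surface while automatically downweighting features and outliers, can be sketched with an iteratively reweighted least-squares fit using the Tukey biweight; here a plane stands in for the paper's local-regression trend, and all parameter values are illustrative:

```python
import numpy as np

def robust_level(img, iters=5, c=4.685):
    """Remove a planar trend from an image, downweighting outliers
    via Tukey-biweight iteratively reweighted least squares (IRLS)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(img.size), xx.ravel(), yy.ravel()])
    z = img.ravel().astype(float)
    wts = np.ones_like(z)
    for _ in range(iters):
        sw = np.sqrt(wts)[:, None]
        coef, *_ = np.linalg.lstsq(A * sw, z * sw.ravel(), rcond=None)
        r = z - A @ coef
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust scale
        u = r / (c * s)
        wts = np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)   # outliers -> 0
    return (z - A @ coef).reshape(h, w)
```

A plain least-squares fit would be dragged toward bright features; the reweighting leaves such pixels out of the trend so only the tilt is removed.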

  8. Comparison of linear and nonlinear programming approaches for "worst case dose" and "minmax" robust optimization of intensity-modulated proton therapy dose distributions.

    PubMed

    Zaghian, Maryam; Cao, Wenhua; Liu, Wei; Kardar, Laleh; Randeniya, Sharmalee; Mohan, Radhe; Lim, Gino

    2017-03-01

    Robust optimization of intensity-modulated proton therapy (IMPT) takes uncertainties into account during spot weight optimization and leads to dose distributions that are resilient to uncertainties. Previous studies demonstrated benefits of linear programming (LP) for IMPT in terms of delivery efficiency by considerably reducing the number of spots required for the same quality of plans. However, a reduction in the number of spots may lead to loss of robustness. The purpose of this study was to evaluate and compare the performance in terms of plan quality and robustness of two robust optimization approaches using LP and nonlinear programming (NLP) models. The so-called "worst case dose" and "minmax" robust optimization approaches and conventional planning target volume (PTV)-based optimization approach were applied to designing IMPT plans for five patients: two with prostate cancer, one with skull-based cancer, and two with head and neck cancer. For each approach, both LP and NLP models were used. Thus, for each case, six sets of IMPT plans were generated and assessed: LP-PTV-based, NLP-PTV-based, LP-worst case dose, NLP-worst case dose, LP-minmax, and NLP-minmax. The four robust optimization methods behaved differently from patient to patient, and no method emerged as superior to the others in terms of nominal plan quality and robustness against uncertainties. The plans generated using LP-based robust optimization were more robust regarding patient setup and range uncertainties than were those generated using NLP-based robust optimization for the prostate cancer patients. However, the robustness of plans generated using NLP-based methods was superior for the skull-based and head and neck cancer patients. 
Overall, LP-based methods were suitable for the less challenging cancer cases, in which all uncertainty scenarios were able to satisfy tight dose constraints, while NLP performed better in the more difficult cases, in which tight dose limits were hard to meet under most uncertainty scenarios. For robust optimization, the worst case dose approach was less sensitive to uncertainties than was the minmax approach for the prostate and skull-based cancer patients, whereas the minmax approach was superior for the head and neck cancer patients. The robustness of the IMPT plans was remarkably better after robust optimization than after PTV-based optimization, and the NLP-PTV-based optimization outperformed the LP-PTV-based optimization regarding robustness of clinical target volume coverage. In addition, plans generated using LP-based methods had notably fewer scanning spots than did those generated using NLP-based methods. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
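
To see how a "minmax" robust objective becomes a linear program, consider a toy problem with two spots, two voxels, and two uncertainty scenarios (dose-influence matrices and prescription invented for illustration; this is not the study's clinical formulation): minimize the worst-case absolute deviation t of delivered dose from the prescription over all scenarios.

```python
import numpy as np
from scipy.optimize import linprog

D = [np.array([[1.0, 0.2], [0.1, 1.0]]),   # nominal dose-influence matrix
     np.array([[0.9, 0.3], [0.2, 0.9]])]   # range/setup-perturbed scenario
d = np.array([1.0, 1.0])                   # prescribed dose per voxel

# variables [w1, w2, t]: spot weights and worst-case deviation; minimize t
c = np.array([0.0, 0.0, 1.0])
A_ub, b_ub = [], []
for Ds in D:
    for row, di in zip(Ds, d):
        A_ub.append(np.append(row, -1.0)); b_ub.append(di)    #  (Dw)_i - t <= d_i
        A_ub.append(np.append(-row, -1.0)); b_ub.append(-di)  # -(Dw)_i - t <= -d_i
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None), (0, None), (0, None)])
w, t = res.x[:2], res.x[2]
```

An NLP counterpart would replace the linear worst-case deviation with, e.g., quadratic dose penalties; the trade-offs the study reports between spot count and robustness stem from exactly this difference in objective.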

  9. A Novel Wearable Sensor-Based Human Activity Recognition Approach Using Artificial Hydrocarbon Networks.

    PubMed

    Ponce, Hiram; Martínez-Villaseñor, María de Lourdes; Miralles-Pechuán, Luis

    2016-07-05

    Human activity recognition has gained interest in several research communities given that understanding user activities and behavior helps to deliver proactive and personalized services. There are many examples of health systems improved by human activity recognition. Nevertheless, the human activity recognition classification process is not an easy task. Different types of noise in wearable sensor data frequently hamper the classification process. In order to develop a successful activity recognition system, it is necessary to use stable and robust machine learning techniques capable of dealing with noisy data. In this paper, we present the artificial hydrocarbon networks (AHN) technique to the human activity recognition community. Our novel artificial hydrocarbon networks approach is suitable for physical activity recognition, tolerant of noise from corrupted data sensors, and robust to a variety of sensor-data issues. We demonstrate that the AHN classifier is highly competitive for physical activity recognition and very robust in comparison with other well-known machine learning methods.

  10. Robustness and structure of complex networks

    NASA Astrophysics Data System (ADS)

    Shao, Shuai

    This dissertation covers the two major parts of my PhD research on statistical physics and complex networks: i) modeling a new type of attack -- localized attack, and investigating robustness of complex networks under this type of attack; ii) discovering the clustering structure in complex networks and its influence on the robustness of coupled networks. Complex networks appear in every aspect of our daily life and are widely studied in Physics, Mathematics, Biology, and Computer Science. One important property of complex networks is their robustness under attacks, which depends crucially on the nature of attacks and the structure of the networks themselves. Previous studies have focused on two types of attack: random attack and targeted attack, which, however, are insufficient to describe many real-world damages. Here we propose a new type of attack -- localized attack, and study the robustness of complex networks under this type of attack, both analytically and via simulation. On the other hand, we also study the clustering structure in the network, and its influence on the robustness of a complex network system. In the first part, we propose a theoretical framework to study the robustness of complex networks under localized attack based on percolation theory and the generating function method. We investigate the percolation properties, including the critical threshold of the phase transition p_c and the size of the giant component P_∞. We compare localized attack with random attack and find that while random regular (RR) networks are more robust against localized attack, Erdős-Rényi (ER) networks are equally robust under both types of attacks. As for scale-free (SF) networks, their robustness depends crucially on the degree exponent λ. The simulation results show perfect agreement with theoretical predictions. 
We also test our model on two real-world networks: a peer-to-peer computer network and an airline network, and find that the real-world networks are much more vulnerable to localized attack than to random attack. In the second part, we extend the tree-like generating function method to incorporate clustering structure in complex networks. We study the robustness of a complex network system, especially a network of networks (NON), with clustering structure in each network. We find that the system becomes less robust as we increase the clustering coefficient of each network. For a partially dependent network system, we also find that the influence of the clustering coefficient on network robustness decreases as we decrease the coupling strength, and the critical coupling strength q_c, at which the first-order phase transition changes to second-order, increases as we increase the clustering coefficient.
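
For the Erdős-Rényi case mentioned above, the generating-function framework collapses to a one-line self-consistency equation; a minimal sketch for random node removal (not the dissertation's localized-attack formalism):

```python
import numpy as np

def er_giant_component(k, p, iters=500):
    """Fixed-point solution of P_inf = p * (1 - exp(-k * P_inf)) for an
    Erdos-Renyi network with mean degree k after randomly keeping a
    fraction p of the nodes."""
    P = p                      # nonzero starting guess
    for _ in range(iters):
        P = p * (1.0 - np.exp(-k * P))
    return P

# the percolation threshold is p_c = 1 / k: below it the giant
# component vanishes, above it P_inf grows continuously
```

For localized or targeted attack the same fixed-point structure survives, but the generating functions must be modified to reflect which nodes are removed.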

  11. Conditioning and Robustness of RNA Boltzmann Sampling under Thermodynamic Parameter Perturbations.

    PubMed

    Rogers, Emily; Murrugarra, David; Heitsch, Christine

    2017-07-25

    Understanding how RNA secondary structure prediction methods depend on the underlying nearest-neighbor thermodynamic model remains a fundamental challenge in the field. Minimum free energy (MFE) predictions are known to be "ill conditioned" in that small changes to the thermodynamic model can result in significantly different optimal structures. Hence, the best practice is now to sample from the Boltzmann distribution, which generates a set of suboptimal structures. Although the structural signal of this Boltzmann sample is known to be robust to stochastic noise, the conditioning and robustness under thermodynamic perturbations have yet to be addressed. We present here a mathematically rigorous model for conditioning inspired by numerical analysis, and also a biologically inspired definition for robustness under thermodynamic perturbation. We demonstrate the strong correlation between conditioning and robustness and use its tight relationship to define quantitative thresholds for well versus ill conditioning. These resulting thresholds demonstrate that the majority of the sequences are at least sample robust, which verifies the assumption of sampling's improved conditioning over the MFE prediction. Furthermore, because we find no correlation between conditioning and MFE accuracy, the presence of both well- and ill-conditioned sequences indicates the continued need for both thermodynamic model refinements and alternate RNA structure prediction methods beyond the physics-based ones. Copyright © 2017. Published by Elsevier Inc.

  12. Deep learning and model predictive control for self-tuning mode-locked lasers

    NASA Astrophysics Data System (ADS)

    Baumeister, Thomas; Brunton, Steven L.; Nathan Kutz, J.

    2018-03-01

    Self-tuning optical systems are of growing importance in technological applications such as mode-locked fiber lasers. Such self-tuning paradigms require intelligent algorithms capable of inferring approximate models of the underlying physics and discovering appropriate control laws in order to maintain robust performance for a given objective. In this work, we demonstrate the first integration of a deep learning (DL) architecture with model predictive control (MPC) in order to self-tune a mode-locked fiber laser. Not only can our DL-MPC algorithmic architecture approximate the unknown fiber birefringence, it also builds a dynamical model of the laser and an appropriate control law for maintaining robust, high-energy pulses despite a stochastically drifting birefringence. We demonstrate the effectiveness of this method on a fiber laser which is mode-locked by nonlinear polarization rotation. The method advocated can be broadly applied to a variety of optical systems that require robust controllers.

  13. Using Velocity Anisotropy to Analyze Magnetohydrodynamic Turbulence in Giant Molecular Clouds

    NASA Astrophysics Data System (ADS)

    Madrid, Alecio; Hernandez, Audra

    2018-01-01

    Structure function (SF) analysis is a powerful tool for gauging the Alfvénic properties of magnetohydrodynamic (MHD) simulations, yet there is a lack of literature rigorously investigating its limitations in the context of radio spectroscopy. This study takes an in-depth approach to studying the limitations of SF analysis for analyzing MHD turbulence in giant molecular cloud (GMC) spectroscopy data. MHD turbulence plays a critical role in the structure and evolution of GMCs as well as in the formation of the sub-structures known to spawn stellar progenitors. Existing methods of detection are neither economical nor robust (e.g. dust polarization), and nowhere is this more clear than in the theoretical-observational divide in the current literature. A significant limitation of GMC spectroscopy results from the large variation in methods used for extracting GMCs from survey data. Thus, a robust method for studying MHD turbulence must correctly gauge physical properties regardless of the data extraction method used. While SF analysis has demonstrated strong potential across a range of simulated conditions, this study finds significant concerns regarding its feasibility as a robust tool in GMC spectroscopy.
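
The quantity at the heart of SF analysis is straightforward to state; a minimal sketch of a one-dimensional second-order structure function (anisotropy studies compare it along and across the projected field direction; the function name and call pattern are illustrative):

```python
import numpy as np

def structure_function(v, lags, order=2):
    """S_p(l) = <|v(x + l) - v(x)|^p>, the p-th order structure function
    of a 1-d velocity (e.g. centroid-velocity) profile."""
    return np.array([np.mean(np.abs(v[l:] - v[:-l]) ** order) for l in lags])
```

For MHD turbulence, the degree of velocity anisotropy is read off by comparing S_2 parallel and perpendicular to the local magnetic field; the study's concern is how the GMC extraction method distorts exactly this comparison.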

  14. An uncertainty principle for star formation - II. A new method for characterising the cloud-scale physics of star formation and feedback across cosmic history

    NASA Astrophysics Data System (ADS)

    Kruijssen, J. M. Diederik; Schruba, Andreas; Hygate, Alexander P. S.; Hu, Chia-Yu; Haydon, Daniel T.; Longmore, Steven N.

    2018-05-01

    The cloud-scale physics of star formation and feedback represent the main uncertainty in galaxy formation studies. Progress is hampered by the limited empirical constraints outside the restricted environment of the Local Group. In particular, the poorly quantified time evolution of the molecular cloud lifecycle, star formation, and feedback obstructs robust predictions on the scales smaller than the disc scale height that are resolved in modern galaxy formation simulations. We present a new statistical method to derive the evolutionary timeline of molecular clouds and star-forming regions. By quantifying the excess or deficit of the gas-to-stellar flux ratio around peaks of gas or star formation tracer emission, we directly measure the relative rarity of these peaks, which allows us to derive their lifetimes. We present a step-by-step, quantitative description of the method and demonstrate its practical application. The method's accuracy is tested in nearly 300 experiments using simulated galaxy maps, showing that it is capable of constraining the molecular cloud lifetime and feedback time-scale to <0.1 dex precision. Access to the evolutionary timeline provides a variety of additional physical quantities, such as the cloud-scale star formation efficiency, the feedback outflow velocity, the mass loading factor, and the feedback energy or momentum coupling efficiencies to the ambient medium. We show that the results are robust for a wide variety of gas and star formation tracers, spatial resolutions, galaxy inclinations, and galaxy sizes. Finally, we demonstrate that our method can be applied out to high redshift (z ≲ 4) with a feasible time investment on current large-scale observatories. This is a major shift from previous studies that constrained the physics of star formation and feedback in the immediate vicinity of the Sun.

  15. The cardiovascular robustness hypothesis: Unmasking young adults' hidden risk for premature cardiovascular death.

    PubMed

    Kraushaar, Lutz E; Dressel, Alexander

    2018-03-01

    An undetected high risk for premature death of cardiovascular disease (CVD) among individuals with low-to-moderate risk factor levels is an acknowledged obstacle to CVD prevention. In this paper, we present the hypothesis that the vasculature's robustness against risk factor load will complement conventional risk factor models as a novel stratifier of risk. Figuratively speaking, mortality risk prediction without robustness scoring is akin to predicting the breaking risk of a lake's ice sheet by considering load only while disregarding the sheet's bearing strength. Taking the cue from systems biology, which defines robustness as the ability to maintain function against internal and external challenges, we develop a robustness score from the physical parameters that comprehensively quantitate cardiovascular function. We derive the functional parameters using a recently introduced novel system, VascAssist 2 (iSYMED GmbH, Butzbach, Germany). VascAssist 2 (VA) applies the electronic-hydraulic analogy to a digital model of the arterial tree, replicating non-invasively acquired pulse pressure waves by modulating the electronic equivalents of the physical parameters that describe in vivo arterial hemodynamics. As the latter are also subject to aging-associated degeneration, which (a) progresses at inter-individually different rates and (b) affects the biomarker-mortality association, we express the robustness score as a correction factor to calendar age (CA), the dominant risk factor in all CVD risk factor models. We then propose a method for the validation of the score against known time-to-event data in reference populations. Our conceptualization of robustness implies that risk factor-challenged individuals with low robustness scores will face preferential elimination from the population, resulting in a significant robustness-CA correlation in this stratum that is absent in the unchallenged stratum. 
Hence, we also present an outline of a cross-sectional study design suitable to test this hypothesis. We finally discuss the objections that may validly be raised against our robustness hypothesis, and how available evidence encourages us to refute these objections. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Gradient-based Electrical Properties Tomography (gEPT): a Robust Method for Mapping Electrical Properties of Biological Tissues In Vivo Using Magnetic Resonance Imaging

    PubMed Central

    Liu, Jiaen; Zhang, Xiaotong; Schmitter, Sebastian; Van de Moortele, Pierre-Francois; He, Bin

    2014-01-01

    Purpose To develop high-resolution electrical properties tomography (EPT) methods and investigate a gradient-based EPT (gEPT) approach which aims to reconstruct the electrical properties (EP), including conductivity and permittivity, of an imaged sample from experimentally measured B1 maps with improved boundary reconstruction and robustness against measurement noise. Theory and Methods Using a multi-channel transmit/receive stripline head coil, with acquired B1 maps for each coil element, by assuming negligible Bz component compared to transverse B1 components, a theory describing the relationship between B1 field, EP value and their spatial gradient has been proposed. The final EP images were obtained through spatial integration over the reconstructed EP gradient. Numerical simulation, physical phantom and in vivo human experiments at 7 T have been conducted to evaluate the performance of the proposed methods. Results Reconstruction results were compared with target EP values in both simulations and phantom experiments. Human experimental results were compared with EP values in literature. Satisfactory agreement was observed with improved boundary reconstruction. Importantly, the proposed gEPT method proved to be more robust against noise when compared to previously described non-gradient-based EPT approaches. Conclusion The proposed gEPT approach holds promises to improve EP mapping quality by recovering the boundary information and enhancing robustness against noise. PMID:25213371

  17. Post-Fisherian Experimentation: From Physical to Virtual

    DOE PAGES

    Jeff Wu, C. F.

    2014-04-24

    Fisher's pioneering work in design of experiments has inspired further work with broader applications, especially in industrial experimentation. Three topics in physical experiments are discussed: the principles of effect hierarchy, sparsity, and heredity for factorial designs; a new method called CME for de-aliasing aliased effects; and robust parameter design. The recent emergence of virtual experiments on a computer is reviewed. Here, some major challenges in computer experiments, which must go beyond Fisherian principles, are outlined.

  18. Adiabatic gate teleportation.

    PubMed

    Bacon, Dave; Flammia, Steven T

    2009-09-18

    The difficulty in producing precisely timed and controlled quantum gates is a significant source of error in many physical implementations of quantum computers. Here we introduce a simple universal primitive, adiabatic gate teleportation, which is robust to timing errors and many control errors and maintains a constant energy gap throughout the computation above a degenerate ground state space. This construction allows for geometric robustness based upon the control of two independent qubit interactions. Further, our piecewise adiabatic evolution easily relates to the quantum circuit model, enabling the use of standard methods from fault-tolerance theory for establishing thresholds.

  19. Robustness-Based Design Optimization Under Data Uncertainty

    NASA Technical Reports Server (NTRS)

    Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence

    2010-01-01

    This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to solutions of the design problem that are least sensitive to variations in the input random variables.
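
The flavor of a robustness-based design objective can be sketched as a mean-plus-spread trade-off over samples of an uncertain input; the performance model below is a toy stand-in, not the paper's TSTO problem, and every name and constant is hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def performance(design, x):
    # toy performance model: quadratic in the design variable,
    # perturbed by the uncertain input x (illustrative only)
    return (design - 2.0) ** 2 + 0.5 * design * x

def robust_objective(design, x_samples, k=3.0):
    vals = performance(design, x_samples)
    return vals.mean() + k * vals.std()   # penalize sensitivity to x

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 0.1, 1000)           # sampled aleatory uncertainty
res = minimize(lambda d: robust_objective(d[0], xs), x0=[1.0],
               method="Nelder-Mead")
```

The spread penalty pulls the optimum slightly away from the nominal minimum at 2.0; with interval (epistemic) data, the sample set would instead be replaced by bounds on the distribution, as in the paper's formulations.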

  20. Methodology to explore interactions between the water system and society in order to identify adaptation strategies

    NASA Astrophysics Data System (ADS)

    Offermans, A. G. E.; Haasnoot, M.

    2009-04-01

    Development of sustainable water management strategies involves analysing current and future vulnerability, identification of adaptation possibilities, effect analysis, and evaluation of the strategies under different possible futures. Recent studies on water management have often followed the pressure-effect chain and compared the state of social, economic and ecological functions of the water systems in one or two future situations with the current situation. The future is, however, more complex and dynamic. Water management faces major challenges to cope with future uncertainties in both the water system and the social system. Uncertainties in our water system relate to (changes in) drivers and pressures and their effects on the state, like the effects of climate change on discharges. Uncertainties in the social world relate to changing perceptions, objectives and demands concerning water (management), which are often related to the aforementioned changes in the physical environment. The methodology presented here comprises the 'Perspectives method', derived from Cultural Theory, a method for analysing and classifying social responses to social and natural states and pressures. The method will be used for scenario analysis and to identify social responses, including changes in perspectives and management strategies. The scenarios and responses will be integrated within a rapid assessment tool. The purpose of the tool is to provide users with insight into the interaction of the social and physical systems and to identify robust water management strategies by analysing their effectiveness under different possible futures on the physical, social and socio-economic system. This method allows for a mutual interaction between the physical and social systems. We will present the theoretical background of the perspectives method as well as a historical overview of perspective changes in the Dutch Meuse area to show how social and physical systems interrelate. 
We will also show how the integration of both can contribute to the identification of robust water management strategies.

  1. Advances in continuum kinetic and gyrokinetic simulations of turbulence on open-field line geometries

    NASA Astrophysics Data System (ADS)

    Hakim, Ammar; Shi, Eric; Juno, James; Bernard, Tess; Hammett, Greg

    2017-10-01

    For weakly collisional (or collisionless) plasmas, kinetic effects are required to capture the physics of micro-turbulence. We have implemented solvers for kinetic and gyrokinetic equations in the computational plasma physics framework Gkeyll. We use a version of the discontinuous Galerkin scheme that conserves energy exactly. Plasma sheaths are modeled with novel boundary conditions. Positivity of distribution functions is maintained via a reconstruction method, allowing robust simulations that continue to conserve energy even with positivity limiters. We have performed a large number of benchmarks, verifying the accuracy and robustness of our code. We demonstrate the application of our algorithm to two classes of problems: (a) Vlasov-Maxwell simulations of turbulence in a magnetized plasma, applicable to space plasmas; (b) gyrokinetic simulations of turbulence in open-field-line geometries, applicable to laboratory plasmas. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.

  2. Robust Crossfeed Design for Hovering Rotorcraft

    NASA Technical Reports Server (NTRS)

    Catapang, David R.

    1993-01-01

    Control law design for rotorcraft fly-by-wire systems normally attempts to decouple angular responses using fixed-gain crossfeeds. This approach can lead to poor decoupling over the frequency range of pilot inputs and increase the load on the feedback loops. In order to improve the decoupling performance, dynamic crossfeeds may be adopted. Moreover, because of the large changes that occur in rotorcraft dynamics due to small changes about the nominal design condition, especially for near-hovering flight, the crossfeed design must be 'robust'. A new low-order matching method is presented here to design robust crossfeed compensators for multi-input, multi-output (MIMO) systems. The technique identifies degrees-of-freedom that can be decoupled using crossfeeds, given an anticipated set of parameter variations for the range of flight conditions of concern. Cross-coupling is then reduced for degrees-of-freedom that can use crossfeed compensation by minimizing off-axis response magnitude average and variance. Results are presented for the analysis of pitch, roll, yaw and heave coupling of the UH-60 Black Hawk helicopter in near-hovering flight. Robust crossfeeds are designed that show significant improvement in decoupling performance and robustness over nominal, single design point, compensators. The design method and results are presented in an easily used graphical format that lends significant physical insight to the design procedure. This plant pre-compensation technique is an appropriate preliminary step to the design of robust feedback control laws for rotorcraft.

  3. Robust quantum control using smooth pulses and topological winding

    NASA Astrophysics Data System (ADS)

    Barnes, Edwin; Wang, Xin

    2015-03-01

    Perhaps the greatest challenge in achieving control of microscopic quantum systems is the decoherence induced by the environment, a problem which pervades experimental quantum physics and is particularly severe in the context of solid state quantum computing and nanoscale quantum devices because of the inherently strong coupling to the surrounding material. We present an analytical approach to constructing intrinsically robust driving fields which automatically cancel the leading-order noise-induced errors in a qubit's evolution exactly. We address two of the most common types of non-Markovian noise that arise in qubits: slow fluctuations of the qubit energy splitting and fluctuations in the driving field itself. We demonstrate our method by constructing robust quantum gates for several types of spin qubits, including phosphorous donors in silicon and nitrogen-vacancy centers in diamond. Our results constitute an important step toward achieving robust generic control of quantum systems, bringing their novel applications closer to realization. Work supported by LPS-CMTC.

  4. A new method for teaching physical examination to junior medical students

    PubMed Central

    Sayma, Meelad; Williams, Hywel Rhys

    2016-01-01

    Introduction: Teaching effective physical examination is a key component in the education of medical students. Preclinical medical students often have insufficient clinical knowledge to apply to physical examination recall, which may hinder their learning when taught through certain understanding-based models. This pilot project aimed to develop a method to teach physical examination to preclinical medical students using “core clinical cases”, overcoming the need for “rote” learning. Methods: This project was developed utilizing three cycles of planning, action, and reflection. Thematic analysis of feedback was used to improve this model, and ensure it met student expectations. Results and discussion: A model core clinical case developed in this project is described, with gout as the basis for a “foot and ankle” examination. Key limitations and difficulties encountered on implementation of this pilot are discussed for future users, including the difficulty encountered in “content overload”. Conclusion: This approach aims to teach junior medical students physical examination through understanding, using a simulated patient environment. Robust research is now required to demonstrate efficacy and repeatability in the physical examination of other systems. PMID:26937208

  5. Robust and Accurate Shock Capturing Method for High-Order Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Atkins, Harold L.; Pampell, Alyssa

    2011-01-01

    A simple yet robust and accurate approach for capturing shock waves using a high-order discontinuous Galerkin (DG) method is presented. The method uses the physical viscous terms of the Navier-Stokes equations as suggested by others; however, the proposed formulation of the numerical viscosity is continuous and compact by construction, and does not require the solution of an auxiliary diffusion equation. This work also presents two analyses that guided the formulation of the numerical viscosity and certain aspects of the DG implementation. A local eigenvalue analysis of the DG discretization applied to a shock-containing element is used to evaluate the robustness of several Riemann flux functions, and to evaluate algorithm choices that exist within the underlying DG discretization. A second analysis examines exact solutions to the DG discretization in a shock-containing element, and identifies a "model" instability that will inevitably arise when solving the Euler equations using the DG method. This analysis identifies the minimum viscosity required for stability. The shock capturing method is demonstrated for high-speed flow over an inviscid cylinder and for an unsteady disturbance in a hypersonic boundary layer. Numerical tests are presented that evaluate several aspects of the shock detection terms. The sensitivity of the results to model parameters is examined with grid and order refinement studies.

  6. Efficient physics-based tracking of heart surface motion for beating heart surgery robotic systems.

    PubMed

    Bogatyrenko, Evgeniya; Pompey, Pascal; Hanebeck, Uwe D

    2011-05-01

    Tracking of beating heart motion in a robotic surgery system is required for complex cardiovascular interventions. A heart surface motion tracking method is developed, including a stochastic physics-based heart surface model and an efficient reconstruction algorithm. The algorithm uses the constraints provided by the model, which exploits the physical characteristics of the heart. The main advantage of the model is that it is more realistic than most standard heart models. Additionally, no explicit matching between the measurements and the model is required. The application of meshless methods significantly reduces the complexity of physics-based tracking. Based on the stochastic physical model of the heart surface, this approach considers the motion of the intervention area and is robust to occlusions and reflections. The tracking algorithm is evaluated in simulations and experiments on an artificial heart. Providing higher accuracy than standard model-based methods, it successfully copes with occlusions and provides high performance even when not all measurements are available. Combining the physical and stochastic description of the heart surface motion ensures physically correct and accurate prediction. Automatic initialization of the physics-based cardiac motion tracking enables system evaluation in a clinical environment.

  7. Resolution-Enhanced Harmonic and Interharmonic Measurement for Power Quality Analysis in Cyber-Physical Energy System.

    PubMed

    Liu, Yanchi; Wang, Xue; Liu, Youda; Cui, Sujin

    2016-06-27

    Power quality analysis issues, especially the measurement of harmonics and interharmonics in cyber-physical energy systems, are addressed in this paper. As new situations are introduced to the power system, the impact of electric vehicles, distributed generation and renewable energy has introduced extra demands to distributed sensors, waveform-level information and power quality data analytics. Harmonics and interharmonics, as the most significant disturbances, require carefully designed detection methods for an accurate measurement of electric loads whose information is crucial to subsequent analysis and control. This paper gives a detailed description of the power quality analysis framework in a networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. The proposed method first extracts harmonic and interharmonic components efficiently using the single-channel version of Robust Independent Component Analysis (RobustICA), then estimates the high-resolution frequency from three discrete Fourier transform (DFT) samples with little additional computation, and finally computes the amplitudes and phases with the adaptive linear neuron network. The experiments show that the proposed method is time-efficient and leads to better accuracy on the simulated and experimental signals in the presence of noise and fundamental frequency deviation, thus providing a deeper insight into the (inter)harmonic sources or even the whole system.
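
    The "high-resolution frequency from three DFT samples" step can be illustrated with a standard three-sample interpolator. The paper's exact formula is not reproduced here; the sketch below uses a Jacobsen-style fractional-bin estimator for a rectangular window, and a complex tone for simplicity, with invented sample rate and tone frequency.

```python
import numpy as np

fs, n = 1000.0, 256                  # sample rate (Hz) and DFT length
f_true = 20.1 * fs / n               # an interharmonic between bin centers
t = np.arange(n) / fs
x = np.exp(2j * np.pi * f_true * t)  # complex tone, for simplicity

spec = np.fft.fft(x)
k = int(np.argmax(np.abs(spec)))     # coarse peak bin from the DFT
xm, x0, xp = spec[k - 1], spec[k], spec[k + 1]

# Fractional-bin offset estimated from the three complex samples around
# the peak (Jacobsen-style estimator for a rectangular window).
delta = ((xm - xp) / (2 * x0 - xm - xp)).real
f_est = (k + delta) * fs / n
print(f"true {f_true:.3f} Hz, estimated {f_est:.3f} Hz")
```

    The correction costs only a handful of arithmetic operations on samples the DFT already produced, which is the "little additional computation" the abstract refers to.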

  9. Credit allocation for research institutes

    NASA Astrophysics Data System (ADS)

    Wang, J.-P.; Guo, Q.; Yang, K.; Han, J.-T.; Liu, J.-G.

    2017-05-01

    Assessing the research performance of multiple institutes is challenging. Since it is unfair to divide a paper's credit evenly among institutes that occupy different positions in its author list, we present a credit allocation method (CAM) with a weighted order coefficient for multiple institutes. The results for the APS dataset with 18987 institutes show that top-ranked institutes obtained by the CAM method correspond to well-known universities or research labs with high reputations in physics. Moreover, we evaluate the performance of the CAM method when citation links are added or rewired randomly, quantified by Kendall's Tau and the Jaccard index. The experimental results indicate that the CAM method is more robust than the total number of citations (TC) method and Shen's method. Finally, we give the first 20 Chinese universities in physics obtained by the CAM method. The method is not specific to physics and is valid for any other branch of science. It also provides universities and policy makers with an effective tool to quantify and balance the academic performance of universities.
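
    The robustness protocol — perturb some citation links at random, recompute scores, and compare rankings with Kendall's Tau — can be sketched as follows. Plain citation totals stand in for the CAM weighting here, the tau is the simplified tau-a variant (tied pairs contribute zero), and all counts are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_inst = 50
scores = rng.poisson(30, size=n_inst).astype(float)  # toy citation totals

noisy = scores.copy()
idx = rng.choice(n_inst, size=5, replace=False)      # "rewire" ~10% of links
noisy[idx] += rng.integers(-5, 6, size=idx.size)

def kendall_tau(a, b):
    # Simplified tau-a over all pairs i < j; ties contribute zero.
    i, j = np.triu_indices(len(a), k=1)
    s = np.sign(a[i] - a[j]) * np.sign(b[i] - b[j])
    return s.sum() / len(s)

tau = kendall_tau(scores, noisy)
print(f"rank correlation after perturbation: tau = {tau:.3f}")
```

    A more robust allocation method is one whose tau (and Jaccard overlap of the top-ranked set) stays closer to 1 as the perturbation fraction grows.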

  11. Rugged spin-polarized electron sources based on negative electron affinity GaAs photocathode with robust Cs2Te coating

    NASA Astrophysics Data System (ADS)

    Bae, Jai Kwan; Cultrera, Luca; DiGiacomo, Philip; Bazarov, Ivan

    2018-04-01

    Photocathodes capable of providing high intensity and highly spin-polarized electron beams with long operational lifetimes are of great interest for next-generation nuclear physics facilities like Electron Ion Colliders. We report on GaAs photocathodes activated by Cs2Te, a material well known for its robustness. GaAs activated by Cs2Te forms Negative Electron Affinity, and the lifetime for extracted charge is improved by a factor of 5 compared to that of GaAs activated by Cs and O2. The spin polarization of photoelectrons was measured using a Mott polarimeter and found to be independent of the activation method, thereby shifting the paradigm on spin-polarized electron sources employing photocathodes with robust coatings.

  12. External skeletal robusticity of children and adolescents - European references from birth to adulthood and international comparisons.

    PubMed

    Mumm, Rebekka; Godina, Elena; Koziel, Slawomir; Musalek, Martin; Sedlak, Petr; Wittwer-Backofen, Ursula; Hesse, Volker; Dasgupta, Parasmani; Henneberg, Maciej; Scheffler, Christiane

    2018-06-11

    Background: In our modern world, the way of life in nutritional and activity behaviour has changed. As a consequence, parallel trends of an epidemic of overweight and a decline in external skeletal robusticity are observed in children and adolescents. Aim: We aim to develop reference centiles for external skeletal robusticity of European girls and boys aged 0 to 18 years using the Frame Index as an indicator and identify population-specific age-related patterns. Methods: We analysed cross-sectional and longitudinal data on body height and elbow breadth of boys and girls from Europe (0-18 years, n = 41,679), India (7-18 years, n = 3,297) and South Africa (3-18 years, n = 4,346). As an indicator of external skeletal robusticity, the Frame Index after Frisancho (1990) was used. We developed centiles for boys and girls using the LMS method and its extension. Results: Boys have greater external skeletal robusticity than girls. Whereas in girls the Frame Index decreases continuously during growth, in European boys an increase of the Frame Index from 12 to 16 years can be observed. Indian and South African boys are similar in Frame Index to European boys. In girls, the pattern is slightly different. Whereas South African girls are similar to European girls, Indian girls show lesser external skeletal robusticity. Conclusion: Accurate references for external skeletal robusticity are needed to evaluate whether skeletal development is adequate for age. They should be used to monitor effects of changes in lifestyle and physical activity levels in children and adolescents to avoid negative health outcomes like osteoporosis and arthrosis.
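
    The Frame Index is a simple ratio of elbow breadth to stature. The convention below (elbow breadth in mm, height in cm, scaled by 100) is how the Frisancho (1990) index is commonly stated; consult the original reference for the exact definition, and note the example values are invented.

```python
# Frame Index after Frisancho (1990): elbow breadth relative to stature.
# Assumed convention: elbow breadth in mm, body height in cm, scaled by 100.
def frame_index(elbow_breadth_mm: float, height_cm: float) -> float:
    return elbow_breadth_mm / height_cm * 100.0

# Hypothetical example: a child of 140 cm with a 58 mm elbow breadth.
print(round(frame_index(58.0, 140.0), 2))  # → 41.43
```

    The reference centiles in the paper then place such a value against the age- and sex-specific distribution fitted with the LMS method.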

  13. Kids in the city study: research design and methodology.

    PubMed

    Oliver, Melody; Witten, Karen; Kearns, Robin A; Mavoa, Suzanne; Badland, Hannah M; Carroll, Penelope; Drumheller, Chelsea; Tavae, Nicola; Asiasiga, Lanuola; Jelley, Su; Kaiwai, Hector; Opit, Simon; Lin, En-Yi Judy; Sweetsur, Paul; Barnes, Helen Moewaka; Mason, Nic; Ergler, Christina

    2011-07-24

    Physical activity is essential for optimal physical and psychological health but substantial declines in children's activity levels have occurred in New Zealand and internationally. Children's independent mobility (i.e., outdoor play and traveling to destinations unsupervised), an integral component of physical activity in childhood, has also declined radically in recent decades. Safety-conscious parenting practices, car reliance and auto-centric urban design have converged to produce children living increasingly sedentary lives. This research investigates how urban neighborhood environments can support, enable, or restrict children's independent mobility, thereby influencing physical activity accumulation and participation in daily life. The study is located in six Auckland, New Zealand neighborhoods, diverse in terms of urban design attributes, particularly residential density. Participants comprise 160 children aged 9-11 years and their parents/caregivers. Objective measures (global positioning systems, accelerometers, geographical information systems, observational audits) assessed children's independent mobility and physical activity, neighborhood infrastructure, and streetscape attributes. Parent and child neighborhood perceptions and experiences were assessed using qualitative research methods. This study is one of the first internationally to examine the association of specific urban design attributes with child independent mobility. Using appropriate, best-practice objective measures, this study provides robust epidemiological information regarding the relationships between the built environment and health outcomes for this population.

  14. Standard cell electrical and physical variability analysis based on automatic physical measurement for design-for-manufacturing purposes

    NASA Astrophysics Data System (ADS)

    Shauly, Eitan; Parag, Allon; Khmaisy, Hafez; Krispil, Uri; Adan, Ofer; Levi, Shimon; Latinski, Sergey; Schwarzband, Ishai; Rotstein, Israel

    2011-04-01

    A fully automated system for process variability analysis of high density standard cells was developed. The system consists of layout analysis with device mapping: device type, location, configuration and more. The mapping step was created by a simple DRC run-set. This database was then used as an input for choosing locations for SEM images and for specific layout parameter extraction, used by SPICE simulation. This method was used to analyze large arrays of standard cell blocks, manufactured using Tower TS013LV (Low Voltage for high-speed applications) Platforms. Variability of physical parameters such as Lgate and line-width roughness, as well as of electrical parameters such as drive current (Ion) and off current (Ioff), was calculated and statistically analyzed in order to understand the variability root cause. Comparison between transistors having the same W/L but with different layout configurations and different layout environments (around the transistor) was made in terms of performance as well as process variability. We successfully defined "robust" and "less-robust" transistor configurations, and updated guidelines for Design-for-Manufacturing (DfM).

  15. Fire metrology: Current and future directions in physics-based measurements

    Treesearch

    Robert L. Kremens; Alistair M.S. Smith; Matthew B. Dickinson

    2010-01-01

    The robust evaluation of fire impacts on the biota, soil, and atmosphere requires measurement and analysis methods that can characterize combustion processes across a range of temporal and spatial scales. Numerous challenges are apparent in the literature. These challenges have led to novel research to quantify the 1) structure and heterogeneity of the pre-fire...

  16. Robust signals of future projections of Indian summer monsoon rainfall by IPCC AR5 climate models: Role of seasonal cycle and interannual variability

    NASA Astrophysics Data System (ADS)

    Jayasankar, C. B.; Surendran, Sajani; Rajendran, Kavirajan

    2015-05-01

    Coupled Model Intercomparison Project phase 5 (Fifth Assessment Report of the Intergovernmental Panel on Climate Change) coupled global climate model Representative Concentration Pathway 8.5 simulations are analyzed to derive robust signals of projected changes in Indian summer monsoon rainfall (ISMR) and its variability. Models project clear future temperature increase but diverse changes in ISMR with substantial intermodel spread. Objective measures of interannual variability (IAV) yield nearly equal chances of future increase or decrease. This leads to a discrepancy in quantifying changes in ISMR and variability. However, based primarily on the physical association between mean changes in ISMR and its IAV, and objective methods such as k-means clustering with Dunn's validity index, mean seasonal cycle, and reliability ensemble averaging, projections fall into distinct groups. Physically consistent groups of models with the highest reliability project future reduction in the frequency of light rainfall but increase in high to extreme rainfall and thereby future increase in ISMR by 0.74 ± 0.36 mm d-1, along with increased future IAV. These robust estimates of future changes are important for useful impact assessments.
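
    The grouping step — sort models into physically consistent clusters by their projected changes — can be sketched with a minimal k-means. The data below are synthetic stand-ins (one point per "model": mean ISMR change vs. change in IAV), and a tiny Lloyd's-algorithm implementation replaces the paper's pipeline (which also applies Dunn's validity index and reliability ensemble averaging).

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic ensemble: one (mean ISMR change, IAV change) point per model.
wet = rng.normal([0.7, 0.2], 0.1, size=(12, 2))   # wetter, more variable
dry = rng.normal([-0.3, -0.1], 0.1, size=(8, 2))  # drier, less variable
x = np.vstack([wet, dry])

def kmeans(x, k, iters=50):
    # Plain Lloyd's algorithm: assign to nearest center, recompute means.
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(x, 2)
print("cluster sizes:", np.bincount(labels))
```

    With well-separated groups, the clustering recovers the "increase" and "decrease" camps; a validity index such as Dunn's is then used to check that the grouping is meaningful rather than an artifact of k.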

  17. Moving Liquids with Sound: The Physics of Acoustic Droplet Ejection for Robust Laboratory Automation in Life Sciences.

    PubMed

    Hadimioglu, Babur; Stearns, Richard; Ellson, Richard

    2016-02-01

    Liquid handling instruments for life science applications based on droplet formation with focused acoustic energy or acoustic droplet ejection (ADE) were introduced commercially more than a decade ago. While the idea of "moving liquids with sound" was known in the 20th century, the development of precise methods for acoustic dispensing to aliquot life science materials in the laboratory began in earnest in the 21st century with the adaptation of the controlled "drop on demand" acoustic transfer of droplets from high-density microplates for high-throughput screening (HTS) applications. Robust ADE implementations for life science applications achieve excellent accuracy and precision by using acoustics first to sense the liquid characteristics relevant for its transfer, and then to actuate transfer of the liquid with customized application of sound energy to the given well and well fluid in the microplate. This article provides an overview of the physics behind ADE and its central role in both acoustical and rheological aspects of robust implementation of ADE in the life science laboratory and its broad range of ejectable materials. © 2015 Society for Laboratory Automation and Screening.

  18. Track and vertex reconstruction: From classical to adaptive methods

    NASA Astrophysics Data System (ADS)

    Strandlie, Are; Frühwirth, Rudolf

    2010-04-01

    This paper reviews classical and adaptive methods of track and vertex reconstruction in particle physics experiments. Adaptive methods have been developed to meet the experimental challenges at high-energy colliders, in particular, the CERN Large Hadron Collider. They can be characterized by the obliteration of the traditional boundaries between pattern recognition and statistical estimation, by the competition between different hypotheses about what constitutes a track or a vertex, and by a high level of flexibility and robustness achieved with a minimum of assumptions about the data. The theoretical background of some of the adaptive methods is described, and it is shown that there is a close connection between the two main branches of adaptive methods: neural networks and deformable templates, on the one hand, and robust stochastic filters with annealing, on the other hand. As both classical and adaptive methods of track and vertex reconstruction presuppose precise knowledge of the positions of the sensitive detector elements, the paper includes an overview of detector alignment methods and a survey of the alignment strategies employed by past and current experiments.

  19. Robust Stabilization of Uncertain Systems Based on Energy Dissipation Concepts

    NASA Technical Reports Server (NTRS)

    Gupta, Sandeep

    1996-01-01

    Robust stability conditions obtained through generalization of the notion of energy dissipation in physical systems are discussed in this report. Linear time-invariant (LTI) systems which dissipate energy corresponding to quadratic power functions are characterized in the time-domain and the frequency-domain, in terms of linear matrix inequalities (LMIs) and algebraic Riccati equations (AREs). A novel characterization of strictly dissipative LTI systems is introduced in this report. Sufficient conditions in terms of dissipativity and strict dissipativity are presented for (1) stability of the feedback interconnection of dissipative LTI systems, (2) stability of dissipative LTI systems with memoryless feedback nonlinearities, and (3) quadratic stability of uncertain linear systems. It is demonstrated that the framework of dissipative LTI systems investigated in this report unifies and extends small gain, passivity, and sector conditions for stability. Techniques for selecting power functions for characterization of uncertain plants and robust controller synthesis based on these stability results are introduced. A spring-mass-damper example is used to illustrate the application of these methods for robust controller synthesis.

  20. Improving near-infrared prediction model robustness with support vector machine regression: a pharmaceutical tablet assay example.

    PubMed

    Igne, Benoît; Drennen, James K; Anderson, Carl A

    2014-01-01

    Changes in raw materials and process wear and tear can have significant effects on the prediction error of near-infrared calibration models. When the variability that is present during routine manufacturing is not included in the calibration, test, and validation sets, the long-term performance and robustness of the model will be limited. Nonlinearity is a major source of interference. In near-infrared spectroscopy, nonlinearity can arise from light path-length differences that can come from differences in particle size or density. The usefulness of support vector machine (SVM) regression to handle nonlinearity and improve the robustness of calibration models in scenarios where the calibration set did not include all the variability present in the test set was evaluated. Compared to partial least squares (PLS) regression, SVM regression was less affected by physical (particle size) and chemical (moisture) differences. The linearity of the SVM predicted values was also improved. Nevertheless, although visualization and interpretation tools have been developed to enhance the usability of SVM-based methods, work is yet to be done to provide chemometricians in the pharmaceutical industry with a regression method that can supplement PLS-based methods.
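
    The core point — a linear calibration degrades on nonlinear data while a kernel method does not — can be shown in a few lines. To keep the sketch dependency-free, kernel ridge regression stands in for SVM regression (both fit nonlinearity through a kernel), an ordinary least-squares line stands in for PLS (equivalent for a single predictor), and the quadratic "path-length" effect is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 80)
y = x + 0.8 * x**2                      # nonlinear "path-length" effect

xt = rng.uniform(-1, 1, 40)             # test set from the same range
yt = xt + 0.8 * xt**2

# Linear least-squares calibration (what a linear model can do here).
a, b = np.polyfit(x, y, 1)
rmse_lin = np.sqrt(np.mean((a * xt + b - yt) ** 2))

# RBF kernel ridge regression: fits the curvature the linear model misses.
def rbf(u, v, gamma=5.0):
    return np.exp(-gamma * (u[:, None] - v[None, :]) ** 2)

alpha = np.linalg.solve(rbf(x, x) + 1e-3 * np.eye(len(x)), y)
rmse_krr = np.sqrt(np.mean((rbf(xt, x) @ alpha - yt) ** 2))
print(f"linear RMSE {rmse_lin:.3f}  kernel RMSE {rmse_krr:.3f}")
```

    The paper's stronger claim concerns extrapolation to variability absent from calibration; the same comparison applies, with the kernel model's regularization controlling how gracefully it degrades.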

  1. Benchmarking of a treatment planning system for spot scanning proton therapy: Comparison and analysis of robustness to setup errors of photon IMRT and proton SFUD treatment plans of base of skull meningioma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, R., E-mail: ruth.harding2@wales.nhs.uk; Trnková, P.; Lomax, A. J.

    Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.

  2. Projected Regression Methods for Inverting Fredholm Integrals: Formalism and Application to Analytical Continuation

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-Francois; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.

    We present a machine learning-based statistical regression approach to the inversion of Fredholm integrals of the first kind by studying an important example for the quantum materials community, the analytical continuation problem of quantum many-body physics. It involves reconstructing the frequency dependence of physical excitation spectra from data obtained at specific points in the complex frequency plane. The approach provides a natural regularization in cases where the inverse of the Fredholm kernel is ill-conditioned and yields robust error metrics. The stability of the forward problem permits the construction of a large database of input-output pairs. Machine learning methods applied to this database generate approximate solutions which are projected onto the subspace of functions satisfying relevant constraints. We show that for low input noise the method performs as well or better than Maximum Entropy (MaxEnt) under standard error metrics, and is substantially more robust to noise. We expect the methodology to be similarly effective for any problem involving a formally ill-conditioned inversion, provided that the forward problem can be efficiently solved. AJM was supported by the Office of Science of the U.S. Department of Energy under Subcontract No. 3F-3138 and LFA by the Columbia University IDS-ROADS project, UR009033-05, which also provided part support to RN and LH.
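
    The pipeline — solve the stable forward problem many times to build a database, learn a regression from data back to spectra, then project predictions onto the constraint set — can be sketched minimally. Everything below is a toy stand-in: a Laplace-type kernel, a two-parameter Gaussian spectrum family, and plain ridge regression in place of the paper's machine learning models; the constraints enforced are nonnegativity and unit normalization.

```python
import numpy as np

rng = np.random.default_rng(4)
w = np.linspace(0, 5, 40)                     # frequency grid
tau = np.linspace(0.1, 3, 30)                 # "imaginary time" grid
kernel = np.exp(-tau[:, None] * w[None, :])   # ill-conditioned to invert

def random_spectrum():
    c, s = rng.uniform(1, 4), rng.uniform(0.2, 0.8)
    a = np.exp(-((w - c) / s) ** 2)
    return a / a.sum()                        # normalized toy spectrum

# Database of input-output pairs via the easy, stable forward problem.
spectra = np.array([random_spectrum() for _ in range(500)])
data = spectra @ kernel.T

# Ridge regression from data to spectrum: the learned approximate inverse.
lam = 1e-4
beta = np.linalg.solve(data.T @ data + lam * np.eye(len(tau)),
                       data.T @ spectra)

# Predict on a fresh example, then project onto the constraint set.
a_true = random_spectrum()
a_pred = (a_true @ kernel.T) @ beta
a_proj = np.clip(a_pred, 0, None)             # nonnegativity
a_proj /= a_proj.sum()                        # normalization
print("max abs error after projection:", float(np.abs(a_proj - a_true).max()))
```

    The regression supplies the regularization (the ridge term plays the role the training distribution plays more generally), and the projection guarantees the output is a physically admissible spectrum.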

  3. Smooth Sensor Motion Planning for Robotic Cyber Physical Social Sensing (CPSS)

    PubMed Central

    Tang, Hong; Li, Liangzhi; Xiao, Nanfeng

    2017-01-01

    Although many researchers have begun to study the area of Cyber Physical Social Sensing (CPSS), few are focused on robotic sensors. We successfully utilize robots in CPSS and propose a sensor trajectory planning method in this paper. Trajectory planning is a fundamental problem in mobile robotics. However, traditional methods are not suited for robotic sensors because of their low efficiency, instability, and the non-smooth paths they generate. This paper adopts an optimizing function to generate several intermediate points and regresses these discrete points to a quintic polynomial which can output a smooth trajectory for the robotic sensor. Simulations demonstrate that our approach is robust and efficient, and can be well applied in the CPSS field. PMID:28218649
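
    The two-step idea — generate a few intermediate waypoints, then regress them onto one quintic polynomial — is easy to sketch. The waypoint values below are invented (in the paper they come from an optimizing function); a quintic gives continuous velocity and acceleration, which is what makes the trajectory smooth for the sensor.

```python
import numpy as np

t_way = np.linspace(0.0, 1.0, 7)                         # waypoint times
x_way = np.array([0.0, 0.3, 0.9, 1.5, 1.8, 1.95, 2.0])   # invented waypoints

coeffs = np.polyfit(t_way, x_way, 5)   # least-squares quintic regression
poly = np.poly1d(coeffs)

t = np.linspace(0.0, 1.0, 101)
x = poly(t)               # smooth position profile
v = poly.deriv(1)(t)      # continuous velocity
a = poly.deriv(2)(t)      # continuous acceleration
print(f"start {x[0]:.3f}, end {x[-1]:.3f}")
```

    One polynomial over the whole segment avoids the curvature discontinuities that piecewise-linear or low-order spliced paths introduce at waypoint junctions.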

  4. An adaptive discontinuous Galerkin solver for aerodynamic flows

    NASA Astrophysics Data System (ADS)

    Burgess, Nicholas K.

    This work considers the accuracy, efficiency, and robustness of an unstructured high-order accurate discontinuous Galerkin (DG) solver for computational fluid dynamics (CFD). Recently, there has been a drive to reduce the discretization error of CFD simulations using high-order methods on unstructured grids. However, high-order methods are often criticized for lacking robustness and having high computational cost. The goal of this work is to investigate methods that enhance the robustness of high-order discontinuous Galerkin (DG) methods on unstructured meshes, while maintaining low computational cost and high accuracy of the numerical solutions. This work investigates robustness enhancement of high-order methods by examining effective non-linear solvers, shock capturing methods, turbulence model discretizations and adaptive refinement techniques. The goal is to develop an all-encompassing solver that can simulate a large range of physical phenomena, where all aspects of the solver work together to achieve a robust, efficient and accurate solution strategy. The components and framework for a robust high-order accurate solver that is capable of solving viscous, Reynolds Averaged Navier-Stokes (RANS) and shocked flows are presented. In particular, this work discusses robust discretizations of the turbulence model equation used to close the RANS equations, as well as stable shock capturing strategies that are applicable across a wide range of discretization orders and applicable to very strong shock waves. Furthermore, refinement techniques are considered as both efficiency and robustness enhancement strategies. Additionally, efficient non-linear solvers based on multigrid and Krylov subspace methods are presented. The accuracy, efficiency, and robustness of the solver are demonstrated using a variety of challenging aerodynamic test problems, which include turbulent high-lift and viscous hypersonic flows.
Adaptive mesh refinement was found to play a critical role in obtaining a robust and efficient high-order accurate flow solver. A goal-oriented error estimation technique has been developed to estimate the discretization error of simulation outputs. For high-order discretizations, it is shown that functional output error super-convergence can be obtained, provided the discretization satisfies a property known as dual consistency. The dual consistency of the DG methods developed in this work is shown via mathematical analysis and numerical experimentation. Goal-oriented error estimation is also used to drive an hp-adaptive mesh refinement strategy, where a combination of mesh (h-) refinement and order (p-) enrichment is employed based on the smoothness of the solution. The results demonstrate that the combination of goal-oriented error estimation and hp-adaptation yields superior accuracy, as well as enhanced robustness and efficiency, for a variety of aerodynamic flows including flows with strong shock waves. This work demonstrates that DG discretizations can be the basis of an accurate, efficient, and robust CFD solver. Furthermore, enhancing the robustness of DG methods does not adversely impact the accuracy or efficiency of the solver for challenging and complex flow problems. In particular, when considering the computation of shocked flows, this work demonstrates that the available shock-capturing techniques are sufficiently accurate and robust, particularly when used in conjunction with adaptive mesh refinement. This work also demonstrates that robust solutions of the Reynolds Averaged Navier-Stokes (RANS) and turbulence model equations can be obtained for complex and challenging aerodynamic flows. In this context, the most robust strategy was determined to be a low-order turbulence model discretization coupled to a high-order discretization of the RANS equations.
Although RANS solutions using high-order accurate discretizations of the turbulence model were obtained, the behavior of current-day RANS turbulence models discretized to high order was found to be problematic, leading to solver robustness issues. This suggests that future work is warranted in the area of turbulence model formulation for use with high-order discretizations. Alternatively, the use of Large-Eddy Simulation (LES) subgrid-scale models with high-order DG methods offers the potential to leverage the high accuracy of these methods for very high fidelity turbulent simulations. This thesis has developed the algorithmic improvements that lay the foundation for a three-dimensional high-order flow solution strategy that can serve as the basis for future LES simulations.

  5. Improving estimates of the number of `fake' leptons and other mis-reconstructed objects in hadron collider events: BoB's your UNCLE

    NASA Astrophysics Data System (ADS)

    Gillam, Thomas P. S.; Lester, Christopher G.

    2014-11-01

We consider current and alternative approaches to setting limits on new-physics signals having backgrounds from misidentified objects, for example jets misidentified as leptons, b-jets, or photons. Many ATLAS and CMS analyses have used a heuristic "matrix method" for estimating the background contribution from such sources. We demonstrate that the matrix method suffers from statistical shortcomings that can adversely affect its ability to set robust limits. A rigorous alternative method is discussed; it is seen to produce fake-rate estimates and limits of better quality, but is found to be too costly to use. Having investigated the nature of the approximations used to derive the matrix method, we propose a third strategy that is seen to marry the speed of the matrix method to the performance and physicality of the more rigorous approach.
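As context for the shortcomings discussed above, the heuristic matrix method in its simplest single-lepton form is a 2×2 linear inversion of loose/tight counts; the sketch below (with hypothetical counts and rates, not taken from the paper) shows the estimate it produces:

```python
import numpy as np

def matrix_method(n_loose, n_tight, r, f):
    """Classic 2x2 'matrix method': given counts of loose (n_loose) and
    tight (n_tight) leptons, and the real-lepton efficiency r and fake rate f
    (both defined as P(pass tight | pass loose)), invert the linear system
        n_loose = N_real + N_fake
        n_tight = r*N_real + f*N_fake
    to estimate the fake contribution in the tight sample."""
    A = np.array([[1.0, 1.0], [r, f]])
    N_real, N_fake = np.linalg.solve(A, np.array([n_loose, n_tight]))
    return f * N_fake          # expected fakes passing the tight selection

# toy numbers (hypothetical): 1000 loose, 850 tight, r = 0.9, f = 0.2
fakes_in_tight = matrix_method(1000.0, 850.0, 0.9, 0.2)
```

Because the observed counts are Poisson-fluctuating integers, the inverted estimate can go negative or acquire poorly behaved uncertainties, which is the kind of statistical shortcoming the paper examines.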

  6. Leveraging Anderson Acceleration for improved convergence of iterative solutions to transport systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willert, Jeffrey; Taitano, William T.; Knoll, Dana

In this note we demonstrate that using Anderson Acceleration (AA) in place of a standard Picard iteration can not only increase the convergence rate but also make the iteration more robust for two transport applications. We also compare the convergence acceleration provided by AA to that provided by moment-based acceleration methods, and demonstrate that the two acceleration methods can be used together in a nested fashion. We begin by describing the AA algorithm, and then describe two application problems, one from neutronics and one from plasma physics, to which we apply AA. We provide computational results that highlight the benefits of using AA: solutions can be computed using fewer function evaluations and larger time steps, and the iteration is more robust.
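For readers unfamiliar with the algorithm, a minimal Anderson Acceleration sketch for a generic scalar fixed-point problem (not the transport systems of the note) is:

```python
import numpy as np

def picard(g, x0, tol=1e-10, maxit=500):
    """Plain Picard (fixed-point) iteration x_{k+1} = g(x_k)."""
    x = x0
    for k in range(maxit):
        xn = g(x)
        if np.linalg.norm(xn - x) < tol:
            return xn, k + 1
        x = xn
    return x, maxit

def anderson(g, x0, m=3, tol=1e-10, maxit=500):
    """Anderson Acceleration with window m: combine the last few iterates
    by a least-squares fit on the residual differences."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    G, F = [], []          # histories of g(x_k) and residuals f_k = g(x_k) - x_k
    for k in range(maxit):
        gx = np.atleast_1d(g(x))
        f = gx - x
        if np.linalg.norm(f) < tol:
            return x, k + 1
        G.append(gx)
        F.append(f)
        if len(F) > m + 1:
            G.pop(0)
            F.pop(0)
        if len(F) == 1:
            x = gx                      # first step is plain Picard
        else:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma         # accelerated update
    return x, maxit

g = lambda x: np.cos(x)                 # fixed point x* ~ 0.739085
x_p, it_p = picard(g, np.array([1.0]))
x_a, it_a = anderson(g, np.array([1.0]))
```

On this toy problem AA typically converges in far fewer function evaluations than plain Picard, mirroring the benefit reported for the transport applications.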

  7. A second-order cell-centered Lagrangian ADER-MOOD finite volume scheme on multidimensional unstructured meshes for hydrodynamics

    NASA Astrophysics Data System (ADS)

    Boscheri, Walter; Dumbser, Michael; Loubère, Raphaël; Maire, Pierre-Henri

    2018-04-01

In this paper we develop a conservative cell-centered Lagrangian finite volume scheme for the solution of the hydrodynamics equations on unstructured multidimensional grids. The method is derived from the Eucclhyd scheme discussed in [47,43,45]. It is second-order accurate in space and is combined with the a posteriori Multidimensional Optimal Order Detection (MOOD) limiting strategy to ensure robustness and stability at shock waves. Second order of accuracy in time is achieved via the ADER (Arbitrary high order schemes using DERivatives) approach. A large set of numerical test cases is proposed to assess the ability of the method to achieve effective second order of accuracy on smooth flows, to maintain an essentially non-oscillatory behavior on discontinuous profiles, to ensure general robustness and the physical admissibility of the numerical solution, and to retain precision where appropriate.
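The a posteriori MOOD idea (compute an unlimited high-order candidate, detect non-admissible cells, and recompute them with a robust low-order fallback) can be illustrated on a much simpler 1D advection problem. The sketch below is only a schematic analogue of the paper's Lagrangian scheme; for brevity it swaps cell values rather than re-evaluating face fluxes, so it is not strictly conservative:

```python
import numpy as np

def step_order1(u, c):
    """First-order upwind update for u_t + a u_x = 0 with a > 0:
    diffusive but monotone -- the robust 'parachute' scheme."""
    return u - c * (u - np.roll(u, 1))

def step_order2(u, c):
    """Unlimited Lax-Wendroff update: second-order accurate but
    oscillatory near discontinuities."""
    return (u - 0.5 * c * (np.roll(u, -1) - np.roll(u, 1))
              + 0.5 * c * c * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)))

def mood_step(u, c):
    """MOOD-style a posteriori update on a periodic grid: accept the
    second-order candidate cell-by-cell only where it satisfies a discrete
    maximum principle (DMP) w.r.t. the old neighbours, and fall back to
    first-order upwind elsewhere."""
    cand = step_order2(u, c)                           # high-order candidate
    lo = np.minimum(np.minimum(u, np.roll(u, 1)), np.roll(u, -1))
    hi = np.maximum(np.maximum(u, np.roll(u, 1)), np.roll(u, -1))
    bad = (cand < lo - 1e-12) | (cand > hi + 1e-12)    # DMP violation -> troubled cell
    return np.where(bad, step_order1(u, c), cand)      # recompute troubled cells at order 1

x = np.arange(200)
u = np.where((x > 50) & (x < 100), 1.0, 0.0)           # square pulse
for _ in range(100):
    u = mood_step(u, c=0.5)
```

After 100 steps the advected square pulse remains within its initial bounds, whereas the unlimited second-order scheme alone would overshoot at the discontinuities.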

  8. Convergence Rates of Best N-term Galerkin Approximations for a Class of Elliptic sPDEs

    DTIC Science & Technology

    2010-05-31

Todor, Karhunen-Loève Approximation of Random Fields by Generalized Fast Multipole Methods, Journal of Computational Physics 217 (2006), 100–122. [19 …] [20] R. Todor, Robust eigenvalue computation for smoothing operators, SIAM J. Num. Anal. 44 (2006), 865–878. [21] R. Todor and Ch. Schwab, Convergence…

  9. A Personalized QoS Prediction Approach for CPS Service Recommendation Based on Reputation and Location-Aware Collaborative Filtering.

    PubMed

    Kuang, Li; Yu, Long; Huang, Lan; Wang, Yin; Ma, Pengju; Li, Chuanbin; Zhu, Yujia

    2018-05-14

With the rapid development of cyber-physical systems (CPS), building cyber-physical systems with high quality of service (QoS) has become an urgent requirement in both academia and industry. During the construction of cyber-physical systems, a large number of functionally equivalent services have been found to exist, so recommending suitable services from the many available in CPS becomes an urgent task. However, since it is time-consuming, and even impractical, for a single user to invoke all of the services in CPS to experience their QoS, a robust QoS prediction method is needed to predict unknown QoS values. A commonly used method in QoS prediction is collaborative filtering; however, it is hard to deal with the data sparsity and cold-start problems, and most of the existing methods ignore the issue of data credibility. Hence, in order to solve both of these challenging problems, in this paper we design a framework of QoS prediction for CPS services, and propose a personalized QoS prediction approach based on reputation and location-aware collaborative filtering. Our approach first calculates the reputation of users by using the Dirichlet probability distribution, so as to identify untrusted users and process their unreliable data; it then explores the geographic neighborhood at three levels to improve the similarity calculation of users and services. Finally, the data from geographical neighbors of users and services are fused to predict the unknown QoS values. Experiments using real datasets show that our proposed approach outperforms other existing methods in terms of accuracy, efficiency, and robustness.
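The reputation step can be pictured with a generic Dirichlet (here two-category, i.e., Beta) posterior-mean score; the feedback categories, prior, and trust threshold below are illustrative assumptions, not the paper's actual formulas:

```python
def dirichlet_reputation(counts, prior=1.0):
    """Posterior-mean reputation from a Dirichlet model.
    counts: observed numbers of feedback events in each category, e.g.
    (reliable, unreliable) QoS reports; prior is a symmetric Dirichlet
    pseudo-count. Returns the posterior probability of each category."""
    total = sum(counts) + prior * len(counts)
    return [(c + prior) / total for c in counts]

# hypothetical user with 8 reliable and 2 unreliable QoS reports
rep = dirichlet_reputation((8, 2))      # Beta(9, 3) posterior mean
trusted = rep[0] > 0.7                  # flag users below a chosen threshold as untrusted
```

A user whose reliable-report probability falls below the threshold would have their contributed QoS data down-weighted or discarded before the collaborative-filtering step.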

  10. A Personalized QoS Prediction Approach for CPS Service Recommendation Based on Reputation and Location-Aware Collaborative Filtering

    PubMed Central

    Huang, Lan; Wang, Yin; Ma, Pengju; Li, Chuanbin; Zhu, Yujia

    2018-01-01

With the rapid development of cyber-physical systems (CPS), building cyber-physical systems with high quality of service (QoS) has become an urgent requirement in both academia and industry. During the construction of cyber-physical systems, a large number of functionally equivalent services have been found to exist, so recommending suitable services from the many available in CPS becomes an urgent task. However, since it is time-consuming, and even impractical, for a single user to invoke all of the services in CPS to experience their QoS, a robust QoS prediction method is needed to predict unknown QoS values. A commonly used method in QoS prediction is collaborative filtering; however, it is hard to deal with the data sparsity and cold-start problems, and most of the existing methods ignore the issue of data credibility. Hence, in order to solve both of these challenging problems, in this paper we design a framework of QoS prediction for CPS services, and propose a personalized QoS prediction approach based on reputation and location-aware collaborative filtering. Our approach first calculates the reputation of users by using the Dirichlet probability distribution, so as to identify untrusted users and process their unreliable data; it then explores the geographic neighborhood at three levels to improve the similarity calculation of users and services. Finally, the data from geographical neighbors of users and services are fused to predict the unknown QoS values. Experiments using real datasets show that our proposed approach outperforms other existing methods in terms of accuracy, efficiency, and robustness. PMID:29757995

  11. Skeletal robustness and bone strength as measured by anthropometry and ultrasonography as a function of physical activity in young adults.

    PubMed

    Scheffler, Christiane; Gniosdorz, Birgit; Staub, Kaspar; Rühli, Frank

    2014-01-01

During the last 10 years, skeletal robustness in children has generally decreased. The reasons for this phenomenon, as well as its outcomes, are so far undetermined. The present study explores the association between anthropometric skeletal measurements, bone quality measurements, and physical activity in young adults. 118 German young men (N = 68; 19-25 years old) and women (N = 50; 19-24 years old) were investigated by anthropometric methods (i.e., height, weight, shoulder breadth, elbow breadth, and pelvic breadth) and quantitative ultrasound measurement (QUS). Strength and stability of the Os calcis were determined by speed of sound (in m/s) and broadband ultrasound attenuation (in dB/MHz); individual physical activity was analyzed by a pedometer and by questionnaire. The results show a correlation between sports hours per week and bone quality index in males. However, no correlation exists between anthropometric data and QUS for either sex, nor between total steps per day and internal bone quality or external bone dimensions. These results are discussed in the context of generally decreasing physical activity, the outcomes of prevention programs, as well as the evolutionary adaptation of human phenotypic plasticity in a changing environment. Copyright © 2014 Wiley Periodicals, Inc.

  12. A robust interpolation method for constructing digital elevation models from remote sensing data

    NASA Astrophysics Data System (ADS)

    Chen, Chuanfa; Liu, Fengying; Li, Yanyan; Yan, Changqing; Liu, Guolin

    2016-09-01

A digital elevation model (DEM) derived from remote sensing data often suffers from outliers due to various reasons, such as the physical limitations of sensors and the low contrast of terrain textures. In order to reduce the effect of outliers on DEM construction, a robust algorithm of multiquadric (MQ) methodology based on M-estimators (MQ-M) was proposed. MQ-M adopts a three-part adaptive weight function: the weight is zero for large errors, one for small errors, and quadratic in between. A mathematical surface was employed to comparatively analyze the robustness of MQ-M, and its performance was compared with those of the classical MQ and a recently developed robust MQ method based on least absolute deviation (MQ-L). Numerical tests show that MQ-M is comparable to the classical MQ and superior to MQ-L when sample points follow normal and Laplace distributions, and that in the presence of outliers MQ-M is more accurate than the other two. A real-world example of DEM construction using stereo images indicates that, compared with classical interpolation methods such as natural neighbor (NN), ordinary kriging (OK), ANUDEM, MQ-L and MQ, MQ-M has a better ability to preserve subtle terrain features. MQ-M was also used in place of thin plate spline (TPS) for reference DEM construction, to assess its contribution to our recently developed multiresolution hierarchical classification method (MHC). Classifying the 15 groups of benchmark datasets provided by the ISPRS Commission demonstrates that MQ-M-based MHC is more accurate than MQ-L-based and TPS-based MHCs. MQ-M has high potential for DEM construction.
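The three-part weight described above can be sketched as follows; the quadratic taper and the thresholds `a`, `b` are illustrative choices, since the paper's exact form is not reproduced here:

```python
def three_part_weight(e, a, b):
    """Three-part M-estimator weight of the kind described for MQ-M:
    1 for small residuals, 0 for large ones, and a quadratic taper in
    between (the exact taper and thresholds in the paper may differ;
    this is an illustrative sketch)."""
    r = abs(e)
    if r <= a:
        return 1.0
    if r >= b:
        return 0.0
    t = (b - r) / (b - a)
    return t * t               # quadratic decay from 1 down to 0

# residuals of 0.5 (small), 2.0 (intermediate), 5.0 (outlier)
ws = [three_part_weight(e, a=1.0, b=3.0) for e in (0.5, 2.0, 5.0)]
```

In an iteratively reweighted interpolation, points flagged as outliers (weight 0) stop influencing the fitted MQ surface, which is what gives the method its robustness.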

  13. Multi-wavelength approach towards on-product overlay accuracy and robustness

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Kaustuve; Noot, Marc; Chang, Hammer; Liao, Sax; Chang, Ken; Gosali, Benny; Su, Eason; Wang, Cathy; den Boef, Arie; Fouquet, Christophe; Huang, Guo-Tsai; Chen, Kai-Hsiung; Cheng, Kevin; Lin, John

    2018-03-01

The success of the diffraction-based overlay (DBO) technique1,4,5 in the industry is due not just to its good precision and low tool-induced shift, but also to the measurement accuracy2 and robustness that DBO can provide. Significant effort has been invested to capitalize on the potential of DBO to address measurement accuracy and robustness. The introduction of many measurement wavelength choices (continuous wavelength) in DBO is one of the key new capabilities in this area. Along with the continuous choice of wavelengths, the algorithms (fueled by swing-curve physics) for how to use these wavelengths are of high importance for a robust recipe setup that can avoid the impact of process stack variations (symmetric as well as asymmetric). All of these are discussed. Moreover, another aspect of boosting measurement accuracy and robustness is discussed: the capability to combine overlay measurement data from multiple wavelength measurements. The goal is to provide a method that makes overlay measurements immune to process stack variations and also reports health KPIs for every measurement. By combining measurements from multiple wavelengths, a final overlay measurement is generated. The results show a significant benefit in accuracy and robustness against process stack variation. These results are supported both by measurement data and by simulation from many product stacks.

  14. Takagi-Sugeno fuzzy model based robust dissipative control for uncertain flexible spacecraft with saturated time-delay input.

    PubMed

    Xu, Shidong; Sun, Guanghui; Sun, Weichao

    2017-01-01

In this paper, the problem of robust dissipative control is investigated for uncertain flexible spacecraft based on a Takagi-Sugeno (T-S) fuzzy model with saturated time-delay input. Different from most existing strategies, a T-S fuzzy approximation approach is used to model the nonlinear dynamics of the flexible spacecraft. Simultaneously, the physical constraints of the system, such as input delay, input saturation, and parameter uncertainties, are also accounted for in the fuzzy model. By employing the Lyapunov-Krasovskii method and a convex optimization technique, a novel robust controller is proposed to implement rest-to-rest attitude maneuvers for flexible spacecraft, and the guaranteed dissipative performance enables the uncertain closed-loop system to reject the influence of elastic vibrations and external disturbances. Finally, an illustrative design example, together with simulation results, is provided to confirm the applicability and merits of the developed control strategy. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Efficient robust reconstruction of dynamic PET activity maps with radioisotope decay constraints.

    PubMed

    Gao, Fei; Liu, Huafeng; Shi, Pengcheng

    2010-01-01

Dynamic PET imaging performs a sequence of data acquisitions in order to provide visualization and quantification of physiological changes in specific tissues and organs. The reconstruction of activity maps is generally the first step in dynamic PET. State-space H∞ approaches have proven to be a robust method for PET image reconstruction; however, temporal constraints are not considered during the reconstruction process. In addition, state-space strategies for PET image reconstruction have been computationally prohibitive for practical usage because of the need for matrix inversion. In this paper, we present a minimax formulation of the dynamic PET imaging problem in which a radioisotope decay model is employed as a physics-based temporal constraint on the photon counts. Furthermore, a robust steady-state H∞ filter is developed to significantly improve the computational efficiency with minimal loss of accuracy. Experiments are conducted on Monte Carlo simulated image sequences for quantitative analysis and validation.
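The physics-based temporal constraint amounts to tying the expected photon counts to the radioisotope's exponential decay law; a minimal sketch (using the ¹⁸F half-life as an illustrative default, not a detail from the paper) is:

```python
import math

def decay_corrected_counts(n0, t_minutes, half_life_minutes=109.77):
    """Expected photon-count scale at time t under a radioisotope decay
    model N(t) = N0 * exp(-lambda * t), with lambda = ln(2) / T_half.
    The default half-life is that of 18F (~109.77 min), a common PET
    tracer; the paper's full state-space constraint is sketched here
    only as this exponential temporal model."""
    lam = math.log(2.0) / half_life_minutes
    return n0 * math.exp(-lam * t_minutes)

n = decay_corrected_counts(1.0e6, 109.77)   # one half-life later
```

Constraining successive frames to follow this curve is what lets the reconstruction reject count fluctuations that are physically inconsistent with the tracer's decay.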

  16. Networking—a statistical physics perspective

    NASA Astrophysics Data System (ADS)

    Yeung, Chi Ho; Saad, David

    2013-03-01

Networking encompasses a variety of tasks related to the communication of information on networks; it has a substantial economic and societal impact on a broad range of areas including transportation systems, wired and wireless communications, and a range of Internet applications. As transportation and communication networks become increasingly complex, the ever-increasing demand for congestion control, higher traffic capacity, quality of service, robustness, and reduced energy consumption requires new tools and methods to meet these conflicting requirements. The new methodology should serve to gain a better understanding of the properties of networking systems at the macroscopic level, as well as to develop new principled optimization and management algorithms at the microscopic level. Methods of statistical physics seem best placed to provide new approaches, as they have been developed specifically to deal with nonlinear large-scale systems. This review presents an overview of tools and methods that have been developed within the statistical physics community and that can be readily applied to address the emerging problems in networking. These include diffusion processes, methods from disordered systems and polymer physics, and probabilistic inference, which have direct relevance to network routing, file and frequency distribution, the exploration of network structures and vulnerability, and various other practical networking applications.

  17. Robust crossfeed design for hovering rotorcraft. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Catapang, David R.

    1993-01-01

Control law design for rotorcraft fly-by-wire systems normally attempts to decouple angular responses using fixed-gain crossfeeds. This approach can lead to poor decoupling over the frequency range of pilot inputs and increase the load on the feedback loops. In order to improve the decoupling performance, dynamic crossfeeds may be adopted. Moreover, because of the large changes that occur in rotorcraft dynamics due to small changes about the nominal design condition, especially for near-hovering flight, the crossfeed design must be 'robust.' A new low-order matching method is presented here to design robust crossfeed compensators for multi-input, multi-output (MIMO) systems. The technique identifies degrees of freedom that can be decoupled using crossfeeds, given an anticipated set of parameter variations for the range of flight conditions of concern. Cross-coupling is then reduced for degrees of freedom that can use crossfeed compensation by minimizing the average and variance of the off-axis response magnitude. Results are presented for the analysis of pitch, roll, yaw, and heave coupling of the UH-60 Black Hawk helicopter in near-hovering flight. Robust crossfeeds are designed that show significant improvement in decoupling performance and robustness over nominal, single-design-point compensators. The design method and results are presented in an easy-to-use graphical format that lends significant physical insight to the design procedure. This plant pre-compensation technique is an appropriate preliminary step to the design of robust feedback control laws for rotorcraft.

  18. Mixed finite element - discontinuous finite volume element discretization of a general class of multicontinuum models

    NASA Astrophysics Data System (ADS)

    Ruiz-Baier, Ricardo; Lunati, Ivan

    2016-10-01

We present a novel discretization scheme tailored to a class of multiphase models that regard the physical system as consisting of multiple interacting continua. In the framework of mixture theory, we consider a general mathematical model that entails solving a system of mass and momentum equations for both the mixture and one of the phases. The model results in a strongly coupled and nonlinear system of partial differential equations that are written in terms of phase and mixture (barycentric) velocities, phase pressure, and saturation. We construct an accurate, robust, and reliable hybrid method that combines a mixed finite element discretization of the momentum equations with a primal discontinuous finite volume-element discretization of the mass (or transport) equations. The scheme is devised for unstructured meshes and relies on mixed Brezzi-Douglas-Marini approximations of phase and total velocities, on piecewise constant elements for the approximation of phase or total pressures, as well as on a primal formulation that employs discontinuous finite volume elements defined on a dual diamond mesh to approximate scalar fields of interest (such as volume fraction, total density, saturation, etc.). As the discretization scheme is derived for a general formulation of multicontinuum physical systems, it can be readily applied to a large class of simplified multiphase models; on the other hand, it can be seen as a generalization of the simplified models commonly encountered in the literature, to be employed when the latter are not sufficiently accurate.
An extensive set of numerical test cases involving two- and three-dimensional porous media is presented to demonstrate the accuracy of the method (displaying an optimal convergence rate), the physics-preserving properties of the mixed-primal scheme, as well as the robustness of the method (which is successfully used to simulate diverse physical phenomena such as density fingering, Terzaghi's consolidation, deformation of a cantilever bracket, and Boycott effects). The method is not limited to flow in porous media; it can also be employed to describe many other physical systems governed by a similar set of equations, including, e.g., multi-component materials.

  19. Exponential integrators in time-dependent density-functional calculations

    NASA Astrophysics Data System (ADS)

    Kidd, Daniel; Covington, Cody; Varga, Kálmán

    2017-12-01

The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We determine an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For dynamics driven by a time-dependent external potential, the accuracy gain of the exponential integrator methods is less pronounced, but they still match or outperform the best of the conventional methods tested.
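As a minimal illustration of exponential time differencing (first-order ETD1, far simpler than a Kohn-Sham propagation), the linear part of the equation is integrated exactly while only the nonlinearity is approximated:

```python
import math

def etd1(c, nonlin, u0, dt, steps):
    """First-order exponential time differencing (ETD1) for u' = c*u + N(u):
    the stiff linear part c*u is integrated exactly via the integrating
    factor exp(c*dt); only the nonlinearity N is treated approximately."""
    e = math.exp(c * dt)
    phi = (e - 1.0) / c          # phi_1 function, (exp(c*dt) - 1)/c
    u = u0
    for _ in range(steps):
        u = e * u + phi * nonlin(u)
    return u

# purely linear check: u' = -10*u with u(0) = 1, exact answer exp(-10) at t = 1
u = etd1(-10.0, lambda u: 0.0, 1.0, dt=0.1, steps=10)
```

On the purely linear problem the scheme is exact up to rounding, which is precisely why exponential integrators shine when the dominant stiffness sits in the linear operator.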

  20. Performance Evaluation of Localization Accuracy for a Log-Normal Shadow Fading Wireless Sensor Network under Physical Barrier Attacks

    PubMed Central

    Abdulqader Hussein, Ahmed; Rahman, Tharek A.; Leow, Chee Yen

    2015-01-01

Localization is an essential aspect of wireless sensor networks and the focus of much ongoing research. One of the severe conditions that needs to be taken into consideration is localizing a mobile target through a dispersed sensor network in the presence of physical barrier attacks. These attacks confuse the localization process and cause location estimation errors. Range-based methods, such as those using the received signal strength indication (RSSI), are strongly affected by this kind of attack. This paper proposes a solution based on a combination of multi-frequency multi-power localization (C-MFMPL) and step-function multi-frequency multi-power localization (SF-MFMPL), together with fingerprint matching and lateration, to provide a robust and accurate localization technique. In addition, this paper proposes a grid coloring algorithm to detect the signal hole map of the network, which identifies the attack-prone regions, in order to carry out corrective actions. The simulation results show the enhanced robustness of RSS localization performance under log-normal shadow fading effects and in the presence of physical barrier attacks, achieved by detecting, filtering, and eliminating the effect of these attacks. PMID:26690159
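For reference, the log-normal shadowing model underlying RSSI ranging, and its inversion to a distance estimate, can be sketched as follows (the reference power, reference distance, and path-loss exponent are illustrative values, not the paper's):

```python
def rssi_to_distance(rssi_dbm, rssi_d0=-40.0, d0=1.0, n=2.7):
    """Invert the standard log-normal shadowing path-loss model
        RSSI(d) = RSSI(d0) - 10*n*log10(d/d0) + X_sigma
    to obtain a distance estimate. The zero-mean Gaussian term X_sigma
    (shadowing, and here also barrier attacks) is the error source that
    corrupts the estimate; parameter values are illustrative."""
    return d0 * 10.0 ** ((rssi_d0 - rssi_dbm) / (10.0 * n))

d = rssi_to_distance(-67.0)    # e.g. a -67 dBm reading
```

A physical barrier adds extra attenuation on top of X_sigma, biasing the RSSI downward and inflating the inferred distance, which is exactly the corruption the proposed C-MFMPL/SF-MFMPL combination is designed to detect and filter.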

  1. Performance Evaluation of Localization Accuracy for a Log-Normal Shadow Fading Wireless Sensor Network under Physical Barrier Attacks.

    PubMed

    Hussein, Ahmed Abdulqader; Rahman, Tharek A; Leow, Chee Yen

    2015-12-04

Localization is an essential aspect of wireless sensor networks and the focus of much ongoing research. One of the severe conditions that needs to be taken into consideration is localizing a mobile target through a dispersed sensor network in the presence of physical barrier attacks. These attacks confuse the localization process and cause location estimation errors. Range-based methods, such as those using the received signal strength indication (RSSI), are strongly affected by this kind of attack. This paper proposes a solution based on a combination of multi-frequency multi-power localization (C-MFMPL) and step-function multi-frequency multi-power localization (SF-MFMPL), together with fingerprint matching and lateration, to provide a robust and accurate localization technique. In addition, this paper proposes a grid coloring algorithm to detect the signal hole map of the network, which identifies the attack-prone regions, in order to carry out corrective actions. The simulation results show the enhanced robustness of RSS localization performance under log-normal shadow fading effects and in the presence of physical barrier attacks, achieved by detecting, filtering, and eliminating the effect of these attacks.

  2. The Robustness of Pre-School Children's Tendency to Count Discrete Physical Objects

    ERIC Educational Resources Information Center

    Fletcher, Ben; Pine, Karen J.

    2009-01-01

    When pre-school children count an array of objects containing one that is broken in half, most count the halves as two separate objects. Two studies explore this predisposition to count discrete physical objects (DPOs) and investigate its robustness in the face of various manipulations. In Experiment 1, 32 children aged three-four years counted…

  3. Tire Force Estimation using a Proportional Integral Observer

    NASA Astrophysics Data System (ADS)

    Farhat, Ahmad; Koenig, Damien; Hernandez-Alcantara, Diana; Morales-Menendez, Ruben

    2017-01-01

This paper addresses a method for detecting critical stability situations in the lateral vehicle dynamics by estimating the non-linear part of the tire forces. These forces indicate the road-holding performance of the vehicle. The estimation method is based on a robust fault detection and estimation approach that minimizes the sensitivity of the residual to disturbances and uncertainties. It consists in the design of a Proportional Integral Observer (PIO), minimizing the well-known H∞ norm for worst-case uncertainty and disturbance attenuation while meeting a transient-response specification. This multi-objective problem is formulated as a Linear Matrix Inequality (LMI) feasibility problem in which a cost function subject to LMI constraints is minimized. The approach is employed to generate a set of switched robust observers for uncertain switched systems, where the convergence of the observer is ensured using a Multiple Lyapunov Function (MLF). Since the forces to be estimated cannot be measured physically, a simulation scenario with CarSim is presented to illustrate the developed method.

  4. Thick electrodes including nanoparticles having electroactive materials and methods of making same

    DOEpatents

    Xiao, Jie; Lu, Dongping; Liu, Jun; Zhang, Jiguang; Graff, Gordon L.

    2017-02-21

    Electrodes having nanostructure and/or utilizing nanoparticles of active materials and having high mass loadings of the active materials can be made to be physically robust and free of cracks and pinholes. The electrodes include nanoparticles having electroactive material, which nanoparticles are aggregated with carbon into larger secondary particles. The secondary particles can be bound with a binder to form the electrode.

  5. Guide to NavyFOAM V1.0

    DTIC Science & Technology

    2011-04-01

NavyFOAM has been developed using an open-source CFD software toolkit (OpenFOAM) that draws heavily upon object-oriented programming. The numerical methods and the physical models in the original version of OpenFOAM have been upgraded in an effort to improve accuracy and robustness. Keywords: computational fluid dynamics (CFD), OpenFOAM, Object-Oriented Programming (OOP), NavyFOAM.

  6. Automated Detection of Solar Loops by the Oriented Connectivity Method

    NASA Technical Reports Server (NTRS)

    Lee, Jong Kwan; Newman, Timothy S.; Gary, G. Allen

    2004-01-01

An automated technique to segment solar coronal loops from intensity images of the Sun's corona is introduced. It exploits physical characteristics of the solar magnetic field to enable robust extraction from noisy images. The technique is a constructive curve detection approach, constrained by collections of estimates of the magnetic field's orientation. Its effectiveness is evaluated through experiments on synthetic and real coronal images.

  7. A framework for qualitative reasoning about solid objects

    NASA Technical Reports Server (NTRS)

    Davis, E.

    1987-01-01

    Predicting the behavior of a qualitatively described system of solid objects requires a combination of geometrical, temporal, and physical reasoning. Methods based upon formulating and solving differential equations are not adequate for robust prediction, since the behavior of a system over extended time may be much simpler than its behavior over local time. A first-order logic, in which one can state simple physical problems and derive their solution deductively, without recourse to solving the differential equations, is discussed. This logic is substantially more expressive and powerful than any previous AI representational system in this domain.

  8. A Bayesian blind survey for cold molecular gas in the Universe

    NASA Astrophysics Data System (ADS)

    Lentati, L.; Carilli, C.; Alexander, P.; Walter, F.; Decarli, R.

    2014-10-01

    A new Bayesian method for performing an image domain search for line-emitting galaxies is presented. The method uses both spatial and spectral information to robustly determine the source properties, employing either simple Gaussian or other physically motivated models, whilst using the evidence to determine the probability that the source is real. In this paper, we describe the method, and its application to both a simulated data set, and a blind survey for cold molecular gas using observations of the Hubble Deep Field-North taken with the Plateau de Bure Interferometer. We make a total of six robust detections in the survey, five of which have counterparts in other observing bands. We identify the most secure detections found in a previous investigation, while finding one new probable line source with an optical ID not seen in the previous analysis. This study acts as a pilot application of Bayesian statistics to future searches to be carried out both for low-J CO transitions of high-redshift galaxies using the Jansky Very Large Array (JVLA), and at millimetre wavelengths with the Atacama Large Millimeter/submillimeter Array (ALMA), enabling the inference of robust scientific conclusions about the history of the molecular gas properties of star-forming galaxies in the Universe through cosmic time.

  9. BAYESIAN ESTIMATION OF THERMONUCLEAR REACTION RATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliadis, C.; Anderson, K. S.; Coc, A.

    The problem of estimating non-resonant astrophysical S-factors and thermonuclear reaction rates, based on measured nuclear cross sections, is of major interest for nuclear energy generation, neutrino physics, and element synthesis. Many different methods have been applied to this problem in the past, almost all of them based on traditional statistics. Bayesian methods, on the other hand, are now in widespread use in the physical sciences. In astronomy, for example, Bayesian statistics is applied to the observation of extrasolar planets, gravitational waves, and Type Ia supernovae. However, nuclear physics in particular has been slow to adopt Bayesian methods. We present astrophysical S-factors and reaction rates based on Bayesian statistics. We develop a framework that incorporates robust parameter estimation, systematic effects, and non-Gaussian uncertainties in a consistent manner. The method is applied to the reactions d(p,γ)³He, ³He(³He,2p)⁴He, and ³He(α,γ)⁷Be, important for deuterium burning, solar neutrinos, and Big Bang nucleosynthesis.
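
    The ingredients listed in the abstract (robust parameter estimation, a systematic normalization effect, non-Gaussian uncertainties) can be illustrated with a toy Metropolis sampler. This is a minimal sketch with synthetic lognormal data for a constant "S-factor" and a hidden normalization factor; all values are made up and are not the evaluated reactions above.

```python
# Toy Bayesian fit of a constant "S-factor" with a systematic normalization
# factor and lognormal (non-Gaussian) statistical uncertainties, via a small
# Metropolis sampler. All data and parameters are synthetic illustrations.
import math
import random

random.seed(11)
S_TRUE, NORM_TRUE = 0.21, 1.10        # true S-factor; hidden systematic norm
data = [S_TRUE * NORM_TRUE * math.exp(random.gauss(0, 0.05)) for _ in range(25)]

def log_post(s, f):
    """Lognormal likelihood; lognormal prior on the normalization factor f."""
    if s <= 0 or f <= 0:
        return -math.inf
    lp = -0.5 * (math.log(f) / 0.10) ** 2      # systematic: ~10% norm uncertainty
    for d in data:
        lp += -0.5 * ((math.log(d) - math.log(s * f)) / 0.05) ** 2
    return lp

s, f = 0.30, 1.00                     # starting guess
lp = log_post(s, f)
samples = []
for step in range(20000):
    s_new, f_new = s + random.gauss(0, 0.01), f + random.gauss(0, 0.02)
    lp_new = log_post(s_new, f_new)
    if math.log(random.random()) < lp_new - lp:   # symmetric-proposal Metropolis
        s, f, lp = s_new, f_new, lp_new
    if step >= 5000:                  # discard burn-in
        samples.append(s)

median_s = sorted(samples)[len(samples) // 2]
print("posterior median S-factor:", round(median_s, 3))
```

    Because the data constrain only the product of S-factor and normalization, the prior on the normalization propagates into a non-Gaussian posterior on the S-factor, which is the kind of consistent systematic treatment the abstract describes.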

  10. Robust optimal design of diffusion-weighted magnetic resonance experiments for skin microcirculation

    NASA Astrophysics Data System (ADS)

    Choi, J.; Raguin, L. G.

    2010-10-01

    Skin microcirculation plays an important role in several diseases including chronic venous insufficiency and diabetes. Magnetic resonance (MR) has the potential to provide quantitative information and a better penetration depth compared with other non-invasive methods such as laser Doppler flowmetry or optical coherence tomography. The continuous progress in hardware resulting in higher sensitivity must be coupled with advances in data acquisition schemes. In this article, we first introduce a physical model for quantifying skin microcirculation using diffusion-weighted MR (DWMR) based on an effective dispersion model for skin leading to a q-space model of the DWMR complex signal, and then design the corresponding robust optimal experiments. The resulting robust optimal DWMR protocols improve the worst-case quality of parameter estimates using nonlinear least squares optimization by exploiting available a priori knowledge of model parameters. Hence, our approach optimizes the gradient strengths and directions used in DWMR experiments to robustly minimize the size of the parameter estimation error with respect to model parameter uncertainty. Numerical evaluations are presented to demonstrate the effectiveness of our approach as compared to conventional DWMR protocols.

  11. The Voronoi Implicit Interface Method for computing multiphase physics.

    PubMed

    Saye, Robert I; Sethian, James A

    2011-12-06

    We introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. We test the method's accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann's law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces.

  12. Robust digital image watermarking using distortion-compensated dither modulation

    NASA Astrophysics Data System (ADS)

    Li, Mianjie; Yuan, Xiaochen

    2018-04-01

    In this paper, we propose a robust feature-extraction-based digital image watermarking method using Distortion-Compensated Dither Modulation (DC-DM). Our proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method that employs the DAISY descriptor and applies filtering based on entropy calculation. The experimental results show that the proposed method achieves satisfactory robustness while ensuring watermark imperceptibility, compared to other existing methods.
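
    The DC-DM quantization step itself (independent of the DAISY-based feature extraction, which is not modeled here) can be sketched on scalar samples. The step size, compensation factor, and noise level below are illustrative assumptions, not the paper's settings.

```python
# Sketch of Distortion-Compensated Dither Modulation (DC-DM) embedding and
# minimum-distance decoding on scalar samples. DELTA and ALPHA are assumed
# illustrative parameters; the paper's DRFE feature extraction is not shown.
import random

DELTA = 4.0   # quantization step
ALPHA = 0.7   # distortion-compensation factor in (0, 1]

def quantize(x, dither):
    """Nearest point to x on the lattice DELTA*Z + dither."""
    return DELTA * round((x - dither) / DELTA) + dither

def embed(x, bit):
    """Embed one bit: the dither selects one of two interleaved lattices."""
    d = 0.0 if bit == 0 else DELTA / 2.0
    q = quantize(x, d)
    return x + ALPHA * (q - x)        # move only a fraction ALPHA toward the lattice

def decode(y):
    """Decode by distance to the nearest point of each lattice."""
    d0 = abs(y - quantize(y, 0.0))
    d1 = abs(y - quantize(y, DELTA / 2.0))
    return 0 if d0 <= d1 else 1

random.seed(1)
host = [random.uniform(-50.0, 50.0) for _ in range(200)]
bits = [random.randint(0, 1) for _ in host]
marked = [embed(x, b) for x, b in zip(host, bits)]
noisy = [y + random.uniform(-0.2, 0.2) for y in marked]   # mild channel noise
recovered = [decode(y) for y in noisy]
print(sum(b == r for b, r in zip(bits, recovered)), "of", len(bits), "bits recovered")
```

    With these settings the worst-case offset from the embedding lattice is 0.3·Δ/2 plus the noise bound, which stays below the Δ/4 decision threshold, so every bit decodes correctly; larger noise or a smaller Δ erodes that margin, which is the robustness/imperceptibility trade-off DC-DM tunes via α and Δ.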

  13. Physically motivated global alignment method for electron tomography

    DOE PAGES

    Sanders, Toby; Prange, Micah; Akatay, Cem; ...

    2015-04-08

    Electron tomography is widely used for nanoscale determination of 3-D structures in many areas of science. Determining the 3-D structure of a sample from electron tomography involves three major steps: acquisition of a sequence of 2-D projection images of the sample with the electron microscope, alignment of the images to a common coordinate system, and 3-D reconstruction and segmentation of the sample from the aligned image data. The resolution of the 3-D reconstruction is directly influenced by the accuracy of the alignment, and therefore it is crucial to have a robust and dependable alignment method. In this paper, we develop a new alignment method which avoids the use of markers and instead traces the computed paths of many identifiable 'local' center-of-mass points as the sample is rotated. Compared with traditional correlation schemes, the alignment method presented here is resistant to the cumulative error observed with correlation techniques, has rigorous mathematical justification, and is very robust since many points and paths are used, all of which improves the quality of the reconstruction and confidence in the scientific results.

  14. Retrieving the aerosol lidar ratio profile by combining ground- and space-based elastic lidars.

    PubMed

    Feiyue, Mao; Wei, Gong; Yingying, Ma

    2012-02-15

    The aerosol lidar ratio is a key parameter for the retrieval of aerosol optical properties from elastic lidar, and it varies widely for aerosols with different chemical and physical properties. We propose a method for retrieving the aerosol lidar ratio profile by combining simultaneous ground- and space-based elastic lidars. The method was tested on a simulated case and a real case at 532 nm wavelength. The results demonstrate that our method is robust and can obtain accurate lidar ratio and extinction coefficient profiles. Our method can be useful for determining local and global lidar ratios and for validating space-based lidar datasets.

  15. Efficient solution of the simplified PN equations

    DOE PAGES

    Hamilton, Steven P.; Evans, Thomas M.

    2014-12-23

    We show new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX test problem as well as an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
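
    The gap between these iteration families can be seen on a small synthetic symmetric matrix (not an SPN operator); a minimal sketch comparing plain power iteration with Rayleigh quotient iteration:

```python
# Compare plain power iteration with Rayleigh quotient iteration on a small
# symmetric test matrix with a well-separated spectrum. This illustrates the
# iteration families named above, not the multigroup SPN system itself.
import numpy as np

rng = np.random.default_rng(0)
n = 20
R = rng.standard_normal((n, n))
A = np.diag(np.arange(1.0, n + 1)) + 0.05 * (R + R.T)   # symmetric, gaps ~1

def power_iteration(A, tol=1e-10, max_iter=10000):
    x = np.ones(A.shape[0])
    lam = 0.0
    for k in range(1, max_iter + 1):
        x = A @ x
        x /= np.linalg.norm(x)
        lam_new = x @ A @ x                 # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            return lam_new, k
        lam = lam_new
    return lam, max_iter

def rayleigh_quotient_iteration(A, tol=1e-10, max_iter=100):
    m = A.shape[0]
    x = np.ones(m) / np.sqrt(m)
    lam = x @ A @ x
    for k in range(1, max_iter + 1):
        try:
            y = np.linalg.solve(A - lam * np.eye(m), x)   # shifted solve
        except np.linalg.LinAlgError:
            return lam, k                   # shift landed exactly on an eigenvalue
        x = y / np.linalg.norm(y)
        lam_new = x @ A @ x
        if abs(lam_new - lam) < tol:
            return lam_new, k
        lam = lam_new
    return lam, max_iter

lam_pi, it_pi = power_iteration(A)
lam_rq, it_rq = rayleigh_quotient_iteration(A)
print(f"power iteration: {it_pi} iterations; Rayleigh quotient: {it_rq} iterations")
```

    Rayleigh quotient iteration converges cubically near an eigenpair (though not necessarily the dominant one), which is why shift-based and Davidson-type methods need far fewer, albeit more expensive, iterations than power iteration.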

  16. Usual Physical Activity and Hip Fracture in Older Men: An Application of Semiparametric Methods to Observational Data

    PubMed Central

    Mackey, Dawn C.; Hubbard, Alan E.; Cawthon, Peggy M.; Cauley, Jane A.; Cummings, Steven R.; Tager, Ira B.

    2011-01-01

    Few studies have examined the relation between usual physical activity level and rate of hip fracture in older men or applied semiparametric methods from the causal inference literature that estimate associations without assuming a particular parametric model. Using the Physical Activity Scale for the Elderly, the authors measured usual physical activity level at baseline (2000–2002) in 5,682 US men ≥65 years of age who were enrolled in the Osteoporotic Fractures in Men Study. Physical activity levels were classified as low (bottom quartile of Physical Activity Scale for the Elderly score), moderate (middle quartiles), or high (top quartile). Hip fractures were confirmed by central review. Marginal associations between physical activity and hip fracture were estimated with 3 estimation methods: inverse probability-of-treatment weighting, G-computation, and doubly robust targeted maximum likelihood estimation. During 6.5 years of follow-up, 95 men (1.7%) experienced a hip fracture. The unadjusted risk of hip fracture was lower in men with a high physical activity level versus those with a low physical activity level (relative risk = 0.51, 95% confidence interval: 0.28, 0.92). In semiparametric analyses that controlled confounding, hip fracture risk was not lower with moderate (e.g., targeted maximum likelihood estimation relative risk = 0.92, 95% confidence interval: 0.62, 1.44) or high (e.g., targeted maximum likelihood estimation relative risk = 0.88, 95% confidence interval: 0.53, 2.03) physical activity relative to low. This study does not support a protective effect of usual physical activity on hip fracture in older men. PMID:21303805
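
    The simplest of the three estimators named above, inverse probability-of-treatment weighting, can be sketched on simulated data (not the Osteoporotic Fractures in Men cohort): a confounder drives both activity and fracture risk, so the unadjusted risk ratio is biased away from a true null effect.

```python
# Toy inverse probability-of-treatment weighting (IPTW). Simulated data:
# a confounder Z raises "high activity" A and lowers fracture risk Y, while
# the true causal effect of A on Y is null. Not the actual study data.
import random

random.seed(42)
records = []
for _ in range(50000):
    z = random.random()                      # confounder (e.g., a health proxy)
    p_a = 0.2 + 0.6 * z                      # healthier men are more active
    a = 1 if random.random() < p_a else 0
    p_y = 0.10 * (1 - 0.5 * z)               # risk depends on Z only (null effect)
    y = 1 if random.random() < p_y else 0
    records.append((z, a, y))

def risk(recs, arm):
    ys = [y for _, a, y in recs if a == arm]
    return sum(ys) / len(ys)

rr_naive = risk(records, 1) / risk(records, 0)     # confounded risk ratio

# IPTW: weight each subject by 1 / P(A = a | Z). Here the treatment model is
# known by construction; in practice it is estimated from the data.
num = {0: 0.0, 1: 0.0}
den = {0: 0.0, 1: 0.0}
for z, a, y in records:
    p_a = 0.2 + 0.6 * z
    w = 1.0 / (p_a if a == 1 else 1.0 - p_a)
    num[a] += w * y
    den[a] += w
rr_iptw = (num[1] / den[1]) / (num[0] / den[0])
print(f"naive RR = {rr_naive:.2f}, IPTW RR = {rr_iptw:.2f}")
```

    Weighting creates a pseudo-population in which treatment is independent of the confounder, so the weighted risk ratio moves back toward the true null, mirroring how the adjusted estimates in the abstract attenuate the unadjusted protective association.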

  17. The Voronoi Implicit Interface Method for computing multiphase physics

    PubMed Central

    Saye, Robert I.; Sethian, James A.

    2011-01-01

    We introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. We test the method’s accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann’s law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces. PMID:22106269

  18. The Voronoi Implicit Interface Method for computing multiphase physics

    DOE PAGES

    Saye, Robert I.; Sethian, James A.

    2011-11-21

    In this paper, we introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. Finally, we test the method’s accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann’s law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces.

  19. Quantifying distinct associations on different temporal scales: comparison of DCCA and Pearson methods

    NASA Astrophysics Data System (ADS)

    Piao, Lin; Fu, Zuntao

    2016-11-01

    Cross-correlation between pairs of variables has a multi-time-scale character, and it can be totally different on different time scales (changing from positive correlation to negative), e.g., the associations between mean air temperature and relative humidity over regions to the east of the Taihang mountains in China. Therefore, correctly unveiling these correlations on different time scales is of great importance, since we do not know in advance whether the correlation varies with scale. Here, we compare two methods, Detrended Cross-Correlation Analysis (DCCA) and Pearson correlation, in quantifying scale-dependent correlations, applied directly to raw observed records and to artificially generated sequences with known cross-correlation features. The studies show that 1) DCCA-related methods can indeed quantify scale-dependent correlations, but the Pearson method cannot; 2) the correlation features from DCCA-related methods are robust to contaminating noise, whereas the results from the Pearson method are sensitive to noise; 3) the scale-dependent correlation results from DCCA-related methods are robust to the amplitude ratio between slow and fast components, while the Pearson method may be sensitive to it. All these features indicate that DCCA-related methods have advantages in correctly quantifying scale-dependent correlations that result from different physical processes.
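
    A minimal sketch of the DCCA cross-correlation coefficient (in Zebende's ρ_DCCA form) shows the scale dependence described above: the synthetic pair below shares a positively correlated slow component and an anti-correlated fast one, so the sign of ρ_DCCA flips with window size while Pearson returns a single number. Series lengths and window sizes are illustrative choices.

```python
# Sketch of the detrended cross-correlation coefficient rho_DCCA(s),
# contrasted with the single Pearson coefficient, on a synthetic pair whose
# association changes sign with time scale. Parameters are illustrative.
import numpy as np

def dcca_rho(x, y, s):
    """Detrended cross-correlation coefficient at window size s."""
    X = np.cumsum(x - x.mean())              # integrated profiles
    Y = np.cumsum(y - y.mean())
    n_boxes = len(X) // s
    t = np.arange(s)
    cov = var_x = var_y = 0.0
    for i in range(n_boxes):
        xb, yb = X[i*s:(i+1)*s], Y[i*s:(i+1)*s]
        # remove the local linear trend in each box
        rx = xb - np.polyval(np.polyfit(t, xb, 1), t)
        ry = yb - np.polyval(np.polyfit(t, yb, 1), t)
        cov += np.mean(rx * ry)
        var_x += np.mean(rx * rx)
        var_y += np.mean(ry * ry)
    return cov / np.sqrt(var_x * var_y)

rng = np.random.default_rng(7)
t = np.arange(4000)
slow = np.sin(2 * np.pi * t / 400)           # shared slow component (correlated)
fast = rng.standard_normal(4000)             # fast component (anti-correlated)
x = slow + 0.5 * fast
y = slow - 0.5 * fast

print("Pearson:        ", np.corrcoef(x, y)[0, 1])
print("rho_DCCA(s=8):  ", dcca_rho(x, y, 8))      # fast scales dominate: negative
print("rho_DCCA(s=800):", dcca_rho(x, y, 800))    # slow scales dominate: positive
```

    At small windows the detrended fluctuations are dominated by the anti-correlated fast component, at windows spanning full periods of the slow component by the correlated slow one, so ρ_DCCA(s) changes sign with s; the Pearson coefficient averages the two regimes into a single value.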

  20. Quantifying distinct associations on different temporal scales: comparison of DCCA and Pearson methods.

    PubMed

    Piao, Lin; Fu, Zuntao

    2016-11-09

    Cross-correlation between pairs of variables has a multi-time-scale character, and it can be totally different on different time scales (changing from positive correlation to negative), e.g., the associations between mean air temperature and relative humidity over regions to the east of the Taihang mountains in China. Therefore, correctly unveiling these correlations on different time scales is of great importance, since we do not know in advance whether the correlation varies with scale. Here, we compare two methods, Detrended Cross-Correlation Analysis (DCCA) and Pearson correlation, in quantifying scale-dependent correlations, applied directly to raw observed records and to artificially generated sequences with known cross-correlation features. The studies show that 1) DCCA-related methods can indeed quantify scale-dependent correlations, but the Pearson method cannot; 2) the correlation features from DCCA-related methods are robust to contaminating noise, whereas the results from the Pearson method are sensitive to noise; 3) the scale-dependent correlation results from DCCA-related methods are robust to the amplitude ratio between slow and fast components, while the Pearson method may be sensitive to it. All these features indicate that DCCA-related methods have advantages in correctly quantifying scale-dependent correlations that result from different physical processes.

  1. Evaluating Remapped Physical Reach for Hand Interactions with Passive Haptics in Virtual Reality.

    PubMed

    Han, Dustin T; Suhail, Mohamed; Ragan, Eric D

    2018-04-01

    Virtual reality often uses motion tracking to incorporate physical hand movements into interaction techniques for selection and manipulation of virtual objects. To increase realism and allow direct hand interaction, real-world physical objects can be aligned with virtual objects to provide tactile feedback and physical grasping. However, unless a physical space is custom configured to match a specific virtual reality experience, the ability to perfectly match the physical and virtual objects is limited. Our research addresses this challenge by studying methods that allow one physical object to be mapped to multiple virtual objects that can exist at different virtual locations in an egocentric reference frame. We study two such techniques: one that introduces a static translational offset between the virtual and physical hand before a reaching action, and one that dynamically interpolates the position of the virtual hand during a reaching motion. We conducted two experiments to assess how the two methods affect reaching effectiveness, comfort, and ability to adapt to the remapping techniques when reaching for objects with different types of mismatches between physical and virtual locations. We also present a case study to demonstrate how the hand remapping techniques could be used in an immersive game application to support realistic hand interaction while optimizing usability. Overall, the translational technique performed better than the interpolated reach technique and was more robust for situations with larger mismatches between virtual and physical objects.
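
    Geometrically, the two remapping techniques reduce to how a constant hand offset is applied; a minimal 2-D sketch with made-up coordinates and a simple linear blending rule (the paper's exact interpolation scheme may differ):

```python
# Geometric sketch of the two hand-remapping techniques described above: a
# static translational offset applied for the whole reach, and a dynamic
# interpolation that blends the offset in with reach progress. Coordinates
# and the blending rule are illustrative assumptions.
def translational_offset(phys_hand, phys_target, virt_target):
    """Virtual hand = physical hand + constant offset for the whole reach."""
    off = (virt_target[0] - phys_target[0], virt_target[1] - phys_target[1])
    return (phys_hand[0] + off[0], phys_hand[1] + off[1])

def interpolated_reach(phys_hand, start, phys_target, virt_target):
    """Blend the offset in proportionally to progress (0 at start, 1 at target)."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    total = dist(start, phys_target)
    progress = min(1.0, dist(start, phys_hand) / total) if total > 0 else 1.0
    off = (virt_target[0] - phys_target[0], virt_target[1] - phys_target[1])
    return (phys_hand[0] + progress * off[0], phys_hand[1] + progress * off[1])

# One physical prop at (1.0, 0.0) stands in for a virtual object at (1.0, 0.4).
start, phys_target, virt_target = (0.0, 0.0), (1.0, 0.0), (1.0, 0.4)
for frac in (0.0, 0.5, 1.0):
    ph = (frac * phys_target[0], frac * phys_target[1])   # hand along physical path
    print(frac, translational_offset(ph, phys_target, virt_target),
          interpolated_reach(ph, start, phys_target, virt_target))
```

    Both techniques place the virtual hand on the virtual object exactly when the physical hand touches the prop; they differ in whether the offset is present from the start (translational) or accumulates during the reach (interpolated), which is the perceptual trade-off the experiments compare.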

  2. Angular velocity of gravitational radiation from precessing binaries and the corotating frame

    NASA Astrophysics Data System (ADS)

    Boyle, Michael

    2013-05-01

    This paper defines an angular velocity for time-dependent functions on the sphere and applies it to gravitational waveforms from compact binaries. Because it is geometrically meaningful and has a clear physical motivation, the angular velocity is uniquely useful in helping to solve an important—and largely ignored—problem in models of compact binaries: the inverse problem of deducing the physical parameters of a system from the gravitational waves alone. It is also used to define the corotating frame of the waveform. When decomposed in this frame, the waveform has no rotational dynamics and is therefore as slowly evolving as possible. The resulting simplifications lead to straightforward methods for accurately comparing waveforms and constructing hybrids. As formulated in this paper, the methods can be applied robustly to both precessing and nonprecessing waveforms, providing a clear, comprehensive, and consistent framework for waveform analysis. Explicit implementations of all these methods are provided in accompanying computer code.

  3. CVD-MPFA full pressure support, coupled unstructured discrete fracture-matrix Darcy-flux approximations

    NASA Astrophysics Data System (ADS)

    Ahmed, Raheel; Edwards, Michael G.; Lamine, Sadok; Huisman, Bastiaan A. H.; Pal, Mayur

    2017-11-01

    Two novel control-volume methods are presented for flow in fractured media, and involve coupling the control-volume distributed multi-point flux approximation (CVD-MPFA) constructed with full pressure support (FPS), to two types of discrete fracture-matrix approximation for simulation on unstructured grids: (i) one involving hybrid grids and (ii) one involving a lower-dimensional fracture model. Flow is governed by Darcy's law together with mass conservation both in the matrix and the fractures, where large discontinuities in permeability tensors can occur. Finite-volume FPS schemes are more robust than the earlier CVD-MPFA triangular pressure support (TPS) schemes for problems involving highly anisotropic homogeneous and heterogeneous full-tensor permeability fields. We use a cell-centred hybrid-grid method, where fractures are modelled by lower-dimensional interfaces between matrix cells in the physical mesh but expanded to equi-dimensional cells in the computational domain. We present a simple procedure to form a consistent hybrid-grid locally for a dual-cell. We also propose a novel hybrid-grid for intersecting fractures, for the FPS method, which reduces the condition number of the global linear system and leads to larger time steps for tracer transport. The transport equation for tracer flow is coupled with the pressure equation and provides flow parameter assessment of the fracture models. Transport results obtained via TPS and FPS hybrid-grid formulations are compared with the corresponding results of fine-scale explicit equi-dimensional formulations. The results show that the hybrid-grid FPS method applies to general full-tensor fields and provides improved robust approximations compared to the hybrid-grid TPS method for fractured domains, for both weakly anisotropic permeability fields and very strong anisotropic full-tensor permeability fields where the TPS scheme exhibits spurious oscillations. The hybrid-grid FPS formulation is extended to compressible flow and the results demonstrate that the method is also robust for transient flow. Furthermore, we present FPS coupled with a lower-dimensional fracture model, where fractures are strictly lower-dimensional in the physical mesh as well as in the computational domain. We present a comparison of the hybrid-grid FPS method and the lower-dimensional fracture model for several cases of isotropic and anisotropic fractured media which illustrate the benefits of the respective methods.

  4. Robust Control Design for Uncertain Nonlinear Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Crespo, Luis G.; Andrews, Lindsey; Giesy, Daniel P.

    2012-01-01

    Robustness to parametric uncertainty is fundamental to successful control system design and as such it has been at the core of many design methods developed over the decades. Despite its prominence, most of the work on robust control design has focused on linear models and uncertainties that are non-probabilistic in nature. Recently, researchers have acknowledged this disparity and have been developing theory to address a broader class of uncertainties. This paper presents an experimental application of robust control design for a hybrid class of probabilistic and non-probabilistic parametric uncertainties. The experimental apparatus is based upon the classic inverted pendulum on a cart. The physical uncertainty is realized by a known additional lumped mass at an unknown location on the pendulum. This unknown location has the effect of substantially altering the nominal frequency and controllability of the nonlinear system, and in the limit has the capability to make the system neutrally stable and uncontrollable. Another uncertainty to be considered is a direct current motor parameter. The control design objective is to design a controller that satisfies stability, tracking error, control power, and transient behavior requirements for the largest range of parametric uncertainties. This paper presents an overview of the theory behind the robust control design methodology and the experimental results.

  5. An entrepreneurial physics method and its experimental test

    NASA Astrophysics Data System (ADS)

    Brown, Robert

    2012-02-01

    As faculty in a master's program for entrepreneurial physics and in an applied physics PhD program, I have advised upwards of 40 master's and doctoral theses in industrial physics. I have been closely involved with four robust start-up manufacturing companies focused on physics high technology, and I have spent 30 years collaborating with industrial physicists on research and development. Thus I am in a position to reflect on many articles and advice columns centered on entrepreneurship. What about the goals, strategies, resources, skills, and the 10,000 hours needed to be an entrepreneur? What about business plans, partners, financing, patents, networking, salesmanship, and regulatory affairs? What about learning new technology, how to solve problems and, in fact, learning innovation itself? At this point, I have my own method to propose to physicists in academia for incorporating entrepreneurship into their research lives. With this method, we do not start with a major invention or discovery, or even with a search for one. The method is based on the training we have, and the teaching we do (even quantum electrodynamics!), as physicists. It is based on the networking we build by 1) providing courses of continuing education for people working in industry and 2) through our undergraduate as well as graduate students who have gone on to work in industry. In fact, if we were limited to two words to describe the method, they would be "former students." Data from the local and international medical imaging manufacturing industry are presented.

  6. Projected regression method for solving Fredholm integral equations arising in the analytic continuation problem of quantum physics

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-François; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.

    2017-11-01

    We present a supervised machine learning approach to the inversion of Fredholm integrals of the first kind as they arise, for example, in the analytic continuation problem of quantum many-body physics. The approach provides a natural regularization for the ill-conditioned inverse of the Fredholm kernel, as well as an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. Applying machine learning to this database generates a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. Under standard error metrics the method performs as well or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We suggest that the methodology will be similarly effective for other problems involving a formally ill-conditioned inversion of an integral operator, provided that the forward problem can be efficiently solved.
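
    The core idea, a stable forward problem permitting a large training database for a regularized regression inverse, can be sketched with a toy smoothing kernel. The kernel, the bump-shaped input family, and the ridge regularizer below are all illustrative assumptions, not the quantum analytic-continuation kernel of the paper.

```python
# Minimal sketch of the database-driven regression idea: generate many
# (input, output) pairs with a stable forward Fredholm map, then fit a
# regularized linear regression from noisy outputs back to inputs, and
# compare against naive inversion of the ill-conditioned kernel.
import numpy as np

rng = np.random.default_rng(3)
n = 60
t = np.linspace(0.0, 1.0, n)
# Ill-conditioned forward operator: a narrow Gaussian smoothing kernel
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03 ** 2))
K /= K.sum(axis=1, keepdims=True)

def random_input():
    """'Physically meaningful' inputs: smooth bumps, random center and width."""
    c, w = rng.uniform(0.2, 0.8), rng.uniform(0.05, 0.2)
    return np.exp(-((t - c) ** 2) / (2 * w ** 2))

# Database of forward solves with measurement noise
X = np.array([random_input() for _ in range(2000)])      # true inputs f
G = X @ K.T + 0.01 * rng.standard_normal((2000, n))      # noisy data g = Kf + e

# Ridge regression g -> f (closed form); the regularizer is an ad hoc choice
lam = 1e-2
W = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ X)

# Unseen test case: regression inverse vs naive K^{-1} on noisy data
f_true = random_input()
g_test = K @ f_true + 0.01 * rng.standard_normal(n)
f_reg = g_test @ W
f_naive = np.linalg.solve(K, g_test)
print("regression error:", np.linalg.norm(f_reg - f_true) / np.linalg.norm(f_true))
print("naive inverse error:", np.linalg.norm(f_naive - f_true) / np.linalg.norm(f_true))
```

    The naive inverse amplifies the small-singular-value noise components catastrophically, while the regression, trained only on the physically meaningful input family, stays accurate: this is the implicit regularization the abstract attributes to learning on a database of forward solves.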

  7. Towards building a robust computational framework to simulate multi-physics problems - a solution technique for three-phase (gas-liquid-solid) interactions

    NASA Astrophysics Data System (ADS)

    Zhang, Lucy

    In this talk, we show a robust numerical framework to model and simulate gas-liquid-solid three-phase flows. The overall algorithm adopts a non-boundary-fitted approach that avoids frequent mesh-updating procedures by defining independent meshes and explicit interfacial points to represent each phase. In this framework, we couple the immersed finite element method (IFEM) and the connectivity-free front tracking (CFFT) method that model fluid-solid and gas-liquid interactions, respectively, for the three-phase models. The CFFT is used here to simulate gas-liquid multi-fluid flows that uses explicit interfacial points to represent the gas-liquid interface and for its easy handling of interface topology changes. Instead of defining different levels simultaneously as used in level sets, an indicator function naturally couples the two methods together to represent and track each of the three phases. Several 2-D and 3-D testing cases are performed to demonstrate the robustness and capability of the coupled numerical framework in dealing with complex three-phase problems, in particular free surfaces interacting with deformable solids. The solution technique offers accuracy and stability, which provides a means to simulate various engineering applications. The author would like to acknowledge the supports from NIH/DHHS R01-2R01DC005642-10A1 and the National Natural Science Foundation of China (NSFC) 11550110185.

  8. IAEA Coordinated Research Project on HTGR Reactor Physics, Thermal-hydraulics and Depletion Uncertainty Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strydom, Gerhard; Bostelmann, F.

    The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The predictive capability of coupled neutronics/thermal-hydraulics and depletion simulations for reactor design and safety analysis can be assessed with sensitivity analysis (SA) and uncertainty analysis (UA) methods. Uncertainty originates from errors in physical data, manufacturing uncertainties, modelling and computational algorithms. (The interested reader is referred to the large body of published SA and UA literature for a more complete overview of the various types of uncertainties, methodologies and results obtained). SA is helpful for ranking the various sources of uncertainty and error in the results of core analyses. SA and UA are required to address cost, safety, and licensing needs and should be applied to all aspects of reactor multi-physics simulation. SA and UA can guide experimental, modelling, and algorithm research and development. Current SA and UA rely either on derivative-based methods such as stochastic sampling methods or on generalized perturbation theory to obtain sensitivity coefficients. Neither approach addresses all needs. In order to benefit from recent advances in modelling and simulation and the availability of new covariance data (nuclear data uncertainties), extensive sensitivity and uncertainty studies are needed to quantify the impact of different sources of uncertainties on the design and safety parameters of HTGRs. Only a parallel effort in advanced simulation and in nuclear data improvement will be able to provide designers with more robust and well validated calculation tools to meet design target accuracies.
    In February 2009, the Technical Working Group on Gas-Cooled Reactors (TWG-GCR) of the International Atomic Energy Agency (IAEA) recommended that the proposed Coordinated Research Program (CRP) on the HTGR Uncertainty Analysis in Modelling (UAM) be implemented. This CRP is a continuation of the previous IAEA and Organization for Economic Co-operation and Development (OECD)/Nuclear Energy Agency (NEA) international activities on Verification and Validation (V&V) of available analytical capabilities for HTGR simulation for design and safety evaluations. Within the framework of these activities, different numerical and experimental benchmark problems were performed and insight was gained about specific physics phenomena and the adequacy of analysis methods.

  9. Robustness of spatial micronetworks

    NASA Astrophysics Data System (ADS)

    McAndrew, Thomas C.; Danforth, Christopher M.; Bagrow, James P.

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.
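
    The percolation model in this abstract, where each link fails with probability proportional to its spatial length, can be sketched in a few lines of stdlib Python. The toy ring-plus-chords geometry, the linear failure law `beta * length`, and the union-find connectivity check below are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def spatial_percolation(nodes, edges, beta, rng):
    """Fail each link with probability proportional to its length (capped at 1).

    nodes: {id: (x, y)}; edges: list of (u, v); beta scales the failure
    probability per unit length. Returns the surviving edges.
    """
    survivors = []
    for u, v in edges:
        (x1, y1), (x2, y2) = nodes[u], nodes[v]
        p_fail = min(1.0, beta * math.hypot(x2 - x1, y2 - y1))
        if rng.random() >= p_fail:      # longer links fail more often
            survivors.append((u, v))
    return survivors

def largest_component(n, edges):
    """Size of the largest connected component, via union-find."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

if __name__ == "__main__":
    rng = random.Random(1)
    n = 50
    nodes = {i: (rng.random(), rng.random()) for i in range(n)}
    edges = [(i, (i + 1) % n) for i in range(n)]                        # a ring...
    edges += [(rng.randrange(n), rng.randrange(n)) for _ in range(20)]  # ...plus chords
    kept = spatial_percolation(nodes, edges, beta=0.5, rng=rng)
    print(len(kept), largest_component(n, kept))
```

    Sweeping `beta` and averaging `largest_component` over many trials reproduces the kind of fragility curve the paper studies: longer links both cost more to build and fail first.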

  10. Bayesian analysis of caustic-crossing microlensing events

    NASA Astrophysics Data System (ADS)

    Cassan, A.; Horne, K.; Kains, N.; Tsapras, Y.; Browne, P.

    2010-06-01

    Aims: Caustic-crossing binary-lens microlensing events are important anomalous events because they are capable of detecting an extrasolar planet companion orbiting the lens star. Fast and robust modelling methods are thus of prime interest in helping to decide whether a planet is detected by an event. Cassan introduced a new set of parameters to model binary-lens events, which are closely related to properties of the light curve. In this work, we explain how Bayesian priors can be added to this framework, and investigate interesting options. Methods: We develop a mathematical formulation that allows us to compute the priors on the new parameters analytically, given some previous knowledge about other physical quantities. We explicitly compute the priors for a number of interesting cases, and show how this can be implemented in a fully Bayesian, Markov chain Monte Carlo algorithm. Results: Using Bayesian priors can accelerate microlens-fitting codes by reducing the time spent considering physically implausible models, and helps us to discriminate between alternative models based on the physical plausibility of their parameters.
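
    The core idea, folding a prior into a Markov chain Monte Carlo fit so the sampler spends less time on implausible models, can be illustrated with a minimal random-walk Metropolis sampler. The Gaussian toy likelihood and prior below are assumptions for illustration only and are unrelated to the actual microlensing parameterization.

```python
import math
import random

def metropolis(logpost, x0, steps, step_size, rng):
    """Random-walk Metropolis sampler over a single scalar parameter."""
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, step_size)
        lp_prop = logpost(prop)
        # accept with probability min(1, posterior ratio)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

def make_logpost(data, prior_mu=0.0, prior_sigma=2.0):
    """Log-posterior for the mean of N(theta, 1) data with a Gaussian prior.

    The log-prior term is what steers the sampler away from implausible
    parameter values before the likelihood alone would rule them out.
    """
    def logpost(theta):
        loglik = -0.5 * sum((d - theta) ** 2 for d in data)
        logprior = -0.5 * ((theta - prior_mu) / prior_sigma) ** 2
        return loglik + logprior
    return logpost

if __name__ == "__main__":
    rng = random.Random(42)
    data = [rng.gauss(1.5, 1.0) for _ in range(100)]
    chain = metropolis(make_logpost(data), 0.0, 5000, 0.3, rng)
    print(sum(chain[1000:]) / len(chain[1000:]))  # posterior mean estimate
```

    Changing variables to light-curve-based parameters, as in the paper, only changes `logprior` (via the analytic prior and its Jacobian); the sampler itself is untouched.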

  11. Uncertainty Quantification in Aeroelasticity

    NASA Astrophysics Data System (ADS)

    Beran, Philip; Stanford, Bret; Schrock, Christopher

    2017-01-01

    Physical interactions between a fluid and structure, potentially manifested as self-sustained or divergent oscillations, can be sensitive to many parameters whose values are uncertain. Of interest here are aircraft aeroelastic interactions, which must be accounted for in aircraft certification and design. Deterministic prediction of these aeroelastic behaviors can be difficult owing to physical and computational complexity. New challenges are introduced when physical parameters and elements of the modeling process are uncertain. By viewing aeroelasticity through a nondeterministic prism, where key quantities are assumed stochastic, one may gain insights into how to reduce system uncertainty, increase system robustness, and maintain aeroelastic safety. This article reviews uncertainty quantification in aeroelasticity using traditional analytical techniques not reliant on computational fluid dynamics; compares and contrasts this work with emerging methods based on computational fluid dynamics, which target richer physics; and reviews the state of the art in aeroelastic optimization under uncertainty. Barriers to continued progress, for example, the so-called curse of dimensionality, are discussed.

  12. Structured filtering

    NASA Astrophysics Data System (ADS)

    Granade, Christopher; Wiebe, Nathan

    2017-08-01

    A major challenge facing existing sequential Monte Carlo methods for parameter estimation in physics stems from the inability of existing approaches to robustly deal with experiments that have different mechanisms that yield the results with equivalent probability. We address this problem here by proposing a form of particle filtering that clusters the particles that comprise the sequential Monte Carlo approximation to the posterior before applying a resampler. Through a new graphical approach to thinking about such models, we are able to devise an artificial-intelligence based strategy that automatically learns the shape and number of the clusters in the support of the posterior. We demonstrate the power of our approach by applying it to randomized gap estimation and a form of low circuit-depth phase estimation where existing methods from the physics literature either exhibit much worse performance or even fail completely.
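
    A minimal sketch of the clustering-before-resampling idea, assuming 1-D particles, a tiny k-means with a fixed cluster count, and multinomial resampling within each cluster (the paper instead learns the number and shape of the clusters automatically):

```python
import random

def kmeans_1d(points, k, iters=25):
    """Tiny 1-D k-means with deterministic quantile initialization."""
    pts = sorted(points)
    centers = [pts[(2 * i + 1) * len(pts) // (2 * k)] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: abs(p - centers[j]))
                  for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels

def clustered_resample(particles, weights, k, rng):
    """Resample within clusters so a multimodal posterior keeps all its modes.

    Each cluster contributes offspring in proportion to its total weight,
    so resampling cannot wipe out a low-population mode by chance.
    """
    labels = kmeans_1d(particles, k)
    n = len(particles)
    offspring = []
    for c in range(k):
        idx = [i for i in range(n) if labels[i] == c]
        if not idx:
            continue
        mass = sum(weights[i] for i in idx)
        m = max(1, round(mass * n))          # offspring count for this cluster
        probs = [weights[i] / mass for i in idx]
        for _ in range(m):                   # multinomial draw inside the cluster
            r, acc = rng.random(), 0.0
            for i, p in zip(idx, probs):
                acc += p
                if r <= acc:
                    offspring.append(particles[i])
                    break
            else:
                offspring.append(particles[idx[-1]])
    return offspring

if __name__ == "__main__":
    rng = random.Random(3)
    particles = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]   # two well-separated modes
    weights = [1.0 / 6.0] * 6
    print(sorted(clustered_resample(particles, weights, 2, rng)))
```

    A plain resampler applied globally can concentrate all offspring in one mode; resampling per cluster is what makes the filter robust when different mechanisms explain the data with equivalent probability.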

  13. Experimental Demonstration of Observability and Operability of Robustness of Coherence

    NASA Astrophysics Data System (ADS)

    Zheng, Wenqiang; Ma, Zhihao; Wang, Hengyan; Fei, Shao-Ming; Peng, Xinhua

    2018-06-01

    Quantum coherence is an invaluable physical resource for various quantum technologies. As a bona fide measure in quantifying coherence, the robustness of coherence (ROC) is not only mathematically rigorous, but also physically meaningful. We experimentally demonstrate the witness-observable and operational feature of the ROC in a multiqubit nuclear magnetic resonance system. We realize witness measurements by detecting the populations of quantum systems in one trial. The approach may also apply to physical systems compatible with ensemble or nondemolition measurements. Moreover, we experimentally show that the ROC quantifies the advantage enabled by a quantum state in a phase discrimination task.

  14. Robust design optimization using the price of robustness, robust least squares and regularization methods

    NASA Astrophysics Data System (ADS)

    Bukhari, Hassan J.

    2017-12-01

    In this paper a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to any perturbations in parameters. The first method uses the price of robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters that are allowed to perturb. The second method uses the robust least squares method to determine the optimal parameters when the data itself is subject to perturbations instead of the parameters. The last method manages uncertainty by restricting the perturbation on parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems: one linear and one non-linear. The methodology is compared with a prior approach based on multiple Monte Carlo simulation runs, and the comparison shows that the approach presented in this paper yields better performance.
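
    As a hedged illustration of the regularization theme in the third method, the sketch below fits a line by Tikhonov-regularized (ridge) least squares via the normal equations. The closed-form 2x2 solve and the toy data are assumptions for illustration, not the paper's formulation.

```python
def tikhonov_fit(xs, ys, lam):
    """Fit y = a*x + b by Tikhonov-regularized least squares.

    Solves (A^T A + lam*I) w = A^T y for w = (a, b) with a closed-form
    2x2 inverse; lam = 0 recovers ordinary least squares.
    """
    n = len(xs)
    sxx = sum(x * x for x in xs)
    sx = sum(xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sy = sum(ys)
    # normal-equation matrix with the Tikhonov term on the diagonal
    m11, m12, m22 = sxx + lam, sx, n + lam
    det = m11 * m22 - m12 * m12
    a = (m22 * sxy - m12 * sy) / det
    b = (m11 * sy - m12 * sxy) / det
    return a, b

if __name__ == "__main__":
    xs = [0.0, 1.0, 2.0, 3.0]
    ys = [1.0, 3.0, 5.0, 7.0]         # exactly y = 2x + 1
    print(tikhonov_fit(xs, ys, 0.0))  # ordinary least squares
    print(tikhonov_fit(xs, ys, 5.0))  # coefficients shrunk toward zero
```

    Increasing `lam` trades fit quality for smaller, more perturbation-tolerant coefficients, which is the sense in which restricting parameter perturbations resembles Tikhonov regularization.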

  15. Volumetric breast density measurement: sensitivity analysis of a relative physics approach

    PubMed Central

    Lau, Susie; Abdul Aziz, Yang Faridah

    2016-01-01

    Objective: To investigate the sensitivity and robustness of a volumetric breast density (VBD) measurement system to errors in the imaging physics parameters, including compressed breast thickness (CBT), tube voltage (kVp), filter thickness, tube current-exposure time product (mAs), detector gain, detector offset and image noise. Methods: 3317 raw digital mammograms were processed with Volpara® (Matakina Technology Ltd, Wellington, New Zealand) to obtain fibroglandular tissue volume (FGV), breast volume (BV) and VBD. Errors in parameters including CBT, kVp, filter thickness and mAs were simulated by varying them in the Digital Imaging and Communications in Medicine (DICOM) tags of the images by up to ±10% of the original values. Errors in detector gain and offset were simulated by varying them in the Volpara configuration file by up to ±10% from their default values. For image noise, Gaussian noise was generated and introduced into the original images. Results: Errors in filter thickness, mAs, detector gain and offset had limited effects on FGV, BV and VBD. Significant effects on VBD were observed when CBT, kVp, detector offset and image noise were varied (p < 0.0001). Maximum shifts in the mean (1.2%) and median (1.1%) VBD of the study population occurred when CBT was varied. Conclusion: Volpara was robust to expected clinical variations, with errors in most investigated parameters giving limited changes in results, although extreme variations in CBT and kVp could lead to greater errors. Advances in knowledge: Despite Volpara's robustness, rigorous quality control is essential to keep the parameter errors within reasonable bounds. Volpara appears robust within those bounds, although for more advanced applications, such as tracking density change over time, it remains to be seen how accurate the measurements need to be. PMID:27452264

  16. A Robust Symmetry-Based Approach to Exploit Terra-SAR-X Dual-Pol Data for Targets at Sea Observation

    NASA Astrophysics Data System (ADS)

    Velotto, D.; Nunziata, F.; Migliaccio, M.; Lehner, S.

    2013-08-01

    In this study a simple physical property, known as reflection symmetry, is exploited to differentiate the objects in marine scenes, i.e. sea surface and metallic targets. First, the reflection symmetry property is verified and demonstrated against actual SAR images by measuring the magnitude of the correlation between co- and cross-polarized channels (i.e. HH/HV or VH/VV). Then, a sensitivity study is performed to show the potential of the proposed method for observing man-made metallic targets at sea. The robustness of the proposed technique is demonstrated using coherent dual-polarimetric X-band SAR data acquired by the TerraSAR-X satellite in both cross-polarization combinations, with different incidence angles and weather conditions. Co-located ground truth information provided by Automatic Identification System (AIS) reports and harbor charts is used to locate ships, navigation aids, and buoys. The proposed method outperforms standard single-polarization SAR observation of targets at sea independently of the radar geometry and oceanic phenomena.

  17. The origin and reduction of spurious extrahepatic counts observed in 90Y non-TOF PET imaging post radioembolization

    NASA Astrophysics Data System (ADS)

    Walrand, Stephan; Hesse, Michel; Jamar, François; Lhommel, Renaud

    2018-04-01

    Our literature survey revealed a physical effect unknown to the nuclear medicine community, i.e. internal bremsstrahlung emission, as well as the existence of long energy resolution tails in crystal scintillation. Neither of these effects has ever been modelled in PET Monte Carlo (MC) simulations. This study investigates whether these two effects could be at the origin of two unexplained observations in 90Y imaging by PET: the increasing tails in the radial profile of true coincidences, and the presence of spurious extrahepatic counts post radioembolization in non-TOF PET and their absence in TOF PET. These spurious extrahepatic counts hamper the microsphere delivery check in liver radioembolization. An acquisition of a 32P vial was performed on a GSO PET system. This is the ideal setup to study the impact of bremsstrahlung x-rays on the true coincidence rate when no positron emission and no crystal radioactivity are present. A MC simulation of the acquisition was performed using Gate-Geant4. MC simulations of non-TOF PET and TOF PET imaging of a synthetic 90Y human liver radioembolization phantom were also performed. Including internal bremsstrahlung and the long energy resolution tails in the MC simulations quantitatively predicts the increasing tails in the radial profile. In addition, internal bremsstrahlung explains the discrepancy previously observed in bremsstrahlung SPECT between the measured 90Y bremsstrahlung spectrum and its simulation with Gate-Geant4. However, the spurious extrahepatic counts in non-TOF PET mainly result from the failure of conventional random correction methods in such low-count-rate studies and from their poor robustness to emission-transmission inconsistency. A novel random correction method is proposed that succeeds in cleaning the spurious extrahepatic counts in non-TOF PET. Two physical effects not considered up to now in nuclear medicine were identified as the origin of the unusual 90Y true-coincidence radial profile.
The removal of the spurious extrahepatic counts by TOF reconstruction was theoretically explained by its better robustness against emission-transmission inconsistency. A novel random correction method was proposed to overcome the issue in non-TOF PET. Further studies are needed to assess the robustness of this method.

  18. Application of physical scaling towards downscaling climate model precipitation data

    NASA Astrophysics Data System (ADS)

    Gaur, Abhishek; Simonovic, Slobodan P.

    2018-04-01

    The physical scaling (SP) method downscales climate model data to local or regional scales, taking into consideration the physical characteristics of the area under analysis. In this study, multiple SP-based models are tested for their effectiveness in downscaling North American Regional Reanalysis (NARR) daily precipitation data. Model performance is compared with two state-of-the-art downscaling methods: the statistical downscaling model (SDSM) and generalized linear modeling (GLM). The downscaled precipitation is evaluated with reference to recorded precipitation at 57 gauging stations located within the study region. The spatial and temporal robustness of the downscaling methods is evaluated using seven precipitation-based indices. Results indicate that the SP-based models perform best in downscaling precipitation, followed by GLM and then the SDSM model. The best-performing models are thereafter used to downscale future precipitation projections from three general circulation models (GCMs) under two emission scenarios, representative concentration pathway (RCP) 2.6 and RCP 8.5, over the twenty-first century. The downscaled future projections indicate an increase in mean and maximum precipitation intensity as well as a decrease in the total number of dry days. Further, an increase in the frequency of short (1-day), moderately long (2-4 day), and long (more than 5-day) precipitation events is projected.

  19. Mixed method evaluation of a community-based physical activity program using the RE-AIM framework: practical application in a real-world setting.

    PubMed

    Koorts, Harriet; Gillison, Fiona

    2015-11-06

    Communities are a pivotal setting in which to promote increases in child and adolescent physical activity behaviours. Interventions implemented in these settings require effective evaluation to facilitate translation of findings to wider settings. The aims of this paper are to i) present findings from a RE-AIM evaluation of a community-based physical activity program, and ii) review the methodological challenges faced when applying RE-AIM in practice. A single mixed-methods case study was conducted based on a concurrent triangulation design. Five sources of data were collected via interviews, questionnaires, archival records, documentation, and field notes. Evidence was triangulated within RE-AIM to assess individual- and organisational-level program outcomes. Inconsistent availability of data and a lack of robust reporting challenged assessment of all five dimensions. Reach, Implementation, and setting-level Adoption were less successful; Effectiveness and Maintenance at the individual and organisational level were moderately successful. Only community-level Adoption was highly successful, reflecting the key program goal of providing community-wide participation in sport and physical activity. This research highlighted important methodological constraints associated with the use of RE-AIM in practice settings. Future evaluators wishing to use RE-AIM may benefit from a mixed-method triangulation approach to offset challenges with data availability and reliability.

  20. Development of Physics-Based Hurricane Wave Response Functions: Application to Selected Sites on the U.S. Gulf Coast

    NASA Astrophysics Data System (ADS)

    McLaughlin, P. W.; Kaihatu, J. M.; Irish, J. L.; Taylor, N. R.; Slinn, D.

    2013-12-01

    Recent hurricane activity in the Gulf of Mexico has led to a need for accurate, computationally efficient prediction of hurricane damage so that communities can better assess risk of local socio-economic disruption. This study focuses on developing robust, physics-based non-dimensional equations that accurately predict maximum significant wave height at different locations near a given hurricane track. These equations (denoted as Wave Response Functions, or WRFs) were developed from presumed physical dependencies between wave heights and hurricane characteristics and fit with data from numerical models of waves and surge under hurricane conditions. After curve fitting, constraints which correct for fully developed sea state were used to limit the wind wave growth. When applied to the region near Gulfport, MS, back prediction of maximum significant wave height yielded root mean square errors between 0.22 and 0.42 m at open coast stations and 0.07 and 0.30 m at bay stations when compared to the numerical model data. The WRF method was also applied to Corpus Christi, TX and Panama City, FL with similar results. Back prediction errors will be included in uncertainty evaluations connected to risk calculations using joint probability methods. These methods require thousands of simulations to quantify extreme value statistics, thus requiring the use of reduced methods such as the WRF to represent the relevant physical processes.

  1. Spatio-temporal Eigenvector Filtering: Application on Bioenergy Crop Impacts

    NASA Astrophysics Data System (ADS)

    Wang, M.; Kamarianakis, Y.; Georgescu, M.

    2017-12-01

    A suite of 10-year ensemble-based simulations was conducted to investigate the hydroclimatic impacts due to large-scale deployment of perennial bioenergy crops across the continental United States. Given the large size of the simulated dataset (about 60Tb), traditional hierarchical spatio-temporal statistical modelling cannot be implemented for the evaluation of physics parameterizations and biofuel impacts. In this work, we propose a filtering algorithm that takes into account the spatio-temporal autocorrelation structure of the data while avoiding spatial confounding. This method is used to quantify the robustness of simulated hydroclimatic impacts associated with bioenergy crops to alternative physics parameterizations and observational datasets. Results are evaluated against those obtained from three alternative Bayesian spatio-temporal specifications.

  2. Robust Methods for Moderation Analysis with a Two-Level Regression Model.

    PubMed

    Yang, Miao; Yuan, Ke-Hai

    2016-01-01

    Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
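
    The M-estimation idea with Huber-type weights can be sketched, for a plain single-level regression, as iteratively reweighted least squares. The MAD-based scale estimate, the tuning constant c = 1.345, and the simple-line model below are standard textbook choices assumed for illustration; they are not the article's two-level algorithm or its R program.

```python
def huber_weight(r, c=1.345):
    """Huber-type weight: full weight for small residuals, downweight outliers."""
    r = abs(r)
    return 1.0 if r <= c else c / r

def robust_line_fit(xs, ys, c=1.345, iters=50):
    """Iteratively reweighted least squares for y = a*x + b with Huber weights."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        res = [y - (a * x + b) for x, y in zip(xs, ys)]
        # robust scale: median absolute residual, normalized for the normal case
        scale = sorted(abs(r) for r in res)[len(res) // 2] / 0.6745 or 1.0
        w = [huber_weight(r / scale, c) for r in res]
        # weighted normal equations for the 2-parameter line
        sw = sum(w)
        swx = sum(wi * x for wi, x in zip(w, xs))
        swxx = sum(wi * x * x for wi, x in zip(w, xs))
        swy = sum(wi * y for wi, y in zip(w, ys))
        swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        det = swxx * sw - swx * swx
        a = (sw * swxy - swx * swy) / det
        b = (swxx * swy - swx * swxy) / det
    return a, b

if __name__ == "__main__":
    xs = [float(i) for i in range(10)]
    ys = [3.0 * x + 1.0 for x in xs]
    ys[9] = 100.0                        # one gross outlier
    print(robust_line_fit(xs, ys))       # slope stays near 3 despite the outlier
```

    Under normal, homoscedastic errors the weights stay near 1 and the fit matches ordinary least squares; under heavy-tailed errors the downweighting is what preserves power and accuracy.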

  3. An Assessment of Artificial Compressibility and Pressure Projection Methods for Incompressible Flow Simulations

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, C.; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The performance of two commonly used numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, is compared. These formulations are selected primarily because they are designed for three-dimensional applications. The computational procedures are compared by obtaining steady-state solutions of a wake vortex and unsteady solutions of a curved duct flow. For steady computations, the artificial compressibility method was very efficient in terms of computing time and robustness. For an unsteady flow that requires a small physical time step, the pressure projection method was found to be computationally more efficient than the artificial compressibility method. This comparison is intended to give some basis for selecting a method or a flow solution code for large three-dimensional applications where computing resources become a critical issue.

  4. High Assurance Control of Cyber-Physical Systems with Application to Unmanned Aircraft Systems

    NASA Astrophysics Data System (ADS)

    Kwon, Cheolhyeon

    With recent progress in networked embedded control technology, cyber attacks have become one of the major threats to Cyber-Physical Systems (CPSs) due to their close integration of physical processes, computational resources, and communication capabilities. While CPSs have various applications in both military and civilian uses, their on-board automation and communication afford significant advantages over a system without such abilities, but these benefits come at the cost of possible vulnerability to cyber attacks. Traditionally, most cyber security studies in CPSs are based on the computer security perspective, focusing on issues such as the trustworthiness of data flow, without rigorously considering the system's physical processes such as real-time dynamic behaviors. While computer security components are key elements in the hardware/software layer, these methods alone are not sufficient for diagnosing the healthiness of the CPS's physical behavior. In seeking to address this problem, this research proposes a control-theoretic approach that can accurately represent the interactions between the physical behavior and the logical behavior (computing resources) of the CPS. A controls-domain perspective is then explored that extends beyond the logical process of the CPS to include the underlying physical behavior. This approach makes the CPS's physical operations robust and resilient to the damage caused by cyber attacks, successfully complementing the existing CPS security architecture. It is important to note that traditional fault-tolerant/robust control methods are not directly applicable for achieving resiliency against malicious cyber attacks, which can be sophisticatedly designed to spoof the security/safety monitoring system (note this is different from common faults).
Thus, security issues at this layer require different risk management to detect cyber attacks and mitigate their impact within the context of a unified physical and logical process model of the CPS. Specifically, three main tasks are discussed in this presentation: (i) we first investigate diverse granularity of the interactions inside the CPS and propose feasible cyber attack models to characterize the compromised behavior of the CPS with various measures, from its severity to detectability; (ii) based on this risk information, our approach to securing the CPS addresses both monitoring of and high assurance control design against cyber attacks by developing on-line safety assessment and mitigation algorithms; and (iii) by extending the developed theories and methods from a single CPS to multiple CPSs, we examine the security and safety of multi-CPS network that are strongly dependent on the network topology, cooperation protocols between individual CPSs, etc. The effectiveness of the analytical findings is demonstrated and validated with illustrative examples, especially unmanned aircraft system (UAS) applications.

  5. Robust control of dielectric elastomer diaphragm actuator for human pulse signal tracking

    NASA Astrophysics Data System (ADS)

    Ye, Zhihang; Chen, Zheng; Asmatulu, Ramazan; Chan, Hoyin

    2017-08-01

    Human pulse signal tracking is an emerging technology that is needed in traditional Chinese medicine, and it requires soft actuation with multi-frequency tracking capability. Dielectric elastomer (DE) is one type of soft actuator that has great potential for human pulse signal tracking. In this paper, a DE diaphragm actuator was designed and fabricated to track the human pulse pressure signal. A physics-based and control-oriented model has been developed to capture the dynamic behavior of the DE diaphragm actuator. Using the physical model, an H-infinity robust controller was designed for the actuator to reject high-frequency sensing noises and disturbances. The robust controller was then implemented in real time to track a multi-frequency signal, which verified the tracking capability and robustness of the control system. In the human pulse signal tracking test, a human pulse signal was measured at the City University of Hong Kong and then tracked using the DE actuator at Wichita State University in the US. Experimental results verified that the DE actuator with its robust controller is capable of tracking the human pulse signal.

  6. Performance evaluation of BPM system in SSRF using PCA method

    NASA Astrophysics Data System (ADS)

    Chen, Zhi-Chu; Leng, Yong-Bin; Yan, Ying-Bing; Yuan, Ren-Xian; Lai, Long-Wei

    2014-07-01

    The beam position monitor (BPM) system is of great importance in a light source, and its capability depends on the resolution of the system. The traditional method of taking the standard deviation of the raw data merely gives an upper limit on the resolution. Principal component analysis (PCA) has been introduced into accelerator physics and can be used to separate the actual beam signals from noise. Beam-related information was extracted before the evaluation of the BPM performance. A series of studies was made at the Shanghai Synchrotron Radiation Facility (SSRF), and PCA proved to be an effective and robust method for evaluating the performance of our BPM system.
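
    A minimal sketch of the PCA idea for BPM resolution: project out the leading principal component (the correlated beam motion shared by all BPM channels) and take the RMS of what remains as a noise-floor estimate. The power-iteration PCA and the synthetic turn-by-turn data below are illustrative assumptions, not the SSRF analysis code.

```python
import math
import random

def top_mode(data, iters=100):
    """Leading principal component of rows-of-readings data via power iteration.

    Column means are removed first; returns the centered data X and the
    unit-norm leading direction v.
    """
    rows, cols = len(data), len(data[0])
    means = [sum(row[j] for row in data) / rows for j in range(cols)]
    X = [[row[j] - means[j] for j in range(cols)] for row in data]
    v = [1.0] * cols
    for _ in range(iters):
        u = [sum(X[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        w = [sum(X[i][j] * u[i] for i in range(rows)) for j in range(cols)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return X, v

def residual_rms(data):
    """RMS left after projecting out the leading (beam-motion) mode:
    a noise-floor proxy for the per-channel resolution."""
    X, v = top_mode(data)
    total, count = 0.0, 0
    for row in X:
        score = sum(x * vj for x, vj in zip(row, v))
        for x, vj in zip(row, v):
            total += (x - score * vj) ** 2
            count += 1
    return math.sqrt(total / count)

if __name__ == "__main__":
    rng = random.Random(7)
    # synthetic turn-by-turn data: common betatron-like motion + channel noise
    data = [[math.sin(0.05 * i) + rng.gauss(0.0, 0.01) for _ in range(8)]
            for i in range(200)]
    print(residual_rms(data))   # roughly the injected 0.01 noise level
```

    The raw standard deviation of each channel here is dominated by the ~0.7 RMS beam motion; removing the shared mode is what exposes the much smaller instrumental noise.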

  7. Combined Dynamic Time Warping with Multiple Sensors for 3D Gesture Recognition

    PubMed Central

    2017-01-01

    Cyber-physical systems, which closely integrate physical systems and humans, can be applied to a wider range of applications through user movement analysis. In three-dimensional (3D) gesture recognition, multiple sensors are required to recognize various natural gestures. Several studies have been undertaken in the field of gesture recognition; however, gesture recognition was conducted based on data captured from various independent sensors, which rendered the capture and combination of real-time data complicated. In this study, a 3D gesture recognition method using combined information obtained from multiple sensors is proposed. The proposed method can robustly perform gesture recognition regardless of a user’s location and movement directions by providing viewpoint-weighted values and/or motion-weighted values. In the proposed method, the viewpoint-weighted dynamic time warping with multiple sensors has enhanced performance by preventing joint measurement errors and noise due to sensor measurement tolerance, which has resulted in the enhancement of recognition performance by comparing multiple joint sequences effectively. PMID:28817094
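
    The viewpoint-weighted dynamic time warping at the core of this method can be sketched as ordinary DTW with a weighted Euclidean local cost. The per-feature weight vector below stands in for the paper's viewpoint- or motion-weighted values and is an assumption for illustration:

```python
import math

def weighted_dtw(a, b, w):
    """DTW distance between two sequences of feature vectors.

    w is a per-feature weight vector (standing in for per-joint viewpoint
    confidences); the local cost is a weighted Euclidean distance, so
    unreliable features contribute less to the alignment.
    """
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.sqrt(sum(wk * (x - y) ** 2
                                 for wk, x, y in zip(w, a[i - 1], b[j - 1])))
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

if __name__ == "__main__":
    seq = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
    noisy = [(0.0, 5.0), (1.0, -3.0), (2.0, 9.0)]   # second feature is unreliable
    print(weighted_dtw(seq, noisy, (1.0, 1.0)))      # penalized by the noisy feature
    print(weighted_dtw(seq, noisy, (1.0, 0.0)))      # noisy feature weighted out
```

    In a multi-sensor setup, assigning each sensor's joints weights based on how well that sensor can see them is one way such a cost could combine the streams before comparing gesture sequences.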

  8. Dynamical Core in Atmospheric Model Does Matter in the Simulation of Arctic Climate

    NASA Astrophysics Data System (ADS)

    Jun, Sang-Yoon; Choi, Suk-Jin; Kim, Baek-Min

    2018-03-01

    Climate models using different dynamical cores can simulate significantly different winter Arctic climates even if equipped with virtually the same physics schemes. The current climate simulated by the global climate model using a cubed-sphere grid with the spectral element method (SE core) exhibited a significantly warmer Arctic surface air temperature than that using a latitude-longitude grid with the finite volume method (FV) core. Compared to the FV core, the SE core simulated additional adiabatic warming in the Arctic lower atmosphere, consistent with the eddy-forced secondary circulation. Downward longwave radiation further enhanced Arctic near-surface warming, with a higher surface air temperature of about 1.9 K. Furthermore, in the atmospheric response to reduced sea ice conditions with the same physical settings, only the SE core showed a robust cooling response over North America. We emphasize that special attention is needed in selecting the dynamical core of climate models for the simulation of the Arctic climate and associated teleconnection patterns.

  9. Combined Dynamic Time Warping with Multiple Sensors for 3D Gesture Recognition.

    PubMed

    Choi, Hyo-Rim; Kim, TaeYong

    2017-08-17

    Cyber-physical systems, which closely integrate physical systems and humans, can be applied to a wider range of applications through user movement analysis. In three-dimensional (3D) gesture recognition, multiple sensors are required to recognize various natural gestures. Several studies have been undertaken in the field of gesture recognition; however, gesture recognition was conducted based on data captured from various independent sensors, which rendered the capture and combination of real-time data complicated. In this study, a 3D gesture recognition method using combined information obtained from multiple sensors is proposed. The proposed method can robustly perform gesture recognition regardless of a user's location and movement directions by providing viewpoint-weighted values and/or motion-weighted values. In the proposed method, the viewpoint-weighted dynamic time warping with multiple sensors has enhanced performance by preventing joint measurement errors and noise due to sensor measurement tolerance, which has resulted in the enhancement of recognition performance by comparing multiple joint sequences effectively.

  10. The comparison of robust partial least squares regression with robust principal component regression on a real

    NASA Astrophysics Data System (ADS)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in multiple linear regression (MLR) is multicollinearity, which causes the overestimation of the regression parameters and an increase in the variance of these parameters. Hence, when multicollinearity is present, biased estimation procedures such as classical principal component regression (CPCR) and partial least squares regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust principal component analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to demonstrate the use of the RPCR and RSIMPLS methods on an econometric data set by comparing the two methods on an inflation model of Turkey. The methods are compared in terms of predictive ability and goodness of fit by using a robust root mean squared error of cross-validation (R-RMSECV), a robust R2 value, and the robust component selection (RCS) statistic.
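
    The RMSECV criterion used for the comparison can be sketched in its classical (non-robust) leave-one-out form; the toy mean and line predictors below are assumptions for illustration, and the paper's R-RMSECV additionally downweights outlying residuals.

```python
import math

def rmsecv(fit, predict, xs, ys):
    """Leave-one-out root mean squared error of cross-validation.

    fit(train_x, train_y) returns a model; predict(model, x) a scalar.
    (The robust variant in the paper would trim or downweight the
    largest squared errors rather than average all of them.)
    """
    errs = []
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        model = fit(tx, ty)
        errs.append((ys[i] - predict(model, xs[i])) ** 2)
    return math.sqrt(sum(errs) / len(errs))

# toy competing models: a constant mean predictor vs. a 1-D least squares line
def fit_mean(tx, ty):
    return sum(ty) / len(ty)

def fit_line(tx, ty):
    n = len(tx)
    sx, sy = sum(tx), sum(ty)
    sxx = sum(x * x for x in tx)
    sxy = sum(x * y for x, y in zip(tx, ty))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, sy / n - a * sx / n

if __name__ == "__main__":
    xs = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [0.1, 1.9, 4.1, 5.9, 8.1]   # roughly y = 2x
    e_line = rmsecv(fit_line, lambda m, x: m[0] * x + m[1], xs, ys)
    e_mean = rmsecv(fit_mean, lambda m, x: m, xs, ys)
    print(e_line, e_mean)            # the line predictor wins on this data
```

    Swapping `fit` for an RPCR- or RSIMPLS-style estimator turns this same loop into the model-comparison machinery the abstract describes.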

  11. An Overview of the Role of Systems Analysis in NASA's Hypersonics Project

    NASA Technical Reports Server (NTRS)

    Robinson, Jeffrey S.; Martin, John G.; Bowles, Jeffrey V.; Mehta, Unmeel B.; Snyder, Christopher A.

    2006-01-01

    NASA's Aeronautics Research Mission Directorate recently restructured its Vehicle Systems Program, refocusing it towards understanding the fundamental physics that govern flight in all speed regimes. Now called the Fundamental Aeronautics Program, it comprises four new projects: Subsonic Fixed Wing, Subsonic Rotary Wing, Supersonics, and Hypersonics. The Aeronautics Research Mission Directorate has charged the Hypersonics Project with developing a basic understanding of all systems that travel at hypersonic speeds within the atmospheres of Earth and other planets. This includes both powered and unpowered systems, such as re-entry vehicles and vehicles powered by rocket or airbreathing propulsion that cruise in and accelerate through the atmosphere. The primary objective of the Hypersonics Project is to develop physics-based predictive tools that enable the design, analysis and optimization of such systems. The Hypersonics Project charges the systems analysis discipline team with providing the decision-making information needed to properly guide research and technology development. Credible, rapid, and robust multi-disciplinary system analysis processes and design tools are required in order to generate this information. To this end, the principal challenges for the systems analysis team are the introduction of high-fidelity physics into the analysis process and their integration into a design environment, quantification of design uncertainty through the use of probabilistic methods, reduction in design cycle time, and the development and implementation of robust processes and tools enabling a wide design space and an associated technology assessment capability. This paper discusses the roles and responsibilities of the systems analysis discipline team within the Hypersonics Project, as well as the tools, methods, processes, and approach the team will use to perform its project-designated functions.

  12. Catchments as non-linear filters: evaluating data-driven approaches for spatio-temporal predictions in ungauged basins

    NASA Astrophysics Data System (ADS)

    Bellugi, D. G.; Tennant, C.; Larsen, L.

    2016-12-01

    Catchment and climate heterogeneity complicate prediction of runoff across time and space, and the resulting parameter uncertainty can lead to large accumulated errors in hydrologic models, particularly in ungauged basins. Recently, data-driven modeling approaches have been shown to avoid the accumulated uncertainty associated with many physically-based models, providing an appealing alternative for hydrologic prediction. However, the effectiveness of different methods in hydrologically and geomorphically distinct catchments, and the robustness of these methods to changing climate and changing hydrologic processes, remain to be tested. Here, we evaluate the use of machine learning techniques to predict daily runoff across time and space using only essential climatic forcing (e.g., precipitation, temperature, and potential evapotranspiration) time series as model input. Model training and testing were performed using a high-quality dataset of daily runoff and climate forcing data spanning 25+ years for 600+ minimally-disturbed catchments (drainage area range 5-25,000 km2, median size 336 km2) that cover a wide range of climatic and physical characteristics. Preliminary results using Support Vector Regression (SVR) suggest that in some catchments this nonlinear regression technique can accurately predict daily runoff, while the same approach fails in other catchments, indicating that the representation of climate inputs and/or catchment filter characteristics in the model structure needs further refinement to improve performance. We bolster this analysis by using Sparse Identification of Nonlinear Dynamics (a sparse symbolic regression technique) to uncover the governing equations that describe runoff processes in catchments where SVR performed well and in those where it performed poorly, thereby enabling inference about governing processes.
This provides a robust means of examining how catchment complexity influences runoff prediction skill, and represents a contribution towards the integration of data-driven inference and physically-based models.
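    The data-driven mapping from forcing to runoff can be sketched with a Gaussian-kernel ridge regression standing in for the SVR used in the study (function names, hyperparameters, and data are illustrative):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian kernel matrix between row-vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_runoff_model(forcing, runoff, lam=1e-3, gamma=0.5):
    """forcing: (n_days, n_vars), e.g. [precip, temp, pet]; runoff: (n_days,).
    Returns a predictor mapping new forcing rows to runoff."""
    K = rbf_kernel(forcing, forcing, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), runoff)
    return lambda Xnew: rbf_kernel(Xnew, forcing, gamma) @ alpha
```

    Like SVR, this learns a nonlinear catchment "filter" directly from forcing-runoff pairs; unlike SVR it has a closed-form solution, which keeps the sketch short.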

  13. A comparative study of multivariable robustness analysis methods as applied to integrated flight and propulsion control

    NASA Technical Reports Server (NTRS)

    Schierman, John D.; Lovell, T. A.; Schmidt, David K.

    1993-01-01

    Three multivariable robustness analysis methods are compared and contrasted. The focus of the analysis is on system stability and performance robustness to uncertainty in the coupling dynamics between two interacting subsystems. Of particular interest are interacting airframe and engine subsystems, and an example airframe/engine vehicle configuration is used to demonstrate these approaches. The singular value (SV) and structured singular value (SSV) analysis methods are compared to a method especially well suited for analysis of robustness to uncertainties in subsystem interactions. This approach is referred to here as the interacting subsystem (IS) analysis method. This method has been used previously to analyze airframe/engine systems, emphasizing the study of stability robustness. However, performance robustness is also investigated here, and a new measure of allowable uncertainty for acceptable performance robustness is introduced. The IS methodology does not require plant uncertainty models to measure the robustness of the system, and is shown to yield valuable information regarding the effects of subsystem interactions. In contrast, the SV and SSV methods allow for the evaluation of the robustness of the system to particular models of uncertainty, and do not directly indicate how the airframe (engine) subsystem interacts with the engine (airframe) subsystem.
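    The unstructured SV test is at heart the small-gain bound: uncertainty with norm below 1/σ̄(M(jω)) at every frequency cannot destabilize the loop. A minimal sketch of computing that margin from a state-space model (matrices here are illustrative, not an airframe/engine system):

```python
import numpy as np

def robustness_margin(A, B, C, D, omegas):
    """Smallest 1/sigma_max of M(jw) = C (jwI - A)^-1 B + D over omegas.
    By the small-gain theorem, norm-bounded uncertainty below this margin
    cannot destabilize the closed loop."""
    n = A.shape[0]
    margins = []
    for w in omegas:
        M = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        smax = np.linalg.svd(M, compute_uv=False)[0]
        margins.append(1.0 / smax)
    return min(margins)
```

    The SSV refines this by exploiting the block structure of the uncertainty, and the IS method dispenses with the uncertainty model entirely, as the abstract notes.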

  14. TH-E-201-01: Diagnostic Radiology Residents Physics Curriculum and Updates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sensakovic, W.

    The ABR Core Examination stresses integrating physics into real-world clinical practice and, accordingly, has shifted its focus from passive recall of facts to active application of physics principles. Physics education of radiology residents poses a challenge. The traditional method of didactic lectures alone is insufficient, yet it is difficult to incorporate physics teaching consistently into clinical rotations due to time constraints. Faced with this challenge, diagnostic medical physicists who teach radiology residents have been thinking about how to adapt their teaching to the new paradigm: what to teach, and how to meet the expectations of radiology residents and residency programs. The proposed lecture attempts to address these questions. The diagnostic radiology residents physics curriculum newly developed by the AAPM Imaging Physics Curricula Subcommittee will be reviewed. Initial experience with hands-on physics teaching will be discussed. A radiology resident who has taken the ABR Core Examination will share expectations of physics teaching from a resident's perspective. The lecture will help develop robust educational approaches to prepare radiology residents for safer and more effective lifelong practice. Learning Objectives: (1) learn the updated physics requirements for radiology residents; (2) pursue effective approaches to teaching physics to radiology residents; (3) learn the expectations of physics teaching from the resident perspective. J. Zhang: This topic is partially supported by an RSNA Education Scholar Grant.

  15. APOLLO_NG - a probabilistic interpretation of the APOLLO legacy for AVHRR heritage channels

    NASA Astrophysics Data System (ADS)

    Klüser, L.; Killius, N.; Gesell, G.

    2015-04-01

    The cloud processing scheme APOLLO (AVHRR Processing scheme Over cLouds, Land and Ocean) has been in use for cloud detection and cloud property retrieval since the late 1980s. The physics of the APOLLO scheme still forms the backbone of a range of cloud detection algorithms for AVHRR (Advanced Very High Resolution Radiometer) heritage instruments. The APOLLO_NG (APOLLO_NextGeneration) cloud processing scheme is a probabilistic interpretation of the original APOLLO method. While building upon the physical principles that served well in the original APOLLO, a couple of additional variables have been introduced in APOLLO_NG. Cloud detection is not performed as a binary yes/no decision based on these physical principles but is expressed as a cloud probability for each satellite pixel. Consequently, the outcome of the algorithm can be tuned from clear-confident to cloud-confident depending on the purpose. The probabilistic approach allows retrieval not only of the cloud properties (optical depth, effective radius, cloud top temperature and cloud water path) but also of their uncertainties. APOLLO_NG is designed as a standalone cloud retrieval method robust enough for operational near-real-time use and for application to large amounts of historical satellite data. Thus, the radiative transfer solution is approximated by the same two-stream approach that was used in the original APOLLO. This keeps the algorithm robust enough to be applied to a wide range of sensors without sensor-specific tuning. Moreover, it allows online calculation of the radiative transfer (i.e., within the retrieval algorithm), giving rise to a detailed probabilistic treatment of cloud variables. This study presents the algorithm for cloud detection and cloud property retrieval together with the physical principles from the APOLLO legacy on which it is based. Furthermore, example results from NOAA-18 are presented.
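    The step from a binary cloud test to a per-pixel probability can be caricatured by replacing a hard brightness-temperature threshold with a logistic transition. The threshold and width below are invented for illustration and are not APOLLO_NG's actual formulation:

```python
import numpy as np

def cloud_probability(bt11, threshold=265.0, width=3.0):
    """Soft cloud test: colder 11-micron brightness temperature -> higher
    cloud probability; `width` (K) controls the sharpness of the transition.
    A hard threshold is recovered in the limit width -> 0."""
    return 1.0 / (1.0 + np.exp((bt11 - threshold) / width))
```

    Thresholding the resulting probability at different levels is what lets the output be tuned from clear-confident to cloud-confident, as described above.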

  16. Multi-scale and multi-physics simulations using the multi-fluid plasma model

    DTIC Science & Technology

    2017-04-25

    The simulation uses 512 second-order elements with Bz = 1.0, Te = Ti = 0.01, ui = ue = 0, and ne = ni = 1.0 + e^(-10(x-6)^2) (cf. Baboolal, Math. and Comp. Sim. 55). Summary: the blended finite element method (BFEM) is presented, combining a DG spatial discretization with explicit Runge-Kutta time integration for the ions and neutrals (i+, n) and a CG spatial discretization with implicit Crank-Nicolson for the electrons and fields (e-). DG captures shocks and discontinuities; CG is efficient and robust. (Distribution Clearance No. 17211.)

  17. Effects of heat conduction on artificial viscosity methods for shock capturing

    DOE PAGES

    Cook, Andrew W.

    2013-12-01

    Here we investigate the efficacy of artificial thermal conductivity for shock capturing. The conductivity model is derived from the artificial bulk and shear viscosities such that stagnation enthalpy remains constant across shocks. By thus fixing the Prandtl number, more physical shock profiles are obtained, albeit on a larger scale. The conductivity model does not contain any empirical constants. It increases the net dissipation of a computational algorithm but is found to better preserve symmetry and produce more robust solutions for strong-shock problems.
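    The constant-Prandtl construction amounts to choosing the artificial conductivity from the artificial viscosity through the Prandtl-number definition; schematically (symbols are generic, not necessarily the paper's):

```latex
\mathrm{Pr}^{*} \;=\; \frac{c_{p}\,\mu^{*}}{k^{*}} \;=\; \text{const.}
\qquad\Longrightarrow\qquad
k^{*} \;=\; \frac{c_{p}\,\mu^{*}}{\mathrm{Pr}^{*}}
```

    so that wherever the artificial viscosity activates at a shock, a proportionate artificial heat flux activates with it, holding stagnation enthalpy flat across the profile.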

  18. A physics-based fractional order model and state of energy estimation for lithium ion batteries. Part II: Parameter identification and state of energy estimation for LiFePO4 battery

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello

    2017-11-01

    State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state-of-energy estimation method, combined with a physical model parameter identification method, is proposed to achieve accurate battery state estimation under different operating conditions and at different aging stages. A physics-based fractional-order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameters automatically at different aging stages, a multi-step model parameter identification method based on lexicographic optimization is designed specifically for electric vehicle operating conditions. As the available battery energy changes with the applied load current profile, the relationship between the remaining energy loss and the state of charge, the average current, and the average squared current is modeled. The SOE under different operating conditions and at different aging stages is estimated based on an adaptive fractional-order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for online electric vehicle applications.
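    The AFEKF operates on the fractional-order model and is beyond a short sketch; as a much-reduced illustration of the filtering loop only, here is an ordinary EKF on a one-state coulomb-counting model with an invented linear OCV curve (capacity, noise levels, and the OCV function are all assumptions):

```python
import numpy as np

CAPACITY_AS = 3600.0           # cell capacity in ampere-seconds (assumed)

def ocv(soc):                  # invented open-circuit-voltage curve
    return 3.0 + 1.2 * soc

def ekf_soc(currents, voltages, dt=1.0, soc0=0.5, P=1.0, q=1e-7, r=1e-3):
    """EKF combining coulomb counting (predict) with voltage (update)."""
    soc = soc0
    for i_k, v_k in zip(currents, voltages):
        # predict: coulomb counting, discharge current positive
        soc = soc - i_k * dt / CAPACITY_AS
        P = P + q
        # update: measurement h(soc) = ocv(soc), Jacobian H = d ocv / d soc
        H = 1.2
        K = P * H / (H * P * H + r)
        soc = soc + K * (v_k - ocv(soc))
        P = (1 - K * H) * P
    return soc
```

    The adaptive, fractional-order machinery of the paper replaces this fixed one-state model, but the predict/update structure is the same.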

  19. Learning Multiple Band-Pass Filters for Sleep Stage Estimation: Towards Care Support for Aged Persons

    NASA Astrophysics Data System (ADS)

    Takadama, Keiki; Hirose, Kazuyuki; Matsushima, Hiroyasu; Hattori, Kiyohiko; Nakajima, Nobuo

    This paper proposes a sleep stage estimation method that can provide an accurate estimate for each person without attaching any devices to the human body. In particular, our method learns appropriate multiple band-pass filters to extract the specific wave pattern of the heartbeat, which is required to estimate the sleep stage. For an accurate estimation, this paper employs a Learning Classifier System (LCS) as the data-mining technique and extends it to estimate the sleep stage. Extensive experiments on five subjects in mixed states of health confirm the following implications: (1) the proposed method provides more accurate sleep stage estimation than the conventional method, and (2) the sleep stage estimated by the proposed method is robust regardless of the physical condition of the subject.
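    The method learns its band-pass filters per person; the basic extraction step can be shown with a fixed Butterworth band-pass around a typical resting-heartbeat band (cutoff values are illustrative, not the learned ones):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, fs, low_hz=0.8, high_hz=2.0, order=4):
    """Keep roughly the resting heartbeat band (~48-120 bpm, assumed).
    fs is the sampling rate in Hz; filtfilt gives zero-phase output."""
    b, a = butter(order, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)
```

    Learning then amounts to searching over (low_hz, high_hz) pairs, with the LCS scoring each candidate band by how well the extracted component predicts the sleep stage.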

  20. Robust optimization in lung treatment plans accounting for geometric uncertainty.

    PubMed

    Zhang, Xin; Rong, Yi; Morrill, Steven; Fang, Jian; Narayanasamy, Ganesh; Galhardo, Edvaldo; Maraboyina, Sanjay; Croft, Christopher; Xia, Fen; Penagaricano, Jose

    2018-05-01

    Robust optimization generates scenario-based plans by a minimax optimization method to find the optimal scenario for the trade-off between target coverage robustness and organ-at-risk (OAR) sparing. In this study, 20 lung cancer patients with tumors located in various anatomical regions within the lungs were selected, and robustly optimized photon treatment plans, including intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) plans, were generated. Plan robustness was analyzed using perturbed doses with setup error boundaries of ±3 mm in the anterior/posterior (AP), ±3 mm in the left/right (LR), and ±5 mm in the inferior/superior (IS) directions from isocenter. Perturbed doses for D99, D98, and D95 were computed from six shifted-isocenter plans to evaluate plan robustness. A dosimetric study was performed to compare the internal target volume-based robust optimization plans (ITV-IMRT and ITV-VMAT) and conventional PTV margin-based plans (PTV-IMRT and PTV-VMAT). The dosimetric comparison parameters were: ITV target mean dose (Dmean), R95 (D95/Dprescription), Paddick's conformity index (CI), homogeneity index (HI), monitor units (MU), and OAR doses including lung (Dmean, V20Gy and V15Gy), chest wall, heart, esophagus, and maximum cord doses. A comparison of optimization results showed that the robust optimization plans had better ITV dose coverage, better CI, worse HI, and lower OAR doses than the conventional PTV margin-based plans. Plan robustness evaluation showed that the perturbed doses D99, D98, and D95 all ensured that at least 99% of the ITV received 95% of the prescription dose. It was also observed that PTV margin-based plans had higher MU than robust optimization plans. The results also showed that robust optimization can generate plans that offer increased OAR sparing, especially for the normal lungs and OARs near or abutting the target.
Weak correlation was found between normal lung dose and target size, and no other correlation was observed in this study. © 2018 University of Arkansas for Medical Sciences. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  1. A Robust Cooperated Control Method with Reinforcement Learning and Adaptive H∞ Control

    NASA Astrophysics Data System (ADS)

    Obayashi, Masanao; Uchiyama, Shogo; Kuremoto, Takashi; Kobayashi, Kunikazu

    This study proposes a robust cooperated control method combining reinforcement learning with robust control. A remarkable characteristic of reinforcement learning is that it does not require a model formula; however, it does not guarantee the stability of the system. Robust control, on the other hand, guarantees stability and robustness but requires a model formula. We employ both the actor-critic method, a kind of reinforcement learning with a minimal amount of computation for controlling continuous-valued actions, and traditional robust control, namely H∞ control. The proposed method was compared with the conventional control method (the actor-critic alone) through computer simulation of controlling the angle and position of a crane system, and the simulation results showed the effectiveness of the proposed method.

  2. Solving a Higgs optimization problem with quantum annealing for machine learning.

    PubMed

    Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria

    2017-10-18

    The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using highly unerring but not completely perfect simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.
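    The Ising mapping described above can be made concrete in miniature: binary weights select weak classifiers, and the energy trades label correlation against inter-classifier redundancy. The coefficients and penalty structure below are illustrative rather than the paper's, and the tiny instance is brute-forced where the paper anneals:

```python
import numpy as np
from itertools import product

def train_ising_classifier(weak_outputs, labels, lam=0.5):
    """weak_outputs: (n_classifiers, n_events) in {-1,+1}; labels in {-1,+1}.
    Energy E(s) = -sum_i h_i s_i + lam * sum_{i<j} J_ij s_i s_j, with
    h_i = corr(classifier_i, labels) rewarding accuracy and
    J_ij = corr(classifier_i, classifier_j) penalizing redundancy.
    Brute force over s in {0,1}^n; annealing replaces this at scale."""
    n, m = weak_outputs.shape
    h = weak_outputs @ labels / m
    J = weak_outputs @ weak_outputs.T / m
    best, best_e = None, np.inf
    for bits in product([0, 1], repeat=n):
        s = np.array(bits)
        # s_i^2 = s_i for binary s, so subtract the diagonal before halving
        e = -h @ s + lam * 0.5 * (s @ J @ s - np.sum(np.diag(J) * s))
        if e < best_e:
            best, best_e = s, e
    return best

```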

  3. Solving a Higgs optimization problem with quantum annealing for machine learning

    NASA Astrophysics Data System (ADS)

    Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria

    2017-10-01

    The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using highly unerring but not completely perfect simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.

  4. Calibration of the LHAASO-KM2A electromagnetic particle detectors using charged particles within the extensive air showers

    NASA Astrophysics Data System (ADS)

    Lv, Hongkui; He, Huihai; Sheng, Xiangdong; Liu, Jia; Chen, Songzhan; Liu, Ye; Hou, Chao; Zhao, Jing; Zhang, Zhongquan; Wu, Sha; Wang, Yaping; Lhaaso Collaboration

    2018-07-01

    In the Large High Altitude Air Shower Observatory (LHAASO), a one-square-kilometer array (KM2A) with 5242 electromagnetic particle detectors (EDs) and 1171 muon detectors (MDs) is designed to study ultra-high-energy gamma-ray astronomy and cosmic-ray physics. The remote site and the large number of detectors demand a robust and automatic calibration procedure. In this paper, a self-calibration method that relies on the measurement of charged particles within extensive air showers is proposed. The method is fully validated by Monte Carlo simulation and successfully applied in a KM2A prototype array experiment. Experimental results show that the self-calibration method can determine the detector time-offset constants at the sub-nanosecond level and the number density of particles collected by each ED with an accuracy of a few percent, which is adequate to meet the physical requirements of the LHAASO experiment. This software calibration also offers an ideal way to monitor detector performance in real time for next-generation ground-based EAS experiments covering areas at the square-kilometer scale and beyond.
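    A toy version of the self-calibration idea, assuming a plane shower front and invented geometry: fit the front per event, then read each detector's time offset from its mean residual. The real procedure and shower model are more elaborate, and offsets are determined only up to components degenerate with the per-event plane fit:

```python
import numpy as np

def calibrate_offsets(positions, event_times, n_iter=5):
    """positions: (n_det, 2) detector x,y in meters.
    event_times: (n_events, n_det) arrival times in ns; each event is a plane
    front t = a*x + b*y + t0 plus fixed per-detector offsets.
    Fit the plane per event, then average residuals per detector."""
    offsets = np.zeros(len(positions))
    A = np.column_stack([positions, np.ones(len(positions))])
    for _ in range(n_iter):
        corrected = event_times - offsets
        resid = np.empty_like(corrected)
        for k, t in enumerate(corrected):
            coef, *_ = np.linalg.lstsq(A, t, rcond=None)
            resid[k] = t - A @ coef
        offsets += resid.mean(axis=0)
        offsets -= offsets.mean()   # offsets defined up to a global constant
    return offsets
```

    Because the showers themselves supply the timing reference, no external calibration source is needed, which is the point of the method for a remote, large array.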

  5. TH-E-201-00: Teaching Radiology Residents: What, How, and Expectation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The ABR Core Examination stresses integrating physics into real-world clinical practice and, accordingly, has shifted its focus from passive recall of facts to active application of physics principles. Physics education of radiology residents poses a challenge. The traditional method of didactic lectures alone is insufficient, yet it is difficult to incorporate physics teaching consistently into clinical rotations due to time constraints. Faced with this challenge, diagnostic medical physicists who teach radiology residents have been thinking about how to adapt their teaching to the new paradigm: what to teach, and how to meet the expectations of radiology residents and residency programs. The proposed lecture attempts to address these questions. The diagnostic radiology residents physics curriculum newly developed by the AAPM Imaging Physics Curricula Subcommittee will be reviewed. Initial experience with hands-on physics teaching will be discussed. A radiology resident who has taken the ABR Core Examination will share expectations of physics teaching from a resident's perspective. The lecture will help develop robust educational approaches to prepare radiology residents for safer and more effective lifelong practice. Learning Objectives: (1) learn the updated physics requirements for radiology residents; (2) pursue effective approaches to teaching physics to radiology residents; (3) learn the expectations of physics teaching from the resident perspective. J. Zhang: This topic is partially supported by an RSNA Education Scholar Grant.

  6. More About Robustness of Coherence

    NASA Astrophysics Data System (ADS)

    Li, Pi-Yu; Liu, Feng; Xu, Yan-Qin; La, Dong-Sheng

    2018-07-01

    Quantum coherence is an important physical resource in quantum computation and quantum information processing. In this paper, the distribution of the robustness of coherence in multipartite quantum systems is considered. It is shown that additivity of the robustness of coherence does not always hold for general quantum states, but that the robustness of coherence is decreasing under partial trace for any bipartite quantum system. The ordering of states under the coherence measures RoC, the l1-norm of coherence C_l1, and the relative entropy of coherence Cr is also discussed.
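    For reference, the robustness of coherence is usually defined as the minimal mixing with another state that destroys all coherence; in the standard notation (this is the common definition in the literature, not quoted from this record):

```latex
\mathrm{RoC}(\rho) \;=\; \min_{\tau}\Bigl\{\, s \ge 0 \;:\; \tfrac{\rho + s\,\tau}{1+s} \in \mathcal{I} \,\Bigr\}
```

    where τ ranges over all density matrices and 𝓘 denotes the set of incoherent states; partial trace maps 𝓘 into itself, which is what makes a monotonicity statement under partial trace natural.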

  7. Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Rogers, Adam; Safi-Harb, Samar; Fiege, Jason

    2015-08-01

    The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.

  8. Trust-region based return mapping algorithm for implicit integration of elastic-plastic constitutive models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, Brian; Scherzinger, William

    2017-01-19

    Here, a new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.
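    As a sketch of the idea only (not the authors' algorithm): a one-dimensional linear-hardening return mapping whose consistency condition is closed with SciPy's trust-region reflective least-squares solver. The material constants are invented:

```python
import numpy as np
from scipy.optimize import least_squares

E, H, SIGMA_Y = 200e3, 10e3, 250.0   # MPa: modulus, hardening, yield (invented)

def return_mapping(sigma_trial, eps_p_old):
    """Find the plastic multiplier dgamma closing the consistency condition
    f = |sigma_trial| - E*dgamma - (SIGMA_Y + H*(eps_p_old + dgamma)) = 0."""
    def residual(dgamma):
        f = abs(sigma_trial) - E * dgamma - (SIGMA_Y + H * (eps_p_old + dgamma))
        return np.atleast_1d(f)
    # trust-region reflective method; dgamma constrained non-negative
    sol = least_squares(residual, x0=1e-6, bounds=(0.0, np.inf), method="trf")
    return sol.x[0]
```

    For this linear case the answer is (|σ_trial| − σ_Y)/(E + H) in closed form; the trust-region machinery earns its keep on non-quadratic surfaces like Hosford's, where a plain Newton-Raphson return mapping can diverge.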

  9. Trust-region based return mapping algorithm for implicit integration of elastic-plastic constitutive models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, Brian T.; Scherzinger, William M.

    2017-01-19

    A new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.

  10. The atmosphere, the p-factor and the bright visible circumstellar environment of the prototype of classical Cepheids δ Cep

    NASA Astrophysics Data System (ADS)

    Nardetto, Nicolas; Poretti, Ennio; Mérand, Antoine; Anderson, Richard I.; Fokin, Andrei; Fouqué, Pascal; Gallenne, Alexandre; Gieren, Wolfgang; Graczyk, Dariusz; Kervella, Pierre; Mathias, Philippe; Mourard, Denis; Neilson, Hilding; Pietrzynski, Grzegorz; Pilecki, Bogumil; Rainer, Monica; Storm, Jesper

    2017-09-01

    Even ≃ 16000 cycles after its discovery by John Goodricke in 1783, δ Cep, the prototype of classical Cepheids, is still studied intensively in order to better understand its atmospheric dynamical structure and its environment. Using HARPS-N spectroscopic measurements, we have measured the atmospheric velocity gradient of δ Cep for the first time, and we confirm the decomposition of the projection factor, a subtle physical quantity limiting the Baade-Wesselink (BW) method of distance determination. This decomposition clarifies the physics behind the projection factor and will be useful for interpreting the hundreds of p-factors that will come out of the next Gaia release. In addition, VEGA/CHARA interferometric observations of the star revealed a bright visible circumstellar environment contributing about 7% of the total flux. A better understanding of the physics of the pulsation and of the environment of Cepheids is necessary to improve the BW method of distance determination, a robust tool for reaching Cepheids in the Milky Way and beyond, in the Local Group.

  11. A meshless method for solving two-dimensional variable-order time fractional advection-diffusion equation

    NASA Astrophysics Data System (ADS)

    Tayebi, A.; Shekari, Y.; Heydari, M. H.

    2017-07-01

    Several physical phenomena, such as the transport of pollutants, energy, particles and many others, can be described by the well-known convection-diffusion equation, which is a combination of the diffusion and advection equations. In this paper, this equation is generalized with the concept of variable-order fractional derivatives. The generalized equation is called the variable-order time fractional advection-diffusion equation (V-OTFA-DE). An accurate and robust meshless method based on the moving least squares (MLS) approximation and a finite difference scheme is proposed for its numerical solution on two-dimensional (2-D) arbitrary domains. In the time domain the finite difference technique with a θ-weighted scheme is employed, and in the space domain the MLS approximation is employed, to obtain appropriate semi-discrete solutions. Since the newly developed method is a meshless approach, it does not require any background mesh structure to obtain semi-discrete solutions of the problem under consideration, and the numerical solutions are constructed entirely from a set of scattered nodes. The proposed method is validated on three different examples, including two benchmark problems and an applied problem of pollutant distribution in the atmosphere. In all such cases, the obtained results show that the proposed method is very accurate and robust. Moreover, a remarkable property of the proposed method, a so-called positive scheme, is observed in solving concentration transport phenomena.
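    The θ-weighted time discretization used here can be illustrated on a 1-D constant-order diffusion problem, with standard finite differences standing in for both the MLS spatial approximation and the variable-order fractional operator; this is a minimal sketch of the time-stepping idea only.

```python
import math

def solve_tridiag(a, b, c, d):
    """Thomas algorithm for a tridiagonal system (a: sub-, b: main-,
    c: super-diagonal, all length n; a[0] and c[-1] are ignored)."""
    n = len(d)
    c_, d_ = c[:], d[:]
    c_[0] = c[0] / b[0]
    d_[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * c_[i - 1]
        c_[i] = c[i] / m if i < n - 1 else 0.0
        d_[i] = (d[i] - a[i] * d_[i - 1]) / m
    x = d_[:]
    for i in range(n - 2, -1, -1):
        x[i] -= c_[i] * x[i + 1]
    return x

def theta_diffusion(u, steps, dt, dx, theta=0.5):
    """Advance u_t = u_xx with zero Dirichlet boundaries using the
    theta-weighted scheme: theta = 0 explicit, 0.5 Crank-Nicolson,
    1 fully implicit."""
    r = dt / dx ** 2
    n = len(u)
    a = [-theta * r] * n
    b = [1.0 + 2.0 * theta * r] * n
    c = [-theta * r] * n
    for _ in range(steps):
        rhs = []
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            rhs.append(u[i] + (1.0 - theta) * r * (left - 2.0 * u[i] + right))
        u = solve_tridiag(a, b, c, rhs)
    return u

# u(x, 0) = sin(pi x) on [0, 1] decays by exactly exp(-pi**2 t).
n = 49                         # interior points, so dx = 1/50
dx = 1.0 / (n + 1)
u0 = [math.sin(math.pi * (i + 1) * dx) for i in range(n)]
u = theta_diffusion(u0, steps=100, dt=1e-3, dx=dx)   # t = 0.1
```

    With θ = 0.5 (Crank-Nicolson) the scheme is second-order in time and unconditionally stable for this model problem.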

  12. Performance of the air2stream model that relates air and stream water temperatures depends on the calibration method

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam P.; Napiorkowski, Jaroslaw J.

    2018-06-01

    A number of physical and data-driven models have been proposed to evaluate stream water temperatures based on hydrological and meteorological observations. However, physical models require a large amount of information that is frequently unavailable, while data-based models ignore the physical processes. Recently the air2stream model has been proposed as an intermediate alternative: it is based on physical heat-budget processes, but is so simplified that it may be applied like a data-driven model. However, the price for simplicity is the need to calibrate eight parameters that, although they have some physical meaning, cannot be measured or evaluated a priori. As a result, the applicability and performance of the air2stream model for a particular stream rely on the efficiency of the calibration method. The original air2stream model uses an inefficient, 20-year-old approach: Particle Swarm Optimization with inertia weight. This study aims at finding an effective and robust calibration method for the air2stream model. Twelve different optimization algorithms are examined on six different streams from the northern USA (the states of Washington, Oregon and New York), Poland and Switzerland, located in high-mountain, hilly and lowland areas. It is found that the performance of the air2stream model depends significantly on the calibration method. Two algorithms lead to the best results for each considered stream. The air2stream model, calibrated with the chosen optimization methods, performs favorably against classical stream water temperature models. The MATLAB code of the air2stream model and the chosen calibration procedure (CoBiDE) are available as Supplementary Material on the Journal of Hydrology web page.
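    A minimal sketch of the baseline calibrator, Particle Swarm Optimization with inertia weight, is shown below on a hypothetical two-parameter air-to-water temperature model; the real air2stream model has eight parameters and a heat-budget structure not reproduced here.

```python
import random

def pso(objective, bounds, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal Particle Swarm Optimization with inertia weight w."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive (pbest) + social (gbest) pulls
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy calibration: fit T_water = a + b * T_air to synthetic data
# generated with a = 2.0, b = 0.6 (hypothetical values).
t_air = [5.0, 10.0, 15.0, 20.0, 25.0]
t_obs = [2.0 + 0.6 * t for t in t_air]
cost = lambda p: sum((p[0] + p[1] * t - o) ** 2
                     for t, o in zip(t_air, t_obs)) ** 0.5
best, err = pso(cost, bounds=[(-10.0, 10.0), (0.0, 2.0)])
```

    The study's point is precisely that more modern optimizers than this one can be substituted for the same calibration task.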

  13. Physically consistent data assimilation method based on feedback control for patient-specific blood flow analysis.

    PubMed

    Ii, Satoshi; Adib, Mohd Azrul Hisham Mohd; Watanabe, Yoshiyuki; Wada, Shigeo

    2018-01-01

    This paper presents a novel data assimilation method for patient-specific blood flow analysis based on feedback control theory, called the physically consistent feedback control-based data assimilation (PFC-DA) method. In the PFC-DA method, the signal, which is the residual error term of the velocity when comparing the numerical and reference measurement data, is cast as a source term in a Poisson equation for the scalar potential field that induces flow in a closed system. The pressure values at the inlet and outlet boundaries are recursively calculated from this scalar potential field. Hence, the flow field is physically consistent because it is driven by the calculated inlet and outlet pressures, without any artificial body forces. Although this PFC-DA method does not guarantee the optimal solution, in contrast with existing variational approaches only one additional Poisson equation for the scalar potential field is required, providing a remarkable improvement for a small additional computational cost at every iteration. Through numerical examples for 2D and 3D exact flow fields, with both noise-free and noisy reference data, as well as a blood flow analysis on a cerebral aneurysm using actual patient data, the robustness and accuracy of this approach are shown. Moreover, the feasibility of a patient-specific practical blood flow analysis is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Agreement between two cutoff points for physical activity and associated factors in young individuals☆

    PubMed Central

    Coledam, Diogo Henrique Constantino; Ferraiol, Philippe Fanelli; Pires, Raymundo; Ribeiro, Edinéia Aparecida Gomes; Ferreira, Marco Antonio Cabral; de Oliveira, Arli Ramos

    2014-01-01

    Objective: To analyze the agreement between two cutoff points for physical activity (300 and 420 minutes/week) and associated factors in youth. Methods: The study enrolled 738 adolescents from the city of Londrina, Paraná, Southern Brazil. The following variables were collected by a self-report questionnaire: presence of moderate to vigorous physical activity, gender, age, father's and mother's education level, with whom the adolescent lives, number of siblings, physical activity perception, participation in Physical Education classes, facilities available for physical activity practice, and sedentary behavior. Prevalences of physical activity under the two criteria were compared using the McNemar test, and agreement was analyzed by the Kappa index. Multivariate analysis was performed using Poisson regression with robust variance adjustment. Results: The prevalence of physical activity differed significantly between cutoffs: 22.3% for 300 minutes/week and 12.8% for 420 minutes/week (p<0.05), but the agreement was strong (k=0.82, p<0.001). The variables gender, father's education, physical activity perception, and sedentary behavior were associated with physical activity under both criteria. Participation in Physical Education classes and facilities available for physical activity practice were associated with physical activity only under the 300 minutes/week cutoff point. Conclusion: Caution is suggested regarding the use of cutoff points for physical activity in epidemiological studies, as they can result in differences in the prevalence of physical activity and its associated factors. PMID:25479852
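    The agreement statistic used in the study, Cohen's kappa between the two cutoff classifications, can be sketched as follows; the minutes/week values below are hypothetical, not the study data.

```python
def cohen_kappa(x, y):
    """Cohen's kappa for two binary classifications (lists of 0/1)."""
    n = len(x)
    po = sum(1 for a, b in zip(x, y) if a == b) / n   # observed agreement
    px, py = sum(x) / n, sum(y) / n
    pe = px * py + (1 - px) * (1 - py)                # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical weekly activity minutes for 10 adolescents, classified
# as "active" by the 300 and 420 minutes/week cutoffs.
minutes = [100, 320, 450, 200, 500, 310, 80, 430, 290, 600]
active_300 = [1 if m >= 300 else 0 for m in minutes]
active_420 = [1 if m >= 420 else 0 for m in minutes]
kappa = cohen_kappa(active_300, active_420)
# Prevalences differ (60% vs 40%) even though agreement is high,
# mirroring the study's finding.
```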

  15. Implementation of the Jacobian-free Newton-Krylov method for solving the first-order ice sheet momentum balance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salinger, Andy; Evans, Katherine J; Lemieux, Jean-Francois

    2011-01-01

    We have implemented the Jacobian-free Newton-Krylov (JFNK) method for solving the first-order ice sheet momentum equation in order to improve the numerical performance of the Community Ice Sheet Model (CISM), the land ice component of the Community Earth System Model (CESM). Our JFNK implementation is based on significant re-use of existing code. For example, our physics-based preconditioner uses the original Picard linear solver in CISM. For several test cases spanning a range of geometries and boundary conditions, our JFNK implementation is 1.84-3.62 times more efficient than the standard Picard solver in CISM. Importantly, this computational gain of JFNK over the Picard solver increases when refining the grid. Global convergence of the JFNK solver has been significantly improved by rescaling the equation for the basal boundary condition and through the use of an inexact Newton method. While a diverse set of test cases shows that our JFNK implementation is usually robust, for some problems it may fail to converge with increasing resolution (as does the Picard solver). Globalization through parameter continuation did not remedy this problem, and future work to improve robustness will explore a combination of Picard and JFNK and the use of homotopy methods.
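    The core trick of JFNK is replacing the Jacobian-vector product with a finite-difference directional derivative, so the Jacobian matrix is never formed; the Krylov solver only ever needs these products. A minimal sketch on a small nonlinear system (not the ice sheet momentum balance):

```python
def jfnk_matvec(F, u, v, eps=1e-7):
    """Jacobian-free approximation of J(u) @ v using the directional
    derivative J v ~ (F(u + eps * v) - F(u)) / eps, so the Jacobian
    matrix is never formed or stored."""
    Fu = F(u)
    Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(p - q) / eps for p, q in zip(Fp, Fu)]

# Small nonlinear system (a toy stand-in for the momentum balance):
# F(u) = [u0**2 + u1 - 3, u0 + u1**2 - 5].
F = lambda u: [u[0] ** 2 + u[1] - 3.0, u[0] + u[1] ** 2 - 5.0]
jv = jfnk_matvec(F, [1.0, 2.0], [1.0, 1.0])
# Analytic Jacobian at (1, 2) is [[2, 1], [1, 4]], so J v = [3, 5].
```

    In a full JFNK solver this matvec is handed to GMRES (or another Krylov method) inside an inexact Newton iteration, with preconditioning, as in the paper's use of the Picard operator.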

  16. Phylo_dCor: distance correlation as a novel metric for phylogenetic profiling.

    PubMed

    Sferra, Gabriella; Fratini, Federica; Ponzi, Marta; Pizzi, Elisabetta

    2017-09-05

    Elaboration of powerful methods to predict functional and/or physical protein-protein interactions from genome sequence is one of the main tasks of the post-genomic era. Phylogenetic profiling allows the prediction of protein-protein interactions at a whole-genome level in both Prokaryotes and Eukaryotes, and for this reason it is considered one of the most promising methods. Here, we propose an improvement of phylogenetic profiling that enables the handling of large genomic datasets and the inference of global protein-protein interactions. This method uses the distance correlation as a new measure of phylogenetic profile similarity. We constructed robust reference sets and developed Phylo-dCor, a parallelized version of the algorithm for calculating the distance correlation that makes it applicable to large genomic data. Using Saccharomyces cerevisiae and Escherichia coli genome datasets, we showed that Phylo-dCor outperforms phylogenetic profiling methods previously described that are based on mutual information and Pearson's correlation as measures of profile similarity. In this work, we constructed and assessed robust reference sets and propose the distance correlation as a measure for comparing phylogenetic profiles. To make it applicable to large genomic data, we developed Phylo-dCor, a parallelized version of the algorithm for calculating the distance correlation. Two R scripts that can be run on a wide range of machines are available upon request.
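    The proposed similarity measure, distance correlation, can be sketched directly from its definition via double-centered pairwise distance matrices. The two profiles below are hypothetical, and the parallelized Phylo-dCor implementation is in R, not Python.

```python
def distance_correlation(x, y):
    """Sample distance correlation (Szekely et al.) between two
    equal-length numeric vectors, such as phylogenetic profiles."""
    n = len(x)

    def double_centered(v):
        d = [[abs(v[i] - v[j]) for j in range(n)] for i in range(n)]
        row = [sum(r) / n for r in d]      # row means = column means
        grand = sum(row) / n
        return [[d[i][j] - row[i] - row[j] + grand for j in range(n)]
                for i in range(n)]

    A, B = double_centered(x), double_centered(y)
    dcov2 = sum(A[i][j] * B[i][j] for i in range(n) for j in range(n)) / n ** 2
    dvar_x = sum(a * a for r in A for a in r) / n ** 2
    dvar_y = sum(b * b for r in B for b in r) / n ** 2
    if dvar_x * dvar_y == 0.0:
        return 0.0
    return (dcov2 / (dvar_x * dvar_y) ** 0.5) ** 0.5

# Two hypothetical profiles (e.g. normalized similarity scores of two
# genes across six genomes); similar profiles give values near 1.
p1 = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0]
p2 = [1.0, 0.0, 1.0, 0.0, 0.0, 1.0]
dcor = distance_correlation(p1, p2)
```

    Unlike Pearson's correlation, distance correlation is zero only for independent variables and captures non-linear association, which is the motivation for adopting it here.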

  17. A statistically robust EEG re-referencing procedure to mitigate reference effect

    PubMed Central

    Lepage, Kyle Q.; Kramer, Mark A.; Chu, Catherine J.

    2014-01-01

    Background: The electroencephalogram (EEG) remains the primary tool for diagnosis of abnormal brain activity in clinical neurology and for in vivo recordings of human neurophysiology in neuroscience research. In EEG data acquisition, voltage is measured at positions on the scalp with respect to a reference electrode. When this reference electrode responds to electrical activity or artifact, all electrodes are affected. Successful analysis of EEG data often involves re-referencing procedures that modify the recorded traces and seek to minimize the impact of reference electrode activity upon functions of the original EEG recordings. New method: We provide a novel, statistically robust procedure that adapts a robust maximum-likelihood type estimator to the problem of reference estimation, reduces the influence of neural activity on the re-referencing operation, and maintains good performance in a wide variety of empirical scenarios. Results: The performance of the proposed and existing re-referencing procedures is validated in simulation and with examples of EEG recordings. To facilitate this comparison, channel-to-channel correlations are investigated theoretically and in simulation. Comparison with existing methods: The proposed procedure avoids using data contaminated by neural signal and remains unbiased in recording scenarios where physical references, the common average reference (CAR), and the reference estimation standardization technique (REST) are not optimal. Conclusion: The proposed procedure is simple, fast, and avoids the potential for substantial bias when analyzing low-density EEG data. PMID:24975291
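    As an illustration of the robust M-estimation idea (not the authors' exact estimator), a Huber-type iteratively reweighted estimate of a common reference value downweights an artifact-laden channel that would badly bias a plain common average reference:

```python
def huber_reference(samples, k=1.345, iters=50):
    """Huber M-estimate of a common reference value from one time
    sample across channels, via iteratively reweighted averaging.
    Illustrative only; not the paper's exact estimator."""
    mu = sorted(samples)[len(samples) // 2]        # start at the median
    for _ in range(iters):
        # robust scale from the median absolute deviation
        mad = sorted(abs(s - mu) for s in samples)[len(samples) // 2] or 1.0
        w = [1.0 if abs(s - mu) <= k * mad else k * mad / abs(s - mu)
             for s in samples]
        mu_new = sum(wi * si for wi, si in zip(w, samples)) / sum(w)
        if abs(mu_new - mu) < 1e-12:
            break
        mu = mu_new
    return mu

# Nine channels agree near 0.1; one carries a large artifact that
# would dominate a plain common average reference.
channels = [0.09, 0.10, 0.11, 0.10, 0.08, 0.12, 0.10, 0.09, 0.11, 25.0]
robust_ref = huber_reference(channels)
mean_ref = sum(channels) / len(channels)   # pulled far from 0.1
```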

  18. Tissue cell assisted fabrication of tubular catalytic platinum microengines

    NASA Astrophysics Data System (ADS)

    Wang, Hong; Moo, James Guo Sheng; Pumera, Martin

    2014-09-01

    We report a facile platform for mass production of robust self-propelled tubular microengines. Tissue cells extracted from fruits of banana and apple, Musa acuminata and Malus domestica, are used as the support on which a thin platinum film is deposited by means of physical vapor deposition. Upon sonication of the cells/Pt-coated substrate in water, microscrolls of highly uniform sizes are spontaneously formed. Tubular microengines fabricated with the fruit cell assisted method exhibit a fast motion of ~100 body lengths per second (~1 mm s-1). An extremely simple and affordable platform for mass production of the micromotors is crucial for the envisioned swarms of thousands and millions of autonomous micromotors performing biomedical and environmental remediation tasks. Electronic supplementary information (ESI) available: related video. See DOI: 10.1039/c4nr03720k

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, W.; Zhu, W. D.; Smith, S. A.

    While structural damage detection based on flexural vibration shapes, such as mode shapes and steady-state response shapes under harmonic excitation, has been well developed, little attention has been paid to detection based on longitudinal vibration shapes, which also contain damage information. This study formulates, for the first time, a slope vibration shape for damage detection in bars using longitudinal vibration shapes. To enhance noise robustness of the method, a slope vibration shape is transformed to a multiscale slope vibration shape in a multiscale domain using the wavelet transform, which has explicit physical implication, high damage sensitivity, and noise robustness. These advantages are demonstrated in numerical cases of damaged bars, and results show that multiscale slope vibration shapes can be used for identifying and locating damage in a noisy environment. A three-dimensional (3D) scanning laser vibrometer is used to measure the longitudinal steady-state response shape of an aluminum bar with damage due to reduced cross-sectional dimensions under harmonic excitation, and results show that the method can successfully identify and locate the damage. Slopes of longitudinal vibration shapes are shown to be suitable for damage detection in bars and have potential for applications in noisy environments.
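    The slope-shape idea can be sketched as follows: differentiate the measured longitudinal shape, then smooth it at several scales. A moving average is used here as a simple stand-in for the paper's wavelet transform, and the bar shape is synthetic.

```python
def slope_shape(u, dx):
    """Central-difference slope of a measured longitudinal shape."""
    return [(u[i + 1] - u[i - 1]) / (2.0 * dx)
            for i in range(1, len(u) - 1)]

def smooth(v, scale):
    """Moving average of half-width `scale`; a simple stand-in for
    the wavelet-based multiscale representation of the paper."""
    out = []
    for i in range(len(v)):
        lo, hi = max(0, i - scale), min(len(v), i + scale + 1)
        out.append(sum(v[lo:hi]) / (hi - lo))
    return out

# Synthetic static shape of a bar whose axial stiffness drops at
# x = 0.5: the slope jumps from 0.8 to 1.6 there.
n, dx = 101, 0.01
u = [0.8 * i * dx if i <= 50 else 0.4 + 1.6 * (i - 50) * dx
     for i in range(n)]
s = smooth(slope_shape(u, dx), scale=2)
# Damage appears as a localized jump in the smoothed slope shape,
# while smoothing suppresses measurement noise.
```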

  20. Education Research in Physical Therapy: Visions of the Possible.

    PubMed

    Jensen, Gail M; Nordstrom, Terrence; Segal, Richard L; McCallum, Christine; Graham, Cecilia; Greenfield, Bruce

    2016-12-01

    Education research has been labeled the "hardest science" of all, given the challenges of teaching and learning in an environment encompassing a mixture of social interactions, events, and problems, coupled with a persistent belief that education depends more on common sense than on disciplined knowledge and skill. The American Educational Research Association specifies that education research, as a scientific field of study, examines the teaching and learning processes that shape educational outcomes across settings, and that learning takes place throughout a person's life. The complexity of learning and learning environments requires not only a diverse array of research methods but also a community of education researchers committed to exploring critical questions in the education of physical therapists. Although basic science research and clinical research in physical therapy have continued to expand through growth in the numbers of funded physical therapist researchers, the profession still lacks a robust and vibrant community of education researchers. In this perspective article, the American Council of Academic Physical Therapy Task Force on Education Research proposes a compelling rationale for building a much-needed foundation for education research in physical therapy, including a set of recommendations for immediate action. © 2016 American Physical Therapy Association.

  1. [Occupation-, transportation- and leisure-related physical activity: gender inequalities in Santander, Colombia].

    PubMed

    Hormiga-Sánchez, Claudia M; Alzate-Posada, Martha L; Borrell, Carme; Palència, Laia; Rodríguez-Villamizar, Laura A; Otero-Wandurraga, Johanna A

    2016-04-01

    Objectives: To estimate the prevalence of occupation-, transportation- and leisure-related physical activity and its compliance with recommendations, and to explore its association with demographic and socioeconomic variables in men and women of the Department of Santander (Colombia). Methods: The sample consisted of 2421 people between 15 and 64 years of age, participants in the Risk Factors for Chronic Diseases of Santander cross-sectional study, conducted in 2010. The Global Physical Activity Questionnaire was used for data collection. Age-adjusted prevalence ratios were calculated, and multivariate analysis models were built by sex using robust Poisson regression. Results: The prevalence of occupational and leisure physical activity and compliance with recommendations were lower in women. The sexual division of labor and a low socioeconomic level negatively influenced physical activity in women, limiting the possibility of practice for those principally engaged in unpaid work at home. Young or single men and those living in higher socioeconomic areas were more likely to practice physical activity in leisure time and to meet recommendations. Conclusion: Physical activity surveillance and related public policies should take into account the inequalities between the practice of men and women related to their socioeconomic conditions and the sexual division of labor.

  2. Combined Monte Carlo/torsion-angle molecular dynamics for ensemble modeling of proteins, nucleic acids and carbohydrates.

    PubMed

    Zhang, Weihong; Howell, Steven C; Wright, David W; Heindel, Andrew; Qiu, Xiangyun; Chen, Jianhan; Curtis, Joseph E

    2017-05-01

    We describe a general method to use Monte Carlo simulation followed by torsion-angle molecular dynamics simulations to create ensembles of structures to model a wide variety of soft-matter biological systems. Our particular emphasis is on modeling low-resolution small-angle scattering and reflectivity structural data. We provide examples of this method applied to HIV-1 Gag protein and derived fragment proteins, TraI protein, linear B-DNA, a nucleosome core particle, and a glycosylated monoclonal antibody. This procedure will enable a large community of researchers to model low-resolution experimental data with greater accuracy by using robust physics-based simulation and sampling methods, which are a significant improvement over the traditional methods used to interpret such data. Published by Elsevier Inc.

  3. Robust surface reconstruction by design-guided SEM photometric stereo

    NASA Astrophysics Data System (ADS)

    Miyamoto, Atsushi; Matsuse, Hiroki; Koutaki, Gou

    2017-04-01

    We present a novel approach that addresses the blind reconstruction problem in scanning electron microscope (SEM) photometric stereo for complicated semiconductor patterns to be measured. In our previous work, we developed a bootstrapping de-shadowing and self-calibration (BDS) method, which automatically calibrates the parameter of the gradient measurement formulas and resolves shadowing errors for estimating an accurate three-dimensional (3D) shape and underlying shadowless images. Experimental results on 3D surface reconstruction demonstrated the significance of the BDS method for simple shapes, such as an isolated line pattern. However, we found that complicated shapes, such as line-and-space (L&S) and multilayered patterns, produce deformed and inaccurate measurement results. This problem is due to brightness fluctuations in the SEM images, which are mainly caused by the energy fluctuations of the primary electron beam, variations in the electronic expanse inside a specimen, and electrical charging of specimens. Although these are essential difficulties encountered in SEM photometric stereo, it is difficult to model accurately all the complicated physical phenomena of electronic behavior. We improved the robustness of the surface reconstruction in order to deal with these practical difficulties for complicated shapes. Here, design data are useful clues as to the pattern layout and layer information of integrated semiconductors. We used the design data as a guide of the measured shape and incorporated a geometrical constraint term to evaluate the difference between the measured and designed shapes into the objective function of the BDS method. Because the true shape does not necessarily correspond to the designed one, we use an iterative scheme to develop proper guide patterns and a 3D surface that provides both a less distorted and more accurate 3D shape after convergence. Extensive experiments on real image data demonstrate the robustness and effectiveness of our method.

  4. Monolithic multigrid method for the coupled Stokes flow and deformable porous medium system

    NASA Astrophysics Data System (ADS)

    Luo, P.; Rodrigo, C.; Gaspar, F. J.; Oosterlee, C. W.

    2018-01-01

    The interaction between fluid flow and a deformable porous medium is a complicated multi-physics problem, which can be described by a coupled model based on the Stokes and poroelastic equations. A monolithic multigrid method together with either a coupled Vanka smoother or a decoupled Uzawa smoother is employed as an efficient numerical technique for the linear discrete system obtained by finite volumes on staggered grids. A special feature of our modeling approach is that at the interface of the fluid and poroelastic medium, two unknowns from the different subsystems are defined at the same grid point. We propose a special discretization at and near the points on the interface, which combines the approximation of the governing equations and the considered interface conditions. In the decoupled Uzawa smoother, Local Fourier Analysis (LFA) helps us select optimal values of the relaxation parameter. To implement the monolithic multigrid method, grid partitioning is used to deal with the interface updates when communication is required between two subdomains. Numerical experiments show that the proposed numerical method has an excellent convergence rate. The efficiency and robustness of the method are confirmed in numerical experiments with typically small realistic values of the physical coefficients.

  5. 24th Annual National Test and Evaluation Conference

    DTIC Science & Technology

    2008-02-28

    [Only fragmentary slide text was extracted for this record: robust design concepts (LSL/USL specification limits) and simulated terrain physics for vehicle performance (soil strength, vegetation density, longitudinal and lateral force, traction, resistance). © 2008 Air Academy Associates, LLC.]

  6. Rock physics model-based prediction of shear wave velocity in the Barnett Shale formation

    NASA Astrophysics Data System (ADS)

    Guo, Zhiqi; Li, Xiang-Yang

    2015-06-01

    Predicting S-wave velocity is important for reservoir characterization and fluid identification in unconventional resources. A rock physics model-based method is developed for estimating pore aspect ratio and predicting shear wave velocity Vs from the information of P-wave velocity, porosity and mineralogy in a borehole. Statistical distribution of pore geometry is considered in the rock physics models. In the application to the Barnett formation, we compare the high-frequency self-consistent approximation (SCA) method, which corresponds to isolated pore spaces, and the low-frequency SCA-Gassmann method, which describes well-connected pore spaces. Inversion results indicate that, compared to the surroundings, the Barnett Shale shows less fluctuation in the pore aspect ratio in spite of complex constituents in the shale. The high-frequency method provides a more robust and accurate prediction of Vs for all three intervals in the Barnett formation, while the low-frequency method collapses for the Barnett Shale interval. A possible cause for this discrepancy is that poor in situ pore connectivity and low permeability make well-log sonic frequencies act as high frequencies and thus invalidate the low-frequency assumption of the Gassmann theory. In comparison, for the overlying Marble Falls and underlying Ellenburger carbonates, both the high- and low-frequency methods predict Vs with reasonable accuracy, which may reveal that sonic frequencies are within the transition frequency zone due to higher pore connectivity in the surroundings.
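    The low-frequency limit invoked here is the Gassmann relation, which substitutes the pore fluid into the dry-rock bulk modulus while leaving the shear modulus unchanged. A sketch with hypothetical shale-like moduli (not the Barnett log values):

```python
import math

def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Low-frequency Gassmann saturated bulk modulus (all moduli in
    GPa, porosity as a fraction)."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

def velocities(k, mu, rho):
    """Vp and Vs in km/s from moduli in GPa and density in g/cm^3;
    in Gassmann theory the shear modulus mu is unchanged by the fluid."""
    vp = math.sqrt((k + 4.0 * mu / 3.0) / rho)
    vs = math.sqrt(mu / rho)
    return vp, vs

# Hypothetical shale-like values (not the Barnett data): dry rock
# 15 GPa, mineral mix 37 GPa, brine 2.25 GPa, porosity 8%.
k_sat = gassmann_ksat(k_dry=15.0, k_min=37.0, k_fl=2.25, phi=0.08)
vp, vs = velocities(k_sat, mu=12.0, rho=2.5)
```

    The fluid stiffens the bulk modulus (k_dry < k_sat < k_min) but leaves Vs tied to the dry shear modulus, which is exactly the assumption that breaks down when pores are poorly connected at sonic frequencies.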

  7. RNA-ID, a highly sensitive and robust method to identify cis-regulatory sequences using superfolder GFP and a fluorescence-based assay.

    PubMed

    Dean, Kimberly M; Grayhack, Elizabeth J

    2012-12-01

    We have developed a robust and sensitive method, called RNA-ID, to screen for cis-regulatory sequences in RNA using fluorescence-activated cell sorting (FACS) of yeast cells bearing a reporter in which expression of both superfolder green fluorescent protein (GFP) and yeast codon-optimized mCherry red fluorescent protein (RFP) is driven by the bidirectional GAL1,10 promoter. This method recapitulates previously reported progressive inhibition of translation mediated by increasing numbers of CGA codon pairs, and restoration of expression by introduction of a tRNA with an anticodon that base pairs exactly with the CGA codon. The method also reproduces the effects of paromomycin and context on stop codon read-through. Five key features of this method contribute to its effectiveness as a selection for regulatory sequences: a greater than 250-fold dynamic range, a quantitative and dose-dependent response to known inhibitory sequences, exquisite resolution that allows nearly complete physical separation of distinct populations, and a reproducible signal between different cells transformed with the identical reporter, all coupled with simple ligation-independent cloning methods to create large libraries. Moreover, we provide evidence that there are sequences within a 9-nt library that cause reduced GFP fluorescence, suggesting that novel cis-regulatory sequences can be found even in this short sequence space. This method is widely applicable to the study of both RNA-mediated and codon-mediated effects on expression.

  8. A study of swing-curve physics in diffraction-based overlay

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Kaustuve; den Boef, Arie; Storms, Greet; van Heijst, Joost; Noot, Marc; An, Kevin; Park, Noh-Kyoung; Jeon, Se-Ra; Oh, Nang-Lyeom; McNamara, Elliott; van de Mast, Frank; Oh, SeungHwa; Lee, Seung Yoon; Hwang, Chan; Lee, Kuntack

    2016-03-01

    With the increase of process complexity in advanced nodes, the requirements of process robustness in overlay metrology continue to tighten. Especially with the introduction of newer materials in the film stack, along with typical stack variations (thickness, optical properties, profile asymmetry, etc.), the signal-formation physics in diffraction-based overlay (DBO) becomes an important aspect to apply in overlay metrology target and recipe selection. To address the signal-formation physics, an effort is made to study the swing-curve phenomena across wavelengths and polarizations on production stacks, using both simulations and experimental DBO measurements. The results provide a wealth of information on target and recipe selection for robustness. Details from simulation and measurements will be reported in this technical publication.

  9. TH-E-201-02: Hands-On Physics Teaching of Residents in Diagnostic Radiology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J.

    The ABR Core Examination stresses integrating physics into real-world clinical practice and, accordingly, has shifted its focus from passive recall of facts to active application of physics principles. Physics education of radiology residents poses a challenge. The traditional method of didactic lectures alone is insufficient, yet it is difficult to incorporate physics teaching consistently into clinical rotations due to time constraints. Faced with this challenge, diagnostic medical physicists who teach radiology residents have been thinking about how to adapt their teaching to the new paradigm, what to teach, and how to meet the expectations of the radiology resident and the radiology residency program. The proposed lecture attempts to discuss the above questions. The newly developed diagnostic radiology resident physics curriculum by the AAPM Imaging Physics Curricula Subcommittee will be reviewed. Initial experience with hands-on physics teaching will be discussed. A radiology resident who has taken the ABR Core Examination will share the expectations of physics teaching from a resident's perspective. The lecture will help develop robust educational approaches to prepare radiology residents for safer and more effective lifelong practice. Learning Objectives: (1) Learn updated physics requirements for radiology residents; (2) Pursue effective approaches to teach physics to radiology residents; (3) Learn the expectations of physics teaching from a resident's perspective. J. Zhang: This topic is partially supported by an RSNA Education Scholar Grant.

  10. TH-E-201-03: A Radiology Resident’s Perspectives of Physics Teaching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Key, A.

    The ABR Core Examination stresses integrating physics into real-world clinical practice and, accordingly, has shifted its focus from passive recall of facts to active application of physics principles. Physics education of radiology residents poses a challenge. The traditional method of didactic lectures alone is insufficient, yet it is difficult to incorporate physics teaching consistently into clinical rotations due to time constraints. Faced with this challenge, diagnostic medical physicists who teach radiology residents have been thinking about how to adapt their teaching to the new paradigm, what to teach, and how to meet the expectations of the radiology resident and the radiology residency program. The proposed lecture attempts to discuss the above questions. The newly developed diagnostic radiology resident physics curriculum by the AAPM Imaging Physics Curricula Subcommittee will be reviewed. Initial experience with hands-on physics teaching will be discussed. A radiology resident who has taken the ABR Core Examination will share the expectations of physics teaching from a resident's perspective. The lecture will help develop robust educational approaches to prepare radiology residents for safer and more effective lifelong practice. Learning Objectives: (1) Learn updated physics requirements for radiology residents; (2) Pursue effective approaches to teach physics to radiology residents; (3) Learn the expectations of physics teaching from a resident's perspective. J. Zhang: This topic is partially supported by an RSNA Education Scholar Grant.

  11. A Robust Dynamic Heart-Rate Detection Algorithm Framework During Intense Physical Activities Using Photoplethysmographic Signals

    PubMed Central

    Song, Jiajia; Li, Dan; Ma, Xiaoyuan; Teng, Guowei; Wei, Jianming

    2017-01-01

    Dynamic accurate heart-rate (HR) estimation using a photoplethysmogram (PPG) during intense physical activities is always challenging due to corruption by motion artifacts (MAs). It is difficult to reconstruct a clean signal and extract HR from contaminated PPG. This paper proposes a robust HR-estimation algorithm framework that uses one-channel PPG and tri-axis acceleration data to reconstruct the PPG and calculate the HR based on features of the PPG and spectral analysis. Firstly, the signal is judged by the presence of MAs. Then, the spectral peaks corresponding to acceleration data are filtered from the periodogram of the PPG when MAs exist. Different signal-processing methods are applied based on the amount of remaining PPG spectral peaks. The main MA-removal algorithm (NFEEMD) includes the repeated single-notch filter and ensemble empirical mode decomposition. Finally, HR calibration is designed to ensure the accuracy of HR tracking. The NFEEMD algorithm was performed on the 23 datasets from the 2015 IEEE Signal Processing Cup Database. The average estimation errors were 1.12 BPM (12 training datasets), 2.63 BPM (10 testing datasets) and 1.87 BPM (all 23 datasets), respectively. The Pearson correlation was 0.992. The experiment results illustrate that the proposed algorithm is not only suitable for HR estimation during continuous activities, like slow running (13 training datasets), but also for intense physical activities with acceleration, like arm exercise (10 testing datasets). PMID:29068403
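
    The spectral step described above can be illustrated compactly. The sketch below is a hypothetical, stdlib-only caricature of the idea (not the paper's NFEEMD implementation): compute the PPG periodogram, suppress the bins where the acceleration spectrum peaks (the motion artifact), and read the heart rate off the strongest remaining peak.

```python
import cmath

def periodogram(x):
    """Magnitude spectrum of a real signal via a direct DFT (fine for small N)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n)))
            for k in range(n // 2)]

def hr_from_ppg(ppg, accel, fs, guard=1):
    """Estimate heart rate (BPM) from a PPG window: mask the spectral bins
    around the dominant acceleration peak (motion artifact), then take the
    strongest remaining PPG bin."""
    p, a = periodogram(ppg), periodogram(accel)
    ma_bin = max(range(1, len(a)), key=a.__getitem__)     # dominant motion peak
    for k in range(max(1, ma_bin - guard), min(len(p), ma_bin + guard + 1)):
        p[k] = 0.0                                        # suppress MA bins
    hr_bin = max(range(1, len(p)), key=p.__getitem__)
    return 60.0 * hr_bin * fs / len(ppg)                  # bin -> beats per minute
```

    With a synthetic 2 Hz pulse plus a stronger 3 Hz motion component that also appears on the accelerometer, the masked spectrum recovers 120 BPM rather than locking onto the artifact.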

  12. A Robust Absorbing Boundary Condition for Compressible Flows

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Jorgenson, Philip C. E.

    2005-01-01

    An absorbing non-reflecting boundary condition (NRBC) for practical computations in fluid dynamics and aeroacoustics is presented with theoretical proof. This paper is a continuation and improvement of a previous paper by the author. The absorbing NRBC technique is based on a first principle of non-reflection, which contains the essential physics that a plane wave solution of the Euler equations remains intact across the boundary. The technique is theoretically shown to work for a large class of finite volume approaches. When combined with the hyperbolic conservation laws, the NRBC is simple, robust and truly multi-dimensional; no additional implementation is needed beyond the prescribed physical boundary conditions. Several numerical examples in multi-dimensional spaces using two different finite volume schemes are given to demonstrate its robustness in practical computations. Limitations and remedies of the technique are also discussed.

  13. A Spatial Division Clustering Method and Low Dimensional Feature Extraction Technique Based Indoor Positioning System

    PubMed Central

    Mo, Yun; Zhang, Zhongzhao; Meng, Weixiao; Ma, Lin; Wang, Yao

    2014-01-01

    Indoor positioning systems based on the fingerprint method are widely used due to the large number of existing devices with a wide range of coverage. However, extensive positioning regions with a massive fingerprint database may cause high computational complexity and large error margins; clustering methods are therefore widely applied as a solution. Traditional clustering methods in positioning systems can only measure the similarity of the Received Signal Strength without considering the continuity of physical coordinates. In addition, outages of access points can result in asymmetric matching problems which severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method for clustering the fingerprint dataset subject to physical distance constraints. With the Genetic Algorithm and Support Vector Machine techniques, SDC can achieve higher coarse positioning accuracy than traditional clustering algorithms. In terms of fine localization, based on the Kernel Principal Component Analysis method, the proposed positioning system outperforms its counterparts based on other feature extraction methods in low dimensionality. Apart from balancing the online matching computational burden, the new positioning system exhibits advantageous performance on radio map clustering, and also shows better robustness and adaptability in the asymmetric matching problem. PMID:24451470
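
    The coarse-then-fine structure described above can be sketched in a few lines (names hypothetical; the actual SDC pipeline adds a Genetic Algorithm, SVM classification and KPCA, omitted here): reference fingerprints are grouped offline by physical proximity, a measured RSS vector first selects the nearest cluster, and only that cluster is searched for the best-matching fingerprint.

```python
def coarse_then_fine(clusters, rss):
    """clusters: {cluster_id: list of (rss_vector, (x, y))} reference fingerprints,
    grouped offline by *physical* proximity (the spatial-division idea).
    Coarse stage: pick the cluster whose mean RSS vector is nearest the measured
    rss. Fine stage: nearest fingerprint within that cluster only."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    def centroid(points):
        vecs = [v for v, _ in points]
        return [sum(col) / len(col) for col in zip(*vecs)]
    cid = min(clusters, key=lambda c: d2(centroid(clusters[c]), rss))
    _, pos = min(clusters[cid], key=lambda p: d2(p[0], rss))
    return pos
```

    Restricting the fine search to one cluster is what keeps the online matching burden balanced as the radio map grows.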

  14. The CE/SE Method: a CFD Framework for the Challenges of the New Millennium

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Yu, Sheng-Tao

    2001-01-01

    The space-time conservation element and solution element (CE/SE) method, which originated and is continuously being developed at NASA Glenn Research Center, is a high-resolution, genuinely multidimensional and unstructured-mesh compatible numerical method for solving conservation laws. Since its inception in 1991, the CE/SE method has been used to obtain highly accurate numerical solutions for 1D, 2D and 3D flow problems involving shocks, contact discontinuities, acoustic waves, vortices, shock/acoustic-wave/vortex interactions, shock/boundary-layer interactions and chemical reactions. Without the aid of preconditioning or other special techniques, it has been applied to both steady and unsteady flows with speeds ranging from Mach 0.00288 to 10. In addition, the method has unique features that allow for (i) the use of very simple non-reflecting boundary conditions, and (ii) a unified wall boundary treatment for viscous and inviscid flows. The CE/SE method was developed with the conviction that, with a solid foundation in physics, a robust, coherent and accurate numerical framework can be built without involving overly complex mathematics. As a result, the method was constructed using a set of design principles that facilitate simplicity, robustness and accuracy. The most important among them are: (i) enforcing both local and global flux conservation in space and time, with flux evaluation at an interface being an integral part of the solution procedure and requiring no interpolation or extrapolation; (ii) unifying space and time and treating them as a single entity; and (iii) requiring that a numerical scheme be built from a nondissipative core scheme such that the numerical dissipation can be effectively controlled and, as a result, will not overwhelm the physical dissipation. Part I of the workshop will be devoted to a discussion of these principles along with a description of how the 1D, 2D and 3D CE/SE schemes are constructed.
In Part II, various applications of the CE/SE method, particularly those involving chemical reactions and acoustics, will be presented. The workshop will be concluded with a sketch of the future research directions.
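
    The first design principle above — local and global flux conservation, with each interface flux shared between neighbouring cells — can be illustrated with a generic first-order finite-volume update for 1D linear advection. This is a textbook sketch, not the CE/SE scheme itself:

```python
def advect_step(u, a, dt, dx):
    """One conservative update u_i <- u_i - dt/dx * (F_{i+1/2} - F_{i-1/2})
    for u_t + a u_x = 0, with upwind flux F = a*u_left (a > 0) and periodic
    boundaries. Each interface flux is added to one cell and subtracted from
    its neighbour, so the cell sum (total 'mass') is conserved to round-off."""
    n = len(u)
    flux = [a * u[i - 1] for i in range(n)]          # F_{i-1/2}, periodic wrap
    return [u[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]
```

    At a CFL number of one (dt = dx/a) the upwind update reduces to an exact shift of the profile, which makes the conservation property easy to check by hand.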

  15. Fast Particle Methods for Multiscale Phenomena Simulations

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew

    2000-01-01

    We are developing particle methods aimed at improving the computational modeling of multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows; (ii) particle-laden and interfacial flows; (iii) molecular dynamics studies of nanoscale droplets and of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented on parallel computer architectures. The inherent adaptivity, robustness and efficiency of particle methods make them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are: (i) the proper formulation of particle methods at the molecular and continuum levels for the discretization of the governing equations; (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation; (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; and (iv) the parallelization of processes such as tree traversal and grid-particle interpolation. We are conducting simulations using vortex methods, molecular dynamics and smoothed particle hydrodynamics, exploiting their unifying concepts, such as the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend seemingly unrelated areas of research.

  16. Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava

    Faced with physical and energy density limitations on clock speed, contemporary microprocessor designers have increasingly turned to on-chip parallelism for performance gains. Examples include the Intel Xeon Phi, GPGPUs, and similar technologies. Algorithms should accordingly be designed with ample amounts of fine-grained parallelism if they are to realize the full performance of the hardware. This requirement can be challenging for algorithms that are naturally expressed as a sequence of small-matrix operations, such as the Kalman filter methods widely in use in high-energy physics experiments. In the High-Luminosity Large Hadron Collider (HL-LHC), for example, one of the dominant computational problems is expected to be finding and fitting charged-particle tracks during event reconstruction; today, the most common track-finding methods are those based on the Kalman filter. Experience at the LHC, both in the trigger and offline, has shown that these methods are robust and provide high physics performance. Previously we reported the significant parallel speedups that resulted from our efforts to adapt Kalman-filter-based tracking to many-core architectures such as the Intel Xeon Phi. Here we report on how effectively those techniques can be applied to more realistic detector configurations and event complexity.
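
    The small-matrix operations at the heart of Kalman-filter tracking are a predict/update pair repeated per detector layer. A deliberately tiny 1D constant-velocity sketch (not the authors' implementation, and with scalar process/measurement noises q and r invented for illustration) shows the structure:

```python
def kf_step(x, P, z, dt, q, r):
    """One predict/update of a 1D constant-velocity Kalman filter.
    x = [pos, vel], P = 2x2 covariance (list of lists), z = measured position."""
    # Predict: x <- F x, P <- F P F^T + qI, with F = [[1, dt], [0, 1]].
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1], P[1][1] + q]]
    # Update with H = [1, 0]: gain K = Pp H^T / (H Pp H^T + r).
    s = Pp[0][0] + r
    k0, k1 = Pp[0][0] / s, Pp[1][0] / s
    y = z - xp[0]                                  # innovation
    xn = [xp[0] + k0 * y, xp[1] + k1 * y]
    Pn = [[(1 - k0) * Pp[0][0], (1 - k0) * Pp[0][1]],
          [Pp[1][0] - k1 * Pp[0][0], Pp[1][1] - k1 * Pp[0][1]]]
    return xn, Pn
```

    Because each candidate track carries its own small state and covariance, many such filters can be advanced independently, which is exactly the fine-grained parallelism the abstract refers to.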

  17. Image segmentation-based robust feature extraction for color image watermarking

    NASA Astrophysics Data System (ADS)

    Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen

    2018-04-01

    This paper proposes a local digital image watermarking method based on robust feature extraction. Segmentation is achieved by Simple Linear Iterative Clustering (SLIC), on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is built: it adaptively extracts the most robust feature regions from the blocks segmented by SLIC. Each feature region is decomposed into a low-frequency and a high-frequency domain by the Discrete Cosine Transform (DCT), and watermark images are then embedded into the coefficients in the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks and achieves a trade-off between high robustness and good image quality.
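
    The quantization idea underlying DC-DM can be shown in its plainest form: quantization index modulation (QIM), i.e. snapping a (DCT) coefficient onto one of two interleaved lattices depending on the bit. This sketch omits the distortion-compensation term of DC-DM and is an illustration of the principle, not the paper's embedder:

```python
def qim_embed(coeff, bit, delta):
    """Embed one bit in a coefficient by quantization index modulation:
    quantize onto the lattice delta*Z (bit 0) or delta*Z + delta/2 (bit 1)."""
    dither = 0.0 if bit == 0 else delta / 2.0
    return delta * round((coeff - dither) / delta) + dither

def qim_extract(coeff, delta):
    """Recover the bit: which shifted lattice is the coefficient closest to?"""
    d0 = abs(coeff - delta * round(coeff / delta))
    d1 = abs(coeff - (delta * round((coeff - delta / 2) / delta) + delta / 2))
    return 0 if d0 <= d1 else 1
```

    The step size delta trades imperceptibility against robustness: perturbations smaller than delta/4 leave the extracted bit unchanged.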

  18. A kriging metamodel-assisted robust optimization method based on a reverse model

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Zhou, Qi; Liu, Congwei; Zhou, Taotao

    2018-02-01

    The goal of robust optimization methods is to obtain a solution that is both optimal and relatively insensitive to uncertainty factors. Most existing robust optimization approaches use outer-inner nested optimization structures that require a large amount of computational effort, because the robustness of each candidate solution delivered from the outer level must be evaluated in the inner level. In this article, a kriging metamodel-assisted robust optimization method based on a reverse model (K-RMRO) is first proposed, in which the nested optimization structure is reduced to a single-loop optimization structure to ease the computational burden. Ignoring the interpolation uncertainties from kriging, K-RMRO may yield non-robust optima. Hence, an improved kriging-assisted robust optimization method based on a reverse model (IK-RMRO) is presented to take the interpolation uncertainty of the kriging metamodel into consideration. In IK-RMRO, an objective switching criterion is introduced to determine whether the inner-level robust optimization or the kriging metamodel replacement should be used to evaluate the robustness of design alternatives. The proposed criterion is developed according to whether or not the robust status of the individual can be changed by the interpolation uncertainties from the kriging metamodel. Numerical and engineering cases are used to demonstrate the applicability and efficiency of the proposed approach.

  19. A fictitious domain approach for the simulation of dense suspensions

    NASA Astrophysics Data System (ADS)

    Gallier, Stany; Lemaire, Elisabeth; Lobry, Laurent; Peters, François

    2014-01-01

    Low Reynolds number concentrated suspensions exhibit an intricate physics which can be partly unraveled by the use of numerical simulation. To this end, a Lagrange multiplier-free fictitious domain approach is described in this work. Unlike some methods recently proposed, the present approach is fully Eulerian and therefore does not need any transfer between the Eulerian background grid and Lagrangian nodes attached to particles. Lubrication forces between particles play an important role in suspension rheology and have been properly accounted for in the model. A robust and effective lubrication scheme is outlined, which consists of transposing the classical approach used in Stokesian Dynamics to the present direct numerical simulation. This lubrication model has also been adapted to account for solid boundaries such as walls. Contact forces between particles are modeled using a classical Discrete Element Method (DEM), a widely used method in granular matter physics. Comprehensive validations are presented on various one-, two- and three-particle configurations in a linear shear flow, as well as some O(10^3) and O(10^4) particle simulations.

  20. Algorithms for the computation of solutions of the Ornstein-Zernike equation.

    PubMed

    Peplow, A T; Beardmore, R E; Bresme, F

    2006-10-01

    We introduce a robust and efficient methodology to solve the Ornstein-Zernike integral equation using the pseudo-arclength (PAL) continuation method, which reformulates the integral equation in an equivalent but nonstandard form. This enables the computation of solutions in regions where the compressibility experiences large changes or where the existence of multiple solutions and so-called branch points prevents Newton's method from converging. We illustrate the use of the algorithm with a difficult problem that arises in the numerical solution of integral equations, namely the evaluation of the so-called no-solution line of the Ornstein-Zernike hypernetted chain (HNC) integral equation for the Lennard-Jones potential. We are able to use the PAL algorithm to solve the integral equation along this line and to connect physical and nonphysical solution branches (both isotherms and isochores) where appropriate. We also show that PAL continuation can compute solutions within the no-solution region that cannot be computed when Newton and Picard methods are applied directly to the integral equation. While many solutions that we find are new, some correspond to states with negative compressibility and consequently are not physical.
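
    The pseudo-arclength idea can be demonstrated on a toy curve with a fold, x² + λ = 0: Newton's method in λ alone breaks down at the turning point (λ = 0), while the arclength-parametrised augmented system traverses it. This is a generic stdlib sketch of the technique, not the authors' Ornstein-Zernike solver:

```python
import math

def pal_trace(f, fx, fl, x, lam, ds, steps):
    """Trace the curve f(x, lam) = 0 by pseudo-arclength (PAL) continuation.
    Predictor: step a distance ds along the unit tangent of the curve.
    Corrector: Newton on the augmented system
        f(x, lam) = 0,   tx*(x - xp) + tl*(lam - lp) = 0,
    whose 2x2 Jacobian stays invertible at folds where Newton in lam fails."""
    tx, tl = 0.0, 1.0                      # initial orientation: increasing lam
    pts = [(x, lam)]
    for _ in range(steps):
        nx, nl = fl(x, lam), -fx(x, lam)   # null vector of the row [fx, fl]
        nrm = math.hypot(nx, nl)
        nx, nl = nx / nrm, nl / nrm
        if nx * tx + nl * tl < 0:          # keep marching in the same direction
            nx, nl = -nx, -nl
        tx, tl = nx, nl
        xp, lp = x + ds * tx, lam + ds * tl    # predictor point
        x, lam = xp, lp
        for _ in range(20):                # Newton corrector (2x2 solve)
            r1 = f(x, lam)
            r2 = tx * (x - xp) + tl * (lam - lp)
            a, b = fx(x, lam), fl(x, lam)
            det = a * tl - b * tx
            x, lam = x - (tl * r1 - b * r2) / det, lam - (a * r2 - tx * r1) / det
        pts.append((x, lam))
    return pts
```

    On the parabola λ = -x² the trace starts on one branch, passes smoothly through the fold at λ = 0, and continues onto the other branch — the same mechanism that lets the paper connect physical and nonphysical solution branches.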

  1. SU-E-T-07: 4DCT Robust Optimization for Esophageal Cancer Using Intensity Modulated Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, L; Department of Industrial Engineering, University of Houston, Houston, TX; Yu, J

    2015-06-15

    Purpose: To develop a 4DCT robust optimization method to reduce the dosimetric impact of respiratory motion in intensity modulated proton therapy (IMPT) for esophageal cancer. Methods: Four esophageal cancer patients were selected for this study. The different CT phases from a 4DCT set were incorporated into the worst-case dose distribution robust optimization algorithm. 4DCT robust treatment plans were designed and compared with conventional non-robust plans. The resulting doses were calculated on the average and maximum inhale/exhale phases of the 4DCT. Dose volume histogram (DVH) band graphics and the ΔD95%, ΔD98%, ΔD5% and ΔD2% of the CTV between different phases were used to evaluate the robustness of the plans. Results: Compared to IMPT plans optimized using conventional methods, the 4DCT robust IMPT plans achieve the same quality in nominal cases while being more robust to breathing motion. The mean ΔD95%, ΔD98%, ΔD5% and ΔD2% of the CTV are 6%, 3.2%, 0.9% and 1% for the robustly optimized plans vs. 16.2%, 11.8%, 1.6% and 3.3% for the conventional non-robust plans. Conclusion: A 4DCT robust optimization method was proposed for esophageal cancer using IMPT. We demonstrate that 4DCT robust optimization can mitigate the dose deviation caused by diaphragm motion.
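
    Worst-case robust optimization, as used above, can be caricatured in a few lines: instead of scoring a candidate plan on the nominal scenario only, each candidate is scored by its worst objective over the whole scenario set (here the "scenarios" stand in for breathing phases; all names are hypothetical):

```python
def worst_case_score(plan, scenarios, objective):
    """Robust score of a candidate: its objective under the worst scenario."""
    return max(objective(plan, s) for s in scenarios)

def robust_optimize(candidates, scenarios, objective):
    """Pick the candidate whose *worst-case* objective is smallest — the
    single-level analogue of worst-case dose-distribution robust optimization."""
    return min(candidates, key=lambda p: worst_case_score(p, scenarios, objective))
```

    With objective (p - s)² over scenarios {0, 1, 4}, the robust choice is p = 2 even though p = 0 would be best in the nominal scenario s = 0 — the characteristic trade of nominal quality for robustness.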

  2. Physical Activities Monitoring Using Wearable Acceleration Sensors Attached to the Body.

    PubMed

    Arif, Muhammad; Kattan, Ahmed

    2015-01-01

    Monitoring physical activities by using wireless sensors is helpful for identifying postural orientation and movements in the real-life environment. A simple and robust method based on time-domain features is proposed in this paper for identifying physical activities; it uses sensors placed on the subjects' wrist, chest and ankle. A feature set based on the time-domain characteristics of the acceleration signal recorded by the acceleration sensors is proposed for the classification of twelve physical activities: sitting, standing, walking, running, cycling, Nordic walking, ascending stairs, descending stairs, vacuum cleaning, ironing clothes, jumping rope, and lying down (resting state). Nine subjects performed the twelve activities; their age was 27.2 ± 3.3 years and their body mass index (BMI) was 25.11 ± 2.6 kg/m². Classification results demonstrated high validity, with precision (positive predictive value) and recall (sensitivity) of more than 95% for all physical activities. The overall classification accuracy for a combined feature set of three sensors is 98%. The proposed framework can be used to monitor the physical activities of a subject, which can be very useful for health professionals assessing the physical activity of healthy individuals as well as patients.
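
    A hedged sketch of such a time-domain pipeline (the specific features and classifier here are common choices from the literature, not necessarily the paper's exact set): compute simple statistics per acceleration window, then classify by nearest class centroid in feature space.

```python
import math

def features(window):
    """Simple time-domain features of one acceleration window: mean, std, RMS."""
    n = len(window)
    mean = sum(window) / n
    var = sum((v - mean) ** 2 for v in window) / n
    rms = math.sqrt(sum(v * v for v in window) / n)
    return [mean, math.sqrt(var), rms]

def nearest_centroid(train, sample):
    """train: {label: list of feature vectors}. Returns the label whose mean
    feature vector is closest (squared Euclidean) to the sample's features."""
    fs = features(sample)
    def dist(label):
        vecs = train[label]
        cent = [sum(v[i] for v in vecs) / len(vecs) for i in range(len(fs))]
        return sum((a - b) ** 2 for a, b in zip(cent, fs))
    return min(train, key=dist)
```

    Even these three statistics separate low-motion activities (near-constant gravity reading) from high-motion ones (large standard deviation), which is why time-domain features alone can reach high accuracy.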

  3. The Utility of Robust Means in Statistics

    ERIC Educational Resources Information Center

    Goodwyn, Fara

    2012-01-01

    Location estimates calculated from heuristic data were examined using traditional and robust statistical methods. The current paper demonstrates the impact outliers have on the sample mean and proposes robust methods to control for outliers in sample data. Traditional methods fail because they rely on the statistical assumptions of normality and…

  4. Robust operative diagnosis as problem solving in a hypothesis space

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy H.

    1988-01-01

    This paper describes an approach that formulates diagnosis of physical systems in operation as problem solving in a hypothesis space. Such a formulation increases robustness by: (1) incremental hypotheses construction via dynamic inputs, (2) reasoning at a higher level of abstraction to construct hypotheses, and (3) partitioning the space by grouping fault hypotheses according to the type of physical system representation and problem solving techniques used in their construction. It was implemented for a turbofan engine and hydraulic subsystem. Evaluation of the implementation on eight actual aircraft accident cases involving engine faults provided very promising results.

  5. Therapeutic lighting design for the elderly: a review.

    PubMed

    Shikder, Shariful; Mourshed, Monjur; Price, Andrew

    2012-11-01

    Research suggests that specialised lighting design is essential to cater for the elderly users of a building because of reduced visual performance with increased age. This review aims to document what is known of the physical and psychological aspects of lighting and their role in promoting a healthy and safe environment for the elderly. A methodical review was carried out of published literature on the physical and psychological impacts of light on the elderly. Design standards and guides from professional organizations were evaluated to identify synergies and gaps between the evidence base and current practice. Lighting has been identified as a significant environmental attribute for promoting the physical and mental health of the elderly. The evidence related to visual performance was found to be robust. However, guides and standards appear to have focused mostly on illumination requirements for specific tasks and lack detailed guidelines on vertical lighting and luminance design. This review has identified a growing body of evidence on the therapeutic benefits of lighting and its use in treating psychological disorders among the elderly. Experiments using light as a therapy have improved our understanding of the underlying principles, but the integration of therapeutic aspects of lighting into design practice and guidelines is lacking. While design guidelines address the physical lighting needs of the elderly fairly well, they do not incorporate photobiological impacts. Despite positive outcomes from research, the implementation of therapeutic aspects of lighting in buildings is still debatable because relevant investigations are few and the robustness of their findings limited. Collaborations between designers and physicians can contribute to delivering customised lighting solutions that consider disease types and needs. Further investigation is needed to translate therapeutic benefits into photometric units so that they can be implemented in building lighting design.

  6. Electrochemical state and internal variables estimation using a reduced-order physics-based model of a lithium-ion cell and an extended Kalman filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stetzel, KD; Aldrich, LL; Trimboli, MS

    2015-03-15

    This paper addresses the problem of estimating the present value of electrochemical internal variables in a lithium-ion cell in real time, using readily available measurements of cell voltage, current, and temperature. The variables that can be estimated include any desired set of reaction fluxes and solid and electrolyte potentials and concentrations at any set of one-dimensional spatial locations, in addition to more standard quantities such as state of charge. The method uses an extended Kalman filter along with a one-dimensional physics-based reduced-order model of cell dynamics. Simulations show excellent and robust predictions having dependable error bounds for most internal variables.
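
    At its core the approach pairs a state model with a nonlinear voltage map inside an extended Kalman filter. A deliberately tiny one-state sketch (the linear OCV curve and noise values below are invented for illustration, not the paper's reduced-order model): the state is state of charge, the measurement is cell voltage.

```python
def ekf_soc_step(soc, P, i_amp, v_meas, dt, cap, q, r, ocv, docv):
    """One EKF step for a one-state cell model.
    Predict: coulomb counting, soc' = soc - i*dt/cap (state Jacobian F = 1).
    Update:  linearize the voltage map v = ocv(soc) around the prediction."""
    soc_p = soc - i_amp * dt / cap        # state prediction
    Pp = P + q                            # covariance prediction
    h = docv(soc_p)                       # measurement Jacobian H = d(ocv)/d(soc)
    s = h * Pp * h + r                    # innovation variance
    k = Pp * h / s                        # Kalman gain
    soc_n = soc_p + k * (v_meas - ocv(soc_p))
    return soc_n, (1 - k * h) * Pp
```

    Starting from a wrong initial guess, repeated voltage measurements pull the estimate to the true state of charge; in the full model the same update simultaneously corrects the spatially resolved internal variables.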

  7. Robust Satisficing Decision Making for Unmanned Aerial Vehicle Complex Missions under Severe Uncertainty

    PubMed Central

    Ji, Xiaoting; Niu, Yifeng; Shen, Lincheng

    2016-01-01

    This paper presents a robust satisficing decision-making method for Unmanned Aerial Vehicles (UAVs) executing complex missions in an uncertain environment. Motivated by info-gap decision theory, we formulate this problem as a novel robust satisficing optimization problem whose objective is to maximize the robustness while satisfying some desired mission requirements. Specifically, a new info-gap based Markov Decision Process (IMDP) is constructed to abstract the uncertain UAV system and specify the complex mission requirements with Linear Temporal Logic (LTL). A robust satisficing policy is obtained to maximize the robustness to the uncertain IMDP while ensuring a desired probability of satisfying the LTL specifications. To this end, we propose a two-stage robust satisficing solution strategy which consists of the construction of a product IMDP and the generation of a robust satisficing policy. In the first stage, a product IMDP is constructed by combining the IMDP with an automaton representing the LTL specifications. In the second stage, an algorithm based on robust dynamic programming is proposed to generate a robust satisficing policy, and an associated robustness evaluation algorithm is presented to evaluate the robustness. Finally, through Monte Carlo simulation, the effectiveness of our algorithms is demonstrated on a UAV search mission under severe uncertainty, where the resulting policy maximizes the robustness while reaching the desired performance level. By comparing the proposed method with other robust decision-making methods, we conclude that our policy can tolerate higher uncertainty while guaranteeing the desired performance level, indicating that the proposed method is more effective in real applications. PMID:27835670
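
    The info-gap "robust satisficing" principle — maximize the uncertainty horizon under which performance still meets the requirement — can be sketched generically. The bisection below and the toy performance model in the usage example are illustrative assumptions, not the paper's IMDP algorithms:

```python
def robustness(worst_perf, requirement, alpha_max=1e6, tol=1e-9):
    """Largest uncertainty horizon alpha such that the worst-case performance
    over all realizations within horizon alpha still meets the requirement.
    worst_perf(alpha) must be non-increasing in alpha."""
    if worst_perf(0.0) < requirement:
        return 0.0                        # requirement unmet even nominally
    lo, hi = 0.0, alpha_max
    while hi - lo > tol:                  # bisection on the horizon
        mid = (lo + hi) / 2
        if worst_perf(mid) >= requirement:
            lo = mid
        else:
            hi = mid
    return lo

def robust_satisfice(policies, worst_perf, requirement):
    """Pick the policy with the greatest robustness to uncertainty."""
    return max(policies, key=lambda p: robustness(lambda a: worst_perf(p, a),
                                                  requirement))
```

    With two toy policies whose worst-case performance degrades linearly in the horizon, the satisficing choice is the one that tolerates more uncertainty, even when the other is better nominally — the defining trade-off of the method.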

  9. Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saad, Yousef

    2014-01-16

    The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least-squares systems. The focus of the Minnesota team is on algorithm development, robustness issues, and on tests and validation of the methods on realistic problems. 1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization when the coefficient matrix changes. 2. We investigated strategies to improve robustness in parallel preconditioners in a specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation. These are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite. The strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available. It was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods. 9. We have released a new version of our parallel solver, pARMS (version 3). As part of this we have tested the code in complex settings, including the solution of Maxwell and Helmholtz equations and a problem of crystal growth. 10. As an application of polynomial preconditioning, we considered the problem of evaluating f(A)v, which arises in statistical sampling. 11. As an application of the methods we developed, we tackled the problem of computing the diagonal of the inverse of a matrix. This arises in statistical applications as well as in many applications in physics. We explored probing methods as well as domain-decomposition-type methods. 12. A collaboration with researchers from Toulouse, France, considered the important problem of computing the Schur complement in a domain-decomposition approach. 13. We explored new ways of preconditioning linear systems, based on low-rank approximations.
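
    The role a preconditioner plays in these Krylov solvers can be seen in a compact example. For brevity this sketch uses a Jacobi (diagonal) preconditioner inside conjugate gradients on a dense symmetric positive definite matrix, rather than the ILU variants and parallel solvers developed in the project:

```python
def pcg(A, b, tol=1e-10, maxit=200):
    """Jacobi-preconditioned conjugate gradient for symmetric positive
    definite A (dense list-of-lists). The preconditioner is M^{-1} = 1/diag(A)."""
    n = len(b)
    minv = [1.0 / A[i][i] for i in range(n)]
    x = [0.0] * n
    r = b[:]                              # residual r = b - A x, with x = 0
    z = [minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [mi * ri for mi, ri in zip(minv, r)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

    Swapping the diagonal for an incomplete factorization changes only the z = M⁻¹ r step; the Krylov iteration around it is untouched, which is why preconditioner research can proceed largely independently of the solver.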

  10. Coronal loop seismology using damping of standing kink oscillations by mode coupling. II. Additional physical effects and Bayesian analysis

    NASA Astrophysics Data System (ADS)

    Pascoe, D. J.; Anfinogentov, S.; Nisticò, G.; Goddard, C. R.; Nakariakov, V. M.

    2017-04-01

    Context. The strong damping of kink oscillations of coronal loops can be explained by mode coupling. The damping envelope depends on the transverse density profile of the loop. Observational measurements of the damping envelope have been used to determine the transverse loop structure which is important for understanding other physical processes such as heating. Aims: The general damping envelope describing the mode coupling of kink waves consists of a Gaussian damping regime followed by an exponential damping regime. Recent observational detection of these damping regimes has been employed as a seismological tool. We extend the description of the damping behaviour to account for additional physical effects, namely a time-dependent period of oscillation, the presence of additional longitudinal harmonics, and the decayless regime of standing kink oscillations. Methods: We examine four examples of standing kink oscillations observed by the Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory (SDO). We use forward modelling of the loop position and investigate the dependence on the model parameters using Bayesian inference and Markov chain Monte Carlo (MCMC) sampling. Results: Our improvements to the physical model combined with the use of Bayesian inference and MCMC produce improved estimates of model parameters and their uncertainties. Calculation of the Bayes factor also allows us to compare the suitability of different physical models. We also use a new method based on spline interpolation of the zeroes of the oscillation to accurately describe the background trend of the oscillating loop. Conclusions: This powerful and robust method allows for accurate seismology of coronal loops, in particular the transverse density profile, and potentially reveals additional physical effects.
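
    The two-regime damping envelope referred to above — Gaussian at early times, switching to exponential — can be written down directly. This is the standard profile from the mode-coupling literature, with the switch time chosen so the envelope is continuous (parameter values in the usage are arbitrary):

```python
import math

def kink_envelope(t, tau_g, tau_d, t_switch):
    """Two-regime damping envelope for standing kink oscillations:
    Gaussian exp(-t^2 / (2 tau_g^2)) for t < t_switch, then an exponential
    exp(-(t - t_switch) / tau_d) matched in value at t_switch so the
    profile is continuous."""
    if t < t_switch:
        return math.exp(-t * t / (2 * tau_g ** 2))
    amp = math.exp(-t_switch ** 2 / (2 * tau_g ** 2))
    return amp * math.exp(-(t - t_switch) / tau_d)
```

    Fitting tau_g, tau_d and t_switch to an observed oscillation amplitude is the seismological step: they encode the transverse density profile of the loop.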

  11. The Bullied and Non-Bullied Child: A Contrast Between the Popular and Unpopular Child.

    ERIC Educational Resources Information Center

    Lowenstein, L. F.

    Characteristics of 32 children who claimed to be bullied were examined and compared to a control group. Teachers and a psychologist rated the Ss on three aspects: physical characteristics (size and weight for age, attractiveness, physical robustness, appropriateness of dress, odd mannerisms or physical handicaps); personal and psychological…

  12. Physical associations to spring phytoplankton biomass interannual variability in the U.S. Northeast Continental Shelf

    NASA Astrophysics Data System (ADS)

    Saba, Vincent S.; Hyde, Kimberly J. W.; Rebuck, Nathan D.; Friedland, Kevin D.; Hare, Jonathan A.; Kahru, Mati; Fogarty, Michael J.

    2015-02-01

    The continental shelf of the Northeast United States and Nova Scotia is a productive marine ecosystem that supports a robust biomass of living marine resources. Understanding marine ecosystem sensitivity to changes in the physical environment can start with the first-order response of phytoplankton (i.e., chlorophyll a), the base of the marine food web. However, the primary physical associations to the interannual variability of chlorophyll a in these waters are unclear. Here we used ocean color satellite measurements and identified the local and remote physical associations to interannual variability of spring surface chlorophyll a from 1998 to 2013. The highest interannual variability of chlorophyll a occurred in March and April on the northern flank of Georges Bank, the western Gulf of Maine, and Nantucket Shoals. Complex interactions between winter wind speed over the Shelf, local winter water levels, and the relative proportions of Atlantic versus Labrador Sea source waters entering the Gulf of Maine from the previous summer/fall were associated with the variability of March/April chlorophyll a in Georges Bank and the Gulf of Maine. Sea surface temperature and sea surface salinity were not robust correlates to spring chlorophyll a. Surface nitrate in the winter was not a robust correlate to chlorophyll a or the physical variables in every case suggesting that nitrate limitation may not be the primary constraint on the interannual variability of the spring bloom throughout all regions. Generalized linear models suggest that we can resolve 88% of March chlorophyll a interannual variability in Georges Bank using lagged physical data.

  13. Modeling Power Systems as Complex Adaptive Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Malard, Joel M.; Posse, Christian

    2004-12-30

Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This report explores the state-of-the-art physical analogs for understanding the behavior of some econophysical systems and deriving stable and robust control strategies for using them. We review and discuss applications of some analytic methods based on a thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot be otherwise understood. We apply these methods to the question of how power markets can be expected to behave under a variety of conditions.

  14. Benefits and shortcomings of non-destructive benthic imagery for monitoring hard-bottom habitats.

    PubMed

    Beisiegel, Kolja; Darr, Alexander; Gogina, Mayya; Zettler, Michael L

    2017-08-15

    Hard-bottom habitats with complex topography and fragile epibenthic communities are still not adequately considered in benthic monitoring programs, despite their potential ecological importance. While indicators of ecosystem health are defined by major EU directives, methods commonly used to measure them are deficient in quantification of biota on hard surfaces. We address the suitability of seafloor imaging for monitoring activities. We compared the ability of high-resolution imagery and physical sampling methods (grab, dredge, SCUBA-diving) to detect taxonomic and functional components of epibenthos. Results reveal that (1) with minimal habitat disturbance on large spatial scales, imagery provides valuable, cost-effective assessment of rocky reef habitat features and community structure, (2) despite poor taxonomic resolution, image-derived data for habitat-forming taxa might be sufficient to infer richness of small sessile and mobile fauna, (3) physical collections are necessary to develop a robust record of species richness, including species-level taxonomic identifications, and to establish a baseline. Copyright © 2017. Published by Elsevier Ltd.

  15. Electromagnetic pulsed thermography for natural cracks inspection

    NASA Astrophysics Data System (ADS)

    Gao, Yunlai; Tian, Gui Yun; Wang, Ping; Wang, Haitao; Gao, Bin; Woo, Wai Lok; Li, Kongjing

    2017-02-01

Emerging integrated sensing and monitoring of material degradation and cracks are increasingly required for characterizing the structural integrity and safety of infrastructure. However, most conventional nondestructive evaluation (NDE) methods are based on single-modality sensing, which is not adequate for evaluating structural integrity and natural cracks. This paper proposes electromagnetic pulsed thermography for fast and comprehensive defect characterization. It hybridizes multiple physical phenomena, i.e., magnetic flux leakage, induced eddy currents, and induction heating, linking the underlying physics with signal processing algorithms to provide abundant information about material properties and defects. New features based on the first derivative are proposed that reflect multiphysics spatial and temporal behaviors to enhance the detection of cracks with different orientations. Promising results, robust to lift-off changes and invariant for both artificial and natural cracks, demonstrate that the proposed method significantly improves defect detectability. It opens up multiphysics sensing and integrated NDE, with potential impact on the understanding and better quantitative evaluation of natural cracks, including stress corrosion cracking (SCC) and rolling contact fatigue (RCF).

  16. Developing Discontinuous Galerkin Methods for Solving Multiphysics Problems in General Relativity

    NASA Astrophysics Data System (ADS)

    Kidder, Lawrence; Field, Scott; Teukolsky, Saul; Foucart, Francois; SXS Collaboration

    2016-03-01

Multi-messenger observations of the merger of black hole-neutron star and neutron star-neutron star binaries, and of supernova explosions, will probe fundamental physics inaccessible to terrestrial experiments. Modeling these systems requires a relativistic treatment of hydrodynamics, including magnetic fields, as well as neutrino transport and nuclear reactions. The accuracy, efficiency, and robustness of current codes that treat all of these problems are not sufficient to keep up with the observational needs. We are building a new numerical code that uses the Discontinuous Galerkin method with a task-based parallelization strategy, a promising combination that will allow multiphysics applications to be treated both accurately and efficiently on petascale and exascale machines. The code will scale to more than 100,000 cores for efficient exploration of the parameter space of potential sources and allowed physics, and for the high-fidelity predictions needed to realize the promise of multi-messenger astronomy. I will discuss the current status of the development of this new code.

  17. Emerging technologies for the non-invasive characterization of physical-mechanical properties of tablets.

    PubMed

    Dave, Vivek S; Shahin, Hend I; Youngren-Ortiz, Susanne R; Chougule, Mahavir B; Haware, Rahul V

    2017-10-30

The density, porosity, breaking force, viscoelastic properties, and the presence or absence of any structural defects or irregularities are important physical-mechanical quality attributes of popular solid dosage forms like tablets. The irregularities associated with these attributes may influence the drug product functionality. Thus, an accurate and efficient characterization of these properties is critical for successful development and manufacturing of robust tablets. These properties are mainly analyzed and monitored with traditional pharmacopeial and non-pharmacopeial methods. Such methods are associated with several challenges such as lack of spatial resolution, efficiency, or sample-sparing attributes. Recent advances in technology, design, instrumentation, and software have led to the emergence of newer techniques for non-invasive characterization of physical-mechanical properties of tablets. These techniques include near infrared spectroscopy, Raman spectroscopy, X-ray microtomography, nuclear magnetic resonance (NMR) imaging, terahertz pulsed imaging, laser-induced breakdown spectroscopy, and various acoustic- and thermal-based techniques. Such state-of-the-art techniques are currently applied at various stages of development and manufacturing of tablets at industrial scale. Each technique has specific advantages or challenges with respect to operational efficiency and cost, compared to traditional analytical methods. Currently, most of these techniques are used as secondary analytical tools to support the traditional methods in characterizing or monitoring tablet quality attributes. Therefore, further development in the instrumentation and software, and studies on the applications, are necessary for their adoption in routine analysis and monitoring of tablet physical-mechanical properties. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. A Multi-Class Proportional Myocontrol Algorithm for Upper Limb Prosthesis Control: Validation in Real-Life Scenarios on Amputees.

    PubMed

    Amsuess, Sebastian; Goebel, Peter; Graimann, Bernhard; Farina, Dario

    2015-09-01

Functional replacement of upper limbs by means of dexterous prosthetic devices remains a technological challenge. While the mechanical design of prosthetic hands has advanced rapidly, the human-machine interfacing and the control strategies needed for the activation of multiple degrees of freedom are not reliable enough for restoring hand function successfully. Machine learning methods capable of inferring the user intent from EMG signals generated by the activation of the remnant muscles are regarded as a promising solution to this problem. However, the lack of robustness of the current methods impedes their routine clinical application. In this study, we propose a novel algorithm for controlling multiple degrees of freedom sequentially, inherently proportionally, and with high robustness, allowing a good level of prosthetic hand function. The control algorithm is based on spatial linear combinations of amplitude-related EMG signal features. The weighting coefficients in this combination are derived from the optimization criterion of the common spatial patterns filters, which allows for maximal discriminability between movements. An important component of the study is the validation of the method, which was performed on both able-bodied and amputee subjects who used physical prostheses with customized sockets and performed three standardized functional tests mimicking daily-life activities of varying difficulty. Moreover, the new method was compared in the same conditions with one clinical/industrial and one academic state-of-the-art method. The novel algorithm significantly outperformed the state-of-the-art techniques in both subject groups for tests that required the activation of more than one degree of freedom. Because of the evaluation in real-time control on both able-bodied subjects and final users (amputees) wearing physical prostheses, the results obtained allow for the direct extrapolation of the benefits of the proposed method to the end users. In conclusion, the method, proposed and validated in real-life use scenarios, allows the practical use of multifunctional hand prostheses in an intuitive way, with significant advantages over previous systems.
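The common-spatial-patterns criterion underlying the weighting coefficients reduces to a generalized eigenvalue problem between the two class covariances. The following is a generic CSP sketch, not the paper's full pipeline (its amplitude features and proportional mapping are omitted); array shapes and names are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2):
    """Common spatial patterns: spatial filters maximizing the variance
    ratio between two classes of multichannel (e.g. EMG) signals.

    X1, X2: arrays of shape (trials, channels, samples).
    Returns filters as columns, ordered from most class-1-discriminative
    to most class-2-discriminative.
    """
    def mean_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)

    C1, C2 = mean_cov(X1), mean_cov(X2)
    # generalized symmetric eigenproblem  C1 w = lambda (C1 + C2) w
    evals, evecs = eigh(C1, C1 + C2)
    return evecs[:, np.argsort(evals)[::-1]]
```

The first filter yields a projection whose variance is large for class 1 and small for class 2; the last filter does the opposite.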

  19. Correlative Microscopy Combining Secondary Ion Mass Spectrometry and Electron Microscopy: Comparison of Intensity-Hue-Saturation and Laplacian Pyramid Methods for Image Fusion.

    PubMed

    Vollnhals, Florian; Audinot, Jean-Nicolas; Wirtz, Tom; Mercier-Bonin, Muriel; Fourquaux, Isabelle; Schroeppel, Birgit; Kraushaar, Udo; Lev-Ram, Varda; Ellisman, Mark H; Eswara, Santhana

    2017-10-17

    Correlative microscopy combining various imaging modalities offers powerful insights into obtaining a comprehensive understanding of physical, chemical, and biological phenomena. In this article, we investigate two approaches for image fusion in the context of combining the inherently lower-resolution chemical images obtained using secondary ion mass spectrometry (SIMS) with the high-resolution ultrastructural images obtained using electron microscopy (EM). We evaluate the image fusion methods with three different case studies selected to broadly represent the typical samples in life science research: (i) histology (unlabeled tissue), (ii) nanotoxicology, and (iii) metabolism (isotopically labeled tissue). We show that the intensity-hue-saturation fusion method often applied for EM-sharpening can result in serious image artifacts, especially in cases where different contrast mechanisms interplay. Here, we introduce and demonstrate Laplacian pyramid fusion as a powerful and more robust alternative method for image fusion. Both physical and technical aspects of correlative image overlay and image fusion specific to SIMS-based correlative microscopy are discussed in detail alongside the advantages, limitations, and the potential artifacts. Quantitative metrics to evaluate the results of image fusion are also discussed.
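A minimal version of Laplacian-pyramid fusion can be sketched as follows: at each detail level keep the coefficient with larger magnitude, average the coarsest residual, then collapse the pyramid. This is an illustrative sketch only (simple bilinear resampling stands in for the Gaussian filtering used in practice), not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import zoom

def _up(img, shape):
    # bilinear resample to a target shape
    return zoom(img, np.asarray(shape, float) / img.shape, order=1)

def laplacian_pyramid(img, levels):
    gp = [img]
    for _ in range(levels - 1):
        gp.append(zoom(gp[-1], 0.5, order=1))
    # detail levels = each level minus the upsampled next-coarser level
    return [gp[i] - _up(gp[i + 1], gp[i].shape) for i in range(levels - 1)] + [gp[-1]]

def fuse(a, b, levels=4):
    """Laplacian-pyramid fusion of two registered images of equal shape."""
    la, lb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    # keep the stronger detail coefficient, average the coarse residual
    fused = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(la[:-1], lb[:-1])]
    out = 0.5 * (la[-1] + lb[-1])
    for lev in reversed(fused):
        out = _up(out, lev.shape) + lev
    return out
```

A useful sanity check is that fusing an image with itself reproduces the image, since the pyramid decomposition and collapse are exact inverses here.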

  20. PHYSICAL-CONSTRAINT-PRESERVING CENTRAL DISCONTINUOUS GALERKIN METHODS FOR SPECIAL RELATIVISTIC HYDRODYNAMICS WITH A GENERAL EQUATION OF STATE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kailiang; Tang, Huazhong, E-mail: wukl@pku.edu.cn, E-mail: hztang@math.pku.edu.cn

The ideal gas equation of state (EOS) with a constant adiabatic index is a poor approximation for most relativistic astrophysical flows, although it is commonly used in relativistic hydrodynamics (RHD). This paper develops high-order accurate, physical-constraints-preserving (PCP), central, discontinuous Galerkin (DG) methods for the one- and two-dimensional special RHD equations with a general EOS. It is built on our theoretical analysis of the admissible states for RHD and the PCP limiting procedure that enforces the admissibility of central DG solutions. The convexity, scaling invariance, orthogonal invariance, and Lax–Friedrichs splitting property of the admissible state set are first proved with the aid of its equivalent form. Then, the high-order central DG methods with the PCP limiting procedure and strong-stability-preserving time discretization are proved to preserve the positivity of the density, pressure, and specific internal energy and the bound of the fluid velocity, to maintain high-order accuracy, and to be L^1-stable. The accuracy, robustness, and effectiveness of the proposed methods are demonstrated by several 1D and 2D numerical examples involving large Lorentz factors, strong discontinuities, or low density/pressure.
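PCP limiting belongs to the family of scaling limiters: nodal values of the DG polynomial are compressed toward the cell average just enough to restore admissibility, which preserves the average (and hence conservation) and, where the limiter is inactive, accuracy. The sketch below is a minimal positivity-only version for a single scalar quantity; the paper's limiter additionally bounds pressure and velocity.

```python
import numpy as np

def positivity_limit(node_vals, weights, eps=1e-13):
    """Scale DG nodal values toward the cell average so every node value
    stays >= eps, while preserving the weighted cell average.

    node_vals: values of the DG polynomial at the quadrature nodes.
    weights:   quadrature weights (assumed to sum to 1, so that
               weights @ node_vals is the cell average).
    """
    avg = np.dot(weights, node_vals)
    mn = node_vals.min()
    if mn >= eps:
        return node_vals.copy()          # already admissible
    theta = (avg - eps) / (avg - mn)     # 0 <= theta < 1 when avg > eps
    return avg + theta * (node_vals - avg)
```

Note that the update is a convex combination of the nodal values and the average, so the minimum is raised exactly to `eps` and the average is unchanged.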

  1. System and method for detection of dispersed broadband signals

    DOEpatents

    Qian, S.; Dunham, M.E.

    1999-06-08

A system and method for detecting the presence of dispersed broadband signals in real time are disclosed. The present invention utilizes a bank of matched filters for detecting the received dispersed broadband signals. Each matched filter uses a respective robust time template that has been designed to approximate the dispersed broadband signals of interest, and each time template varies across a spectrum of possible dispersed broadband signal time templates. The received dispersed broadband signal x(t) is received by each of the matched filters, and if one or more matches occurs, then the received data is determined to have signal data of interest. This signal data can then be analyzed and/or transmitted to Earth for analysis, as desired. The system and method of the present invention will prove extremely useful in many fields, including satellite communications, plasma physics, and interstellar research. The varying time templates used in the bank of matched filters are determined as follows. The robust time domain template is assumed to take the form w(t)=A(t)cos{2φ(t)}. Since the instantaneous frequency f(t) is known to be equal to the derivative of the phase φ(t), the trajectory of a joint time-frequency representation of x(t) is used as an approximation of φ′(t). 10 figs.
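The matched-filter-bank idea can be sketched with a small linear-chirp bank. This illustration writes the phase in the common cos(2πφ(t)) convention with A(t) = 1; the template grid, threshold, and embedded signal are assumptions made for the example, not the patented design.

```python
import numpy as np

def chirp_template(n, fs, f0, f1):
    """Unit-amplitude linear chirp; the instantaneous frequency
    f(t) = phi'(t) sweeps from f0 to f1 over the template length."""
    t = np.arange(n) / fs
    phi = f0 * t + 0.5 * (f1 - f0) / t[-1] * t**2
    return np.cos(2 * np.pi * phi)

def detect_dispersed(x, templates, threshold):
    """Correlate the received signal with each (unit-norm) template in
    the bank; return (best_template_index, peak_statistic, detected)."""
    best_k, best_peak = -1, -np.inf
    for k, w in enumerate(templates):
        corr = np.correlate(x, w / np.linalg.norm(w), mode="valid")
        peak = np.max(np.abs(corr))
        if peak > best_peak:
            best_k, best_peak = k, peak
    return best_k, best_peak, best_peak > threshold
```

A chirp buried in noise produces a large peak only for the template whose frequency trajectory matches, which is how the bank both detects the signal and identifies its dispersion.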

  2. System and method for detection of dispersed broadband signals

    DOEpatents

    Qian, Shie; Dunham, Mark E.

    1999-06-08

A system and method for detecting the presence of dispersed broadband signals in real time. The present invention utilizes a bank of matched filters for detecting the received dispersed broadband signals. Each matched filter uses a respective robust time template that has been designed to approximate the dispersed broadband signals of interest, and each time template varies across a spectrum of possible dispersed broadband signal time templates. The received dispersed broadband signal x(t) is received by each of the matched filters, and if one or more matches occurs, then the received data is determined to have signal data of interest. This signal data can then be analyzed and/or transmitted to Earth for analysis, as desired. The system and method of the present invention will prove extremely useful in many fields, including satellite communications, plasma physics, and interstellar research. The varying time templates used in the bank of matched filters are determined as follows. The robust time domain template is assumed to take the form w(t)=A(t)cos{2φ(t)}. Since the instantaneous frequency f(t) is known to be equal to the derivative of the phase φ(t), the trajectory of a joint time-frequency representation of x(t) is used as an approximation of φ′(t).

  3. A Semi-implicit Treatment of Porous Media in Steady-State CFD.

    PubMed

    Domaingo, Andreas; Langmayr, Daniel; Somogyi, Bence; Almbauer, Raimund

There are many situations in computational fluid dynamics which require the definition of source terms in the Navier-Stokes equations. These source terms not only allow modeling of the physics of interest but also have a strong impact on the reliability, stability, and convergence of the numerics involved. Therefore, sophisticated numerical approaches exist for the description of such source terms. In this paper, we focus on the source terms present in the Navier-Stokes or Euler equations due to porous media, in particular the Darcy-Forchheimer equation. We introduce a method for the numerical treatment of the source term which is independent of the spatial discretization and based on linearization. In this description, the source term is treated in a fully implicit way, whereas the other flow variables can be computed in an implicit or explicit manner. This leads to a more robust description in comparison with a fully explicit approach. The method is well suited to be combined with coarse-grid CFD on Cartesian grids, which makes it especially favorable for the accelerated solution of coupled 1D-3D problems. To demonstrate the applicability and robustness of the proposed method, a proof-of-concept example in 1D, as well as more complex examples in 2D and 3D, is presented.
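The linearized implicit treatment of the source can be illustrated on a scalar model equation. This is a sketch of the general idea only (the paper applies it to the Darcy-Forchheimer source inside a full flow solver); the coefficients below are illustrative.

```python
def semi_implicit_step(u, dt, a, b):
    """One step of du/dt = S(u) = -(a*u + b*u*|u|), a Darcy-Forchheimer-type
    drag, with the source linearized about u^n and treated implicitly:
        (1 - dt*S'(u^n)) * du = dt * S(u^n),   u^{n+1} = u^n + du.
    """
    S = -(a * u + b * u * abs(u))
    dS = -(a + 2.0 * b * abs(u))          # S'(u)
    return u + dt * S / (1.0 - dt * dS)

def explicit_step(u, dt, a, b):
    """Fully explicit update, for comparison."""
    return u - dt * (a * u + b * u * abs(u))
```

For stiff drag coefficients the explicit update diverges at time steps where the linearized-implicit update decays monotonically toward zero, which is the robustness gain described in the abstract.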

  4. Quality by Design: Multidimensional exploration of the design space in high performance liquid chromatography method development for better robustness before validation.

    PubMed

    Monks, K; Molnár, I; Rieger, H-J; Bogáti, B; Szabó, E

    2012-04-06

    Robust HPLC separations lead to fewer analysis failures and better method transfer as well as providing an assurance of quality. This work presents the systematic development of an optimal, robust, fast UHPLC method for the simultaneous assay of two APIs of an eye drop sample and their impurities, in accordance with Quality by Design principles. Chromatography software is employed to effectively generate design spaces (Method Operable Design Regions), which are subsequently employed to determine the final method conditions and to evaluate robustness prior to validation. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. SUBJECTIVE SOCIOECONOMIC STATUS AND HEALTH: RELATIONSHIPS RECONSIDERED

    PubMed Central

    Nobles, Jenna; Ritterman Weintraub, Miranda; Adler, Nancy

    2013-01-01

    Subjective status, an individual’s perception of her socioeconomic standing, is a robust predictor of physical health in many societies. To date, competing interpretations of this correlation remain unresolved. Using longitudinal data on 8,430 older adults from the 2000 and 2007 waves of the Indonesia Family Life Survey, we test these oft-cited links. As in other settings, perceived status is a robust predictor of self-rated health, and also of physical functioning and nurse-assessed general health. These relationships persist in the presence of controls for unobserved traits, such as difficult-to-measure aspects of family background and persistent aspects of personality. However, we find evidence that these links likely represent bi-directional effects. Declines in health that accompany aging are robust predictors of declines in perceived socioeconomic status, net of observed changes to the economic profile of respondents. The results thus underscore the social value afforded good health status. PMID:23453318

  6. Light transport on path-space manifolds

    NASA Astrophysics Data System (ADS)

    Jakob, Wenzel Alban

    The pervasive use of computer-generated graphics in our society has led to strict demands on their visual realism. Generally, users of rendering software want their images to look, in various ways, "real", which has been a key driving force towards methods that are based on the physics of light transport. Until recently, industrial practice has relied on a different set of methods that had comparatively little rigorous grounding in physics---but within the last decade, advances in rendering methods and computing power have come together to create a sudden and dramatic shift, in which physics-based methods that were formerly thought impractical have become the standard tool. As a consequence, considerable attention is now devoted towards making these methods as robust as possible. In this context, robustness refers to an algorithm's ability to process arbitrary input without large increases of the rendering time or degradation of the output image. One particularly challenging aspect of robustness entails simulating the precise interaction of light with all the materials that comprise the input scene. This dissertation focuses on one specific group of materials that has fundamentally been the most important source of difficulties in this process. Specular materials, such as glass windows, mirrors or smooth coatings (e.g. on finished wood), account for a significant percentage of the objects that surround us every day. It is perhaps surprising, then, that it is not well-understood how they can be accommodated within the theoretical framework that underlies some of the most sophisticated rendering methods available today. Many of these methods operate using a theoretical framework known as path space integration. But this framework makes no provisions for specular materials: to date, it is not clear how to write down a path space integral involving something as simple as a piece of glass. 
Although implementations can in practice still render these materials by side-stepping limitations of the theory, they often suffer from unusably slow convergence; improvements to this situation have been hampered by the lack of a thorough theoretical understanding. We address these problems by developing a new theory of path-space light transport which, for the first time, cleanly incorporates specular scattering into the standard framework. Most of the results obtained in the analysis of the ideally smooth case can also be generalized to rendering of glossy materials and volumetric scattering so that this dissertation also provides a powerful new set of tools for dealing with them. The basis of our approach is that each specular material interaction locally collapses the dimension of the space of light paths so that all relevant paths lie on a submanifold of path space. We analyze the high-dimensional differential geometry of this submanifold and use the resulting information to construct an algorithm that is able to "walk" around on it using a simple and efficient equation-solving iteration. This manifold walking algorithm then constitutes the key operation of a new type of Markov Chain Monte Carlo (MCMC) rendering method that computes lighting through very general families of paths that can involve arbitrary combinations of specular, near-specular, glossy, and diffuse surface interactions as well as isotropic or highly anisotropic volume scattering. We demonstrate our implementation on a range of challenging scenes and evaluate it against previous methods.

  7. Robust Variable Selection with Exponential Squared Loss.

    PubMed

    Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping

    2013-04-01

Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness in a way that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to the outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset that are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies of using our robust method versus the other penalized regression method, underscoring the importance of developing and applying robust penalized regression methods.
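The core of the estimator can be sketched without the variable-selection penalty: replace the squared loss with ρ(r) = 1 − exp(−r²/γ), which caps the influence of large residuals. The sketch below is a minimal unpenalized illustration; the choice of γ and of optimizer are assumptions for the example, not the paper's tuning procedure.

```python
import numpy as np
from scipy.optimize import minimize

def exp_squared_fit(x, y, gamma=10.0):
    """Robust simple linear regression with loss rho(r) = 1 - exp(-r^2/gamma).
    Large gamma approaches least squares; small gamma downweights outliers
    more aggressively.  Returns (intercept, slope)."""
    X = np.column_stack([np.ones_like(x), x])

    def loss(beta):
        r = y - X @ beta
        return np.sum(1.0 - np.exp(-r**2 / gamma))

    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares start
    return minimize(loss, beta0, method="Nelder-Mead").x
```

With a modest fraction of gross outliers, the bounded loss essentially ignores them and the fit tracks the bulk of the data, whereas ordinary least squares is pulled away.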

  8. Robust Variable Selection with Exponential Squared Loss

    PubMed Central

    Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping

    2013-01-01

Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness in a way that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to the outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset that are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies of using our robust method versus the other penalized regression method, underscoring the importance of developing and applying robust penalized regression methods. PMID:23913996

  9. A crystal plasticity model for slip in hexagonal close packed metals based on discrete dislocation simulations

    NASA Astrophysics Data System (ADS)

    Messner, Mark C.; Rhee, Moono; Arsenlis, Athanasios; Barton, Nathan R.

    2017-06-01

This work develops a method for calibrating a crystal plasticity model to the results of discrete dislocation (DD) simulations. The crystal model explicitly represents junction formation and annihilation mechanisms and applies these mechanisms to describe hardening in hexagonal close packed metals. The model treats these dislocation mechanisms separately from elastic interactions among populations of dislocations, which the model represents through a conventional strength-interaction matrix. This split between elastic interactions and junction formation mechanisms more accurately reproduces the DD data and results in a multi-scale model that better represents the lower-scale physics. The fitting procedure employs concepts of machine learning—feature selection by regularized regression and cross-validation—to develop a robust, physically accurate crystal model. The work also presents a method for ensuring that the final, calibrated crystal model respects the physical symmetries of the crystal system. Calibrating the crystal model requires fitting two linear operators: one describing elastic dislocation interactions and another describing junction formation and annihilation dislocation reactions. The structure of these operators in the final, calibrated model reflects the crystal symmetry and slip system geometry of the DD simulations.
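The "feature selection by regularized regression and cross-validation" ingredient can be sketched generically: an L1-penalized (lasso) fit zeroes out inactive coefficients, and the penalty strength is chosen by cross-validated prediction error. This is an illustrative re-implementation of the generic technique, not the authors' fitting code; the data shapes and alpha grid are assumptions.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_cd(X, y, alpha, n_iter=200):
    """Lasso by cyclic coordinate descent, minimizing
    0.5*||y - X@b||^2 + n*alpha*||b||_1 (columns assumed standardized)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = np.sum(X**2, axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            beta[j] = soft(X[:, j] @ r_j, n * alpha) / col_sq[j]
    return beta

def cv_lasso(X, y, alphas, k=5):
    """Pick alpha by k-fold cross-validated prediction error, then refit."""
    folds = np.arange(len(y)) % k
    errs = []
    for a in alphas:
        e = 0.0
        for f in range(k):
            tr, te = folds != f, folds == f
            b = lasso_cd(X[tr], y[tr], a)
            e += np.mean((y[te] - X[te] @ b) ** 2)
        errs.append(e / k)
    best = alphas[int(np.argmin(errs))]
    return best, lasso_cd(X, y, best)
```

On sparse synthetic data the cross-validated fit recovers the active coefficients and shrinks the inactive ones toward zero.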

  10. Fast and robust method for the determination of microstructure and composition in butadiene, styrene-butadiene, and isoprene rubber by near-infrared spectroscopy.

    PubMed

    Vilmin, Franck; Dussap, Claude; Coste, Nathalie

    2006-06-01

    In the tire industry, synthetic styrene-butadiene rubber (SBR), butadiene rubber (BR), and isoprene rubber (IR) elastomers are essential for conferring on the product its properties of grip and rolling resistance. Their physical properties depend on their chemical composition, i.e., their microstructure and styrene content, which must be accurately controlled. This paper describes a fast, robust, and highly reproducible near-infrared analytical method for the quantitative determination of the microstructure and styrene content. The quantitative models are calculated with the help of pure spectral profiles estimated from a partial least squares (PLS) regression, using (13)C nuclear magnetic resonance (NMR) as the reference method. This versatile approach allows the models to be applied over a large range of compositions, from a single BR to an SBR-IR blend. The resulting quantitative predictions are independent of the sample path length. As a consequence, the sample preparation is solvent free and simplified with a very fast (five-minute) hot filming step of a bulk polymer piece. No precise thickness control is required. Thus, the operator effect becomes negligible and the method is easily transferable. The root mean square error of prediction, depending on the rubber composition, is between 0.7% and 1.3%. The reproducibility standard error is less than 0.2% in every case.

  11. Construction of ground-state preserving sparse lattice models for predictive materials simulations

    NASA Astrophysics Data System (ADS)

    Huang, Wenxuan; Urban, Alexander; Rong, Ziqin; Ding, Zhiwei; Luo, Chuan; Ceder, Gerbrand

    2017-08-01

    First-principles based cluster expansion models are the dominant approach in ab initio thermodynamics of crystalline mixtures enabling the prediction of phase diagrams and novel ground states. However, despite recent advances, the construction of accurate models still requires a careful and time-consuming manual parameter tuning process for ground-state preservation, since this property is not guaranteed by default. In this paper, we present a systematic and mathematically sound method to obtain cluster expansion models that are guaranteed to preserve the ground states of their reference data. The method builds on the recently introduced compressive sensing paradigm for cluster expansion and employs quadratic programming to impose constraints on the model parameters. The robustness of our methodology is illustrated for two lithium transition metal oxides with relevance for Li-ion battery cathodes, i.e., Li2xFe2(1-x)O2 and Li2xTi2(1-x)O2, for which the construction of cluster expansion models with compressive sensing alone has proven to be challenging. We demonstrate that our method not only guarantees ground-state preservation on the set of reference structures used for the model construction, but also that out-of-sample ground-state preservation up to relatively large supercell sizes is achievable through a rapidly converging iterative refinement. This method provides a general tool for building robust, compressed and constrained physical models with predictive power.
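
    The ground-state constraints can be written as linear inequalities in the effective cluster interactions, which is what makes a quadratic-programming formulation possible. A toy sketch under stated assumptions (random correlation matrix, a small L1 penalty as a compressive-sensing surrogate, and SciPy's SLSQP in place of a dedicated QP solver):

```python
import numpy as np
from scipy.optimize import minimize

# Toy cluster expansion: energies are linear in correlation features Pi.
rng = np.random.default_rng(5)
Pi = rng.normal(size=(30, 6))               # structures x clusters (illustrative)
J_true = np.array([1.0, -2.0, 0.5, 0.0, 0.0, 0.3])
E = Pi @ J_true + 0.01 * rng.normal(size=30)
gs = int(np.argmin(E))                      # reference ground-state structure

def objective(J):
    # Least squares plus a small sparsity penalty (illustrative weight).
    return ((Pi @ J - E) ** 2).sum() + 0.01 * np.abs(J).sum()

# Linear inequality constraints: every structure's predicted energy must
# stay at or above the predicted ground-state energy.
cons = [{"type": "ineq", "fun": lambda J, i=i: Pi[i] @ J - Pi[gs] @ J}
        for i in range(len(E)) if i != gs]
J_fit = minimize(objective, np.zeros(6), method="SLSQP", constraints=cons).x
pred = Pi @ J_fit
```

    The constrained fit is forced to keep the reference ground state lowest in predicted energy, which is exactly the guarantee the unconstrained compressive-sensing fit lacks.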

  12. Designing robust control laws using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Marrison, Chris

    1994-01-01

    The purpose of this research is to create a method of finding practical, robust control laws. The robustness of a controller is judged by Stochastic Robustness metrics and the level of robustness is optimized by searching for design parameters that minimize a robustness cost function.

  13. Factors associated to leisure-time sedentary lifestyle in adults of 1982 birth cohort, Pelotas, Southern Brazil

    PubMed Central

    Azevedo, Mario R; Horta, Bernardo L; Gigante, Denise P; Victora, Cesar G; Barros, Fernando C

    2009-01-01

    OBJECTIVE To assess factors associated with leisure-time physical activity and sedentary lifestyle. METHODS Prospective cohort study of people born in 1982 in the city of Pelotas, southern Brazil. Data were collected at birth and during a visit in 2004-2005, when 77.4% of the cohort was evaluated, for a total of 4,297 people studied. Information about leisure-time physical activity was collected using the International Physical Activity Questionnaire. Sedentary people were defined as those with weekly physical activity below 150 minutes. The following independent variables were studied: gender, skin color, birth weight, family income at birth and income change between birth and 23 years of age. Poisson regression with robust adjustment of variance was used for the assessment of risk factors for sedentary lifestyle. RESULTS Men reported 334 min of weekly leisure-time physical activity, compared with 112 min among women. The prevalence of sedentary lifestyle was 80.6% in women and 49.2% in men. Scores of physical activity increased as income at birth increased. Those who were currently poor or who became poor during adult life were more sedentary. CONCLUSIONS Leisure-time sedentary lifestyle in young adults was high, especially among women. Physical activity during leisure time is determined by current socioeconomic conditions. PMID:19142347
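
    The "Poisson regression with robust adjustment of variance" used here is the modified-Poisson approach for binary outcomes: a log-link Poisson fit whose Huber-White sandwich variance repairs the misspecified Poisson variance, yielding prevalence ratios with valid standard errors. A self-contained sketch on simulated data (the prevalences are chosen only to mimic the reported gender gap; this is not the study's data):

```python
import numpy as np

def modified_poisson(X, y, iters=50):
    # Log-link Poisson fit by Newton-Raphson, followed by a sandwich
    # (robust) variance estimate, valid for a binary outcome.
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        beta += np.linalg.solve((X.T * mu) @ X, X.T @ (y - mu))
    mu = np.exp(X @ beta)
    bread = np.linalg.inv((X.T * mu) @ X)
    meat = (X.T * (y - mu) ** 2) @ X
    se = np.sqrt(np.diag(bread @ meat @ bread))
    return beta, se

rng = np.random.default_rng(1)
n = 5000
female = rng.integers(0, 2, n).astype(float)
sedentary = rng.binomial(1, np.where(female == 1, 0.8, 0.5))  # mimics the gender gap
X = np.column_stack([np.ones(n), female])
beta, se = modified_poisson(X, sedentary)
prevalence_ratio = float(np.exp(beta[1]))   # about 0.8 / 0.5 = 1.6
```

    Exponentiating the coefficient gives the prevalence ratio directly, which is why this model is often preferred over logistic regression when the outcome (here, being sedentary) is common.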

  14. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites.

    PubMed

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-03-08

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites.
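
    The coplanarity classification rests on a neighbourhood eigenvalue test: if the smallest eigenvalue of a point's local covariance is tiny relative to the total variance, the neighbourhood is plane-like. A simplified, non-robust sketch of that test (classical PCA in place of the paper's robust PCA, brute-force neighbour search, and an illustrative threshold):

```python
import numpy as np

def planarity_scores(points, k=20):
    # Smallest-eigenvalue ratio of each point's k-nearest-neighbour
    # covariance: near 0 means the neighbourhood is well fit by a plane.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    scores = np.empty(len(points))
    for i in range(len(points)):
        nbrs = points[np.argsort(d2[i])[:k]]
        ev = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))
        scores[i] = ev[0] / ev.sum()
    return scores

rng = np.random.default_rng(2)
plane = np.column_stack([rng.uniform(0, 1, (300, 2)),
                         0.001 * rng.normal(size=300)])   # noisy planar patch
ball = rng.normal(size=(100, 3))                          # volumetric clutter
```

    Points on the noisy plane score near zero while volumetric clutter scores much higher; the paper's robust PCA step additionally keeps this classification reliable when outliers from dust or moving objects contaminate the neighbourhoods.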

  15. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites

    PubMed Central

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-01-01

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites. PMID:29518062

  16. Representation, Classification and Information Fusion for Robust and Efficient Multimodal Human States Recognition

    ERIC Educational Resources Information Center

    Li, Ming

    2013-01-01

    The goal of this work is to enhance the robustness and efficiency of the multimodal human states recognition task. Human states recognition can be considered as a joint term for identifying/verifying various kinds of human related states, such as biometric identity, language spoken, age, gender, emotion, intoxication level, physical activity, vocal…

  17. Robust, Optimal Subsonic Airfoil Shapes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2014-01-01

    A method has been developed to create an airfoil robust enough to operate satisfactorily in different environments. This method determines a robust, optimal, subsonic airfoil shape, beginning with an arbitrary initial airfoil shape, and imposes the necessary constraints on the design. Also, this method is flexible and extendible to a larger class of requirements and changes in constraints imposed.

  18. Embodied Learning and School-Based Physical Culture: Implications for Professionalism and Practice in Physical Education

    ERIC Educational Resources Information Center

    Thorburn, Malcolm; Stolz, Steven

    2017-01-01

    We write as critical theorists who consider that, in terms of scoping out robust conceptual elaborations suitable for contemporary schooling, physical education has ground to make up in connecting theory with practice and practice with theory. We advocate that aspects of existentialism and phenomenology can provide a theoretically…

  19. Modelling the vicious circle between obesity and physical activity in children and adolescents using a bivariate probit model with endogenous regressors.

    PubMed

    Yeh, C-Y; Chen, L-J; Ku, P-W; Chen, C-M

    2015-01-01

    The increasing prevalence of obesity in children and adolescents has become one of the most important public health issues around the world. Lack of physical activity is a risk factor for obesity, while being obese could reduce the likelihood of participating in physical activity. Failing to account for the endogeneity between obesity and physical activity would result in biased estimation. This study investigates the relationship between overweight and physical activity by taking endogeneity into consideration. It develops an endogenous bivariate probit model estimated by the maximum likelihood method. The data included 4008 boys and 4197 girls in the 5th-9th grades in Taiwan in 2007-2008. The relationship between overweight and physical activity is significantly negative in the endogenous model, but insignificant in the comparative exogenous model. This endogenous relationship presents a vicious circle in which lower levels of physical activity lead to overweight, while those who are already overweight engage in less physical activity. The results not only reveal the importance of endogenous treatment, but also demonstrate the robust negative relationship between these two factors. An emphasis should be put on overweight and obese children and adolescents in order to break the vicious circle. Promotion of physical activity by appropriate counselling programmes and peer support could be effective in reducing the prevalence of obesity in children and adolescents.

  20. Optimal and robust control of quantum state transfer by shaping the spectral phase of ultrafast laser pulses.

    PubMed

    Guo, Yu; Dong, Daoyi; Shu, Chuan-Cun

    2018-04-04

    Achieving fast and efficient quantum state transfer is a fundamental task in physics, chemistry and quantum information science. However, the successful implementation of the perfect quantum state transfer also requires robustness under practically inevitable perturbative defects. Here, we demonstrate how an optimal and robust quantum state transfer can be achieved by shaping the spectral phase of an ultrafast laser pulse in the framework of frequency domain quantum optimal control theory. Our numerical simulations of a single dibenzoterrylene molecule as well as of atomic rubidium show that optimal and robust quantum state transfer via spectral phase modulated laser pulses can be achieved by incorporating a filtering function of the frequency into the optimization algorithm, which in turn has potential applications for ultrafast robust control of photochemical reactions.

  1. Obesogenic environments: a systematic review of the association between the physical environment and adult weight status, the SPOTLIGHT project

    PubMed Central

    2014-01-01

    Background Understanding which physical environmental factors affect adult obesity, and how best to influence them, is important for public health and urban planning. Previous attempts to summarise the literature have not systematically assessed the methodological quality of included studies, or accounted for environmental differences between continents or the ways in which environmental characteristics were measured. Methods We have conducted an updated review of the scientific literature on associations of physical environmental factors with adult weight status, stratified by continent and mode of measurement, accompanied by a detailed risk-of-bias assessment. Five databases were systematically searched for studies published between 1995 and 2013. Results Two factors, urban sprawl and land use mix, were found consistently associated with weight status, although only in North America. Conclusions With the exception of urban sprawl and land use mix in the US, the results of the current review confirm that the available research does not allow robust identification of ways in which the physical environment influences adult weight status, even after taking into account methodological quality. PMID:24602291

  2. WE-D-BRB-00: Basics of Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The goal of this session is to review the physics of proton therapy, treatment planning techniques, and the use of volumetric imaging in proton therapy. The course material covers the physics of proton interaction with matter and physical characteristics of clinical proton beams. It will provide information on proton delivery systems and beam delivery techniques for double scattering (DS), uniform scanning (US), and pencil beam scanning (PBS). The session covers the treatment planning strategies used in DS, US, and PBS for various anatomical sites, methods to address uncertainties in proton therapy and uncertainty mitigation to generate robust treatment plans. It introduces the audience to the current status of image guided proton therapy and clinical applications of CBCT for proton therapy. It outlines the importance of volumetric imaging in proton therapy. Learning Objectives: Gain knowledge in proton therapy physics, and treatment planning for proton therapy including intensity modulated proton therapy. The current state of volumetric image guidance equipment in proton therapy. Clinical applications of CBCT and its advantage over orthogonal imaging for proton therapy. B. Teo, B.K Teo had received travel funds from IBA in 2015.

  3. A feasibility study of a culturally and gender-specific dance to promote physical activity for South Asian immigrant women in the greater Toronto area.

    PubMed

    Vahabi, Mandana; Damba, Cynthia

    2015-01-01

    Despite ample evidence demonstrating the protective effect of physical activity, the uptake of regular physical activity among South Asian (SA) women remains relatively low. The purpose of this study was to explore the feasibility and health impacts of implementing a culture- and gender-specific physical activity among SA immigrant women residing in the Greater Toronto Area (GTA) in Ontario, Canada. A community-based mixed methods approach combining a cohort pretest and posttest design with qualitative methods employing in-depth interviews was used. Twenty-seven SA women from the GTA participated in a 6-week, 2 days per week, Bollywood Dance exercise program led by a female SA instructor. The participation rate was high (85%) and approximately 82% of the participants attended 10 or more of the classes offered. The participants' physical measurements (weight, waist and hip, and body mass index) decreased, although not significantly, over the 6-week period and there was an improvement in their physical, mental, and social health. During the face-to-face interviews, participants reported feeling less stressed and tired, being more mentally and physically robust, and having a sense of fulfillment and self-satisfaction. The only common criticism expressed was that the 6-week duration of the intervention was too short. The results showed that the Bollywood Dance was a feasible strategy for engaging SA immigrant women in physical activity. The key aspects when designing culture- and gender-specific dance interventions include community participation and active engagement in planning and implementation of the program, a supportive environment, a same-gender and culturally attuned dance instructor, easy access, and minimal to no cost.

  4. Computational microscopy: illumination coding and nonlinear optimization enables gigapixel 3D phase imaging

    NASA Astrophysics Data System (ADS)

    Tian, Lei; Waller, Laura

    2017-05-01

    Microscope lenses can have either large field of view (FOV) or high resolution, not both. Computational microscopy based on illumination coding circumvents this limit by fusing images from different illumination angles using nonlinear optimization algorithms. The result is a Gigapixel-scale image having both wide FOV and high resolution. We demonstrate an experimentally robust reconstruction algorithm based on a second-order quasi-Newton method, combined with a novel phase initialization scheme. To further extend the Gigapixel imaging capability to 3D, we develop a reconstruction method to process the 4D light field measurements from sequential illumination scanning. The algorithm is based on a 'multislice' forward model that incorporates both 3D phase and diffraction effects, as well as multiple forward scatterings. To solve the inverse problem, an iterative update procedure that combines both phase retrieval and 'error back-propagation' is developed. To avoid local minimum solutions, we further develop a novel physical model-based initialization technique that accounts for both the geometric-optics and first-order phase effects. The result is robust reconstructions of Gigapixel 3D phase images having both wide FOV and super resolution in all three dimensions. Experimental results from an LED array microscope were demonstrated.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fertig, Fabian, E-mail: fabian.fertig@ise.fraunhofer.de; Greulich, Johannes; Rein, Stefan

    Spatially resolved determination of solar cell parameters is beneficial for loss analysis and optimization of conversion efficiency. One key parameter that has been challenging to access by an imaging technique on solar cell level is short-circuit current density. This work discusses the robustness of a recently suggested approach to determine short-circuit current density spatially resolved based on a series of lock-in thermography images and options for a simplified image acquisition procedure. For an accurate result, one or two emissivity-corrected illuminated lock-in thermography images and one dark lock-in thermography image have to be recorded. The dark lock-in thermography image can be omitted if local shunts are negligible. Furthermore, it is shown that omitting the correction of lock-in thermography images for local emissivity variations only leads to minor distortions for standard silicon solar cells. Hence, adequate acquisition of one image only is sufficient to generate a meaningful map of short-circuit current density. Beyond that, this work illustrates the underlying physics of the recently proposed method and demonstrates its robustness concerning varying excitation conditions and locally increased series resistance. Experimentally gained short-circuit current density images are validated for monochromatic illumination in comparison to the reference method of light-beam induced current.

  6. Structural Damage Detection Using Slopes of Longitudinal Vibration Shapes

    DOE PAGES

    Xu, W.; Zhu, W. D.; Smith, S. A.; ...

    2016-03-18

    While structural damage detection based on flexural vibration shapes, such as mode shapes and steady-state response shapes under harmonic excitation, has been well developed, little attention is paid to that based on longitudinal vibration shapes that also contain damage information. This study originally formulates a slope vibration shape for damage detection in bars using longitudinal vibration shapes. To enhance noise robustness of the method, a slope vibration shape is transformed to a multiscale slope vibration shape in a multiscale domain using wavelet transform, which has explicit physical implication, high damage sensitivity, and noise robustness. These advantages are demonstrated in numerical cases of damaged bars, and results show that multiscale slope vibration shapes can be used for identifying and locating damage in a noisy environment. A three-dimensional (3D) scanning laser vibrometer is used to measure the longitudinal steady-state response shape of an aluminum bar with damage due to reduced cross-sectional dimensions under harmonic excitation, and results show that the method can successfully identify and locate the damage. Slopes of longitudinal vibration shapes are shown to be suitable for damage detection in bars and have potential for applications in noisy environments.
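
    The principle is that local damage produces a discontinuity in the slope of the longitudinal vibration shape, which a multiscale transform can isolate from the smooth baseline and from noise. A toy sketch (simple moving averages and a local-trend subtraction stand in for the paper's wavelet multiscale transform; the mode shape, damage size, and window sizes are illustrative):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 501)
healthy = np.sin(np.pi * x / 2)                      # first longitudinal mode, fixed-free bar
damaged = healthy + 0.02 * np.maximum(0.0, x - 0.5)  # slope discontinuity at the damage site
u = damaged + 1e-5 * np.random.default_rng(3).normal(size=x.size)

def slope_damage_index(u, x, w=21, W=51):
    # Smooth the measured shape, differentiate to get the slope shape,
    # then subtract a local trend so the slope discontinuity stands out
    # against the smooth baseline and measurement noise.
    box = lambda m: np.ones(m) / m
    slope = np.gradient(np.convolve(u, box(w), mode="same"), x)
    trend = np.convolve(slope, box(W), mode="same")
    return np.abs(slope - trend)

index = slope_damage_index(u, x)
interior = slice(60, -60)                 # ignore smoothing edge effects
loc = x[interior][np.argmax(index[interior])]
```

    The index peaks near the slope discontinuity, locating the damage without a model of the healthy structure.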

  7. Robust controller designs for second-order dynamic system: A virtual passive approach

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh

    1990-01-01

    A robust controller design is presented for second-order dynamic systems. The controller is model-independent and itself is a virtual second-order dynamic system. Conditions on actuator and sensor placements are identified for controller designs that guarantee overall closed-loop stability. The dynamic controller can be viewed as a virtual passive damping system that serves to stabilize the actual dynamic system. The control gains are interpreted as virtual mass, spring, and dashpot elements that play the same roles as actual physical elements in stability analysis. Position, velocity, and acceleration feedback are considered. Simple examples are provided to illustrate the physical meaning of this controller design.
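
    The virtual-passive idea can be checked directly: position and velocity feedback enter the closed loop exactly where physical stiffness and damping would, so positive gains preserve stability. A minimal single-degree-of-freedom sketch (the plant values and gains are illustrative, not from the paper):

```python
import numpy as np

# Plant: m*x'' + c*x' + k*x = u (illustrative values).
m, c, k = 1.0, 0.1, 2.0
# Virtual passive controller: u = -kv*x - cv*x' acts like adding a
# spring kv and dashpot cv to the structure.
kv, cv = 5.0, 1.0

A = np.array([[0.0, 1.0],
              [-(k + kv) / m, -(c + cv) / m]])   # closed-loop state matrix
eigs = np.linalg.eigvals(A)
```

    Because the gains only augment physical stiffness and damping, the closed-loop eigenvalues stay in the left half plane for any kv, cv > 0, illustrating the model-independent stability the abstract describes.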

  8. On the Quality of Velocity Interpolation Schemes for Marker-in-Cell Method and Staggered Grids

    NASA Astrophysics Data System (ADS)

    Pusok, Adina E.; Kaus, Boris J. P.; Popov, Anton A.

    2017-03-01

    The marker-in-cell method is generally considered a flexible and robust method to model the advection of heterogeneous non-diffusive properties (i.e., rock type or composition) in geodynamic problems. In this method, Lagrangian points carrying compositional information are advected with the ambient velocity field on an Eulerian grid. However, velocity interpolation from grid points to marker locations is often performed without considering the divergence of the velocity field at the interpolated locations (i.e., non-conservative). Such interpolation schemes can induce non-physical clustering of markers when strong velocity gradients are present (Journal of Computational Physics 166:218-252, 2001) and this may, eventually, result in empty grid cells, a serious numerical violation of the marker-in-cell method. To remedy this at low computational costs, Jenny et al. (Journal of Computational Physics 166:218-252, 2001) and Meyer and Jenny (Proceedings in Applied Mathematics and Mechanics 4:466-467, 2004) proposed a simple, conservative velocity interpolation scheme for 2-D staggered grid, while Wang et al. (Geochemistry, Geophysics, Geosystems 16(6):2015-2023, 2015) extended the formulation to 3-D finite element methods. Here, we adapt this formulation for 3-D staggered grids (correction interpolation) and we report on the quality of various velocity interpolation methods for 2-D and 3-D staggered grids. We test the interpolation schemes in combination with different advection schemes on incompressible Stokes problems with strong velocity gradients, which are discretized using a finite difference method. Our results suggest that a conservative formulation reduces the dispersion and clustering of markers, minimizing the need of unphysical marker control in geodynamic models.

  9. WE-D-BRB-01: Basic Physics of Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arjomandy, B.

    The goal of this session is to review the physics of proton therapy, treatment planning techniques, and the use of volumetric imaging in proton therapy. The course material covers the physics of proton interaction with matter and physical characteristics of clinical proton beams. It will provide information on proton delivery systems and beam delivery techniques for double scattering (DS), uniform scanning (US), and pencil beam scanning (PBS). The session covers the treatment planning strategies used in DS, US, and PBS for various anatomical sites, methods to address uncertainties in proton therapy and uncertainty mitigation to generate robust treatment plans. It introduces the audience to the current status of image guided proton therapy and clinical applications of CBCT for proton therapy. It outlines the importance of volumetric imaging in proton therapy. Learning Objectives: Gain knowledge in proton therapy physics, and treatment planning for proton therapy including intensity modulated proton therapy. The current state of volumetric image guidance equipment in proton therapy. Clinical applications of CBCT and its advantage over orthogonal imaging for proton therapy. B. Teo, B.K Teo had received travel funds from IBA in 2015.

  10. Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data

    NASA Astrophysics Data System (ADS)

    Shulenin, V. P.

    2016-10-01

    Properties of robust estimators of the scale parameter are studied. It is noted that the median of absolute deviations and the modified estimator of the average Gini differences have asymptotically normal distributions and bounded influence functions, are B-robust estimators, and hence, unlike the standard deviation, are protected from the presence of outliers in the sample. Results of a comparison of scale-parameter estimators are given for a Gaussian model with contamination. An adaptive variant of the modified estimator of the average Gini differences is considered.
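
    For reference, the two estimators named in the abstract are easy to state. A sketch with the standard Gaussian-consistency factors (the plain Gini mean difference is shown; the paper's modified, B-robust variant additionally bounds the influence function):

```python
import numpy as np

def mad_scale(x):
    # Median absolute deviation, scaled by 1.4826 so it estimates the
    # standard deviation under a Gaussian model; breakdown point 1/2.
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def gini_scale(x):
    # Gini mean difference E|X - Y|, scaled by sqrt(pi)/2 so it is
    # Gaussian-consistent for sigma (plain, unmodified version).
    n = len(x)
    gmd = np.abs(x[:, None] - x[None, :]).sum() / (n * (n - 1))
    return np.sqrt(np.pi) / 2.0 * gmd

rng = np.random.default_rng(4)
clean = rng.normal(0.0, 1.0, 1000)
contaminated = np.concatenate([clean, rng.normal(0.0, 20.0, 50)])  # 5% outliers
```

    On the contaminated sample the classical standard deviation inflates severely, while the MAD stays close to the true scale of the bulk of the data, which is the protection against outliers the abstract refers to.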

  11. Anger Expression, Momentary Anger, and Symptom Severity in Patients with Chronic Disease

    PubMed Central

    Russell, Michael A.; Smith, Timothy W.; Smyth, Joshua M.

    2015-01-01

    Background Anger expression styles are associated with physical health, and may affect health by modulating anger experience in daily life. Research examining this process in the daily lives of clinically relevant populations, such as patients with chronic disease, is needed. Method Community adults with asthma (N=97) or rheumatoid arthritis (RA; N=31) completed measures of trait-level anger expression styles (anger-in and anger-out), followed by ecological momentary assessments of anger and physical health 5 times daily for 7 days. Results High anger-in predicted greater momentary anger, physical limitations, and greater asthma symptoms. High anger-out predicted reduced RA symptoms. Momentary anger was robustly associated with more severe symptoms in daily life. Three-way interactions showed anger-in moderated these momentary anger-symptom associations more consistently in men. Conclusions Anger expression styles, particularly anger-in, may affect the day-to-day adjustment of patients with chronic disease in part by altering the dimensions of everyday anger experience, in ways that appear to differ by gender. PMID:26493555

  12. Addressing the vulnerabilities of pass-thoughts

    NASA Astrophysics Data System (ADS)

    Fernandez, Gabriel C.; Danko, Amanda S.

    2016-05-01

    As biometrics become increasingly pervasive, consumer electronics are reaping the benefits of improved authentication methods. Leveraging the physical characteristics of a user reduces the burden of setting and remembering complex passwords, while enabling stronger security. Multi-factor systems lend further credence to this model, increasing security via multiple passive data points. In recent years, brainwaves have been shown to be another feasible source for biometric authentication. Physically unique to an individual in certain circumstances, the signals can also be changed by the user at will, making them more robust than static physical characteristics. No paradigm is impervious however, and even well-established medical technologies have deficiencies. In this work, a system for biometric authentication via brainwaves is constructed with electroencephalography (EEG). The efficacy of EEG biometrics via existing consumer electronics is evaluated, and vulnerabilities of such a system are enumerated. Impersonation attacks are performed to expose the extent to which the system is vulnerable. Finally, a multimodal system combining EEG with additional factors is recommended and outlined.

  13. Robust Smoothing: Smoothing Parameter Selection and Applications to Fluorescence Spectroscopy

    PubMed Central

    Lee, Jong Soo; Cox, Dennis D.

    2009-01-01

    Fluorescence spectroscopy has emerged in recent years as an effective way to detect cervical cancer. Investigation of the data preprocessing stage uncovered a need for a robust smoothing to extract the signal from the noise. Various robust smoothing methods for estimating fluorescence emission spectra are compared and data driven methods for the selection of smoothing parameter are suggested. The methods currently implemented in R for smoothing parameter selection proved to be unsatisfactory, and a computationally efficient procedure that approximates robust leave-one-out cross validation is presented. PMID:20729976
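
    The need for robustness in the smoothing step is easy to demonstrate: spikes in a raw spectrum drag a mean-based smoother off course, while a median-based one ignores them. A toy sketch (a running median stands in for the robust smoothers compared in the paper, whose actual contribution is the smoothing-parameter selection procedure; data are synthetic):

```python
import numpy as np

def running_median(y, w):
    # Sliding-window median (odd w): isolated spikes inside a window
    # do not move the median, unlike a mean-based smoother.
    pad = w // 2
    yp = np.pad(y, pad, mode="edge")
    return np.array([np.median(yp[i:i + w]) for i in range(len(y))])

rng = np.random.default_rng(6)
x = np.linspace(0.0, 1.0, 400)
truth = np.sin(2 * np.pi * x)
y = truth + 0.05 * rng.normal(size=x.size)
y[rng.integers(0, 400, 20)] += 5.0      # spike artefacts in the raw signal
smooth = running_median(y, 11)
```

    The window width plays the role of the smoothing parameter; choosing it by a robust approximation to leave-one-out cross validation, rather than by eye, is the problem the paper addresses.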

  14. Robust keyword retrieval method for OCRed text

    NASA Astrophysics Data System (ADS)

    Fujii, Yusaku; Takebe, Hiroaki; Tanaka, Hiroshi; Hotta, Yoshinobu

    2011-01-01

    Document management systems have become important because of the growing popularity of electronic filing of documents and scanning of books, magazines, manuals, etc., through a scanner or a digital camera, for storage or reading on a PC or an electronic book. Text information acquired by optical character recognition (OCR) is usually added to the electronic documents for document retrieval. Since texts generated by OCR generally include character recognition errors, robust retrieval methods have been introduced to overcome this problem. In this paper, we propose a retrieval method that is robust against both character segmentation and recognition errors. In the proposed method, the insertion of noise characters and dropping of characters in the keyword retrieval enables robustness against character segmentation errors, and character substitution in the keyword of the recognition candidate for each character in OCR or any other character enables robustness against character recognition errors. The recall rate of the proposed method was 15% higher than that of the conventional method. However, the precision rate was 64% lower.
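
    The idea of tolerating both error types can be sketched with a sliding window whose width varies by one character in either direction (segmentation errors: inserted or dropped characters) and a similarity score that forgives substitutions (recognition errors). This uses Python's standard difflib rather than the authors' candidate-lattice method; the threshold is illustrative:

```python
import difflib

def fuzzy_find(keyword, text, min_ratio=0.75):
    # Slide windows of width len(keyword) +/- 1 over the OCRed text and
    # keep spans whose similarity to the keyword clears the threshold.
    hits = []
    k = len(keyword)
    for w in (k - 1, k, k + 1):
        for i in range(len(text) - w + 1):
            span = text[i:i + w]
            score = difflib.SequenceMatcher(None, keyword, span).ratio()
            if score >= min_ratio:
                hits.append((i, span, score))
    return hits

ocr_text = "the rohust keyword retr1eval method"   # simulated OCR errors
```

    A query for "robust" still finds the misrecognized "rohust", at the cost of admitting some false matches, mirroring the recall/precision trade-off reported in the abstract.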

  15. An exact general remeshing scheme applied to physically conservative voxelization

    DOE PAGES

    Powell, Devon; Abel, Tom

    2015-05-21

    We present an exact general remeshing scheme to compute analytic integrals of polynomial functions over the intersections between convex polyhedral cells of old and new meshes. In physics applications this allows one to ensure global mass, momentum, and energy conservation while applying higher-order polynomial interpolation. We elaborate on applications of our algorithm arising in the analysis of cosmological N-body data, computer graphics, and continuum mechanics problems. We focus on the particular case of remeshing tetrahedral cells onto a Cartesian grid such that the volume integral of the polynomial density function given on the input mesh is guaranteed to equal the corresponding integral over the output mesh. We refer to this as “physically conservative voxelization.” At the core of our method is an algorithm for intersecting two convex polyhedra by successively clipping one against the faces of the other. This algorithm is an implementation of the ideas presented abstractly by Sugihara [48], who suggests using the planar graph representations of convex polyhedra to ensure topological consistency of the output. This makes our implementation robust to geometric degeneracy in the input. We employ a simplicial decomposition to calculate moment integrals up to quadratic order over the resulting intersection domain. We also address practical issues arising in a software implementation, including numerical stability in geometric calculations, management of cancellation errors, and extension to two dimensions. In a comparison to recent work, we show substantial performance gains. We provide a C implementation intended to be a fast, accurate, and robust tool for geometric calculations on polyhedral mesh elements.
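The clip-and-integrate idea has a compact 2D analog: successively clip a convex polygon against half-planes, then integrate moments (here only the zeroth moment, area) over the pieces. Splitting a cell and summing the piece integrals reproduces the original integral, which is the conservation property the paper guarantees in 3D for polyhedra. The triangle and the clipping line are arbitrary examples.

```python
# 2D sketch of successive half-plane clipping with exact moment integrals.
def clip(poly, a, b, c):
    """Keep the part of convex polygon `poly` with a*x + b*y + c >= 0."""
    out = []
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        d1, d2 = a * x1 + b * y1 + c, a * x2 + b * y2 + c
        if d1 >= 0:
            out.append((x1, y1))
        if (d1 >= 0) != (d2 >= 0):            # edge crosses the line
            t = d1 / (d1 - d2)
            out.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return out

def area(poly):
    """Shoelace formula: the zeroth moment integral over the polygon."""
    return 0.5 * abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
                         - poly[(i + 1) % len(poly)][0] * poly[i][1]
                         for i in range(len(poly))))

tri = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
left = clip(tri, -1.0, 0.0, 1.0)     # piece with x <= 1
right = clip(tri, 1.0, 0.0, -1.0)    # piece with x >= 1
```

In the paper's 3D setting the same role is played by plane clips of polyhedra and moments up to quadratic order; the robustness to degeneracy comes from the graph-based representation, which this sketch does not attempt.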

  16. Cross-validation of the very short form of the Physical Self-Inventory (PSI-VS): invariance across genders, age groups, ethnicities and weight statuses.

    PubMed

    Morin, Alexandre J S; Maïano, Christophe

    2011-09-01

    In a recent review of various physical self-concept instruments, Marsh and Cheng (in press) noted that the very short 12-item version of the French Physical Self-Inventory (PSI-VS) represents an important contribution to applied research but that further research was needed to investigate the robustness of its psychometric properties in new and diversified samples. The present study was designed to answer these questions based on a sample of 1103 normally achieving French adolescents. The results show that the PSI-VS measurement model is quite robust and fully invariant across subgroups of students formed according to gender, weight, age and ethnicity. The results also confirm the convergent validity and scale score reliability of the PSI-VS subscales. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization.

    PubMed

    Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona

    2016-05-31

    Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches currently exist, but usually the robustness of such approaches is not addressed or investigated. The goal of this paper is to show how to robustify the floor estimation when probabilistic approaches with a low number of parameters are employed. Indeed, such an approach would allow a building-independent estimation and a lower computing power at the mobile side. Four robustified algorithms are presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We will show that robustification can indeed increase the performance of the RSS-based floor detection algorithms.
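A weighted-centroid floor estimate can be robustified, for example, by trimming the weakest (most outlier-prone) measurements before averaging. The trimming rule, path-loss-style weights, AP heights, and storey height below are illustrative assumptions, not the paper's exact robustification.

```python
# Sketch of a robustified RSS weighted-centroid vertical estimate.
import numpy as np

def weighted_centroid_z(ap_z, rss, trim=0.2):
    """Vertical position estimate: RSS-weighted centroid of AP heights,
    after discarding the weakest fraction `trim` of measurements, which
    are the ones most likely to be corrupted outliers."""
    order = np.argsort(rss)                      # weakest (lowest dBm) first
    keep = order[int(trim * len(rss)):]
    w = 10 ** (rss[keep] / 10.0)                 # dBm -> linear power weight
    return float(np.sum(w * ap_z[keep]) / np.sum(w))

ap_z = np.array([0.0, 0.0, 3.0, 3.0, 6.0, 6.0])  # AP heights per floor (m)
rss = np.array([-80.0, -75.0, -50.0, -52.0, -70.0, -200.0])  # last: outlier
z = weighted_centroid_z(ap_z, rss)
floor = int(round(z / 3.0))                      # assume 3 m per storey
```

Without the trimming step, a single grossly corrupted reading would still contribute to the centroid; with it, the estimate degrades gracefully.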

  18. Integrating 3D geological information with a national physically-based hydrological modelling system

    NASA Astrophysics Data System (ADS)

    Lewis, Elizabeth; Parkin, Geoff; Kessler, Holger; Whiteman, Mark

    2016-04-01

    Robust numerical models are an essential tool for informing flood and water management and policy around the world. Physically-based hydrological models have traditionally not been used for such applications due to prohibitively large data, time and computational resource requirements. Given recent advances in computing power and data availability, a robust, physically-based hydrological modelling system for Great Britain using the SHETRAN model and national datasets has been created. Such a model has several advantages over less complex systems. Firstly, compared with conceptual models, a national physically-based model is more readily applicable to ungauged catchments, in which hydrological predictions are also required. Secondly, the results of a physically-based system may be more robust under changing conditions such as climate and land cover, as physical processes and relationships are explicitly accounted for. Finally, a fully integrated surface and subsurface model such as SHETRAN offers a wider range of applications compared with simpler schemes, such as assessments of groundwater resources, sediment and nutrient transport and flooding from multiple sources. As such, SHETRAN provides a robust means of simulating numerous terrestrial system processes which will add physical realism when coupled to the JULES land surface model. 306 catchments spanning Great Britain have been modelled using this system. The standard configuration of this system performs satisfactorily (NSE > 0.5) for 72% of catchments and well (NSE > 0.7) for 48%. Many of the remaining 28% of catchments that performed relatively poorly (NSE < 0.5) are located in the chalk in the south east of England. As such, the British Geological Survey 3D geology model for Great Britain (GB3D) has been incorporated, for the first time in any hydrological model, to pave the way for improvements to be made to simulations of catchments with important groundwater regimes. 
This coupling has involved development of software to allow for easy incorporation of geological information into SHETRAN for any model setup. The addition of more realistic subsurface representation following this approach is shown to greatly improve model performance in areas dominated by groundwater processes. The resulting modelling system has great potential to be used as a resource at national, regional and local scales in an array of different applications, including climate change impact assessments, land cover change studies and integrated assessments of groundwater and surface water resources.
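The Nash-Sutcliffe efficiency (NSE) used above to classify catchment performance is a simple skill score; the flow series in this sketch are invented.

```python
# Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the observed mean,
# negative values are worse than predicting the mean.
import numpy as np

def nse(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = np.array([3.0, 5.0, 9.0, 6.0, 4.0])        # observed flows
good = nse(obs, np.array([3.2, 4.8, 8.5, 6.3, 4.1]))
mean_only = nse(obs, np.full(5, obs.mean()))     # trivial baseline -> 0
```

Thresholds such as NSE > 0.5 ("satisfactory") and NSE > 0.7 ("well") are then just cutoffs on this score.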

  19. 640-Gbit/s fast physical random number generation using a broadband chaotic semiconductor laser

    NASA Astrophysics Data System (ADS)

    Zhang, Limeng; Pan, Biwei; Chen, Guangcan; Guo, Lu; Lu, Dan; Zhao, Lingjuan; Wang, Wei

    2017-04-01

    An ultra-fast physical random number generator is demonstrated utilizing a broadband chaotic source based on a photonic integrated device, with a simple post-processing method. The compact chaotic source is implemented using a monolithic integrated dual-mode amplified feedback laser (AFL) with self-injection, where a robust chaotic signal with RF frequency coverage above 50 GHz and flatness of ±3.6 dB is generated. By retaining the 4 least significant bits (LSBs) from the 8-bit digitization of the chaotic waveform, random sequences with a bit rate up to 640 Gbit/s (160 GS/s × 4 bits) are realized. The generated random bits have passed each of the fifteen NIST statistical tests (NIST SP800-22), indicating their randomness for practical applications.
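The LSB-retention step itself is simple bit manipulation. Since no laser data is available here, a uniform pseudo-random trace stands in for the digitized chaotic waveform; only the extraction logic is the point of the sketch.

```python
# Sketch: keep the 4 least significant bits of each 8-bit sample.
import numpy as np

rng = np.random.default_rng(1)
samples = rng.integers(0, 256, size=10000, dtype=np.uint8)  # 8-bit ADC output
lsb4 = samples & 0x0F                       # retain 4 LSBs per sample
# unpack each sample's 4 kept bits, most significant first
bits = ((lsb4[:, None] >> np.arange(3, -1, -1)) & 1).reshape(-1)
# 4 bits per sample -> bit rate = 4 x sample rate (640 Gbit/s at 160 GS/s)
```

Discarding the high-order bits removes the slowly varying, more predictable part of the waveform, which is why LSB retention improves statistical quality at the cost of bit rate.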

  20. Alignment Solution for CT Image Reconstruction using Fixed Point and Virtual Rotation Axis.

    PubMed

    Jun, Kyungtaek; Yoon, Seokhwan

    2017-01-25

    Since X-ray tomography is now widely adopted in many different areas, it has become more crucial to find a robust routine for handling tomographic data to obtain better-quality reconstructions. Though several techniques exist, a more automated method would help remove the errors that hinder clearer image reconstruction. Here, we propose an alternative method and a new algorithm using the sinogram and the fixed point. An advanced physical concept, the Center of Attenuation (CA), is also introduced to show how this fixed point applies to the reconstruction of images having the errors we categorize in this article. Our technique showed promising performance in restoring images having translation and vertical tilt errors.
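The center-of-attenuation idea can be illustrated on a toy parallel-beam sinogram: the per-projection centroid of a point object traces a sinusoid about the rotation axis, and its mean over a full rotation is a fixed point revealing the axis offset. The geometry and values below are invented for illustration, not the paper's algorithm.

```python
# Toy sinogram of a point object with a shifted rotation axis.
import numpy as np

n_ang, n_det, shift, r = 180, 101, 7.0, 20.0
theta = np.linspace(0, 2 * np.pi, n_ang, endpoint=False)
centers = n_det // 2 + shift + r * np.cos(theta)   # projected positions
sino = np.zeros((n_ang, n_det))
cols = np.clip(np.round(centers).astype(int), 0, n_det - 1)
sino[np.arange(n_ang), cols] = 1.0

# Center of attenuation per projection; averaging over a full rotation
# cancels the sinusoidal term and exposes the axis offset.
x = np.arange(n_det)
ca = (sino * x).sum(axis=1) / sino.sum(axis=1)
est_shift = float(ca.mean()) - n_det // 2
```

Once the offset is known, the sinogram can be re-centered before reconstruction, removing the translation error.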

  1. Model-based traction force microscopy reveals differential tension in cellular actin bundles.

    PubMed

    Soiné, Jérôme R D; Brand, Christoph A; Stricker, Jonathan; Oakes, Patrick W; Gardel, Margaret L; Schwarz, Ulrich S

    2015-03-01

    Adherent cells use forces at the cell-substrate interface to sense and respond to the physical properties of their environment. These cell forces can be measured with traction force microscopy, which inverts the equations of elasticity theory to calculate them from the deformations of soft polymer substrates. We introduce a new type of traction force microscopy that, in contrast to traditional methods, uses additional image data for cytoskeleton and adhesion structures together with a biophysical model to improve the robustness of the inverse procedure and abolish the need for regularization. We use this method to demonstrate that ventral stress fibers of U2OS cells are typically under higher mechanical tension than dorsal stress fibers or transverse arcs.

  2. Model-based Traction Force Microscopy Reveals Differential Tension in Cellular Actin Bundles

    PubMed Central

    Soiné, Jérôme R. D.; Brand, Christoph A.; Stricker, Jonathan; Oakes, Patrick W.; Gardel, Margaret L.; Schwarz, Ulrich S.

    2015-01-01

    Adherent cells use forces at the cell-substrate interface to sense and respond to the physical properties of their environment. These cell forces can be measured with traction force microscopy, which inverts the equations of elasticity theory to calculate them from the deformations of soft polymer substrates. We introduce a new type of traction force microscopy that, in contrast to traditional methods, uses additional image data for cytoskeleton and adhesion structures together with a biophysical model to improve the robustness of the inverse procedure and abolish the need for regularization. We use this method to demonstrate that ventral stress fibers of U2OS cells are typically under higher mechanical tension than dorsal stress fibers or transverse arcs. PMID:25748431

  3. Health-related fitness profiles in adolescents with complex congenital heart disease.

    PubMed

    Klausen, Susanne Hwiid; Wetterslev, Jørn; Søndergaard, Lars; Andersen, Lars L; Mikkelsen, Ulla Ramer; Dideriksen, Kasper; Zoffmann, Vibeke; Moons, Philip

    2015-04-01

    This study investigates whether subgroups of different health-related fitness (HrF) profiles exist among girls and boys with complex congenital heart disease (ConHD) and how these are associated with lifestyle behaviors. We measured the cardiorespiratory fitness, muscle strength, and body composition of 158 adolescents aged 13-16 years with previous surgery for a complex ConHD. Data on lifestyle behaviors were collected concomitantly between October 2010 and April 2013. A cluster analysis was conducted to identify profiles with similar HrF. For comparisons between clusters, multivariate analyses of covariance were used to test the differences in lifestyle behaviors. Three distinct profiles were formed: (1) Robust (43, 27%; 20 girls and 23 boys); (2) Moderately Robust (85, 54%; 37 girls and 48 boys); and (3) Less robust (30, 19%; 9 girls and 21 boys). The participants in the Robust clusters reported leading a physically active lifestyle and participants in the Less robust cluster reported leading a sedentary lifestyle. Diagnoses were evenly distributed between clusters. The cluster analysis attributed some of the variability in cardiorespiratory fitness among adolescents with complex ConHD to lifestyle behaviors and physical activity. Profiling of HrF offers a valuable new option in the management of person-centered health promotion. Copyright © 2015 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  4. Variations on the Game of Life

    NASA Astrophysics Data System (ADS)

    Peper, Ferdinand; Adachi, Susumu; Lee, Jia

    The Game of Life is defined in the framework of Cellular Automata with discrete states that are updated synchronously. Though this in itself has proven to be fertile ground for research, it leaves open questions regarding the robustness of the model with respect to variations in updating methods, cell state representations, neighborhood definitions, etc. These questions may become important when the ideal conditions under which the Game of Life is supposed to operate cannot be satisfied, like in physical realizations. This chapter describes three models in which Game of Life-like behavior is obtained, even though some basic tenets are violated.

  5. Problem Solving, Scaffolding and Learning

    ERIC Educational Resources Information Center

    Lin, Shih-Yin

    2012-01-01

    Helping students to construct robust understanding of physics concepts and develop good problem-solving skills is a central goal in many physics classrooms. This thesis examines students' problem solving abilities from different perspectives and explores strategies to scaffold students' learning. In studies involving analogical problem solving…

  6. Robust optimization methods for cardiac sparing in tangential breast IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahmoudzadeh, Houra, E-mail: houra@mie.utoronto.ca; Lee, Jenny; Chan, Timothy C. Y.

    Purpose: In left-sided tangential breast intensity modulated radiation therapy (IMRT), the heart may enter the radiation field and receive excessive radiation while the patient is breathing. The patient’s breathing pattern is often irregular and unpredictable. We verify the clinical applicability of a heart-sparing robust optimization approach for breast IMRT. We compare robust optimized plans with clinical plans at free-breathing and clinical plans at deep inspiration breath-hold (DIBH) using active breathing control (ABC). Methods: Eight patients were included in the study with each patient simulated using 4D-CT. The 4D-CT image acquisition generated ten breathing phase datasets. An average scan was constructed using all the phase datasets. Two of the eight patients were also imaged at breath-hold using ABC. The 4D-CT datasets were used to calculate the accumulated dose for robust optimized and clinical plans based on deformable registration. We generated a set of simulated breathing probability mass functions, which represent the fraction of time patients spend in different breathing phases. The robust optimization method was applied to each patient using a set of dose-influence matrices extracted from the 4D-CT data and a model of the breathing motion uncertainty. The goal of the optimization models was to minimize the dose to the heart while ensuring dose constraints on the target were achieved under breathing motion uncertainty. Results: Robust optimized plans were improved or equivalent to the clinical plans in terms of heart sparing for all patients studied. The robust method reduced the accumulated heart dose (D10cc) by up to 801 cGy compared to the clinical method while also improving the coverage of the accumulated whole breast target volume. On average, the robust method reduced the heart dose (D10cc) by 364 cGy and improved the optBreast dose (D99%) by 477 cGy. 
In addition, the robust method had smaller deviations from the planned dose to the accumulated dose. The deviation of the accumulated dose from the planned dose for the optBreast (D99%) was 12 cGy for robust versus 445 cGy for clinical. The deviation for the heart (D10cc) was 41 cGy for robust and 320 cGy for clinical. Conclusions: The robust optimization approach can reduce heart dose compared to the clinical method at free-breathing and can potentially reduce the need for breath-hold techniques.

  7. Anchor-Free Localization Method for Mobile Targets in Coal Mine Wireless Sensor Networks

    PubMed Central

    Pei, Zhongmin; Deng, Zhidong; Xu, Shuo; Xu, Xiao

    2009-01-01

    Severe natural conditions and complex terrain make it difficult to apply precise localization in underground mines. In this paper, an anchor-free localization method for mobile targets is proposed based on non-metric multi-dimensional scaling (MDS) and rank sequences. Firstly, a coal mine wireless sensor network is constructed in underground mines based on the ZigBee technology. Then a non-metric MDS algorithm is applied to estimate the reference nodes’ locations. Finally, an improved sequence-based localization algorithm is presented to complete precise localization for mobile targets. The proposed method is tested through simulations with 100 nodes, outdoor experiments with 15 ZigBee physical nodes, and experiments in the mine gas explosion laboratory with 12 ZigBee nodes. Experimental results show that our method has better localization accuracy and is more robust in underground mines. PMID:22574048

  8. Anchor-free localization method for mobile targets in coal mine wireless sensor networks.

    PubMed

    Pei, Zhongmin; Deng, Zhidong; Xu, Shuo; Xu, Xiao

    2009-01-01

    Severe natural conditions and complex terrain make it difficult to apply precise localization in underground mines. In this paper, an anchor-free localization method for mobile targets is proposed based on non-metric multi-dimensional scaling (MDS) and rank sequences. Firstly, a coal mine wireless sensor network is constructed in underground mines based on the ZigBee technology. Then a non-metric MDS algorithm is applied to estimate the reference nodes' locations. Finally, an improved sequence-based localization algorithm is presented to complete precise localization for mobile targets. The proposed method is tested through simulations with 100 nodes, outdoor experiments with 15 ZigBee physical nodes, and experiments in the mine gas explosion laboratory with 12 ZigBee nodes. Experimental results show that our method has better localization accuracy and is more robust in underground mines.
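The MDS step can be sketched with the classical (metric) variant, which stands in here for the paper's non-metric algorithm: it recovers node coordinates, up to rotation and translation, from a pairwise distance matrix via double centering. The example layout is invented.

```python
# Classical MDS sketch: coordinates from a distance matrix.
import numpy as np

def classical_mds(D, dim=2):
    """Embed points from a distance matrix via double centering."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J              # Gram matrix of centred points
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # top `dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

pts = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
rec = classical_mds(D)
D_rec = np.linalg.norm(rec[:, None] - rec[None, :], axis=-1)
```

For exact Euclidean input the recovered configuration reproduces all pairwise distances; the non-metric variant used in the paper instead fits only the rank order of distances, which suits noisy RSS-derived ranges.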

  9. Evaluating the compatibility of multi-functional and intensive urban land uses

    NASA Astrophysics Data System (ADS)

    Taleai, M.; Sharifi, A.; Sliuzas, R.; Mesgari, M.

    2007-12-01

    This research is aimed at developing a model for assessing land use compatibility in densely built-up urban areas. In this process, a new model was developed through the combination of a suite of existing methods and tools: geographical information systems, Delphi methods, and spatial decision support tools, namely multi-criteria evaluation analysis, the analytical hierarchy process, and the ordered weighted average method. The developed model has the potential to calculate land use compatibility in both horizontal and vertical directions. Furthermore, the compatibility between the use of each floor in a building and its neighboring land uses can be evaluated. The method was tested in a built-up urban area located in Tehran, the capital city of Iran. The results show that the model is robust in clarifying different levels of physical compatibility between neighboring land uses. This paper describes the various steps and processes of developing the proposed land use compatibility evaluation model (CEM).
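The ordered weighted average (OWA) aggregation mentioned above attaches weights to rank positions rather than to particular criteria, which lets one tune the evaluation between strict ("all criteria must be compatible") and lenient ("any criterion suffices"). The scores and weight vectors below are made-up examples.

```python
# Sketch of OWA aggregation of per-criterion compatibility scores.
import numpy as np

def owa(scores, weights):
    """Sort scores descending, then take the weighted sum, so weights
    attach to rank positions rather than to specific criteria."""
    s = np.sort(scores)[::-1]
    return float(np.dot(s, weights))

scores = np.array([0.9, 0.4, 0.7])       # hypothetical criterion scores
and_like = owa(scores, np.array([0.0, 0.0, 1.0]))   # min: strict
or_like = owa(scores, np.array([1.0, 0.0, 0.0]))    # max: lenient
mean_like = owa(scores, np.ones(3) / 3)             # plain average
```

Intermediate weight vectors interpolate between these extremes, which is how OWA expresses a decision-maker's risk attitude in such compatibility models.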

  10. Decision-Level Fusion of Spatially Scattered Multi-Modal Data for Nondestructive Inspection of Surface Defects

    PubMed Central

    Heideklang, René; Shokouhi, Parisa

    2016-01-01

    This article focuses on the fusion of flaw indications from multi-sensor nondestructive materials testing. Because each testing method makes use of a different physical principle, a multi-method approach has the potential of effectively differentiating actual defect indications from the many false alarms, thus enhancing detection reliability. In this study, we propose a new technique for aggregating scattered two- or three-dimensional sensory data. Using a density-based approach, the proposed method explicitly addresses localization uncertainties such as registration errors. This feature marks one of the major advantages of this approach over pixel-based image fusion techniques. We provide guidelines on how to set all the key parameters and demonstrate the technique’s robustness. Finally, we apply our fusion approach to experimental data and demonstrate its capability to locate small defects by substantially reducing false alarms under conditions where no single-sensor method is adequate. PMID:26784200
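A one-dimensional toy conveys the density-based fusion idea: detections from each sensor are smeared with kernels wide enough to absorb registration error, and multiplying the resulting densities keeps only locations supported by both modalities. Sensor positions, kernel width, and the grid are illustrative assumptions.

```python
# Toy sketch of density-based decision fusion of scattered detections.
import numpy as np

def density(points, grid, sigma=1.0):
    """Sum of Gaussian kernels centred on the detections."""
    d2 = (grid[:, None] - points[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2)).sum(axis=1)

grid = np.linspace(0, 20, 201)
sensor_a = np.array([5.0, 12.1])   # true defect near 12, false alarm at 5
sensor_b = np.array([11.8, 17.0])  # true defect near 12, false alarm at 17
fused = density(sensor_a, grid) * density(sensor_b, grid)  # agreement only
peak = float(grid[np.argmax(fused)])
```

The isolated false alarms of either sensor contribute almost nothing to the product, so the fused density peaks only where both methods agree, despite the ~0.3-unit registration mismatch.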

  11. New exact solutions for a discrete electrical lattice using the analytical methods

    NASA Astrophysics Data System (ADS)

    Manafian, Jalil; Lakestani, Mehrdad

    2018-03-01

    This paper retrieves soliton solutions to an equation in nonlinear electrical transmission lines using the semi-inverse variational principle method (SIVPM), the exp(−Ω(ξ))-expansion method (EEM) and the improved tan(φ/2)-expansion method (ITEM), with the aid of the symbolic computation package Maple. As a result, the SIVPM, EEM and ITEM methods are successfully employed and some new exact solitary wave solutions are acquired in terms of kink-singular soliton solutions, hyperbolic solutions, trigonometric solutions, and dark and bright soliton solutions. All solutions have been verified back into their corresponding equations with the aid of the Maple package program. We depicted the physical explanation of the extracted solutions with the choice of different parameters by plotting some 2D and 3D illustrations. Finally, we show that the used methods are robust and more efficient than other methods. More importantly, the solutions found in this work can have significant applications in telecommunication systems where solitons are used to codify data.

  12. A low-cost, tunable laser lock without laser frequency modulation

    NASA Astrophysics Data System (ADS)

    Shea, Margaret E.; Baker, Paul M.; Gauthier, Daniel J.

    2015-05-01

    Many experiments in optical physics require laser frequency stabilization. This can be achieved by locking to an atomic reference using saturated absorption spectroscopy. Often, the laser frequency is modulated and phase-sensitive detection is used. This method, while well-proven and robust, relies on expensive components, can introduce an undesirable frequency modulation into the laser, and is not easily frequency tuned. Here, we report a simple locking scheme similar to those implemented previously. We modulate the atomic resonances in a saturated absorption setup with an AC magnetic field created by a single solenoid. The same coil applies a DC field that allows tuning of the lock point. We use an auto-balanced detector to make our scheme more robust against laser power fluctuations and stray magnetic fields. The coil, its driver, and the detector are home-built with simple, cheap components. Our technique is low-cost, simple to set up, tunable, introduces no laser frequency modulation, and requires only one laser. We gratefully acknowledge the financial support of the NSF through Grant # PHY-1206040.
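The phase-sensitive detection underlying such locks can be sketched numerically: multiplying the detector signal by the modulation reference and low-pass filtering (here, simple averaging over full periods) yields a dispersive error signal that crosses zero on resonance. The Lorentzian lineshape, modulation depth, and all waveform parameters are invented for illustration.

```python
# Sketch of lock-in demodulation producing a dispersive error signal.
import numpy as np

f_mod, fs, T = 1e3, 1e6, 0.05            # modulation freq, sample rate, span
t = np.arange(0, T, 1 / fs)              # exactly 50 modulation periods
ref = np.sin(2 * np.pi * f_mod * t)

def error_signal(detuning, depth=0.1):
    """Demodulated detector output for a Lorentzian feature probed at
    `detuning`, with the resonance position modulated by the AC field."""
    x = detuning + depth * ref
    signal = 1.0 / (1.0 + x ** 2)        # Lorentzian lineshape
    return 2 * np.mean(signal * ref)     # multiply by reference + average

on_resonance = error_signal(0.0)         # ~0: the lock point
below = error_signal(-0.3)               # positive: push frequency up
above = error_signal(+0.3)               # negative: push frequency down
```

The sign of the output tells the servo which way to steer the laser, which is why modulating the atomic resonance (rather than the laser) still produces a usable error signal without imposing frequency modulation on the light.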

  13. Robust and High Order Computational Method for Parachute and Air Delivery and MAV System

    DTIC Science & Technology

    2017-11-01

    Report: Robust and High Order Computational Method for Parachute and Air Delivery and MAV System The views, opinions and/or findings contained in this...University Title: Robust and High Order Computational Method for Parachute and Air Delivery and MAV System Report Term: 0-Other Email: xiaolin.li...model coupled with an incompressible fluid solver through the impulse method. Our approach to simulating the parachute system is based on the front

  14. Probabilistic BPRRC: Robust Change Detection against Illumination Changes and Background Movements

    NASA Astrophysics Data System (ADS)

    Yokoi, Kentaro

    This paper presents Probabilistic Bi-polar Radial Reach Correlation (PrBPRRC), a change detection method that is robust against illumination changes and background movements. Most of the traditional change detection methods are robust against either illumination changes or background movements; BPRRC is one of the illumination-robust change detection methods. We introduce a probabilistic background texture model into BPRRC and add the robustness against background movements including foreground invasions such as moving cars, walking people, swaying trees, and falling snow. We show the superiority of PrBPRRC in the environment with illumination changes and background movements by using three public datasets and one private dataset: ATON Highway data, Karlsruhe traffic sequence data, PETS 2007 data, and Walking-in-a-room data.
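For contrast with texture-based models like PrBPRRC, the simplest pixel-wise baseline is running-average background subtraction, which is exactly the kind of model that fails under illumination changes. This sketch is that baseline, not the paper's method; all frame values, thresholds, and the learning rate are synthetic.

```python
# Running-average background subtraction: a simple change-detection baseline.
import numpy as np

rng = np.random.default_rng(5)
h = w = 16
background = rng.uniform(80, 120, (h, w))        # static scene intensities
bg_model = background.copy()
alpha, thresh = 0.05, 25.0                       # learning rate, threshold

detections = []
for t in range(30):
    frame = background + rng.normal(0, 2.0, (h, w))   # camera noise
    if t >= 20:
        frame[4:8, 4:8] += 60.0                       # object enters
    mask = np.abs(frame - bg_model) > thresh          # changed pixels
    # update the model only where no change is detected
    bg_model = (1 - alpha) * bg_model + alpha * np.where(mask, bg_model, frame)
    detections.append(int(mask.sum()))
```

A global illumination shift would trip every pixel of this baseline at once; PrBPRRC's radial-reach comparisons are designed to be invariant to exactly that failure mode while its probabilistic texture model absorbs background movements.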

  15. Using large hydrological datasets to create a robust, physically based, spatially distributed model for Great Britain

    NASA Astrophysics Data System (ADS)

    Lewis, Elizabeth; Kilsby, Chris; Fowler, Hayley

    2014-05-01

    The impact of climate change on hydrological systems requires further quantification in order to inform water management. This study intends to conduct such analysis using hydrological models. Such models are of varying forms, of which conceptual, lumped parameter models and physically-based models are two important types. The majority of hydrological studies use conceptual models calibrated against measured river flow time series in order to represent catchment behaviour. This method often shows impressive results for specific problems in gauged catchments. However, the results may not be robust under non-stationary conditions such as climate change, as physical processes and relationships amenable to change are not accounted for explicitly. Moreover, conceptual models are less readily applicable to ungauged catchments, in which hydrological predictions are also required. As such, the physically based, spatially distributed model SHETRAN is used in this study to develop a robust and reliable framework for modelling historic and future behaviour of gauged and ungauged catchments across the whole of Great Britain. In order to achieve this, a large array of data completely covering Great Britain for the period 1960-2006 has been collated and efficiently stored ready for model input. The data processed include a DEM, rainfall, PE and maps of geology, soil and land cover. A desire to make the modelling system easy for others to work with led to the development of a user-friendly graphical interface. This allows non-experts to set up and run a catchment model in a few seconds, a process that can normally take weeks or months. The quality and reliability of the extensive dataset for modelling hydrological processes has also been evaluated. One aspect of this has been an assessment of error and uncertainty in rainfall input data, as well as the effects of temporal resolution in precipitation inputs on model calibration. 
SHETRAN has been updated to accept gridded rainfall inputs, and UKCP09 gridded daily rainfall data has been disaggregated using hourly records to analyse the implications of using realistic sub-daily variability. Furthermore, the development of a comprehensive dataset and computationally efficient means of setting up and running catchment models has allowed for examination of how a robust parameter scheme may be derived. This analysis has been based on collective parameterisation of multiple catchments in contrasting hydrological settings and subject to varied processes. 350 gauged catchments all over the UK have been simulated, and a robust set of parameters is being sought by examining the full range of hydrological processes and calibrating to a highly diverse flow data series. The modelling system will be used to generate flow time series based on historical input data and also downscaled Regional Climate Model (RCM) forecasts using the UKCP09 Weather Generator. This will allow for analysis of flow frequency and associated future changes, which cannot be determined from the instrumental record or from lumped parameter model outputs calibrated only to historical catchment behaviour. This work will be based on the existing and functional modelling system described following some further improvements to calibration, particularly regarding simulation of groundwater-dominated catchments.

  16. Earthquake source tensor inversion with the gCAP method and 3D Green's functions

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Ben-Zion, Y.; Zhu, L.; Ross, Z.

    2013-12-01

    We develop and apply a method to invert earthquake seismograms for source properties using a general tensor representation and 3D Green's functions. The method employs (i) a general representation of earthquake potency/moment tensors with double couple (DC), compensated linear vector dipole (CLVD), and isotropic (ISO) components, and (ii) a corresponding generalized CAP (gCap) scheme where the continuous wave trains are broken into Pnl and surface waves (Zhu & Ben-Zion, 2013). For comparison, we also use the waveform inversion methods of Zheng & Chen (2012) and Ammon et al. (1998). Sets of 3D Green's functions are calculated on a grid of 1 km³ using the 3D community velocity model CVM-4 (Kohler et al. 2003). A bootstrap technique is adopted to establish the robustness of the inversion results using the gCap method (Ross & Ben-Zion, 2013). Synthetic tests with 1D and 3D waveform calculations show that the source tensor inversion procedure is reasonably reliable and robust. As an initial application, the method is used to investigate source properties of the March 11, 2013, Mw=4.7 earthquake on the San Jacinto fault using recordings of ~45 stations up to ~0.2 Hz. Both the best-fitting and most probable solutions include an ISO component of ~1% and a CLVD component of ~0%. The obtained ISO component, while small, is found to be a non-negligible positive value that can have significant implications for the physics of the failure process. Work on using higher frequency data for this and other earthquakes is in progress.
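The ISO/CLVD bookkeeping behind numbers like "~1% ISO, ~0% CLVD" follows a standard decomposition: split a symmetric source tensor into its isotropic part and characterize the deviatoric remainder by its eigenvalues. This sketch uses one common convention (of several in the literature) and a synthetic strike-slip tensor.

```python
# Sketch of ISO/CLVD decomposition of a symmetric 3x3 source tensor.
import numpy as np

def iso_clvd(M):
    """Return (isotropic part, CLVD fraction) of symmetric tensor M.
    The deviatoric middle eigenvalue ratio is 0 for a pure double couple
    and +/-0.5 for a pure CLVD; 2*ratio scales it to [-1, 1]."""
    iso = np.trace(M) / 3.0
    dev = M - iso * np.eye(3)
    lam = np.sort(np.linalg.eigvalsh(dev))   # ascending; they sum to ~0
    eps = lam[1] / max(abs(lam[0]), abs(lam[2]))
    return iso, 2 * eps

# Pure double-couple (strike-slip) source: expect zero ISO and zero CLVD.
M_dc = np.array([[0.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])
iso_dc, clvd_dc = iso_clvd(M_dc)
```

A genuinely explosive or implosive component shows up as a nonzero trace, which is why even a ~1% ISO term, if robustly resolved, says something physical about the failure process.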

  17. Nonlinear Kalman filters for calibration in radio interferometry

    NASA Astrophysics Data System (ADS)

    Tasse, C.

    2014-06-01

    The data produced by the new generation of interferometers are affected by a wide variety of partially unknown complex effects such as pointing errors, phased array beams, ionosphere, troposphere, Faraday rotation, or clock drifts. Most algorithms addressing direction-dependent calibration solve for the effective Jones matrices, and cannot constrain the underlying physical quantities of the radio interferometry measurement equation (RIME). A related difficulty is that they lack robustness in the presence of low signal-to-noise ratios, and when solving for moderate to large numbers of parameters they can be subject to ill-conditioning. These effects can have dramatic consequences in the image plane such as source or even thermal noise suppression. The advantage of solvers directly estimating the physical terms appearing in the RIME is that they can potentially reduce the number of free parameters by orders of magnitudes while dramatically increasing the size of usable data, thereby improving conditioning. We present here a new calibration scheme based on a nonlinear version of the Kalman filter that aims at estimating the physical terms appearing in the RIME. We enrich the filter's structure with a tunable data representation model, together with an augmented measurement model for regularization. Using simulations we show that it can properly estimate the physical effects appearing in the RIME. We found that this approach is particularly useful in the most extreme cases such as when ionospheric and clock effects are simultaneously present. Combined with the ability to provide prior knowledge on the expected structure of the physical instrumental effects (expected physical state and dynamics), we obtain a fairly computationally cheap algorithm that we believe to be robust, especially in low signal-to-noise regimes. 
Potentially, the use of such filters and similar methods can represent an improvement for calibration in radio interferometry, provided the effects corrupting the visibilities are understood and analytically stable. Recursive algorithms are particularly well adapted to pre-calibration and sky-model estimation in a streaming fashion. This may be useful for SKA-type instruments that produce huge amounts of data that must be calibrated before being averaged.
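    As a concrete illustration of the recursive filtering idea, here is a minimal scalar extended Kalman filter in stdlib Python. The sinusoidal measurement model and all numbers are hypothetical; the paper's filter estimates the full set of RIME terms, not this toy state.

    ```python
    import math
    import random

    def ekf_step(x, P, y, q, r, h, h_jac):
        """One predict/update cycle of a scalar extended Kalman filter
        with a random-walk state model x_k = x_{k-1} + w, w ~ N(0, q)."""
        P = P + q                      # predict covariance
        H = h_jac(x)                   # linearize measurement model at prediction
        S = H * P * H + r              # innovation variance
        K = P * H / S                  # Kalman gain
        x = x + K * (y - h(x))         # state update
        P = (1.0 - K * H) * P          # covariance update
        return x, P

    # Hypothetical calibration problem: recover a constant instrumental
    # phase through the nonlinear measurement y = sin(phase) + noise.
    true_phase = 0.7
    h, h_jac = math.sin, math.cos
    rng = random.Random(1)
    x, P = 0.0, 1.0
    for _ in range(300):
        y = h(true_phase) + rng.gauss(0.0, 0.05)
        x, P = ekf_step(x, P, y, q=1e-6, r=0.05 ** 2, h=h, h_jac=h_jac)
    ```

    The small process noise `q` plays the role of the prior on the state dynamics: it controls how quickly the filter lets the estimated effect drift.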

  18. Robust real-time extraction of respiratory signals from PET list-mode data.

    PubMed

    Salomon, Andre; Zhang, Bin; Olivier, Patrick; Goedicke, Andreas

    2018-05-01

Respiratory motion, which typically cannot simply be suspended during PET image acquisition, affects lesion detection and quantitative accuracy inside or in close vicinity to the lungs. Some motion compensation techniques address this issue via pre-sorting ("binning") of the acquired PET data into a set of temporal gates, where each gate is assumed to be minimally affected by respiratory motion. Tracking respiratory motion is typically realized using dedicated hardware (e.g. respiratory belts and digital cameras). Extracting respiratory signals directly from the acquired PET data simplifies the clinical workflow, as it avoids handling additional signal measurement equipment. We introduce a new data-driven method, "Combined Local Motion Detection" (CLMD). It uses the Time-of-Flight (TOF) information provided by state-of-the-art PET scanners to enable real-time respiratory signal extraction without additional hardware resources. CLMD applies center-of-mass detection in overlapping regions based on simple back-positioned TOF event sets acquired in short time frames. Following a signal filtering and quality-based pre-selection step, the remaining extracted position information over time is combined to generate a global respiratory signal. The method is evaluated using 7 measured FDG studies from single and multiple scan positions of the thorax region, and it is compared to other software-based methods regarding quantitative accuracy and statistical noise stability. Correlation coefficients around 90% between the reference and the extracted signal were found for those PET scans where motion-affected features such as tumors or hot regions were present in the PET field-of-view. For PET scans with a quarter of the typically applied radiotracer dose, the CLMD method still provides similarly high correlation coefficients, which indicates its robustness to noise. 
Each CLMD processing step needed less than 0.4 s in total on a standard multi-core CPU; the method thus provides a robust and accurate approach with real-time processing capability on standard PC hardware. © 2018 Institute of Physics and Engineering in Medicine.
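    The core of the center-of-mass idea can be sketched as follows: bin back-positioned events into short time frames and track the centroid of each frame over time. The event simulation below (breathing amplitude, scatter, frame count) is entirely invented for illustration.

    ```python
    import math
    import random

    def com_trace(frames):
        """Axial center of mass of the events in each short time frame."""
        return [sum(f) / len(f) for f in frames]

    # Invented example: 80 frames of events whose mean axial position follows
    # a ~0.25 Hz breathing cycle (amplitude 5 mm) buried in 20 mm event scatter.
    rng = random.Random(0)
    frames = []
    for k in range(80):
        drift = 5.0 * math.sin(2.0 * math.pi * 0.25 * (0.25 * k))
        frames.append([drift + rng.gauss(0.0, 20.0) for _ in range(2000)])
    trace = com_trace(frames)
    ```

    Averaging over thousands of events per frame shrinks the centroid noise by roughly the square root of the event count, which is why a millimetre-scale breathing signal survives centimetre-scale event scatter.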

  19. A fast method to produce strong NFC films as a platform for barrier and functional materials.

    PubMed

    Osterberg, Monika; Vartiainen, Jari; Lucenius, Jessica; Hippi, Ulla; Seppälä, Jukka; Serimaa, Ritva; Laine, Janne

    2013-06-12

In this study, we present a rapid method to prepare robust, solvent-resistant, nanofibrillated cellulose (NFC) films that can be further surface-modified for functionality. The oxygen, water vapor, and grease barrier properties of the films were measured, and in addition, mechanical properties in the dry and wet state and solvent resistance were evaluated. The pure unmodified NFC films were good barriers for oxygen gas and grease. At a relative humidity below 65%, oxygen permeability of the pure and unmodified NFC films was below 0.6 cm³ μm m⁻² d⁻¹ kPa⁻¹, and no grease penetrated the film. However, the largest advantage of these films was their resistance to various solvents, such as water, methanol, toluene, and dimethylacetamide. Although they absorbed a substantial amount of solvent, the films could still be handled after 24 h of solvent soaking. Hot-pressing was introduced as a convenient method to not only increase the drying speed of the films but also enhance the robustness of the films. The wet strength of the films increased due to the pressing. Thus, they can be chemically or physically modified through adsorption or direct chemical reaction in both aqueous and organic solvents. Through these modifications, the properties of the film can be enhanced, introducing, for example, functionality, hydrophobicity, or bioactivity. Herein, a simple method using surface coating with wax to improve hydrophobicity and oxygen barrier properties at very high humidity is described. Through this modification, the oxygen permeability decreased further and was below 17 cm³ μm m⁻² d⁻¹ kPa⁻¹ even at 97.4% RH, and the water vapor transmission rate decreased from 600 to 40 g m⁻² d⁻¹. The wax treatment did not deteriorate the dry strength of the film. Possible reasons for the unique properties are discussed. The developed robust NFC films can be used as a generic, environmentally sustainable platform for functional materials.

  20. Tuning Monotonic Basin Hopping: Improving the Efficiency of Stochastic Search as Applied to Low-Thrust Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Englander, Jacob; Englander, Arnold

    2014-01-01

Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by Englander) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness, where efficiency is finding better solutions in less time, and robustness is efficiency that is undiminished by (a) the boundary conditions and internal constraints of the optimization problem being solved, and (b) variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem-specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks originally developed in the field of statistical physics.
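    A minimal stdlib-Python sketch of the idea follows, with the inner local-search step of full MBH omitted for brevity; the objective function and all parameters are illustrative, not from the paper. The long Cauchy/Pareto tails occasionally produce large hops that let the search escape a local basin.

    ```python
    import math
    import random

    def perturb(x, scale, rng, dist="cauchy"):
        """Long-tailed random hop used inside the basin-hopping search."""
        if dist == "uniform":
            return x + rng.uniform(-scale, scale)
        if dist == "cauchy":
            # Inverse-CDF sampling of a Cauchy variate with the given scale.
            return x + scale * math.tan(math.pi * (rng.random() - 0.5))
        # "pareto": heavy one-sided tail, applied with a random sign.
        return x + scale * (rng.paretovariate(1.5) - 1.0) * rng.choice([-1.0, 1.0])

    def mbh(f, x0, rng, n_hops=2000, scale=0.5, dist="cauchy"):
        """Minimal monotonic basin hopping: keep a hop only if it improves f.
        (Full MBH runs a local optimizer after each hop; omitted here.)"""
        best_x, best_f = x0, f(x0)
        for _ in range(n_hops):
            cand = perturb(best_x, scale, rng, dist)
            fc = f(cand)
            if fc < best_f:             # "monotonic": accept improvements only
                best_x, best_f = cand, fc
        return best_x, best_f

    # Toy multimodal objective with its global basin near x = 3.5.
    f = lambda x: (x - 3.0) ** 2 + 2.0 * math.sin(5.0 * x) + 2.0
    x, fx = mbh(f, x0=-5.0, rng=random.Random(42))
    ```

    Swapping `dist` between `"uniform"`, `"cauchy"`, and `"pareto"` while holding the hop budget fixed is the kind of controlled comparison the abstract describes.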

  1. Structured Uncertainty Bound Determination From Data for Control and Performance Validation

    NASA Technical Reports Server (NTRS)

    Lim, Kyong B.

    2003-01-01

This report attempts to document the broad scope of issues that must be satisfactorily resolved before one can expect to methodically obtain, with reasonable confidence, near-optimal robust closed-loop performance in physical applications. These include elements of signal processing, noise identification, system identification, model validation, and uncertainty modeling. Based on a recently developed methodology involving a parameterization of all model validating uncertainty sets for a given linear fractional transformation (LFT) structure and noise allowance, new software, the Uncertainty Bound Identification (UBID) toolbox, which conveniently executes model validation tests and determines uncertainty bounds from data, has been designed and is currently available. This toolbox also serves to benchmark the current state-of-the-art in uncertainty bound determination and in turn facilitates benchmarking of robust control technology. To help clarify the methodology and use of the new software, two tutorial examples are provided. The first involves the uncertainty characterization of flexible structure dynamics, and the second example involves a closed-loop performance validation of a ducted fan based on an uncertainty bound from data. These examples, along with other simulation and experimental results, also help describe the many factors and assumptions that determine the degree of success in applying robust control theory to practical problems.

  2. Robust Optimization Design for Turbine Blade-Tip Radial Running Clearance Using a Hierarchical Response Surface Method

    NASA Astrophysics Data System (ADS)

    Zhiying, Chen; Ping, Zhou

    2017-11-01

Considering the computational precision and efficiency of robust optimization for complex mechanical assembly relationships such as turbine blade-tip radial running clearance, a hierarchical response surface robust optimization algorithm is proposed. The distributed collaborative response surface method is used to generate an assembly-system-level approximation model relating the overall parameters to blade-tip clearance, and a set of samples of design parameters and objective response mean and/or standard deviation is then generated using the system approximation model and design-of-experiment methods. Finally, a new response surface approximation model is constructed from those samples and used in the robust optimization process. The analysis results demonstrate that the proposed method can dramatically reduce the computational cost while ensuring computational precision. The presented research offers an effective way to perform robust optimization design of turbine blade-tip radial running clearance.
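    The "mean and/or standard deviation" robust objective can be illustrated with a toy surrogate: propagate input scatter through an inexpensive response surface and minimize mean + k·sigma. The surrogate function and all numbers below are invented for illustration and are not a real clearance model.

    ```python
    import random
    import statistics

    def robust_objective(surrogate, d, sigma, rng, n=400, k=3.0):
        """Mean + k*sigma of the surrogate response under input scatter:
        the robust-design objective built on top of a response surface."""
        vals = [surrogate(d + rng.gauss(0.0, sigma)) for _ in range(n)]
        return statistics.mean(vals) + k * statistics.pstdev(vals)

    # Invented response surface for a clearance trade-off (leakage loss vs.
    # rub risk as nominal clearance d varies); purely illustrative.
    surrogate = lambda d: (d - 1.0) ** 2 + 0.5 / (d + 0.2)
    rng = random.Random(0)
    grid = [0.2 + 0.05 * i for i in range(40)]
    best = min(grid, key=lambda d: robust_objective(surrogate, d, sigma=0.1, rng=rng))
    ```

    Evaluating the cheap surrogate thousands of times in place of the expensive assembly model is exactly where the claimed computational savings come from.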

  3. Grid generation and adaptation via Monge-Kantorovich optimization in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Delzanno, Gian Luca; Chacon, Luis; Finn, John M.

    2008-11-01

In a recent paper [1], Monge-Kantorovich (MK) optimization was proposed as a method of grid generation/adaptation in two dimensions (2D). The method is based on the minimization of the L2 norm of grid point displacement, constrained to producing a given positive-definite cell volume distribution (equidistribution constraint). The procedure gives rise to the Monge-Ampère (MA) equation: a single, non-linear scalar equation with no free parameters. The MA equation was solved in Ref. [1] with the Jacobian-free Newton-Krylov technique and several challenging test cases were presented in square domains in 2D. Here, we extend the work of Ref. [1]. We first formulate the MK approach in physical domains with curved boundary elements and in 3D. We then show the results of applying it to these more general cases. We show that MK optimization produces optimal grids in which the constraint is satisfied numerically to truncation error. [1] G.L. Delzanno, L. Chacón, J.M. Finn, Y. Chung, G. Lapenta, A new, robust equidistribution method for two-dimensional grid generation, submitted to Journal of Computational Physics (2008).
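    The equidistribution constraint at the heart of the method is easiest to see in 1D, where it reduces to inverting the cumulative integral of a positive monitor density so that every cell holds the same mass. A stdlib-Python sketch (the Gaussian monitor function is illustrative):

    ```python
    import math

    def equidistribute(density, a, b, n_cells, n_fine=4000):
        """1-D equidistribution: place grid points so every cell holds the
        same integral of the (positive) density."""
        # Cumulative integral of the density on a fine reference grid.
        h = (b - a) / n_fine
        xs = [a + i * h for i in range(n_fine + 1)]
        cum = [0.0]
        for i in range(n_fine):
            cum.append(cum[-1] + 0.5 * h * (density(xs[i]) + density(xs[i + 1])))
        total = cum[-1]
        # Invert the cumulative by linear interpolation at equal mass levels.
        grid, j = [a], 0
        for k in range(1, n_cells):
            target = total * k / n_cells
            while cum[j + 1] < target:
                j += 1
            t = (target - cum[j]) / (cum[j + 1] - cum[j])
            grid.append(xs[j] + t * h)
        grid.append(b)
        return grid

    # A density peaked at the centre concentrates grid points there.
    grid = equidistribute(lambda x: 1.0 + 50.0 * math.exp(-100.0 * (x - 0.5) ** 2),
                          0.0, 1.0, 20)
    ```

    In 2D and 3D no such direct inversion exists; minimizing grid displacement subject to the same constraint is what leads to the Monge-Ampère equation solved in the paper.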

  4. Level set immersed boundary method for gas-liquid-solid interactions with phase-change

    NASA Astrophysics Data System (ADS)

    Dhruv, Akash; Balaras, Elias; Riaz, Amir; Kim, Jungho

    2017-11-01

We will discuss an approach to simulate the interaction between two-phase flows with phase changes and stationary/moving structures. In our formulation, the Navier-Stokes and heat advection-diffusion equations are solved on a block-structured grid using adaptive mesh refinement (AMR), with sharp jumps in pressure, velocity, and temperature across the interface separating the different phases. The jumps are implemented using a modified Ghost Fluid Method (Lee et al., J. Comput. Physics, 344:381-418, 2017), and the interface is tracked with a level set approach. Phase transition is achieved by calculating the mass flux near the interface and extrapolating it to the rest of the domain using a Hamilton-Jacobi equation. Stationary/moving structures are simulated with an immersed boundary formulation based on moving least squares (Vanella & Balaras, J. Comput. Physics, 228:6617-6628, 2009). A variety of canonical problems involving vaporization, film boiling, and nucleate boiling is presented to validate the method and demonstrate its formal accuracy. The robustness of the solver in complex problems, which are crucial in the efficient design of heat transfer mechanisms for various applications, will also be demonstrated. Work supported by NASA, Grant NNX16AQ77G.

  5. Robust path planning for flexible needle insertion using Markov decision processes.

    PubMed

    Tan, Xiaoyu; Yu, Pengqian; Lim, Kah-Bin; Chui, Chee-Kong

    2018-05-11

The flexible needle has the potential to navigate accurately to a treatment region in the least invasive manner. We propose a new planning method using Markov decision processes (MDPs) for flexible needle navigation that can perform robust path planning and steering under complex tissue-needle interactions. This method enhances the robustness of flexible needle steering from three different perspectives. First, the method considers the problem caused by soft tissue deformation. It then resolves the common needle penetration failure caused by patterns of targets, while the last solution addresses the uncertainty in flexible needle motion due to complex and unpredictable tissue-needle interaction. Computer simulation and phantom experimental results show that the proposed method can perform robust planning and generate a secure control policy for flexible needle steering. Compared with a traditional MDP-based method, the proposed method achieves higher accuracy and a higher probability of success in avoiding obstacles under complicated and uncertain tissue-needle interactions. Future work will involve experiments with biological tissue in vivo. The proposed robust path planning method can securely steer a flexible needle within soft phantom tissues and achieves high adaptability in computer simulation.
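    The MDP machinery underlying such planners can be sketched with plain value iteration; the 1-D "corridor" with a target, an obstacle, and noisy motion below is a toy stand-in for the tissue-needle model, with all states, rewards, and probabilities invented for illustration.

    ```python
    def value_iteration(n_states, actions, transition, reward, gamma=0.95, tol=1e-6):
        """Plain value iteration for a finite MDP (in-place sweeps)."""
        V = [0.0] * n_states
        while True:
            delta = 0.0
            for s in range(n_states):
                best = max(sum(p * (reward(s, a, s2) + gamma * V[s2])
                               for s2, p in transition(s, a)) for a in actions)
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < tol:
                return V

    # Toy 1-D corridor: state 4 is the target, state 2 an obstacle, and
    # motion is uncertain (80% intended move, 20% stay put).
    def transition(s, a):
        s2 = min(4, max(0, s + a))
        return [(s2, 0.8), (s, 0.2)]

    def reward(s, a, s2):
        return 1.0 if s2 == 4 else (-1.0 if s2 == 2 else -0.01)

    V = value_iteration(5, [-1, 1], transition, reward)
    ```

    Encoding motion uncertainty directly in the transition probabilities is what lets the resulting policy stay robust to unpredictable tissue-needle interaction, rather than planning a single nominal path.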

  6. A semi-Lagrangian transport method for kinetic problems with application to dense-to-dilute polydisperse reacting spray flows

    NASA Astrophysics Data System (ADS)

    Doisneau, François; Arienti, Marco; Oefelein, Joseph C.

    2017-01-01

For sprays, as described by a kinetic disperse phase model strongly coupled to the Navier-Stokes equations, the resolution strategy is constrained by accuracy objectives, robustness needs, and the computing architecture. In order to leverage the good properties of the Eulerian formalism, we introduce a deterministic particle-based numerical method to solve transport in physical space, which is simple to adapt to the many types of closures and moment systems. The method is inspired by the semi-Lagrangian schemes developed for gas dynamics. We show how semi-Lagrangian formulations are relevant for a disperse phase far from equilibrium and where the particle-particle coupling barely influences the transport, i.e., when particle pressure is negligible. The particle behavior is indeed close to free streaming. The new method uses the assumption of parcel transport and avoids computing fluxes and their limiters, which makes it robust. Because the resolution method is deterministic, it requires no effort on statistical convergence, noise control, or post-processing. All couplings are performed on data in the form of Eulerian fields, which allows one to use efficient algorithms and to anticipate the computational load. This makes the method both accurate and efficient in the context of parallel computing. After a complete verification of the new transport method on various academic test cases, we demonstrate the overall strategy's ability to solve a strongly-coupled liquid jet with fine spatial resolution and we apply it to the case of high-fidelity Large Eddy Simulation of a dense spray flow. A fuel spray is simulated after atomization at Diesel engine combustion chamber conditions. The large, parallel, strongly coupled computation proves the efficiency of the method for dense, polydisperse, reacting spray flows.
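    The semi-Lagrangian idea the method borrows, tracing each grid node back along the flow and interpolating at the departure point, can be sketched in 1D for a constant advection velocity. This periodic toy grid is a textbook illustration, not the authors' scheme.

    ```python
    def semi_lagrangian_step(f, u, dx, dt):
        """One semi-Lagrangian advection step on a periodic 1-D grid:
        trace each node back along the constant velocity u and interpolate
        linearly at the departure point (no fluxes, no limiters)."""
        n = len(f)
        out = []
        for i in range(n):
            x = (i * dx - u * dt) % (n * dx)   # departure point of node i
            j = int(x / dx)
            t = x / dx - j
            out.append((1.0 - t) * f[j] + t * f[(j + 1) % n])
        return out

    # A unit pulse advected by u*dt = 3 grid cells moves from node 10 to 13.
    f0 = [0.0] * 100
    f0[10] = 1.0
    f1 = semi_lagrangian_step(f0, u=3.0, dx=1.0, dt=1.0)
    ```

    Because each node simply samples the upstream solution, there are no flux limiters to tune, which is the robustness property the abstract highlights.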

  7. A semi-Lagrangian transport method for kinetic problems with application to dense-to-dilute polydisperse reacting spray flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doisneau, François, E-mail: fdoisne@sandia.gov; Arienti, Marco, E-mail: marient@sandia.gov; Oefelein, Joseph C., E-mail: oefelei@sandia.gov

    For sprays, as described by a kinetic disperse phase model strongly coupled to the Navier–Stokes equations, the resolution strategy is constrained by accuracy objectives, robustness needs, and the computing architecture. In order to leverage the good properties of the Eulerian formalism, we introduce a deterministic particle-based numerical method to solve transport in physical space, which is simple to adapt to the many types of closures and moment systems. The method is inspired by the semi-Lagrangian schemes developed for gas dynamics. We show how semi-Lagrangian formulations are relevant for a disperse phase far from equilibrium and where the particle–particle coupling barely influences the transport, i.e., when particle pressure is negligible. The particle behavior is indeed close to free streaming. The new method uses the assumption of parcel transport and avoids computing fluxes and their limiters, which makes it robust. Because the resolution method is deterministic, it requires no effort on statistical convergence, noise control, or post-processing. All couplings are performed on data in the form of Eulerian fields, which allows one to use efficient algorithms and to anticipate the computational load. This makes the method both accurate and efficient in the context of parallel computing. After a complete verification of the new transport method on various academic test cases, we demonstrate the overall strategy's ability to solve a strongly-coupled liquid jet with fine spatial resolution and we apply it to the case of high-fidelity Large Eddy Simulation of a dense spray flow. A fuel spray is simulated after atomization at Diesel engine combustion chamber conditions. The large, parallel, strongly coupled computation proves the efficiency of the method for dense, polydisperse, reacting spray flows.

  8. Efficient Variable Selection Method for Exposure Variables on Binary Data

    NASA Astrophysics Data System (ADS)

    Ohno, Manabu; Tarumi, Tomoyuki

In this paper, we propose a new method for selecting "robust" exposure variables, where "robust" means that the same variables are selected from both the original data and perturbed data. Few studies address effective methods for this selection. The problem of selecting exposure variables is nearly the same as that of extracting correlation rules without the robustness requirement. [Brin 97] suggested that correlation rules can be extracted efficiently from binary data using the chi-squared statistic of a contingency table, which has a monotone property. The chi-squared value itself, however, is not monotone, so the test tends to reject independence as the dimension increases even when the variable set is completely independent, and the method is therefore not usable for selecting robust exposure variables. To select robust independent variables, we assume an anti-monotone property for independence and apply the apriori algorithm. The apriori algorithm, one of the algorithms for finding association rules in market-basket data, exploits the anti-monotone property of the support measure defined for association rules. Independence does not strictly satisfy the anti-monotone property on the AIC of the independence probability model, but the tendency toward anti-monotonicity is strong; variables selected under this assumption therefore exhibit robustness. Our method judges whether a given variable is an exposure variable for an independent variable by comparing AIC values. Numerical experiments show that our method can select robust exposure variables efficiently and precisely.
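    The combination of a dependence test with a perturbation-based robustness check can be sketched as follows. The sketch substitutes a plain chi-squared test for the paper's AIC-based criterion, and the data generator, flip rate, and threshold are all invented for illustration.

    ```python
    import random

    def chi2_2x2(pairs):
        """Pearson chi-squared of the 2x2 contingency table of binary (x, y)."""
        n = len(pairs)
        counts = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
        for x, y in pairs:
            counts[(x, y)] += 1
        chi2 = 0.0
        for i in (0, 1):
            for j in (0, 1):
                row = sum(counts[(i, jj)] for jj in (0, 1))
                col = sum(counts[(ii, j)] for ii in (0, 1))
                e = row * col / n
                if e > 0:
                    chi2 += (counts[(i, j)] - e) ** 2 / e
        return chi2

    def robust_select(data, outcome_idx, var_idxs, thresh=3.84,
                      n_perturb=20, flip=0.05, seed=0):
        """Keep a variable only if it passes the test on the original data
        AND on every randomly perturbed copy (the 'robustness' criterion)."""
        rng = random.Random(seed)
        selected = []
        for v in var_idxs:
            pairs = [(row[v], row[outcome_idx]) for row in data]
            ok = chi2_2x2(pairs) > thresh
            for _ in range(n_perturb):
                if not ok:
                    break
                noisy = [(x ^ (rng.random() < flip), y) for x, y in pairs]
                ok = chi2_2x2(noisy) > thresh
            if ok:
                selected.append(v)
        return selected

    # Invented binary data: variable 0 strongly tracks the outcome, variable 1 is noise.
    rng = random.Random(7)
    rows = []
    for _ in range(500):
        y = rng.randint(0, 1)
        rows.append((y ^ (rng.random() < 0.1), rng.randint(0, 1), y))
    picked = robust_select(rows, outcome_idx=2, var_idxs=[0, 1])
    ```

    A genuinely associated variable survives the perturbed copies, which is the "same variable selected from original and perturbed data" notion of robustness used in the abstract.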

  9. Multi-time Scale Joint Scheduling Method Considering the Grid of Renewable Energy

    NASA Astrophysics Data System (ADS)

    Zhijun, E.; Wang, Weichen; Cao, Jin; Wang, Xin; Kong, Xiangyu; Quan, Shuping

    2018-01-01

Prediction errors in renewable generation, such as wind and solar power, complicate power system dispatch. In this paper, a multi-time-scale robust scheduling method is proposed to address this problem. It reduces the impact of clean-energy prediction bias on the power grid by operating over multiple time scales (day-ahead, intraday, real time) and coordinating the dispatched output of various power sources such as hydropower, thermal power, wind power, and gas power. The method adopts robust scheduling to ensure the robustness of the resulting schedule. By costing wind curtailment and load shedding, it converts robustness into a risk cost and selects the uncertainty set that minimizes the total integrated cost. The validity of the method is verified by simulation.

  10. Online two-stage association method for robust multiple people tracking

    NASA Astrophysics Data System (ADS)

    Lv, Jingqin; Fang, Jiangxiong; Yang, Jie

    2011-07-01

Robust multiple people tracking is very important for many applications. It is a challenging problem due to occlusion and interaction in crowded scenarios. This paper proposes an online two-stage association method for robust multiple people tracking. In the first stage, short tracklets generated by linking people detection responses are grown longer by particle-filter-based tracking, with detection confidence embedded in the observation model, and an examination scheme runs at each frame to check the reliability of tracking. In the second stage, multiple people tracking is achieved by linking tracklets to generate trajectories. An online tracklet association method is proposed to solve the linking problem, which allows applications in time-critical scenarios. The method is evaluated on the popular CAVIAR dataset. The experimental results show that our two-stage method is robust.
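    The tracklet-linking stage can be illustrated with a greedy nearest-neighbor association on 1-D tracklet endpoints; the real method solves a richer association problem with appearance and motion cues, so this is only a structural sketch with invented positions.

    ```python
    def link_tracklets(ends, starts, max_gap):
        """Greedy association: repeatedly link the closest (end, start)
        pair within max_gap, each tracklet used at most once."""
        pairs, used_e, used_s = [], set(), set()
        candidates = sorted((abs(e - s), i, j)
                            for i, e in enumerate(ends)
                            for j, s in enumerate(starts))
        for d, i, j in candidates:
            if d <= max_gap and i not in used_e and j not in used_s:
                pairs.append((i, j))
                used_e.add(i)
                used_s.add(j)
        return pairs

    # Toy 1-D positions: two tracklet ends, three candidate tracklet starts.
    pairs = link_tracklets([10.0, 50.0], [11.0, 49.0, 200.0], max_gap=5.0)
    ```

    Processing candidate links in order of increasing cost and marking tracklets as used keeps the association one-to-one, which is the core requirement of any tracklet linker.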

  11. Holistic metrology qualification extension and its application to characterize overlay targets with asymmetric effects

    NASA Astrophysics Data System (ADS)

    Dos Santos Ferreira, Olavio; Sadat Gousheh, Reza; Visser, Bart; Lie, Kenrick; Teuwen, Rachel; Izikson, Pavel; Grzela, Grzegorz; Mokaberi, Babak; Zhou, Steve; Smith, Justin; Husain, Danish; Mandoy, Ram S.; Olvera, Raul

    2018-03-01

The ever-increasing need for tighter on-product overlay (OPO), as well as enhanced accuracy in overlay metrology and methodology, is driving the semiconductor industry's technologists to innovate new approaches to OPO measurements. In High Volume Manufacturing (HVM) fabs, it is often critical to strive for both accuracy and robustness. Robustness, in particular, can be challenging in metrology since overlay targets can be impacted by the proximity of other structures next to the overlay target (asymmetric effects), as well as by symmetric stack changes such as photoresist height variations. Both symmetric and asymmetric contributors have an impact on robustness. Furthermore, tweaking or optimizing wafer processing parameters for maximum yield may have an adverse effect on physical target integrity. As a result, measuring and monitoring physical changes or process abnormalities/artefacts in terms of new Key Performance Indicators (KPIs) is crucial for the end goal of minimizing true in-die overlay of the integrated circuits (ICs). IC manufacturing fabs often relied on CD-SEM in the past to capture true in-die overlay. Due to the destructive and intrusive nature of CD-SEMs on certain materials, it is desirable to characterize asymmetry effects for overlay targets via inline KPIs utilizing YieldStar (YS) metrology tools. These KPIs can also be integrated as part of μDBO target evaluation and selection for the final recipe flow. In this publication, the Holistic Metrology Qualification (HMQ) flow was extended to account for process-induced (asymmetric) effects such as Grating Imbalance (GI) and Bottom Grating Asymmetry (BGA). Local GI typically contributes to the intrafield OPO, whereas BGA typically impacts the interfield OPO, predominantly at the wafer edge. Stack height variations highly impact overlay metrology accuracy, in particular in the case of a multi-layer Litho-Etch Litho-Etch (LELE) overlay control scheme. 
Introducing a GI-impact-on-overlay (in nm) KPI check quantifies the grating imbalance impact on overlay, whereas optimizing for accuracy using self-reference captures the bottom grating asymmetry effect. Measuring BGA after each process step before exposure of the top grating helps to identify which specific step introduces the asymmetry in the bottom grating. By applying this set of KPIs to a BEOL LELE overlay scheme, we can enhance the robustness of recipe and target selection. Furthermore, these KPIs can be utilized to highlight process and equipment abnormalities. In this work, we also quantified OPO results with a self-contained methodology called the Triangle Method. This method can be utilized for LELE layers with a common target and reference. This allows validating general μDBO accuracy, hence reducing the need for CD-SEM verification.

  12. The 32nd CDC: Robust stabilizer synthesis for interval plants using Nevanlinna-Pick theory

    NASA Technical Reports Server (NTRS)

    Bhattacharya, Saikat; Keel, L. H.; Bhattacharyya, S. P.

    1989-01-01

The synthesis of robustly stabilizing compensators for interval plants, i.e., plants whose parameters vary within prescribed ranges, is discussed. Well-known H∞ methods are used to establish robust stabilizability conditions for a family of plants and also to synthesize controllers that stabilize the whole family. Though conservative, these methods give a very simple way to come up with a family of robust stabilizers for an interval plant.

  13. Robust time and frequency domain estimation methods in adaptive control

    NASA Technical Reports Server (NTRS)

    Lamaire, Richard Orville

    1987-01-01

A robust identification method was developed for use in an adaptive control system. This type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well-developed field of time-domain parameter estimation. In the second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
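    The frequency-domain estimation step rests on weighted least squares; a minimal closed-form version for a straight-line fit, weighting down the noisier frequency points, can be sketched as follows. The integrator-like frequency-response data are invented for illustration.

    ```python
    import math
    import random

    def weighted_lsq(xs, ys, ws):
        """Closed-form weighted least squares for the line y ~ a + b*x."""
        W = sum(ws)
        mx = sum(w * x for w, x in zip(ws, xs)) / W
        my = sum(w * y for w, y in zip(ws, ys)) / W
        b = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
        return my - b * mx, b

    # Invented data: log-magnitude of an integrator-like response (slope -1
    # per decade) with noise growing at high frequency; weights are the
    # inverse noise variances.
    rng = random.Random(0)
    freqs = [0.1 * 2 ** k for k in range(10)]
    xs = [math.log10(f) for f in freqs]
    ys = [-x + rng.gauss(0.0, 0.01 * (i + 1)) for i, x in enumerate(xs)]
    ws = [1.0 / (0.01 * (i + 1)) ** 2 for i in range(10)]
    a, b = weighted_lsq(xs, ys, ws)
    ```

    Choosing the weights from a frequency-domain noise bound is the same mechanism that lets the robust estimator discount frequency ranges dominated by disturbance or unmodeled dynamics.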

  14. Motor potential profile and a robust method for extracting it from time series of motor positions.

    PubMed

    Wang, Hongyun

    2006-10-21

    Molecular motors are small, and, as a result, motor operation is dominated by high-viscous friction and large thermal fluctuations from the surrounding fluid environment. The small size has hindered, in many ways, the studies of physical mechanisms of molecular motors. For a macroscopic motor, it is possible to observe/record experimentally the internal operation details of the motor. This is not yet possible for molecular motors. The chemical reaction in a molecular motor has many occupancy states, each having a different effect on the motor motion. The overall effect of the chemical reaction on the motor motion can be characterized by the motor potential profile. The potential profile reveals how the motor force changes with position in a motor step, which may lead to insights into how the chemical reaction is coupled to force generation. In this article, we propose a mathematical formulation and a robust method for constructing motor potential profiles from time series of motor positions measured in single molecule experiments. Numerical examples based on simulated data are shown to demonstrate the method. Interestingly, it is the small size of molecular motors (negligible inertia) that makes it possible to recover the potential profile from time series of motor positions. For a macroscopic motor, the variation of driving force within a cycle is smoothed out by the large inertia.
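    The reconstruction exploits the overdamped (negligible-inertia) limit highlighted in the abstract. In the simplest equilibrium special case, the stationary position histogram reflects the potential through the Boltzmann factor, U(x) = -kT ln p(x). The sketch below applies this inversion to simulated Langevin data; it is a simplified stand-in for the paper's method, which treats the chemically driven, non-equilibrium case.

    ```python
    import math
    import random

    def boltzmann_potential(xs, n_bins=40, kT=1.0):
        """Boltzmann inversion: estimate U(x) = -kT*ln p(x) (up to an
        additive constant) from the histogram of a position time series."""
        lo, hi = min(xs), max(xs)
        w = (hi - lo) / n_bins
        hist = [0] * n_bins
        for x in xs:
            hist[min(n_bins - 1, int((x - lo) / w))] += 1
        centers, U = [], []
        for k, c in enumerate(hist):
            if c > 0:
                centers.append(lo + (k + 0.5) * w)
                U.append(-kT * math.log(c / (len(xs) * w)))
        U0 = min(U)
        return centers, [u - U0 for u in U]

    # Overdamped Langevin walk in U(x) = x^2 with kT = 1 (no inertia term,
    # as appropriate for a molecular-scale object).
    rng = random.Random(3)
    x, dt, xs = 0.0, 1e-3, []
    for _ in range(200000):
        x += -2.0 * x * dt + math.sqrt(2.0 * dt) * rng.gauss(0.0, 1.0)
        xs.append(x)
    centers, U = boltzmann_potential(xs)
    ```

    The recovered curve tracks the quadratic input potential near its minimum, illustrating the point that the negligible inertia of small systems is exactly what makes the potential recoverable from position statistics.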

  15. Decreased external skeletal robustness in schoolchildren--a global trend? Ten year comparison of Russian and German data.

    PubMed

    Rietsch, Katrin; Godina, Elena; Scheffler, Christiane

    2013-01-01

Obesity and reduced physical activity are global developments. Physical activity affects external skeletal robustness, which has decreased in German children. It was assumed that this negative trend of decreased external skeletal robustness can be found in other countries; therefore, anthropometric data of Russian and German children from the years 2000 and 2010 were compared. Russian (2000/2010 n = 1023/268) and German (2000/2010 n = 2103/1750) children aged 6-10 years were investigated. Height, BMI and external skeletal robustness (Frame-Index) were examined and compared across the years and countries. Statistical analysis was performed with the Mann-Whitney test. Comparison of 2010 and 2000: in Russian children BMI was significantly higher, and boys were significantly taller and exhibited a decreased Frame-Index (p = .002) in 2010. German boys showed a significantly higher BMI in 2010, and in both sexes Frame-Index (p = .001) was reduced in 2010. Comparison of Russian and German children in 2000: BMI, height and Frame-Index differed between Russian and German children. German children were significantly taller but exhibited a lower Frame-Index (p<.001); German girls also showed a significantly higher BMI. Comparison of Russian and German children in 2010: BMI and Frame-Index differed. Russian children displayed a higher Frame-Index (p<.001) compared with Germans. In Russian children BMI has increased in recent years. Frame-Index is still higher in Russian children compared with Germans; in Russian boys, however, Frame-Index is reduced. This trend and physical activity should be observed in the future.

  16. A finite-element toolbox for the stationary Gross-Pitaevskii equation with rotation

    NASA Astrophysics Data System (ADS)

    Vergez, Guillaume; Danaila, Ionut; Auliac, Sylvain; Hecht, Frédéric

    2016-12-01

We present a new numerical system using classical finite elements with mesh adaptivity for computing stationary solutions of the Gross-Pitaevskii equation. The programs are written as a toolbox for FreeFem++ (www.freefem.org), a free finite-element software available for all existing operating systems. This offers the advantage of hiding all technical issues related to the implementation of the finite element method, making it easy to code various numerical algorithms. Two robust and optimized numerical methods were implemented to minimize the Gross-Pitaevskii energy: a steepest descent method based on Sobolev gradients and a minimization algorithm based on the state-of-the-art optimization library Ipopt. For both methods, mesh adaptivity strategies are used to reduce the computational time and increase the local spatial accuracy when vortices are present. Different run cases are made available for 2D and 3D configurations of Bose-Einstein condensates in rotation. An optional graphical user interface is also provided, making it easy to run predefined cases or cases with user-defined parameter files. We also provide several post-processing tools (like the identification of quantized vortices) that could help in extracting physical features from the simulations. The toolbox is extremely versatile and can be easily adapted to deal with different physical models.

  17. The physical basis and future of radiation therapy.

    PubMed

    Bortfeld, T; Jeraj, R

    2011-06-01

    The remarkable progress in radiation therapy over the last century has been largely due to our ability to more effectively focus and deliver radiation to the tumour target volume. Physics discoveries and technology inventions have been an important driving force behind this progress. However, there is still plenty of room left for future improvements through physics, for example image guidance, four-dimensional motion management and particle therapy, as well as increased efficiency of more compact and cheaper technologies. Bigger challenges lie ahead of physicists in radiation therapy beyond the dose localisation problem, for example in the areas of biological target definition, improved modelling for normal tissues and tumours, advanced multicriteria and robust optimisation, and continuous incorporation of advanced technologies such as molecular imaging. The success of physics in radiation therapy has been based on the continued "fuelling" of the field with new discoveries and inventions from physics research. A key to the success has been the application of the rigorous scientific method. In spite of the importance of physics research for radiation therapy, too few physicists are currently involved in cutting-edge research. The increased emphasis on more "professionalism" in medical physics will tip the situation even more off balance. To prevent this from happening, we argue that medical physics needs more research positions, and more and better academic programmes. Only with more emphasis on medical physics research will the future of radiation therapy and other physics-related medical specialties look as bright as the past, and medical physics will maintain its status as one of the most exciting fields of applied physics.

  18. The physical basis and future of radiation therapy

    PubMed Central

    Bortfeld, T; Jeraj, R

    2011-01-01

    The remarkable progress in radiation therapy over the last century has been largely due to our ability to more effectively focus and deliver radiation to the tumour target volume. Physics discoveries and technology inventions have been an important driving force behind this progress. However, there is still plenty of room left for future improvements through physics, for example image guidance, four-dimensional motion management and particle therapy, as well as increased efficiency of more compact and cheaper technologies. Bigger challenges lie ahead of physicists in radiation therapy beyond the dose localisation problem, for example in the areas of biological target definition, improved modelling for normal tissues and tumours, advanced multicriteria and robust optimisation, and continuous incorporation of advanced technologies such as molecular imaging. The success of physics in radiation therapy has been based on the continued “fuelling” of the field with new discoveries and inventions from physics research. A key to the success has been the application of the rigorous scientific method. In spite of the importance of physics research for radiation therapy, too few physicists are currently involved in cutting-edge research. The increased emphasis on more “professionalism” in medical physics will tip the situation even more off balance. To prevent this from happening, we argue that medical physics needs more research positions, and more and better academic programmes. Only with more emphasis on medical physics research will the future of radiation therapy and other physics-related medical specialties look as bright as the past, and medical physics will maintain its status as one of the most exciting fields of applied physics. PMID:21606068

  19. Racial and Gender Discrimination, Early Life Factors, and Chronic Physical Health Conditions in Midlife

    PubMed Central

    McDonald, Jasmine A.; Terry, Mary Beth; Tehranifar, Parisa

    2013-01-01

    Purpose Most studies of perceived discrimination have been cross-sectional and focused primarily on mental rather than physical health conditions. We examined the associations of perceived racial and gender discrimination reported in adulthood with early life factors and self-reported physician diagnoses of chronic physical health conditions. Methods We used data from a racially diverse birth cohort of U.S. women (N=168, average age=41 years) with prospectively collected early life data (e.g., parental socioeconomic factors) and adult reported data on perceived discrimination, physical health conditions, and relevant risk factors. We performed modified Poisson regression with robust variance due to the high prevalence of the outcomes. Results Fifty percent of participants reported racial and 39% reported gender discrimination. Early life factors did not have strong associations with perceived discrimination. In adjusted regression models, participants reporting at least three experiences of gender or racial discrimination had a 38% increased risk of having at least one physical health condition (RR=1.38, 95% CI: 1.01-1.87). Using standardized regression coefficients, the magnitude of the association with physical health conditions was larger for perceived discrimination than for being overweight or obese. Conclusion Our results suggest a substantial chronic disease burden associated with perceived discrimination, which may exceed the impact of established risk factors for poor physical health. PMID:24345610
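
The modified Poisson approach referred to here (a Poisson GLM with a robust sandwich variance, used to estimate relative risks for common binary outcomes) can be sketched in a few lines of numpy; this is an illustrative re-implementation, not the authors' analysis code.

```python
import numpy as np

def modified_poisson(X, y, iters=50):
    """Poisson regression fit by IRLS, with a robust (sandwich) covariance
    so that binary outcomes yield valid relative-risk standard errors."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        mu = np.exp(X @ beta)
        W = mu                               # Poisson working weights
        z = X @ beta + (y - mu)/mu           # working response
        beta = np.linalg.solve(X.T @ (W[:, None]*X), X.T @ (W*z))
    mu = np.exp(X @ beta)
    bread = np.linalg.inv(X.T @ (mu[:, None]*X))
    meat = X.T @ (((y - mu)**2)[:, None]*X)
    cov = bread @ meat @ bread               # sandwich estimator
    return beta, np.sqrt(np.diag(cov))
```

On simulated data with a log relative risk of 0.5 between exposure groups, the fit recovers the coefficient, and exp(beta) gives the relative risk directly.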

  20. Use of machine learning methods to reduce predictive error of groundwater models.

    PubMed

    Xu, Tianfang; Valocchi, Albert J; Choi, Jaesik; Amir, Eyal

    2014-01-01

    Quantitative analyses of groundwater flow and transport typically rely on a physically-based model, which is inherently subject to error. Errors in model structure, parameters, and data lead to both random and systematic error, even in the output of a calibrated model. We develop complementary data-driven models (DDMs) to reduce the predictive error of physically-based groundwater models. Two machine learning techniques, instance-based weighting and support vector regression, are used to build the DDMs. This approach is illustrated using two real-world case studies of the Republican River Compact Administration model and the Spokane Valley-Rathdrum Prairie model. The two groundwater models have different hydrogeologic settings, parameterization, and calibration methods. In the first case study, cluster analysis is introduced for data preprocessing to make the DDMs more robust and computationally efficient. The DDMs reduce the root-mean-square error (RMSE) of the temporal, spatial, and spatiotemporal prediction of piezometric head of the groundwater model by 82%, 60%, and 48%, respectively. In the second case study, the DDMs reduce the RMSE of the temporal prediction of piezometric head of the groundwater model by 77%. It is further demonstrated that the effectiveness of the DDMs depends on the existence and extent of structure in the error of the physically-based model. © 2013, National GroundWater Association.
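
One of the two techniques named above, instance-based weighting, amounts to predicting the physical model's error at a new input as a similarity-weighted average of previously observed errors. The toy sketch below (invented names and data, not the paper's models) shows the RMSE of a deliberately biased "simulator" dropping once its learned error is subtracted.

```python
import numpy as np

def ibw_correct(X_train, err_train, X_new, h=0.5):
    """Instance-based weighting: predict the physical model's error at new
    points as a Gaussian-kernel weighted average of known errors."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :])**2).sum(-1)
    w = np.exp(-d2/(2*h*h))
    return (w @ err_train)/w.sum(axis=1)

# Toy demo: the 'physical model' has a smooth, structured bias we can learn.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(300, 1))
truth = np.sin(2*X[:, 0])
model = np.sin(2*X[:, 0]) - 0.3*X[:, 0]      # biased simulator output
err = truth - model                           # structured error (= 0.3*x)
Xt, Xv, et, ev = X[:200], X[200:], err[:200], err[200:]
pred_err = ibw_correct(Xt, et, Xv, h=0.2)
rmse_before = np.sqrt(np.mean(ev**2))
rmse_after = np.sqrt(np.mean((ev - pred_err)**2))
```

As the abstract notes, this only works because the simulator's error has structure; a purely random error would not be learnable.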

  1. Fabrication of elastomeric silk fibers.

    PubMed

    Bradner, Sarah A; Partlow, Benjamin P; Cebe, Peggy; Omenetto, Fiorenzo G; Kaplan, David L

    2017-09-01

    Methods to generate fibers from hydrogels, with control over mechanical properties, fiber diameter, and crystallinity, while retaining cytocompatibility and degradability, would expand options for biomaterials. Here, we exploited features of silk fibroin protein for the formation of tunable silk hydrogel fibers. The biological, chemical, and morphological features inherent to silk were combined with elastomeric properties gained through enzymatic crosslinking of the protein. Postprocessing via methanol treatment and autoclaving provided tunable control of fiber features. Mechanical, optical, and chemical analyses demonstrated control of fiber properties by exploiting the physical cross-links and by generating double-network hydrogels consisting of chemical and physical cross-links. Structural and chemical analyses revealed crystallinity from 30 to 50%, modulus from 0.5 to 4 MPa, and ultimate strength from 1 to 5 MPa, depending on the processing method. Fabrication and postprocessing combined provided fibers with extensibility from 100 to 400% ultimate strain. Fibers strained to 100% exhibited fourth-order birefringence, revealing macroscopic orientation driven by chain mobility. The physical cross-links were influenced in part by the drying rate of fabricated materials, where bound water, packing density, and microstructural homogeneity influenced cross-linking efficiency. The ability to generate robust and versatile hydrogel microfibers is desirable for bottom-up assembly of biological tissues and for broader biomaterial applications. © 2017 Wiley Periodicals, Inc.

  2. Speed Biases With Real-Life Video Clips

    PubMed Central

    Rossi, Federica; Montanaro, Elisa; de’Sperati, Claudio

    2018-01-01

    We live almost literally immersed in an artificial visual world, especially motion pictures. In this exploratory study, we asked whether the best speed for reproducing a video is its original, shooting speed. By using adjustment and double staircase methods, we examined speed biases in viewing real-life video clips in three experiments, and assessed their robustness by manipulating visual and auditory factors. With the tested stimuli (short clips of human motion, mixed human-physical motion, physical motion and ego-motion), speed underestimation was the rule rather than the exception, although it depended largely on clip content, ranging on average from 2% (ego-motion) to 32% (physical motion). Manipulating display size or adding arbitrary soundtracks did not modify these speed biases. Estimated speed was not correlated with estimated duration of these same video clips. These results indicate that the sense of speed for real-life video clips can be systematically biased, independently of the impression of elapsed time. Measuring subjective visual tempo may integrate traditional methods that assess time perception: speed biases may be exploited to develop a simple, objective test of reality flow, to be used for example in clinical and developmental contexts. From the perspective of video media, measuring speed biases may help to optimize video reproduction speed and validate “natural” video compression techniques based on sub-threshold temporal squeezing. PMID:29615875

  3. Speed Biases With Real-Life Video Clips.

    PubMed

    Rossi, Federica; Montanaro, Elisa; de'Sperati, Claudio

    2018-01-01

    We live almost literally immersed in an artificial visual world, especially motion pictures. In this exploratory study, we asked whether the best speed for reproducing a video is its original, shooting speed. By using adjustment and double staircase methods, we examined speed biases in viewing real-life video clips in three experiments, and assessed their robustness by manipulating visual and auditory factors. With the tested stimuli (short clips of human motion, mixed human-physical motion, physical motion and ego-motion), speed underestimation was the rule rather than the exception, although it depended largely on clip content, ranging on average from 2% (ego-motion) to 32% (physical motion). Manipulating display size or adding arbitrary soundtracks did not modify these speed biases. Estimated speed was not correlated with estimated duration of these same video clips. These results indicate that the sense of speed for real-life video clips can be systematically biased, independently of the impression of elapsed time. Measuring subjective visual tempo may integrate traditional methods that assess time perception: speed biases may be exploited to develop a simple, objective test of reality flow, to be used for example in clinical and developmental contexts. From the perspective of video media, measuring speed biases may help to optimize video reproduction speed and validate "natural" video compression techniques based on sub-threshold temporal squeezing.
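
The double-staircase procedure used in these experiments can be illustrated with a minimal simulated observer: a 1-up/1-down staircase converges to the speed at which "faster" responses occur 50% of the time, i.e. the subjective equality point whose deviation from 1.0 is the speed bias. Everything below (observer model, parameter values) is a hypothetical sketch, not the study's code.

```python
import math
import random

def staircase(pse=1.1, start=1.5, step=0.05, n_reversals=20, seed=3):
    """1-up/1-down adaptive staircase run against a logistic simulated
    observer. Returns the mean of the later reversal levels, which
    estimates the point of subjective equality (PSE)."""
    rng = random.Random(seed)
    level, direction, reversals = start, -1, []
    while len(reversals) < n_reversals:
        # Simulated observer: probability of judging 'faster' rises
        # logistically as the level exceeds the PSE.
        p_faster = 1.0/(1.0 + math.exp(-(level - pse)/0.05))
        new_dir = -1 if rng.random() < p_faster else +1  # 'faster' -> step down
        if new_dir != direction:
            reversals.append(level)                      # record a reversal
            direction = new_dir
        level += new_dir*step
    return sum(reversals[4:])/len(reversals[4:])         # discard warm-up
```

Averaging only the later reversals discards the initial descent from the starting level, a common convention in adaptive psychophysics.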

  4. Aerodynamic design applying automatic differentiation and using robust variable fidelity optimization

    NASA Astrophysics Data System (ADS)

    Takemiya, Tetsushi

    In modern aerospace engineering, the physics-based computational design method is becoming more important, because it is more efficient than experiments and more suitable for designing new types of aircraft (e.g., unmanned aerial vehicles or supersonic business jets) than the conventional design method, which relies heavily on historical data. To enhance the reliability of the physics-based computational design method, researchers have made tremendous efforts to improve the fidelity of models. However, high-fidelity models require longer computational time, so the advantage of efficiency is partially lost. This problem has been overcome with the development of variable fidelity optimization (VFO). In VFO, models of different fidelity are employed simultaneously to improve the speed and accuracy of convergence in an optimization process. Among the various types of VFO methods, one of the most promising is the approximation management framework (AMF). In the AMF, objective and constraint functions of a low-fidelity model are scaled at a design point so that the scaled functions, which are referred to as "surrogate functions," match those of a high-fidelity model. Since the scaling functions and the low-fidelity model together constitute the surrogate functions, evaluating the surrogate functions is faster than evaluating the high-fidelity model. Therefore, in the optimization process, in which gradient-based optimization is implemented and thus many function calls are required, the surrogate functions are used instead of the high-fidelity model to obtain a new design point. The best feature of the AMF is that it may converge to a local optimum of the high-fidelity model in much less computational time than optimization with the high-fidelity model alone. 
However, through literature surveys and implementations of the AMF, the author found that (1) the AMF is very vulnerable when the computational analysis models have numerical noise, which is very common in high-fidelity models, and that (2) the AMF terminates optimization erroneously when the optimization problems have constraints. The first problem is due to inaccuracy in computing derivatives in the AMF, and the second problem is due to erroneous treatment of the trust region ratio, which sets the size of the domain for an optimization in the AMF. In order to solve the first problem of the AMF, the automatic differentiation (AD) technique, which reads the code of analysis models and automatically generates new derivative code based on mathematical rules, is applied. If derivatives are computed with the generated derivative code, they are analytical, and the required computational time is independent of the number of design variables, which is very advantageous for realistic aerospace engineering problems. However, if analysis models implement iterative computations such as computational fluid dynamics (CFD), which solves systems of partial differential equations iteratively, computing derivatives through AD requires a massive amount of memory. The author resolved this deficiency by modifying the AD approach and developing a more efficient implementation for CFD, and successfully applied AD to general CFD software. In order to solve the second problem of the AMF, the governing equation of the trust region ratio, which is very strict against violations of constraints, is modified so that it accepts violations of constraints within some tolerance. By accepting violations of constraints during the optimization process, the AMF can continue optimization without terminating prematurely and eventually find the true optimum design point. 
With these modifications, the AMF is referred to as the "Robust AMF," and it is applied to airfoil and wing aerodynamic design problems using Euler CFD software. The former problem has 21 design variables, and the latter 64. In both problems, derivatives computed with the proposed AD method are first compared with those computed with the finite-difference (FD) method; then, the Robust AMF is run alongside the sequential quadratic programming (SQP) optimization method using only high-fidelity models. The proposed AD method computes derivatives more accurately and faster than the FD method, and the Robust AMF successfully optimizes the shapes of the airfoil and the wing in a much shorter time than SQP with only high-fidelity models. These results clearly show the effectiveness of the Robust AMF. Finally, the feasibility of reducing the computational time for calculating derivatives and the necessity of an AMF whose optimum design point always remains in the feasible region are discussed as future work.
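
The core idea of automatic differentiation (exact derivatives propagated by mechanical application of calculus rules, rather than finite-difference steps) can be shown in a few lines with forward-mode dual numbers. This is a generic illustration; the thesis applies source-transformation AD to CFD code, which is far more involved.

```python
class Dual:
    """Forward-mode AD value: carries f and df/dx through arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._lift(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = self._lift(o)
        # product rule: (fg)' = f'g + fg'
        return Dual(self.val*o.val, self.dot*o.val + self.val*o.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate df/dx exactly by seeding the dual part with 1."""
    return f(Dual(x, 1.0)).dot
```

For example, derivative(lambda x: x*x*x + 2*x, 2.0) returns exactly 14.0 (= 3·2² + 2), with no step-size tuning and no truncation error, which is the accuracy advantage over finite differences noted above.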

  5. Robust adaptive multichannel SAR processing based on covariance matrix reconstruction

    NASA Astrophysics Data System (ADS)

    Tan, Zhen-ya; He, Feng

    2018-04-01

    With the combination of digital beamforming (DBF) processing, multichannel synthetic aperture radar (SAR) systems with multiple azimuth channels show great promise for high-resolution and wide-swath imaging, whereas conventional processing methods do not take the nonuniformity of the scattering coefficient into consideration. This paper presents a robust adaptive multichannel SAR processing method which first utilizes the Capon spatial spectrum estimator to obtain the spatial spectrum distribution over all ambiguous directions, and then reconstructs the interference-plus-noise covariance matrix from its definition to acquire the multichannel SAR processing filter. This novel method improves processing performance under nonuniform scattering coefficients and is robust against array errors. Experiments with real measured data demonstrate the effectiveness and robustness of the proposed method.
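
The Capon estimator at the heart of the method scans steering vectors against the inverse sample covariance, P(θ) = 1/(aᴴR⁻¹a). Below is a minimal narrowband uniform-linear-array sketch with illustrative parameters, unrelated to the multichannel SAR geometry of the paper.

```python
import numpy as np

def capon_spectrum(snapshots, angles, d=0.5):
    """Capon (MVDR) spatial spectrum for a uniform linear array.
    snapshots: (n_elements, n_snapshots) complex; d = spacing in wavelengths."""
    n = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    R += 1e-6*np.trace(R).real/n*np.eye(n)            # diagonal loading
    Rinv = np.linalg.inv(R)
    P = []
    for th in angles:
        a = np.exp(2j*np.pi*d*np.arange(n)*np.sin(th))  # steering vector
        P.append(1.0/np.real(a.conj() @ Rinv @ a))
    return np.array(P)

# Toy scene: one source at +20 degrees plus noise; spectrum should peak there.
rng = np.random.default_rng(0)
n, m, th0 = 8, 200, np.deg2rad(20.0)
a0 = np.exp(2j*np.pi*0.5*np.arange(n)*np.sin(th0))
sig = rng.standard_normal(m) + 1j*rng.standard_normal(m)
X = np.outer(a0, sig) + 0.1*(rng.standard_normal((n, m))
                             + 1j*rng.standard_normal((n, m)))
angles = np.deg2rad(np.linspace(-90, 90, 361))
P = capon_spectrum(X, angles)
peak_deg = np.rad2deg(angles[np.argmax(P)])
```

The small diagonal loading keeps the sample covariance invertible, a common practical safeguard when snapshots are limited.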

  6. Low cost 3D-printing used in an undergraduate project: an integrating sphere for measurement of photoluminescence quantum yield

    NASA Astrophysics Data System (ADS)

    Tomes, John J.; Finlayson, Chris E.

    2016-09-01

    We report upon the exploitation of the latest 3D printing technologies to provide low-cost instrumentation solutions, for use in an undergraduate level final-year project. The project addresses prescient research issues in optoelectronics, which would otherwise be inaccessible to such undergraduate student projects. The experimental use of an integrating sphere in conjunction with a desktop spectrometer presents opportunities to use easily handled, low-cost materials as a means to illustrate many areas of physics such as spectroscopy, lasers, optics, simple circuits, black-body radiation and data gathering. Presented here is a third-year undergraduate physics project which developed a low-cost (£25) method to manufacture an experimentally accurate integrating sphere by 3D printing. Details are given of both a homemade internal reflectance coating formulated from readily available materials, and a robust instrument calibration method using a tungsten bulb. The instrument is demonstrated to give accurate and reproducible experimental measurements of luminescence quantum yield of various semiconducting fluorophores, in excellent agreement with literature values.
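
The quantum-yield measurement the sphere enables reduces to photon bookkeeping: QY = photons emitted / photons absorbed, each obtained by integrating sample and reference spectra over the emission and excitation bands. A simplified two-measurement sketch with synthetic spectra follows; all shapes and numbers are invented for illustration, and a real de Mello-style analysis uses three measurements plus instrument calibration.

```python
import numpy as np

def plqy(wl, ref, sample, ex_band, em_band):
    """Quantum yield from integrating-sphere spectra (photon units):
    photons emitted in the emission band divided by photons absorbed in
    the excitation band, comparing the sample run against a reference run."""
    dw = wl[1] - wl[0]
    ex = (wl >= ex_band[0]) & (wl <= ex_band[1])
    em = (wl >= em_band[0]) & (wl <= em_band[1])
    emitted = np.sum(sample[em] - ref[em]) * dw
    absorbed = np.sum(ref[ex] - sample[ex]) * dw
    return emitted / absorbed

# Synthetic scene: 40% of the excitation light absorbed, re-emitted with QY 0.5.
wl = np.linspace(350.0, 700.0, 3501)
exc = np.exp(-((wl - 400.0)/10.0)**2)          # excitation line shape
emi = np.exp(-((wl - 550.0)/15.0)**2)          # fluorophore emission band
c = 0.5 * 0.4 * np.sum(exc) / np.sum(emi)      # emission amplitude for QY = 0.5
ref = exc                                      # empty-sphere reference
sample = 0.6*exc + c*emi                       # attenuated excitation + emission
qy = plqy(wl, ref, sample, (350.0, 460.0), (470.0, 700.0))
```
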

  7. Error free physically unclonable function with programmed resistive random access memory using reliable resistance states by specific identification-generation method

    NASA Astrophysics Data System (ADS)

    Tseng, Po-Hao; Hsu, Kai-Chieh; Lin, Yu-Yu; Lee, Feng-Min; Lee, Ming-Hsiu; Lung, Hsiang-Lan; Hsieh, Kuang-Yeu; Chung Wang, Keh; Lu, Chih-Yuan

    2018-04-01

    A high performance physically unclonable function (PUF) implemented with WO3 resistive random access memory (ReRAM) is presented in this paper. This robust ReRAM-PUF can eliminate the bit-flipping problem at very high temperature (up to 250 °C) thanks to the plentiful read margin between the initial resistance state and the set resistance state. It also promises 10-year retention at temperatures up to 210 °C. These two stable resistance states enable stable operation in automotive environments from -40 to 125 °C without the need for a temperature compensation circuit. High uniqueness of the PUF can be achieved by implementing the proposed identification (ID)-generation method. An optimized forming condition moves 50% of the cells to the low resistance state while the remaining 50% stay in the initial high resistance state. The inter- and intra-PUF evaluations show unlimited separation of the Hamming distance (HD) distributions, successfully demonstrated even under corner conditions. The number of reproductions was measured to exceed 10^7 with 0% bit error rate (BER) at read voltages from 0.4 to 0.7 V.
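
The uniqueness and reliability evaluation reported here is conventionally quantified with inter-device and intra-device Hamming distances: ideally ~50% of bits differ between devices, while repeated reads of one device differ by 0% (the reported BER). A generic numpy sketch with synthetic IDs, not the paper's measured data:

```python
import numpy as np

def inter_intra_hd(responses, rereads):
    """Normalized Hamming distances for PUF evaluation.
    responses: (n_devices, n_bits) nominal IDs (uniqueness).
    rereads:   (n_devices, n_reads, n_bits) repeated reads (reliability)."""
    n_dev, n_bits = responses.shape
    inter = [np.mean(responses[i] != responses[j])
             for i in range(n_dev) for j in range(i + 1, n_dev)]
    intra = [np.mean(rereads[i] != responses[i][None, :])
             for i in range(n_dev)]
    return np.mean(inter), np.mean(intra)

# Toy PUF: independent random IDs, error-free rereads (0% BER).
rng = np.random.default_rng(7)
ids = rng.integers(0, 2, size=(50, 128))
reads = np.repeat(ids[:, None, :], 10, axis=1)   # perfect reproduction
inter, intra = inter_intra_hd(ids, reads)
```

An inter-HD near 0.5 with an intra-HD of 0 is exactly the "unlimited separation" of the two distributions that the abstract describes.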

  8. Electromagnetic pulsed thermography for natural cracks inspection

    PubMed Central

    Gao, Yunlai; Tian, Gui Yun; Wang, Ping; Wang, Haitao; Gao, Bin; Woo, Wai Lok; Li, Kongjing

    2017-01-01

    Emerging integrated sensing and monitoring of material degradation and cracks are increasingly required for characterizing the structural integrity and safety of infrastructure. However, most conventional nondestructive evaluation (NDE) methods are based on single-modality sensing, which is not adequate to evaluate structural integrity and natural cracks. This paper proposes electromagnetic pulsed thermography for fast and comprehensive defect characterization. It hybridizes multiple physical phenomena, i.e., magnetic flux leakage, induced eddy currents and induction heating, linking the physics with signal processing algorithms to provide abundant information about material properties and defects. New features based on the first derivative, reflecting multiphysics spatial and temporal behavior, are proposed to enhance the detection of cracks with different orientations. Promising results, robust to lift-off changes and with invariant features for artificial and natural crack detection, demonstrate that the proposed method significantly improves defect detectability. It opens up multiphysics sensing and integrated NDE, with potential impact on the understanding and quantitative evaluation of natural cracks, including stress corrosion cracking (SCC) and rolling contact fatigue (RCF). PMID:28169361

  9. Results From Spain's 2016 Report Card on Physical Activity for Children and Youth.

    PubMed

    Roman-Viñas, Blanca; Marin, Jorge; Sánchez-López, Mairena; Aznar, Susana; Leis, Rosaura; Aparicio-Ugarriza, Raquel; Schroder, Helmut; Ortiz-Moncada, Rocío; Vicente, German; González-Gross, Marcela; Serra-Majem, Lluís

    2016-11-01

    The first Active Healthy Kids Spanish Report Card aims to gather the most robust information about physical activity (PA) and sedentary behavior of children and adolescents. A Research Working Group of experts on PA and sport sciences was convened. A comprehensive data search, based on a review of the literature, dissertations, gray literature, and experts' nonpublished data, was conducted to identify the best sources to grade each indicator following the procedures and methodology outlined by the Active Healthy Kids Canada Report Card model. Overall PA (based on objective and self-reported methods) was graded as D-, Organized Sports Participation as B, Active Play as C+, Active Transportation as C, Sedentary Behavior as D, School as C, Family and Peers as Incomplete, Community and the Built Environment as Incomplete, and Government as Incomplete. Spanish children and adolescents showed low levels of adherence to PA and sedentary behavior guidelines, especially among females and adolescents. There is a need to achieve consensus and harmonize methods to evaluate PA and sedentary behavior to monitor changes over time and to evaluate the effectiveness of policies to promote PA.

  10. Prescribed Velocity Gradients for Highly Viscous SPH Fluids with Vorticity Diffusion.

    PubMed

    Peer, Andreas; Teschner, Matthias

    2017-12-01

    Working with prescribed velocity gradients is a promising approach to efficiently and robustly simulate highly viscous SPH fluids. Such approaches allow shear rate, spin, and expansion rate to be processed explicitly and independently. This can be used, e.g., to avoid interference between pressure and viscosity solvers. Another interesting aspect is the possibility to explicitly process the vorticity, e.g., to preserve it. In this context, this paper proposes a novel variant of the prescribed-gradient idea that handles vorticity in a physically motivated way. In contrast to the less appropriate vorticity preservation used in a previous approach, vorticity is diffused. To illustrate the utility of vorticity diffusion, comparisons of the proposed vorticity diffusion with vorticity preservation and additionally with vorticity damping are presented. The paper further discusses the relation between prescribed velocity gradients and prescribed velocity Laplacians, which improves the intuition behind the prescribed-gradient method for highly viscous SPH fluids. Finally, the paper discusses the relation of the proposed method to a physically correct implicit viscosity formulation.
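
The explicit, independent processing of shear rate, spin, and expansion rate mentioned above rests on the standard decomposition of the velocity gradient ∇v into its trace part (expansion), traceless symmetric part (shear) and antisymmetric part (spin/vorticity). A small numpy sketch of that decomposition, generic rather than the paper's SPH solver:

```python
import numpy as np

def decompose_velocity_gradient(G):
    """Split a 3x3 velocity gradient into expansion rate (trace part),
    shear rate (traceless symmetric part) and spin (antisymmetric part)."""
    expansion = np.trace(G)/3.0 * np.eye(3)   # isotropic volume change
    sym = 0.5*(G + G.T)
    shear = sym - expansion                   # traceless symmetric part
    spin = 0.5*(G - G.T)                      # antisymmetric (rotation) part
    return expansion, shear, spin

G = np.array([[1.0, 2.0, 0.0],
              [0.5, -1.0, 3.0],
              [0.0, 1.0, 2.0]])
V, S, W = decompose_velocity_gradient(G)
```

Because the three parts sum back to ∇v, a solver can prescribe each independently (e.g., damp the shear, diffuse the spin) and reassemble a target gradient.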

  11. Variable fidelity robust optimization of pulsed laser orbital debris removal under epistemic uncertainty

    NASA Astrophysics Data System (ADS)

    Hou, Liqiang; Cai, Yuanli; Liu, Jin; Hou, Chongyuan

    2016-04-01

    A variable fidelity robust optimization method for pulsed laser orbital debris removal (LODR) under uncertainty is proposed. Dempster-Shafer theory of evidence (DST), which merges interval-based and probabilistic uncertainty modeling, is used in the robust optimization. The robust optimization method optimizes the performance while at the same time maximizing its belief value. A population-based multi-objective optimization (MOO) algorithm based on a steepest-descent-like strategy with proper orthogonal decomposition (POD) is used to search for robust Pareto solutions. Analytical and numerical lifetime predictors are used to evaluate the debris lifetime after the laser pulses. Trust-region-based fidelity management is designed to reduce the computational cost caused by the expensive model. When the solutions fall into the trust region, the analytical model is used to reduce the computational cost. The proposed robust optimization method is first tested on a set of standard problems and then applied to the removal of Iridium 33 with pulsed lasers. It is shown that the proposed approach can identify the most robust solutions with minimum lifetime under uncertainty.
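
In DST with interval focal elements, the belief of a target interval sums the masses of focal intervals wholly contained in it, while the plausibility sums those that merely intersect it; a robust optimizer of this kind maximizes belief in meeting a requirement. A hypothetical sketch follows; the evidence values are invented for illustration.

```python
def belief_plausibility(focal, target):
    """Dempster-Shafer belief and plausibility of an interval 'target'.
    focal: list of ((lo, hi), mass) interval focal elements with masses
    summing to 1."""
    lo_t, hi_t = target
    bel = sum(m for (lo, hi), m in focal if lo >= lo_t and hi <= hi_t)
    pl = sum(m for (lo, hi), m in focal if hi >= lo_t and lo <= hi_t)
    return bel, pl

# Hypothetical evidence about debris lifetime (days) after a laser pulse.
evidence = [((0, 10), 0.5), ((5, 20), 0.3), ((0, 30), 0.2)]
bel, pl = belief_plausibility(evidence, (0, 15))   # "lifetime within 15 days"
```

Belief and plausibility bracket the unknown probability of the target event, which is what lets the optimizer trade performance against guaranteed (belief-level) satisfaction.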

  12. Robust and Imperceptible Watermarking of Video Streams for Low Power Devices

    NASA Astrophysics Data System (ADS)

    Ishtiaq, Muhammad; Jaffar, M. Arfan; Khan, Muhammad A.; Jan, Zahoor; Mirza, Anwar M.

    With the advent of the internet, every aspect of life is going online. From online working to watching videos, everything is now available on the internet. Alongside the greater business benefits, increased availability and other advantages of online business, there is a major challenge of security and ownership of data. Videos downloaded from an online store can easily be shared among non-intended or unauthorized users. Invisible watermarking is used to hide copyright protection information in the videos. Existing watermarking methods lack robustness and imperceptibility, and their computational complexity does not suit low-power devices. In this paper, we propose a new method to address the problems of robustness and imperceptibility. Experiments have shown that our method has better robustness and imperceptibility and is computationally more efficient than previous approaches in practice. Hence our method can easily be applied on low-power devices.

  13. Process monitoring using automatic physical measurement based on electrical and physical variability analysis

    NASA Astrophysics Data System (ADS)

    Shauly, Eitan N.; Levi, Shimon; Schwarzband, Ishai; Adan, Ofer; Latinsky, Sergey

    2015-04-01

    A fully automated silicon-based methodology for systematic analysis of electrical features is shown. The system was developed for process monitoring and electrical variability reduction. A mapping step was created using dedicated structures such as a static random-access memory (SRAM) array or a standard cell library, or by using a simple design-rule-checking run-set. The resulting database was then used as input for choosing locations for critical-dimension scanning electron microscope images and for extraction of specific layout parameters, which were then input to SPICE compact-model simulation. Based on the experimental data, we identified two items that must be checked and monitored using the method described here: the transistor's sensitivity to the distance between the poly end cap and the edge of the active area (AA) due to AA rounding, and SRAM leakage due to insufficient N-well to P-well spacing. Based on this example, for process monitoring and variability analyses, we extensively used this method to analyze transistor gates having different shapes. In addition, analysis of a large area of a high-density standard cell library was done. Another set of monitoring runs focused on a high-density SRAM array is also presented. These examples provided information on the poly and AA layers, using transistor parameters such as leakage current and drive current. We successfully defined "robust" and "less-robust" transistor configurations included in the library and identified asymmetric transistors in the SRAM bit cells. These data were compared to data extracted from the same devices at the end of the line. Another set of analyses was done on samples after Cu M1 etch. Process monitoring information on M1-enclosed contacts was extracted using contact resistance as feedback. Guidelines for the optimal M1 space for different layout configurations were also extracted. All these data demonstrate the successful in-field implementation of our methodology as a useful process monitoring method.

  14. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    PubMed

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

    As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge, such as of the surgical workflow model (SWM), to support intuitive cooperation with surgeons. Generating a robust and reliable SWM requires a large amount of training data. However, training data collected by physically recording surgery operations are often limited, and data collection is time-consuming and labor-intensive, severely limiting the knowledge scalability of surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost and labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for robotic cholecystectomy surgery. The generated workflow was evaluated on 4 web-retrieved videos and 4 operating-room-recorded videos. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. The satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising in scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Structural Damage Detection Using Changes in Natural Frequencies: Theory and Applications

    NASA Astrophysics Data System (ADS)

    He, K.; Zhu, W. D.

    2011-07-01

    A vibration-based method that uses changes in the natural frequencies of a structure to detect damage has advantages over conventional nondestructive tests in detecting various types of damage, including loosening of bolted joints, using minimal measurement data. Two major challenges associated with applying the vibration-based damage detection method to engineering structures are addressed: accurate modeling of structures and the development of a robust inverse algorithm to detect damage, which are defined as the forward and inverse problems, respectively. To resolve the forward problem, new physics-based finite element modeling techniques are developed for fillets in thin-walled beams and for bolted joints, so that complex structures can be accurately modeled with a reasonable model size. To resolve the inverse problem, a logistic function transformation is introduced to convert the constrained optimization problem to an unconstrained one, and a robust iterative algorithm using a trust-region method, the Levenberg-Marquardt method, is developed to accurately detect the locations and extent of damage. The new methodology can ensure global convergence of the iterative algorithm in solving under-determined system equations and can handle damage detection problems with relatively large modeling error and measurement noise. The vibration-based damage detection method is applied to various structures, including lightning masts, a space frame structure and one of its components, and a pipeline. The exact locations and extent of damage can be detected in numerical simulations, where there is no modeling error or measurement noise, and the locations and extent of damage are successfully detected in experimental damage detection.
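
    The two key ingredients of the inverse algorithm, a logistic transform that removes the box constraints on the damage extents and a Levenberg-Marquardt least-squares solve, can be sketched on a hypothetical three-mass spring chain (a toy stand-in, not one of the structures studied in the paper):

```python
import numpy as np
from scipy.optimize import least_squares

def frequencies(d, k0=1000.0):
    """Natural frequencies of a fixed-free 3-mass chain (unit masses) whose
    spring stiffnesses are reduced by the damage extents d (0 = undamaged)."""
    k = k0 * (1.0 - d)
    K = np.array([[k[0] + k[1], -k[1], 0.0],
                  [-k[1], k[1] + k[2], -k[2]],
                  [0.0, -k[2], k[2]]])
    return np.sqrt(np.linalg.eigvalsh(K))   # eigenvalues of K are omega^2

def logistic(x):
    # maps unconstrained x to damage extents in (0, 1), converting the
    # constrained inverse problem into an unconstrained one
    return 1.0 / (1.0 + np.exp(-x))

d_true = np.array([0.30, 0.00, 0.10])
f_meas = frequencies(d_true)                # "measured" frequencies

# Levenberg-Marquardt solve in the unconstrained variables
res = least_squares(lambda x: frequencies(logistic(x)) - f_meas,
                    x0=np.full(3, -2.0), method="lm")
d_est = logistic(res.x)
```

    Because the logistic map keeps every iterate strictly inside (0, 1), the solver never needs explicit bound handling, which is the point of the transformation.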

  16. Posture Detection Based on Smart Cushion for Wheelchair Users

    PubMed Central

    Ma, Congcong; Li, Wenfeng; Gravina, Raffaele; Fortino, Giancarlo

    2017-01-01

    The postures of wheelchair users can reveal their sitting habits and mood, and even predict health risks such as pressure ulcers or lower back pain. Mining the hidden information in the postures can reveal their wellness and general health conditions. In this paper, a cushion-based posture recognition system is used to process pressure sensor signals for the detection of the user’s posture in the wheelchair. The proposed posture detection method is composed of three main steps: data-level classification for posture detection, backward selection of the sensor configuration, and comparison of recognition results with previous literature. Five supervised classification techniques—Decision Tree (J48), Support Vector Machines (SVM), Multilayer Perceptron (MLP), Naive Bayes, and k-Nearest Neighbor (k-NN)—are compared in terms of classification accuracy, precision, recall, and F-measure. Results indicate that the J48 classifier provides the highest accuracy of the techniques compared. The backward selection method was used to determine the best sensor deployment configuration for the wheelchair. Several kinds of pressure sensor deployments are compared, and our new deployment is shown to better detect the postures of wheelchair users. Performance analysis also took into account the Body Mass Index (BMI), useful for evaluating the robustness of the method across individual physical differences. Results show that our proposed sensor deployment is effective, achieving 99.47% posture recognition accuracy. Our proposed method is very competitive for posture recognition and robust in comparison with prior research. Accurate posture detection represents a fundamental building block for developing several applications, including fatigue estimation and activity level assessment. PMID:28353684
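
    The five-classifier comparison can be sketched with scikit-learn on synthetic pressure-sensor data (a hypothetical 12-sensor layout and five posture clusters, purely illustrative; J48 corresponds to a C4.5-style decision tree):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# synthetic pressure maps: 12 sensors, 5 postures as noisy cluster centers
centers = rng.uniform(0.0, 1.0, size=(5, 12))
X = np.vstack([c + 0.05 * rng.standard_normal((40, 12)) for c in centers])
y = np.repeat(np.arange(5), 40)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

classifiers = {
    "J48 (decision tree)": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
    "Naive Bayes": GaussianNB(),
    "k-NN": KNeighborsClassifier(n_neighbors=3),
}
scores = {name: accuracy_score(yte, clf.fit(Xtr, ytr).predict(Xte))
          for name, clf in classifiers.items()}
```

    On real cushion data the ranking would of course depend on sensor placement and per-subject variation, which is why the paper also runs the BMI-stratified analysis.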

  17. Managing Complexity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian; Malard, Joel M.

    2004-08-01

    Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today’s most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This paper explores the state of the art in the use of physical analogs for understanding the behavior of econophysical systems and for deriving stable and robust control strategies for them. In particular, we review and discuss applications of analytic methods based on the thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot otherwise be understood.

  18. Wavefront division digital holography

    NASA Astrophysics Data System (ADS)

    Zhang, Wenhui; Cao, Liangcai; Li, Rujia; Zhang, Hua; Zhang, Hao; Jiang, Qiang; Jin, Guofan

    2018-05-01

    Digital holography (DH), mostly based on the Mach-Zehnder configuration, is a form of non-common-path, amplitude-splitting interference imaging whose stability and fringe contrast are environmentally sensitive. This paper presents a wavefront-division DH configuration with both high stability and high-contrast fringes, benefitting from quasi-common-path wavefront-splitting interference. In our proposal, two spherical waves with similar curvature originating from the same wavefront are used, which makes full use of the physical sampling capacity of the detectors. The interference fringe spacing can be adjusted flexibly for both in-line and off-axis modes owing to the independent modulation of the two waves. Only a few optical elements, including the mirror-beam splitter interference component, are used, without strict alignment requirements, which makes the setup robust and easy to implement. The proposed wavefront-division DH moves interference imaging a step forward toward practical and miniaturized implementations. The feasibility of the method is demonstrated by imaging a resolution target and a water flea.

  19. Human motion planning based on recursive dynamics and optimal control techniques

    NASA Technical Reports Server (NTRS)

    Lo, Janzen; Huang, Gang; Metaxas, Dimitris

    2002-01-01

    This paper presents an efficient optimal control and recursive dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A quasi-Newton nonlinear programming technique (with superlinear convergence) is implemented to solve minimum-torque human motion-planning problems. The explicit analytical gradients needed in the dynamics are derived using a matrix exponential formulation and Lie algebra. Cubic spline functions are used to make the search space for an optimal solution finite. Based on our formulations, our method is well conditioned and robust, in addition to being computationally efficient. To better illustrate the efficiency of our method, we present results of natural-looking and physically correct human motions for a variety of human motion tasks involving open and closed loop kinematic chains.
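
    A minimal sketch of spline-parametrized, quasi-Newton minimal-torque planning for a single link (toy dynamics and parameters, not the recursive articulated-body dynamics or Lie-algebra gradients of the paper):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize

# toy single-link arm: the joint trajectory is a cubic spline through free
# knot values, and BFGS (quasi-Newton) minimizes the mean squared torque
# tau = I * qdd + m*g*l * sin(q), with fixed start and goal angles
t_knots = np.linspace(0.0, 1.0, 8)
t_fine = np.linspace(0.0, 1.0, 200)
I_link, mgl = 1.0, 2.0                   # hypothetical inertia and gravity term
q0, qT = 0.0, np.pi / 2                  # start and goal joint angles

def cost(free_knots):
    q_knots = np.concatenate(([q0], free_knots, [qT]))
    spline = CubicSpline(t_knots, q_knots, bc_type="clamped")
    tau = I_link * spline(t_fine, 2) + mgl * np.sin(spline(t_fine))
    return np.mean(tau ** 2)

x0 = np.linspace(q0, qT, len(t_knots))[1:-1]   # straight-line initial guess
res = minimize(cost, x0, method="BFGS")        # quasi-Newton search
```

    The spline makes the search space finite (only the interior knot values are free), mirroring the paper's parametrization; here BFGS uses finite-difference gradients rather than the paper's analytical ones.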

  20. Novel near-infrared sampling apparatus for single kernel analysis of oil content in maize.

    PubMed

    Janni, James; Weinstock, B André; Hagen, Lisa; Wright, Steve

    2008-04-01

    A method of rapid, nondestructive chemical and physical analysis of individual maize (Zea mays L.) kernels is needed for the development of high value food, feed, and fuel traits. Near-infrared (NIR) spectroscopy offers a robust nondestructive method of trait determination. However, traditional NIR bulk sampling techniques cannot be applied successfully to individual kernels. Obtaining optimized single kernel NIR spectra for applied chemometric predictive analysis requires a novel sampling technique that can account for the heterogeneous forms, morphologies, and opacities exhibited in individual maize kernels. In this study such a novel technique is described and compared to less effective means of single kernel NIR analysis. Results of the application of a partial least squares (PLS) derived model for predictive determination of percent oil content per individual kernel are shown.

  1. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can, in fact, be hindered by many factors, including sample heterogeneity, computational and imaging limitations, model inadequacy, and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity, and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not fully reproducible by another "equivalent" sample and setup). This stochastic nature can arise from multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost of computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A fully automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation, and post-processing techniques. The proposed method can be efficiently used in many porous media applications, for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of the Representative Elementary Volume size for arbitrary physics.
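
    The multilevel telescoping estimator can be sketched with a toy random integrand standing in for the pore-scale solver (the model, level counts, and parameter distribution are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def level_difference(level, n):
    """Sample Y_l = P_l - P_{l-1} for n random inputs. The 'model' is a
    midpoint-rule integral of exp(-a*x) on [0, 1] with 2**(level+1) cells,
    where a is random (a toy stand-in for a random pore-scale sample).
    Fine and coarse levels share the same samples: that coupling is what
    gives multilevel Monte Carlo its variance reduction."""
    a = rng.uniform(0.5, 1.5, size=n)
    def midpoint(lvl):
        m = 2 ** (lvl + 1)
        x = (np.arange(m) + 0.5) / m
        return np.exp(-np.outer(a, x)).mean(axis=1)
    return midpoint(level) - (midpoint(level - 1) if level > 0 else 0.0)

# telescoping sum E[P_L] = sum_l E[P_l - P_{l-1}]:
# many cheap coarse samples, few expensive fine ones
estimate = sum(np.mean(level_difference(l, n))
               for l, n in zip(range(4), [4000, 1000, 250, 60]))
```

    Because the level differences have small variance, most samples can be taken on the cheap coarse levels, which is the source of the drastic cost reduction claimed in the abstract.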

  2. Algorithms and Array Design Criteria for Robust Imaging in Interferometry

    DTIC Science & Technology

    2016-04-01

    Interferometry 1.1 Chapter Overview In this chapter, we introduce the physics-based principles of optical interferometry, thereby providing a foundation for ... particular physical structure (i.e., the existence of a certain type of loop in the interferometric graph), and provide a simple algorithm for identifying ... mathematical conditions for wrap invariance to a physical condition on aperture placement is more intuitive when considering the raw phase measurements as

  3. Algorithms and Array Design Criteria for Robust Imaging in Interferometry

    DTIC Science & Technology

    2016-04-01

    Chapter 1 Fundamentals of Optical Interferometry 1.1 Chapter Overview In this chapter, we introduce the physics-based principles of optical ... particular physical structure (i.e., the existence of a certain type of loop in the interferometric graph), and provide a simple algorithm for ... physical condition on aperture placement is more intuitive when considering the raw phase measurements as opposed to their closures. For this reason

  4. How Are Previous Physical Activity and Self-Efficacy Related to Future Physical Activity and Self-Efficacy?

    ERIC Educational Resources Information Center

    David, Prabu; Pennell, Michael L.; Foraker, Randi E.; Katz, Mira L.; Buckworth, Janet; Paskett, Electra D.

    2014-01-01

    Self-efficacy (SE) has been found to be a robust predictor of success in achieving physical activity (PA) goals. While much of the current research has focused on SE as a trait, SE as a state has received less attention. Using day-to-day measurements obtained over 84 days, we examined the relationship between state SE and PA. Postmenopausal women…

  5. Tile-Based Two-Dimensional Phase Unwrapping for Digital Holography Using a Modular Framework

    PubMed Central

    Antonopoulos, Georgios C.; Steltner, Benjamin; Heisterkamp, Alexander; Ripken, Tammo; Meyer, Heiko

    2015-01-01

    A variety of physical and biomedical imaging techniques, such as digital holography, interferometric synthetic aperture radar (InSAR), or magnetic resonance imaging (MRI), enable measurement of the phase of a physical quantity in addition to its amplitude. However, the phase can commonly only be measured modulo 2π, as a so-called wrapped phase map. Phase unwrapping is the process of obtaining the underlying physical phase map from the wrapped phase. Tile-based phase unwrapping algorithms operate by first tessellating the phase map, then unwrapping individual tiles, and finally merging them into a continuous phase map. They can be implemented computationally efficiently and are robust to noise. However, they are prone to failure in the presence of phase residues or erroneous unwraps of single tiles. We address these shortcomings with novel tile unwrapping and merging algorithms, together with a framework that allows combining them in a modular fashion. To increase the robustness of the tile unwrapping step, we implemented a model-based algorithm that makes efficient use of linear algebra to unwrap individual tiles. Furthermore, we adapted an established pixel-based unwrapping algorithm to create a quality-guided tile merger. These original algorithms, as well as previously existing ones, were implemented in a modular phase unwrapping C++ framework. By examining different combinations of unwrapping and merging algorithms we compared our method to existing approaches. We show that the appropriate choice of unwrapping and merging algorithms can significantly improve the unwrapped result in the presence of phase residues and noise. Beyond that, our modular framework allows for efficient design and testing of new tile-based phase unwrapping algorithms. The software developed in this study is freely available. PMID:26599984
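
    As a minimal sketch of the underlying problem (Itoh's classic row/column method rather than the paper's tile-based framework), a smooth phase map whose per-pixel gradients stay below π can be recovered exactly from its wrapped version:

```python
import numpy as np

def wrap(phi):
    # wrap a phase map into [-pi, pi)
    return (phi + np.pi) % (2 * np.pi) - np.pi

def unwrap2d(psi):
    """Itoh's method: unwrap the first column, then each row, assuming a
    residue-free phase map with gradients below pi per pixel."""
    out = psi.copy()
    out[:, 0] = np.unwrap(psi[:, 0])
    for i in range(out.shape[0]):
        out[i, :] = np.unwrap(np.concatenate(([out[i, 0]], psi[i, 1:])))
    return out

y, x = np.mgrid[0:64, 0:64] / 64.0
phi = 14.0 * (x ** 2 + y ** 2)          # smooth "physical" phase, zero at origin
psi = wrap(phi)                          # measured (wrapped) phase
rec = unwrap2d(psi)                      # recovered phase
```

    The tile-based algorithms in the paper exist precisely because this simple path-following approach fails once phase residues or noise violate the gradient assumption.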

  6. Tile-Based Two-Dimensional Phase Unwrapping for Digital Holography Using a Modular Framework.

    PubMed

    Antonopoulos, Georgios C; Steltner, Benjamin; Heisterkamp, Alexander; Ripken, Tammo; Meyer, Heiko

    2015-01-01

    A variety of physical and biomedical imaging techniques, such as digital holography, interferometric synthetic aperture radar (InSAR), or magnetic resonance imaging (MRI), enable measurement of the phase of a physical quantity in addition to its amplitude. However, the phase can commonly only be measured modulo 2π, as a so-called wrapped phase map. Phase unwrapping is the process of obtaining the underlying physical phase map from the wrapped phase. Tile-based phase unwrapping algorithms operate by first tessellating the phase map, then unwrapping individual tiles, and finally merging them into a continuous phase map. They can be implemented computationally efficiently and are robust to noise. However, they are prone to failure in the presence of phase residues or erroneous unwraps of single tiles. We address these shortcomings with novel tile unwrapping and merging algorithms, together with a framework that allows combining them in a modular fashion. To increase the robustness of the tile unwrapping step, we implemented a model-based algorithm that makes efficient use of linear algebra to unwrap individual tiles. Furthermore, we adapted an established pixel-based unwrapping algorithm to create a quality-guided tile merger. These original algorithms, as well as previously existing ones, were implemented in a modular phase unwrapping C++ framework. By examining different combinations of unwrapping and merging algorithms we compared our method to existing approaches. We show that the appropriate choice of unwrapping and merging algorithms can significantly improve the unwrapped result in the presence of phase residues and noise. Beyond that, our modular framework allows for efficient design and testing of new tile-based phase unwrapping algorithms. The software developed in this study is freely available.

  7. Practical quantum mechanics-based fragment methods for predicting molecular crystal properties.

    PubMed

    Wen, Shuhao; Nanda, Kaushik; Huang, Yuanhang; Beran, Gregory J O

    2012-06-07

    Significant advances in fragment-based electronic structure methods have created a real alternative to force-field and density functional techniques in condensed-phase problems such as molecular crystals. This perspective article highlights some of the important challenges in modeling molecular crystals and discusses techniques for addressing them. First, we survey recent developments in fragment-based methods for molecular crystals. Second, we use examples from our own recent research on a fragment-based QM/MM method, the hybrid many-body interaction (HMBI) model, to analyze the physical requirements for a practical and effective molecular crystal model chemistry. We demonstrate that it is possible to predict molecular crystal lattice energies to within a couple of kJ mol(-1) and lattice parameters to within a few percent in small-molecule crystals. Fragment methods provide a systematically improvable approach to making predictions in the condensed phase, which is critical to making robust predictions regarding the subtle energy differences found in molecular crystals.

  8. Theory and applications of structured light single pixel imaging

    NASA Astrophysics Data System (ADS)

    Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.

    2018-02-01

    Many single-pixel imaging techniques have been developed in recent years. Though the methods of image acquisition vary considerably, they share unifying features that make a general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically motivated reconstruction algorithms; however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework of single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our theoretical framework to existing single-pixel imaging techniques, and provide a foundation for developing more advanced methods of image acquisition and reconstruction. The proposed frame-theoretic framework for single-pixel imaging results in improved noise robustness and decreased acquisition time, and can take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single-element detector can be developed to realize the full potential associated with single-pixel imaging.

  9. The physics of life: one molecule at a time

    PubMed Central

    Leake, Mark C.

    2013-01-01

    The esteemed physicist Erwin Schrödinger, whose name is associated with the most notorious equation of quantum mechanics, also wrote a brief essay entitled ‘What is Life?’, asking: ‘How can the events in space and time which take place within the spatial boundary of a living organism be accounted for by physics and chemistry?’ The 60+ years following this seminal work have seen enormous developments in our understanding of biology on the molecular scale, with physics playing a key role in solving many central problems through the development and application of new physical science techniques, biophysical analysis and rigorous intellectual insight. The early days of single-molecule biophysics research were centred around molecular motors and biopolymers, largely divorced from a real physiological context. The new generation of single-molecule bioscience investigations has much greater scope, involving robust methods for understanding molecular-level details of the most fundamental biological processes in far more realistic, and technically challenging, physiological contexts, emerging into a new field of ‘single-molecule cellular biophysics’. Here, I outline how this new field has evolved, discuss the key active areas of current research and speculate on where this may all lead in the near future. PMID:23267186

  10. WE-D-BRB-02: Proton Treatment Planning and Beam Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pankuch, M.

    2016-06-15

    The goal of this session is to review the physics of proton therapy, treatment planning techniques, and the use of volumetric imaging in proton therapy. The course material covers the physics of proton interaction with matter and physical characteristics of clinical proton beams. It will provide information on proton delivery systems and beam delivery techniques for double scattering (DS), uniform scanning (US), and pencil beam scanning (PBS). The session covers the treatment planning strategies used in DS, US, and PBS for various anatomical sites, methods to address uncertainties in proton therapy, and uncertainty mitigation to generate robust treatment plans. It introduces the audience to the current status of image guided proton therapy and clinical applications of CBCT for proton therapy. It outlines the importance of volumetric imaging in proton therapy. Learning Objectives: Gain knowledge in proton therapy physics, and treatment planning for proton therapy including intensity modulated proton therapy. The current state of volumetric image guidance equipment in proton therapy. Clinical applications of CBCT and its advantage over orthogonal imaging for proton therapy. B. Teo, B.K Teo had received travel funds from IBA in 2015.

  11. WE-D-BRB-03: Current State of Volumetric Image Guidance for Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hua, C.

    The goal of this session is to review the physics of proton therapy, treatment planning techniques, and the use of volumetric imaging in proton therapy. The course material covers the physics of proton interaction with matter and physical characteristics of clinical proton beams. It will provide information on proton delivery systems and beam delivery techniques for double scattering (DS), uniform scanning (US), and pencil beam scanning (PBS). The session covers the treatment planning strategies used in DS, US, and PBS for various anatomical sites, methods to address uncertainties in proton therapy, and uncertainty mitigation to generate robust treatment plans. It introduces the audience to the current status of image guided proton therapy and clinical applications of CBCT for proton therapy. It outlines the importance of volumetric imaging in proton therapy. Learning Objectives: Gain knowledge in proton therapy physics, and treatment planning for proton therapy including intensity modulated proton therapy. The current state of volumetric image guidance equipment in proton therapy. Clinical applications of CBCT and its advantage over orthogonal imaging for proton therapy. B. Teo, B.K Teo had received travel funds from IBA in 2015.

  12. WE-D-BRB-04: Clinical Applications of CBCT for Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teo, B.

    The goal of this session is to review the physics of proton therapy, treatment planning techniques, and the use of volumetric imaging in proton therapy. The course material covers the physics of proton interaction with matter and physical characteristics of clinical proton beams. It will provide information on proton delivery systems and beam delivery techniques for double scattering (DS), uniform scanning (US), and pencil beam scanning (PBS). The session covers the treatment planning strategies used in DS, US, and PBS for various anatomical sites, methods to address uncertainties in proton therapy, and uncertainty mitigation to generate robust treatment plans. It introduces the audience to the current status of image guided proton therapy and clinical applications of CBCT for proton therapy. It outlines the importance of volumetric imaging in proton therapy. Learning Objectives: Gain knowledge in proton therapy physics, and treatment planning for proton therapy including intensity modulated proton therapy. The current state of volumetric image guidance equipment in proton therapy. Clinical applications of CBCT and its advantage over orthogonal imaging for proton therapy. B. Teo, B.K Teo had received travel funds from IBA in 2015.

  13. Physics-Based Preconditioning of a Compressible Flow Solver for Large-Scale Simulations of Additive Manufacturing Processes

    NASA Astrophysics Data System (ADS)

    Weston, Brian; Nourgaliev, Robert; Delplanque, Jean-Pierre

    2017-11-01

    We present a new block-based Schur complement preconditioner for simulating all-speed compressible flow with phase change. The conservation equations are discretized with a reconstructed Discontinuous Galerkin method and integrated in time with fully implicit time discretization schemes. The resulting set of non-linear equations is converged using a robust Newton-Krylov framework. Due to the stiffness of the underlying physics associated with stiff acoustic waves and viscous material strength effects, we solve for the primitive variables (pressure, velocity, and temperature). To enable convergence of the highly ill-conditioned linearized systems, we develop a physics-based preconditioner, utilizing approximate block factorization techniques to reduce the fully coupled 3×3 system to a pair of reduced 2×2 systems. We demonstrate that our preconditioned Newton-Krylov framework converges on very stiff multi-physics problems, corresponding to large CFL and Fourier numbers, with excellent algorithmic and parallel scalability. Results are shown for the classic lid-driven cavity flow problem as well as for 3D laser-induced phase change. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
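
    The block-elimination pattern at the heart of such preconditioners can be sketched on a generic dense 2×2 block system (random well-conditioned matrices, not the flow equations): eliminating the first block leaves a Schur complement S = D - C*inv(A)*B to solve, followed by back-substitution.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
# diagonally dominant blocks keep A and the Schur complement invertible
A = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = 0.1 * rng.standard_normal((n, n))
C = 0.1 * rng.standard_normal((n, n))
D = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
f, g = rng.standard_normal(n), rng.standard_normal(n)

# solve [[A, B], [C, D]] [x; y] = [f; g] by block elimination
Ainv_B = np.linalg.solve(A, B)
Ainv_f = np.linalg.solve(A, f)
S = D - C @ Ainv_B                       # Schur complement
y = np.linalg.solve(S, g - C @ Ainv_f)   # reduced solve
x = Ainv_f - Ainv_B @ y                  # back-substitution
```

    In the paper's setting the inverses are only applied approximately (as a preconditioner inside Newton-Krylov), but the factorization structure is the same.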

  14. Determination of the mechanical and physical properties of cartilage by coupling poroelastic-based finite element models of indentation with artificial neural networks.

    PubMed

    Arbabi, Vahid; Pouran, Behdad; Campoli, Gianni; Weinans, Harrie; Zadpoor, Amir A

    2016-03-21

    One of the most widely used techniques to determine the mechanical properties of cartilage is based on indentation tests and interpretation of the obtained force-time or displacement-time data. In the current computational approaches, one needs to simulate the indentation test with finite element models and use an optimization algorithm to estimate the mechanical properties of cartilage. The modeling procedure is cumbersome, and the simulations need to be repeated for every new experiment. For the first time, we propose a method for fast and accurate estimation of the mechanical and physical properties of cartilage as a poroelastic material with the aid of artificial neural networks. In our study, we used finite element models to simulate indentation for poroelastic materials with a wide range of combinations of mechanical and physical properties. The obtained force-time curves are then divided into three parts: the first two parts of the data are used for training and validation of an artificial neural network, while the third part is used for testing the trained network. The trained neural network receives the force-time curves as input and provides the properties of cartilage as output. We observed that the trained network could accurately predict the properties of cartilage within the range of properties for which it was trained. The mechanical and physical properties of cartilage can therefore be estimated very fast, since no additional finite element modeling is required once the neural network is trained. The robustness of the trained artificial neural network in determining the properties of cartilage from noisy force-time data was assessed by introducing noise to the simulated force-time data. We found that the training procedure could be optimized so as to maximize the robustness of the neural network against noisy force-time data. Copyright © 2016 Elsevier Ltd. All rights reserved.
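
    The surrogate idea, training a network to invert simulated indentation curves, can be sketched with a toy relaxation model standing in for the poroelastic finite element runs (curve shape and parameter ranges are hypothetical):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 20)

def force_curve(E, tau):
    # toy stress-relaxation curve standing in for the FE simulation:
    # stiffness E sets the force scale, tau the relaxation time
    return E * (0.5 + 0.5 * np.exp(-t / tau))

# training set: sample properties, "simulate" curves (FE runs in the paper)
E_s = rng.uniform(1.0, 10.0, 600)
tau_s = rng.uniform(0.1, 1.0, 600)
X = np.array([force_curve(E, tau) for E, tau in zip(E_s, tau_s)])
Y = np.column_stack([E_s, tau_s])

# the network maps a force-time curve to the material properties
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(X, Y)
E_hat, tau_hat = net.predict(force_curve(5.0, 0.5).reshape(1, -1))[0]
```

    Once trained, inversion is a single forward pass, which is the speedup over re-running FE-based optimization for every experiment.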

  15. Multivariate Statistical Analysis of Cigarette Design Feature Influence on ISO TNCO Yields.

    PubMed

    Agnew-Heard, Kimberly A; Lancaster, Vicki A; Bravo, Roberto; Watson, Clifford; Walters, Matthew J; Holman, Matthew R

    2016-06-20

    The aim of this study is to explore how differences in cigarette physical design parameters influence tar, nicotine, and carbon monoxide (TNCO) yields in mainstream smoke (MSS) using the International Organization for Standardization (ISO) smoking regimen. Standardized smoking methods were used to evaluate 50 U.S. domestic brand cigarettes and a reference cigarette representing a range of TNCO yields in MSS collected from linear smoking machines using a nonintense smoking regimen. Multivariate statistical methods were used to form clusters of cigarettes based on their ISO TNCO yields, and then to explore the relationship between the ISO-generated TNCO yields and the nine cigarette physical design parameters between and within each cluster simultaneously. The ISO-generated TNCO yields in MSS are 1.1-17.0 mg tar/cigarette, 0.1-2.2 mg nicotine/cigarette, and 1.6-17.3 mg CO/cigarette. Cluster analysis divided the 51 cigarettes into five discrete clusters based on their ISO TNCO yields. No one physical parameter dominated across all clusters. Predicting ISO machine-generated TNCO yields from these nine physical design parameters is complex due to the correlation among and between the nine physical design parameters and the TNCO yields. From these analyses, it is estimated that approximately 20% of the variability in the ISO-generated TNCO yields comes from other parameters (e.g., filter material, filter type, inclusion of expanded or reconstituted tobacco, and tobacco blend composition, along with differences in tobacco leaf origin and stalk positions and added ingredients). A future article will examine the influence of these physical design parameters on TNCO yields under a Canadian Intense (CI) smoking regimen. Together, these papers will provide a more robust picture of the design features that contribute to TNCO exposure across the range of real-world smoking patterns.
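
    The clustering step can be sketched on synthetic TNCO triples (illustrative numbers, not the study's data), with standardization so that tar, nicotine, and CO contribute comparably to the distance metric:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# synthetic (tar, nicotine, CO) yields in mg/cigarette around three
# hypothetical product groups of 17 cigarettes each
groups = np.array([[3.0, 0.3, 3.0], [9.0, 0.8, 9.0], [15.0, 1.6, 15.0]])
X = np.vstack([g + rng.normal(0.0, [0.5, 0.05, 0.5], size=(17, 3))
               for g in groups])

Z = StandardScaler().fit_transform(X)   # put the three yields on equal footing
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
```

    With real data the number of clusters would be chosen by a validity criterion rather than fixed a priori; the study arrived at five TNCO clusters for its 51 cigarettes.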

  16. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    PubMed

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum-likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study, a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g., log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher-order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination was low. The robust Poisson models are more robust (or less sensitive) to outliers than the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.

  17. Phases and interfaces from real space atomically resolved data: Physics-based deep data image analysis

    DOE PAGES

    Vasudevan, Rama K.; Ziatdinov, Maxim; Jesse, Stephen; ...

    2016-08-12

Advances in electron and scanning probe microscopies have led to a wealth of atomically resolved structural and electronic data, often with ~1–10 pm precision. However, knowledge generation from such data requires the development of a physics-based robust framework to link the observed structures to macroscopic chemical and physical descriptors, including single phase regions, order parameter fields, interfaces, and structural and topological defects. Here, we develop an approach based on a synergy of a sliding window Fourier transform, to capture the local analog of traditional structure factors, combined with blind linear unmixing of the resultant 4D data set. This deep data analysis is ideally matched to the underlying physics of the problem and allows reconstruction of the a priori unknown structure factors of individual components and their spatial localization. We demonstrate the principles of this approach using a synthetic data set and further apply it to extracting chemically and physically relevant information from electron and scanning tunneling microscopy data. Furthermore, this method promises to dramatically speed up crystallographic analysis of atomically resolved data, paving the road toward automatic local structure–property determinations in crystalline and quasi-ordered systems, as well as systems with competing structural and electronic order parameters.
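
    The core pipeline, sketched on a synthetic image: a sliding-window FFT turns the image into a (window position × local spectrum) matrix, and a linear decomposition separates regions with different periodicities. Here a truncated SVD stands in for the paper's blind linear unmixing; the image, window size, and step are invented for the example:

```python
import numpy as np

def sliding_fft_stack(img, win, step):
    """|FFT| of each win x win patch -> (n_windows, win*win) data matrix."""
    rows, pos = [], []
    for i in range(0, img.shape[0] - win + 1, step):
        for j in range(0, img.shape[1] - win + 1, step):
            patch = img[i:i+win, j:j+win]
            rows.append(np.abs(np.fft.fft2(patch)).ravel())
            pos.append((i, j))
    return np.array(rows), pos

# Synthetic "atomically resolved" image: two lattice periodicities,
# one in each half of the frame.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
img = np.where(xx < n // 2, np.cos(2*np.pi*xx/4), np.cos(2*np.pi*xx/8))

D, pos = sliding_fft_stack(img, win=16, step=8)
# Truncated SVD as a simple stand-in for blind linear unmixing:
# components ~ local structure factors, scores ~ their spatial weights.
U, s, Vt = np.linalg.svd(D - D.mean(0), full_matrices=False)
scores = U[:, 0] * s[0]
```

    The leading component's scores separate windows lying in the two phases, with boundary windows falling in between.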

  18. Analytical and variational numerical methods for unstable miscible displacement flows in porous media

    NASA Astrophysics Data System (ADS)

    Scovazzi, Guglielmo; Wheeler, Mary F.; Mikelić, Andro; Lee, Sanghyun

    2017-04-01

The miscible displacement of one fluid by another in a porous medium has received considerable attention in subsurface, environmental and petroleum engineering applications. When a fluid of higher mobility displaces another of lower mobility, unstable patterns, referred to as viscous fingering, may arise. Their physical and mathematical study has been the object of numerous investigations over the past century. The objective of this paper is to present a review of these contributions with particular emphasis on variational methods. These algorithms are tailored to real field applications thanks to their advanced features: handling of general complex geometries, robustness in the presence of rough tensor coefficients, low sensitivity to mesh orientation in advection dominated scenarios, and provable convergence with fully unstructured grids. This paper is dedicated to the memory of Dr. Jim Douglas Jr., for his seminal contributions to miscible displacement and variational numerical methods.

  19. Aperiodic Robust Model Predictive Control for Constrained Continuous-Time Nonlinear Systems: An Event-Triggered Approach.

    PubMed

    Liu, Changxin; Gao, Jian; Li, Huiping; Xu, Demin

    2018-05-01

    The event-triggered control is a promising solution to cyber-physical systems, such as networked control systems, multiagent systems, and large-scale intelligent systems. In this paper, we propose an event-triggered model predictive control (MPC) scheme for constrained continuous-time nonlinear systems with bounded disturbances. First, a time-varying tightened state constraint is computed to achieve robust constraint satisfaction, and an event-triggered scheduling strategy is designed in the framework of dual-mode MPC. Second, the sufficient conditions for ensuring feasibility and closed-loop robust stability are developed, respectively. We show that robust stability can be ensured and communication load can be reduced with the proposed MPC algorithm. Finally, numerical simulations and comparison studies are performed to verify the theoretical results.
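
    The triggering idea itself, independent of the paper's MPC machinery, can be shown on a toy scalar plant: the control is only re-computed when the state has drifted from the last sampled state by more than a threshold. The plant, gains, and thresholds below are invented for illustration and are not the paper's scheme:

```python
def simulate(threshold, steps=200, dt=0.05):
    """Scalar unstable plant x' = x + u with stabilizing feedback u = -2*x,
    but u is only re-computed when |x - x_sampled| > threshold (an event)."""
    x, x_s = 1.0, 1.0
    events = 0
    for _ in range(steps):
        if abs(x - x_s) > threshold:   # event: sample the state anew
            x_s = x
            events += 1
        u = -2.0 * x_s                 # control held constant between events
        x = x + dt * (x + u)           # forward-Euler step
    return abs(x), events

err_fine, ev_fine = simulate(0.01)     # tight threshold: many events
err_coarse, ev_coarse = simulate(0.2)  # loose threshold: few events
```

    The tradeoff the abstract describes is visible directly: a looser threshold cuts the number of control updates (communication load) at the price of a larger ultimate bound on the state.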

  20. A robust in-situ warp-correction algorithm for VISAR streak camera data at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.

    2015-02-01

The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high energy density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these cameras are inherently nonlinear and require warp corrections to remove the nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
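
    The TPS machinery the abstract names is standard and can be sketched compactly: solve the linear system for the kernel U(r) = r² log r plus an affine part, then evaluate the fitted mapping. The "comb" control points below are a hypothetical grid with an invented smooth distortion, not NIF data:

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2-D thin-plate-spline mapping src control points onto dst.
    Kernel U(r) = r^2 log r; the affine part enters through the P block."""
    n = len(src)
    d = np.linalg.norm(src[:, None] - src[None, :], axis=-1)
    K = np.where(d > 0, d**2 * np.log(d + 1e-300), 0.0)
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)

def tps_apply(params, src, pts):
    """Evaluate the fitted TPS mapping at new points."""
    d = np.linalg.norm(pts[:, None] - src[None, :], axis=-1)
    U = np.where(d > 0, d**2 * np.log(d + 1e-300), 0.0)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ params[:len(src)] + P @ params[len(src):]

# Hypothetical comb-calibration fiducials: a regular grid observed through
# a smooth nonlinear distortion, as in a drifting streak camera.
gx, gy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
true_pts = np.column_stack([gx.ravel(), gy.ravel()])
warped = true_pts + 0.05 * np.sin(2 * np.pi * true_pts[:, ::-1])
params = tps_fit(warped, true_pts)   # maps observed -> corrected coordinates
```

    By construction the spline interpolates the control points exactly, which is what makes it attractive for warp correction: the comb fiducials are restored to their known positions and the distortion between them is modeled smoothly.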

  1. A Secure Information Framework with APRQ Properties

    NASA Astrophysics Data System (ADS)

    Rupa, Ch.

    2017-08-01

The Internet of Things is one of the most trending topics in the digital world, and security issues are rampant. In the corporate or institutional setting, security risks are apparent from the outset. Market leaders are unable to use many cryptographic techniques due to their complexity, so many bits of private information, including IDs, are readily available for third parties to see and to utilize. There is a need to decrease the complexity and increase the robustness of cryptographic approaches. In view of this, a new cryptographic technique, a good encryption pact with adjacency, random-prime-number and quantum code properties, has been proposed. Here, encryption is done by using quantum photons with a Gray code. This approach uses concepts from physics and mathematics, with no external key exchange, to improve the security of the data. It also reduces key attacks by generating a key at the party side instead of sharing it. This method makes the security more robust than the existing approaches. Important properties of the Gray code and of quantum encoding are the adjacency property and the mapping of different photons to a single bit (0 or 1). These can reduce the avalanche effect. Cryptanalysis of the proposed method shows that it is resistant to various attacks and stronger than the existing approaches.
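
    The adjacency property the record leans on is that of the binary-reflected Gray code: consecutive codewords differ in exactly one bit. A minimal sketch (this is the standard Gray code construction, not the paper's full scheme):

```python
def gray(i):
    """Binary-reflected Gray code of integer i."""
    return i ^ (i >> 1)

def gray_inverse(g):
    """Recover i from its Gray code by cascading XORs."""
    i = 0
    while g:
        i ^= g
        g >>= 1
    return i

codes = [gray(i) for i in range(16)]
```

    The one-bit-per-step property is what limits error propagation (the "avalanche effect" the abstract mentions) when successive symbols are encoded.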

  2. Characterization, optimisation and process robustness of a co-processed mannitol for the development of orally disintegrating tablets.

    PubMed

    Soh, Josephine Lay Peng; Grachet, Maud; Whitlock, Mark; Lukas, Timothy

    2013-02-01

This is a study to fully assess a commercially available co-processed mannitol for its usefulness as an off-the-shelf excipient for developing orally disintegrating tablets (ODTs) by direct compression on a pilot scale (up to 4 kg). This work encompassed material characterization, formulation optimisation and process robustness. Overall, this co-processed mannitol possessed favourable physical attributes including low hygroscopicity and compactibility. Two designs of experiments (DoEs) were used to screen and optimise the placebo formulation. Xylitol and crospovidone concentrations were found to have the most significant impact on disintegration time (p < 0.05). Higher xylitol concentrations retarded disintegration. Avicel PH102 promoted faster disintegration than PH101 at higher levels of xylitol. Without xylitol, higher crospovidone concentrations yielded faster disintegration and reduced tablet friability. Lubrication sensitivity studies were later conducted at two fill loads and three levels each for lubricant concentration and number of blend rotations. Even at 75% fill load, the design space plot showed that 1.5% lubricant and 300 blend revolutions were sufficient to manufacture ODTs with ≤ 0.1% friability that disintegrated within 15 s. This study also describes results using a modified disintegration method based on the texture analyzer as an alternative to the USP method.

  3. A spatially informative optic flow model of bee colony with saccadic flight strategy for global optimization.

    PubMed

    Das, Swagatam; Biswas, Subhodip; Panigrahi, Bijaya K; Kundu, Souvik; Basu, Debabrota

    2014-10-01

This paper presents a novel search metaheuristic inspired by the physical interpretation of the optic flow of information in honeybees about their spatial surroundings, which helps them orient themselves and navigate through the search space while foraging. The interpreted behavior, combined with minimal foraging, is simulated by the artificial bee colony algorithm to develop a robust search technique that exhibits elevated performance in multidimensional objective space. Through a detailed experimental study and rigorous analysis, we highlight the statistical superiority enjoyed by our algorithm over a wide variety of functions as compared to some highly competitive state-of-the-art methods.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorham, P.W.; Allison, P.

We report initial results of the Antarctic Impulsive Transient Antenna (ANITA) 2006-2007 Long Duration Balloon flight, which searched for evidence of the flux of cosmogenic neutrinos. ANITA flew for 35 days looking for radio impulses that might be due to the Askaryan effect in neutrino-induced electromagnetic showers within the Antarctic ice sheets. In our initial high-threshold robust analysis, no neutrino candidates are seen, with no physics background. In a non-signal horizontal-polarization channel, we do detect 6 events consistent with radio impulses from extensive air showers, which helps to validate the effectiveness of our method. Upper limits derived from our analysis now begin to eliminate the highest cosmogenic neutrino models.

  5. Robustness of coevolution in resolving prisoner's dilemma games on interdependent networks subject to attack

    NASA Astrophysics Data System (ADS)

    Liu, Penghui; Liu, Jing

    2017-08-01

Recently, coevolution between strategy and network structure has been established as a rule to resolve social dilemmas and reach optimal situations for cooperation. Many follow-up studies have focused on how coevolution helps networks reorganize to deter defectors, and many coevolution methods have been proposed. However, the robustness of coevolution rules against attacks has not been studied much. Since attacks may directly influence the original evolutionary process of cooperation, robustness should be an important index when evaluating the quality of a coevolution method. In this paper, we focus on investigating the robustness of an elementary coevolution method in resolving the prisoner's dilemma game on interdependent networks. Three different types of time-independent attacks, namely edge attacks, instigation attacks and node attacks, have been employed to test its robustness. Through analyzing the simulation results obtained, we find this coevolution method is relatively robust against the edge attack and the node attack, as it successfully maintains cooperation in the population over the entire attack range. However, when the instigation probability of the attacked individuals is large or the attack range of the instigation attack is wide enough, the coevolutionary rule eventually fails to maintain cooperation in the population.

  6. Comparisons of Robustness and Sensitivity between Cancer and Normal Cells by Microarray Data

    PubMed Central

    Chu, Liang-Hui; Chen, Bor-Sen

    2008-01-01

Robustness is defined as the ability to uphold performance in the face of perturbations and uncertainties, and sensitivity is a measure of the system deviations generated by perturbations to the system. While cancer appears to be a robust but fragile system, little computational and quantitative evidence demonstrates robustness tradeoffs in cancer. Microarrays have been widely applied to decipher gene expression signatures in human cancer research, and quantification of global gene expression profiles facilitates precise prediction and modeling of cancer in systems biology. We provide several efficient computational methods based on system and control theory to compare robustness and sensitivity between cancer and normal cells by microarray data. Measurement of robustness and sensitivity by a linear stochastic model is introduced in this study, which shows oscillations in feedback loops of p53 and demonstrates robustness tradeoffs, suggesting that cancer is a robust system with some extreme fragilities. In addition, we measure the sensitivity of gene expression to perturbations in other gene expression levels and kinetic parameters, discuss nonlinear effects in feedback loops of p53 and extend our method to robustness-based cancer drug design. PMID:19259409

  7. Fast and robust reconstruction for fluorescence molecular tomography via a sparsity adaptive subspace pursuit method.

    PubMed

    Ye, Jinzuo; Chi, Chongwei; Xue, Zhenwen; Wu, Ping; An, Yu; Xu, Han; Zhang, Shuang; Tian, Jie

    2014-02-01

Fluorescence molecular tomography (FMT), as a promising imaging modality, can three-dimensionally locate the specific tumor position in small animals. However, effective and robust reconstruction of the fluorescent probe distribution in animals remains challenging. In this paper, we present a novel method based on sparsity adaptive subspace pursuit (SASP) for FMT reconstruction. Several innovative strategies, including subspace projection, a bottom-up sparsity adaptive approach, and a backtracking technique, are associated with the SASP method, which guarantees accuracy, efficiency, and robustness for FMT reconstruction. Three numerical experiments based on a mouse-mimicking heterogeneous phantom have been performed to validate the feasibility of the SASP method. The results show that the proposed SASP method can achieve satisfactory source localization with a bias less than 1 mm; the method is much faster than mainstream reconstruction methods; and the approach is robust even under quite ill-posed conditions. Furthermore, we have applied this method to an in vivo mouse model, and the results demonstrate the feasibility of practical FMT application with the SASP method.
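
    The family of greedy solvers the SASP method builds on can be illustrated with plain (fixed-sparsity) subspace pursuit; the sparsity-adaptive and backtracking refinements of the paper are not reproduced here, and the sparse "source" below is synthetic:

```python
import numpy as np

def subspace_pursuit(A, y, K, iters=20):
    """Subspace pursuit for K-sparse recovery of x from y = A @ x.
    Each iteration: merge the current support with the K columns most
    correlated with the residual, least-squares over the merged set,
    then prune back to the K largest coefficients."""
    m, n = A.shape
    support = np.argsort(np.abs(A.T @ y))[-K:]
    for _ in range(iters):
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s
        cand = np.union1d(support, np.argsort(np.abs(A.T @ r))[-K:])
        x_c, *_ = np.linalg.lstsq(A[:, cand], y, rcond=None)
        new_support = cand[np.argsort(np.abs(x_c))[-K:]]
        if np.array_equal(np.sort(new_support), np.sort(support)):
            break
        support = new_support
    x = np.zeros(n)
    x[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return x

# Synthetic sparse source (a stand-in for a localized fluorophore
# distribution observed through an underdetermined system matrix).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x_true = np.zeros(200)
x_true[[10, 50, 120]] = [1.5, -2.0, 1.0]
x_hat = subspace_pursuit(A, A @ x_true, K=3)
```

    In the noiseless, well-conditioned case the true sparse support is recovered exactly, which is the property that makes pursuit-type methods attractive for ill-posed FMT inverse problems.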

  8. Options for Robust Airfoil Optimization under Uncertainty

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Li, Wu

    2002-01-01

    A robust optimization method is developed to overcome point-optimization at the sampled design points. This method combines the best features from several preliminary methods proposed by the authors and their colleagues. The robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of spline control points as design variables yet the resulting airfoil shape does not need to be smoothed, and (3) it allows the user to make a tradeoff between the level of optimization and the amount of computing time consumed. For illustration purposes, the robust optimization method is used to solve a lift-constrained drag minimization problem for a two-dimensional (2-D) airfoil in Euler flow with 20 geometric design variables.

  9. A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization

    NASA Astrophysics Data System (ADS)

    Liu, Shuang; Hu, Xiangyun; Liu, Tianyou

    2014-07-01

Simulating natural ants' foraging behavior, the ant colony optimization (ACO) algorithm performs excellently on combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, ACO is seldom used to invert gravity and magnetic data. On the basis of a continuous and multi-dimensional objective function for potential field data inversion, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes, and ants directionally tour the nodes by use of transition probabilities. We update the pheromone trails by use of a Gaussian mapping between the objective function value and the quantity of pheromone. The algorithm can analyze the search results in real time and improve the rate of convergence and precision of inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method on synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
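
    The node-partition idea (discretize a continuous variable into nodes, select nodes by pheromone, reinforce good nodes through a Gaussian-type mapping) can be sketched on a toy 1-D objective. This is a deliberately simplified analogue, not the NP-ACO algorithm of the paper, and all parameters are invented:

```python
import numpy as np

def aco_minimize(f, lo, hi, n_nodes=101, n_ants=20, n_iter=60,
                 rho=0.1, seed=0):
    """Toy node-partition ACO for a 1-D objective: the interval [lo, hi]
    is discretized into nodes, ants pick nodes with probability
    proportional to pheromone, and pheromone is reinforced on the
    better nodes via a Gaussian mapping of the objective value."""
    rng = np.random.default_rng(seed)
    nodes = np.linspace(lo, hi, n_nodes)
    tau = np.ones(n_nodes)                    # pheromone trail per node
    best_x, best_f = None, np.inf
    for _ in range(n_iter):
        probs = tau / tau.sum()
        picks = rng.choice(n_nodes, size=n_ants, p=probs)
        vals = np.array([f(nodes[k]) for k in picks])
        if vals.min() < best_f:
            best_f = vals.min()
            best_x = nodes[picks[np.argmin(vals)]]
        tau *= (1.0 - rho)                    # evaporation
        deposit = np.exp(-(vals - vals.min())**2)   # Gaussian mapping
        np.add.at(tau, picks, deposit)
        tau = np.maximum(tau, 1e-6)
    return best_x, best_f

x_best, f_best = aco_minimize(lambda x: (x - 2.0)**2, -5.0, 5.0)
```

    Evaporation plus value-dependent deposits concentrate the sampling around the minimizer while keeping some residual exploration, which is where the method's robustness against premature convergence comes from.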

  10. Maximizing potential impact of experimental research into cognitive processes in health psychology: A systematic approach to material development.

    PubMed

    Hughes, Alicia M; Gordon, Rola; Chalder, Trudie; Hirsch, Colette R; Moss-Morris, Rona

    2016-11-01

There is an abundance of research into cognitive processing biases in clinical psychology, including the potential for applying cognitive bias modification techniques to assess the causal role of biases in maintaining anxiety and depression. Within the health psychology field, there is burgeoning interest in applying these experimental methods to assess potential cognitive biases in relation to physical health conditions and health-related behaviours. Experimental research in these areas could inform theoretical development by enabling measurement of implicit cognitive processes that may underlie unhelpful illness beliefs and help drive health-related behaviours. However, to date, there has been no systematic approach to adapting existing experimental paradigms for use within physical health research. Many studies fail to report how materials were developed for the population of interest or have used untested materials developed ad hoc. The lack of a protocol for developing stimulus specificity has contributed to large heterogeneity in methodologies and findings. In this article, we emphasize the need for standardized methods for stimuli development and replication in experimental work, particularly as it extends beyond its original anxiety and depression scope to other physical conditions. We briefly describe the paradigms commonly used to assess cognitive biases in attention and interpretation and then describe the steps involved in comprehensive/robust stimuli development for attention and interpretation paradigms, using illustrative examples from two conditions: chronic fatigue syndrome and breast cancer. This article highlights the value of performing rigorous stimuli development and provides tools to help researchers engage in this process. We believe this work is worthwhile to establish a body of high-quality and replicable experimental research within the health psychology literature. Statement of contribution What is already known on this subject?
Cognitive biases (e.g., tendencies to attend to negative information and/or interpret ambiguous information in negative ways) have a causal role in maintaining anxiety and depression. There is mixed evidence of cognitive biases in physical health conditions and chronic illness; one reason for this may be the heterogeneous stimuli used to assess attention and interpretation biases in these conditions. What does this study add? Steps for comprehensive/robust stimuli development for attention and interpretation paradigms are presented. Illustrative examples are provided from two conditions: chronic fatigue syndrome and breast cancer. We provide tools to help researchers develop condition-specific materials for experimental studies. © 2016 The British Psychological Society.

  11. Evaluation of Ares-I Control System Robustness to Uncertain Aerodynamics and Flex Dynamics

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; VanTassel, Chris; Bedrossian, Nazareth; Hall, Charles; Spanos, Pol

    2008-01-01

    This paper discusses the application of robust control theory to evaluate robustness of the Ares-I control systems. Three techniques for estimating upper and lower bounds of uncertain parameters which yield stable closed-loop response are used here: (1) Monte Carlo analysis, (2) mu analysis, and (3) characteristic frequency response analysis. All three methods are used to evaluate stability envelopes of the Ares-I control systems with uncertain aerodynamics and flex dynamics. The results show that characteristic frequency response analysis is the most effective of these methods for assessing robustness.
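
    As a hedged illustration of the first technique listed (Monte Carlo analysis), the sketch below samples an uncertain aerodynamic stiffness in a toy pitch-axis loop and counts the fraction of stable closed loops. The dynamics, gains, and uncertainty range are invented for the example and are not taken from the Ares-I models:

```python
import numpy as np

def stable_fraction(n_samples=2000, seed=0):
    """Monte Carlo robustness check on a toy rigid-body pitch loop:
    theta'' = Ma*theta + u, with PD control u = -(kp + M0)*theta - kd*theta'.
    The aerodynamic stiffness Ma is sampled from an uncertainty range and
    each closed loop is declared stable if all eigenvalues have
    negative real part."""
    rng = np.random.default_rng(seed)
    kp, kd, M0 = 4.0, 2.0, 1.0          # gains tuned for nominal Ma = M0
    stable = 0
    for _ in range(n_samples):
        Ma = rng.uniform(-2.0, 6.0)     # uncertain aerodynamic stiffness
        A = np.array([[0.0, 1.0],
                      [Ma - (kp + M0), -kd]])
        if np.all(np.linalg.eigvals(A).real < 0):
            stable += 1
    return stable / n_samples

frac = stable_fraction()
```

    For this toy loop the closed loop is stable exactly when Ma < kp + M0 = 5, so roughly 7/8 of the sampled range is stable; sweeping the range boundaries recovers a Monte Carlo estimate of the stability envelope, which is the quantity the paper compares across the three methods.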

  12. Robust stabilization of the Space Station in the presence of inertia matrix uncertainty

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Liu, Qiang; Sunkel, John

    1993-01-01

This paper presents a robust H-infinity full-state feedback control synthesis method for uncertain systems with D11 not equal to 0. The method is applied to the robust stabilization problem of the Space Station in the face of inertia matrix uncertainty. The control design objective is to find a robust controller that yields the largest stable hypercube in uncertain parameter space, while satisfying the nominal performance requirements. The significance of employing an uncertain plant model with D11 not equal to 0 is demonstrated.

  13. Repeatability and validity of a standardised maximal step-up test for leg function-a diagnostic accuracy study

    PubMed Central

    2011-01-01

Background Objectively assessed physical performance is a strong predictor of morbidity and premature death, and there is increasing interest in the role of sarcopenia in many chronic diseases. There is a need for robust and valid functional tests in clinical practice. Therefore, the repeatability and validity of a newly developed maximal step-up test (MST) were assessed. Methods The MST, assessing maximal step-up height (MSH) in 3-cm increments, was evaluated in 60 healthy middle-aged subjects, 30 women and 30 men. The repeatability of MSH and the correlation between MSH and isokinetic knee extension peak torque (IKEPT), self-reported physical function (SF-36, PF), patient demographics and self-reported physical activity were investigated. Results The repeatability between occasions and between testers was 6 cm. MSH (range 12-45 cm) was significantly correlated with IKEPT (r = 0.68, P < 0.001), SF-36 PF score (r = 0.29, P = 0.03), sex, age, weight and BMI. The results also show that an MSH above 32 cm discriminated subjects in our study with no limitation in self-reported physical function. Conclusions The standardised MST is considered a reliable leg function test for clinical practice. The MSH was related to knee extension strength and self-reported physical function. The precision of the MST for identifying limitations in physical function needs further investigation. PMID:21854575

  14. Fundamental Physics from Observations of White Dwarf Stars

    NASA Astrophysics Data System (ADS)

    Bainbridge, M. B.; Barstow, M. A.; Reindl, N.; Barrow, J. D.; Webb, J. K.; Hu, J.; Preval, S. P.; Holberg, J. B.; Nave, G.; Tchang-Brillet, L.; Ayres, T. R.

    2017-03-01

Variations in fundamental constants provide an important test of theories of grand unification. Potentially, white dwarf spectra allow us to directly observe variation in fundamental constants at locations of high gravitational potential. We study hot, metal-polluted white dwarf stars, combining far-UV spectroscopic observations, atomic physics, atmospheric modelling and fundamental physics, in the search for variation in the fine structure constant. This registers as small but measurable shifts in the observed wavelengths of highly ionized Fe and Ni lines when compared to laboratory wavelengths. Measurements of these shifts were performed by Berengut et al. (2013) using high-resolution STIS spectra of G191-B2B, demonstrating the validity of the method. We have extended this work by: (a) using new (high-precision) laboratory wavelengths, (b) refining the analysis methodology (incorporating robust techniques from previous studies of quasars), and (c) enlarging the sample of white dwarf spectra. A successful detection would be the first direct measurement of a gravitational field effect on a bare constant of nature. We describe our approach and present preliminary results.
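
    The shift-to-variation conversion behind such measurements is commonly written as ω = ω₀ + q·x with x = (α/α₀)² − 1, where q is the line's sensitivity coefficient. A minimal sketch of the small-shift inversion; the line, wavenumbers, and q value below are hypothetical, chosen only to show the arithmetic:

```python
def delta_alpha_over_alpha(omega_obs, omega_lab, q):
    """Invert omega_obs = omega_lab + q*((alpha/alpha0)^2 - 1) for the
    fractional variation of alpha, in the small-shift limit where
    x = (alpha/alpha0)^2 - 1 ~ 2 * (dalpha/alpha)."""
    x = (omega_obs - omega_lab) / q
    return x / 2.0

# Hypothetical numbers: a line with sensitivity q = 1500 cm^-1 observed
# 0.03 cm^-1 away from its laboratory wavenumber of 50000 cm^-1.
da = delta_alpha_over_alpha(50000.03, 50000.00, q=1500.0)
```

    Lines with large positive and negative q observed in the same spectrum are what make the method robust against common-mode wavelength calibration errors.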

  15. Interactions of double patterning technology with wafer processing, OPC and design flows

    NASA Astrophysics Data System (ADS)

    Lucas, Kevin; Cork, Chris; Miloslavsky, Alex; Luk-Pat, Gerry; Barnes, Levi; Hapli, John; Lewellen, John; Rollins, Greg; Wiaux, Vincent; Verhaegen, Staf

    2008-03-01

Double patterning technology (DPT) is one of the main options for printing logic devices with half-pitch less than 45 nm, and flash and DRAM memory devices with half-pitch less than 40 nm. DPT methods decompose the original design intent into two individual masking layers, which are each patterned using single exposures and existing 193 nm lithography tools. The results of the individual patterning layers combine to re-create the design intent pattern on the wafer. In this paper we study interactions of DPT with lithography, mask synthesis and physical design flows. Double exposure and etch patterning steps create complexity for both process and design flows. DPT decomposition is a critical software step which will be performed in physical design and also in mask synthesis. Decomposition includes cutting (splitting) of original design intent polygons into multiple polygons where required, and coloring of the resulting polygons. We evaluate the ability to meet key physical design goals, such as reducing circuit area, minimizing rework, ensuring DPT compliance, guaranteeing patterning robustness on individual layer targets, ensuring symmetric wafer results, and creating uniform wafer density for the individual patterning layers.
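
    The coloring step reduces to 2-coloring a conflict graph: polygons closer than the single-exposure pitch must land on different masks, and an odd cycle of conflicts means the layout is not DPT-compliant without further cutting. A minimal BFS sketch with invented toy graphs:

```python
from collections import deque

def dpt_color(n_polys, conflicts):
    """2-color a DPT conflict graph by BFS.  Nodes are decomposed polygons;
    an edge means two polygons are too close for a single exposure and
    must go on different masks.  Returns (colors, odd_cycle_found)."""
    colors = [None] * n_polys
    for start in range(n_polys):
        if colors[start] is not None:
            continue
        colors[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in conflicts.get(u, []):
                if colors[v] is None:
                    colors[v] = 1 - colors[u]
                    queue.append(v)
                elif colors[v] == colors[u]:
                    return colors, True   # odd cycle: not DPT-compliant
    return colors, False

# A 4-cycle of tight spacings is colorable; adding a chord creates a
# 3-cycle, which cannot be split across two masks without a cut.
ok_graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
bad_graph = {0: [1, 3, 2], 1: [0, 2], 2: [1, 3, 0], 3: [2, 0]}
```

    Production decomposers add polygon cutting precisely to break such odd cycles, trading stitch locations against the compliance and density goals listed above.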

  16. A hybrid multi-objective imperialist competitive algorithm and Monte Carlo method for robust safety design of a rail vehicle

    NASA Astrophysics Data System (ADS)

    Nejlaoui, Mohamed; Houidi, Ajmi; Affi, Zouhaier; Romdhane, Lotfi

    2017-10-01

This paper deals with the robust safety design optimization of a rail vehicle system moving on short-radius curved tracks. A combined multi-objective imperialist competitive algorithm and Monte Carlo method is developed and used for the robust multi-objective optimization of the rail vehicle system. This robust optimization of rail vehicle safety simultaneously considers the derailment angle and its standard deviation, taking the uncertainties of the design parameters into account. The obtained results showed that the robust design significantly reduces the sensitivity of rail vehicle safety to design parameter uncertainties compared to the deterministic design and to results from the literature.

  17. Molecular simulation of the thermophysical properties and phase behaviour of impure CO2 relevant to CCS.

    PubMed

    Cresswell, Alexander J; Wheatley, Richard J; Wilkinson, Richard D; Graham, Richard S

    2016-10-20

Impurities from the CCS chain can greatly influence the physical properties of CO2. This has important design, safety and cost implications for the compression, transport and storage of CO2. There is an urgent need to understand and predict the properties of impure CO2 to assist with CCS implementation. However, CCS presents demanding modelling requirements. A suitable model must both accurately and robustly predict CO2 phase behaviour over a wide range of temperatures and pressures, and maintain that predictive power for CO2 mixtures with numerous, mutually interacting chemical species. A promising technique to address this task is molecular simulation. It offers a molecular approach, with foundations in firmly established physical principles, along with the potential to predict the wide range of physical properties required for CCS. The quality of predictions from molecular simulation depends on accurate force-fields to describe the interactions between CO2 and other molecules. Unfortunately, there is currently no universally applicable method to obtain force-fields suitable for molecular simulation. In this paper we present two methods of obtaining force-fields: the first being semi-empirical and the second using ab initio quantum-chemical calculations. In the first approach we optimise the impurity force-field against measurements of the phase and pressure-volume behaviour of CO2 binary mixtures with N2, O2, Ar and H2. A gradient-free optimiser allows us to use the simulation itself as the underlying model. This leads to accurate and robust predictions under conditions relevant to CCS. In the second approach we use quantum-chemical calculations to produce ab initio evaluations of the interactions between CO2 and relevant impurities, taking N2 as an exemplar. We use a modest number of these calculations to train a machine-learning algorithm, known as a Gaussian process, to describe these data.
The resulting model is then able to accurately predict a much broader set of ab initio force-field calculations at comparatively low numerical cost. Although our method is not yet ready to be implemented in a molecular simulation, we outline the necessary steps here. Such simulations have the potential to deliver first-principles simulation of the thermodynamic properties of impure CO2, without fitting to experimental data.
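
    The Gaussian-process surrogate idea can be sketched in a few lines: fit a squared-exponential-kernel GP to sparse samples of an interaction energy curve, then predict densely. The Lennard-Jones curve below is only a stand-in for ab initio CO2-impurity energies, and the kernel hyperparameters are invented:

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, length=0.15, noise=1e-8):
    """Gaussian-process regression (posterior mean) with a
    squared-exponential kernel, used to interpolate a 1-D pair
    potential from sparse samples."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return k(x_test, x_train) @ alpha

# Stand-in for ab initio interaction energies: a Lennard-Jones curve
# (reduced units) sampled at a modest number of separations.
def lj(r):
    return 4.0 * ((1.0 / r)**12 - (1.0 / r)**6)

r_train = np.linspace(1.0, 3.0, 40)
r_test = np.linspace(1.1, 2.9, 60)
e_pred = gp_predict(r_train, lj(r_train), r_test)
```

    The appeal for force-field building is exactly what the abstract describes: a modest number of expensive quantum-chemical evaluations trains a surrogate that predicts many more points at negligible cost, with the GP also supplying an uncertainty estimate (not shown here).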

  18. Tensor Network Wavefunctions for Topological Phases

    NASA Astrophysics Data System (ADS)

    Ware, Brayden Alexander

The combination of quantum effects and interactions in quantum many-body systems can result in exotic phases with fundamentally entangled ground state wavefunctions: topological phases. Topological phases come in two types, both of which will be studied in this thesis. In topologically ordered phases, the pattern of entanglement in the ground state wavefunction encodes the statistics of exotic emergent excitations, a universal indicator of a phase that is robust to all types of perturbations. In symmetry protected topological phases, the entanglement instead encodes a universal response of the system to symmetry defects, an indicator that is robust only to perturbations respecting the protecting symmetry. Finding and creating these phases in physical systems is a motivating challenge that tests all aspects (analytical, numerical, and experimental) of our understanding of the quantum many-body problem. Nearly three decades ago, the creation of simple ansatz wavefunctions, such as the Laughlin fractional quantum Hall state, the AKLT state, and the resonating valence bond state, spurred analytical understanding of both the role of entanglement in topological physics and the physical mechanisms by which it can arise. However, quantitative understanding of the relevant phase diagrams is still challenging. For this purpose, tensor networks provide a toolbox for systematically improving wavefunction ansatz while still capturing the relevant entanglement properties. In this thesis, we use the tools of entanglement and tensor networks to analyze ansatz states for several proposed new phases. In the first part, we study a featureless phase of bosons on the honeycomb lattice and argue that this phase can be topologically protected under any one of several distinct subsets of the crystalline lattice symmetries. We discuss methods of detecting such phases with entanglement and without.
In the second part, we consider the problem of constructing fixed-point wavefunctions for intrinsically fermionic topological phases, i.e. topological phases constructed out of fermions with a nontrivial response to fermion parity defects. A zero correlation length wavefunction and a commuting projector Hamiltonian that realizes this wavefunction as its ground state are constructed. Using an appropriate generalization of the minimally entangled states method for extraction of topological order from the ground states on a torus to the intrinsically fermionic case, we fully characterize the corresponding topological order as Ising x (px - ipy). We argue that this phase can be captured using fermionic tensor networks, expanding the applicability of tensor network methods.

  19. Improving Large Cetacean Implantable Satellite Tag Designs to Maximize Tag Robustness and Minimize Health Effects to Individual Animals

    DTIC Science & Technology

    2013-09-30

Designs to Maximize Tag Robustness and Minimize Health Effects to Individual Animals. Alexandre N. Zerbini, Cascadia Research Collective, 218 ½ 4th... the blubber-muscle interface and minimize physical and physiological effects of body-penetrating tags to individual animals. OBJECTIVES: (1) ... integrity of designs created in Objective (1) during laboratory experiments and in cetacean carcasses; (3) Examine structural tissue damage in the

  20. Multi-criteria robustness analysis of metro networks

    NASA Astrophysics Data System (ADS)

    Wang, Xiangrong; Koç, Yakup; Derrible, Sybil; Ahmad, Sk Nasir; Pino, Willem J. A.; Kooij, Robert E.

    2017-05-01

    Metros (heavy rail transit systems) are integral parts of urban transportation systems. Failures in their operations can have serious impacts on urban mobility, and measuring their robustness is therefore critical. Moreover, as physical networks, metros can be viewed as topological entities, and as such they possess measurable network properties. In this article, by using network science and graph theory, we investigate ten theoretical and four numerical robustness metrics and their performance in quantifying the robustness of 33 metro networks under random failures or targeted attacks. We find that the ten theoretical metrics capture two distinct aspects of robustness of metro networks. First, several metrics place an emphasis on alternative paths. Second, other metrics place an emphasis on the length of the paths. To account for all aspects, we standardize all ten indicators and plot them on radar diagrams to assess the overall robustness for metro networks. Overall, we find that Tokyo and Rome are the most robust networks. Rome benefits from short transferring and Tokyo has a significant number of transfer stations, both in the city center and in the peripheral area of the city, promoting both a higher number of alternative paths and overall relatively short path-lengths.
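    The two aspects the authors identify, alternative paths and path length, can be illustrated on a toy graph. The sketch below (plain Python, a hypothetical 5-station network, not the paper's 33 metros or its ten metrics) computes one crude indicator of each kind: the cyclomatic number as a count of independent alternative loops, and the average shortest path length.

```python
from collections import deque

def shortest_path_lengths(adj, src):
    """BFS distances from src in an unweighted graph (dict: node -> set of neighbors)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def avg_shortest_path(adj):
    """Mean BFS distance over all ordered node pairs (a path-length robustness proxy)."""
    nodes = list(adj)
    total = pairs = 0
    for s in nodes:
        dist = shortest_path_lengths(adj, s)
        for t in nodes:
            if t != s:
                total += dist[t]
                pairs += 1
    return total / pairs

def cyclomatic_number(adj):
    """E - V + 1 for a connected graph: a crude count of independent alternative loops."""
    edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    return edges - len(adj) + 1

# Hypothetical 5-station network: a ring with one chord, so alternative paths exist.
net = {0: {1, 2, 4}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {0, 3}}
```

A denser network scores higher on the loop count and lower on the average path length, which is the trade-off the radar diagrams in the paper visualize.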

  1. How simple autonomous decisions evolve into robust behaviours? A review from neurorobotics, cognitive, self-organized and artificial immune systems fields.

    PubMed

    Fernandez-Leon, Jose A; Acosta, Gerardo G; Rozenfeld, Alejandro

    2014-10-01

Researchers in diverse fields, such as neuroscience, systems biology, and autonomous robotics, have been intrigued by the origin of and mechanisms for biological robustness. Darwinian theory suggests that adaptive mechanisms, as a way of reaching robustness, could evolve by natural selection acting successively on numerous heritable variations. However, is this understanding enough to explain how biological systems remain robust during their interactions with the surroundings? Here, we describe selected studies of bio-inspired systems that show behavioral robustness. From neurorobotics, cognitive, self-organizing, and artificial immune system perspectives, our discussion focuses mainly on how robust behaviors evolve or emerge in systems that interact with their surroundings. These descriptions are twofold. First, we introduce examples from autonomous robotics to illustrate how the design of robust control can be approached for autonomous navigation by terrain and underwater vehicles in complex environments. We also include descriptions of bio-inspired self-organizing systems. Then, we introduce studies that contextualize experimental evolution with simulated organisms and physical robots to exemplify how natural selection can lead to the evolution of robustness by means of adaptive behaviors. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Religion, Spirituality, and Physical Health in Cancer Patients: A Meta-Analysis

    PubMed Central

    Jim, Heather S.L.; Pustejovsky, James; Park, Crystal L.; Danhauer, Suzanne C.; Sherman, Allen C.; Fitchett, George; Merluzzi, Thomas V.; Munoz, Alexis R.; George, Login; Snyder, Mallory A.; Salsman, John M.

    2015-01-01

    Background Whereas religion/spirituality (R/S) is important in its own right for many cancer patients, a large body of research has examined whether R/S is also associated with better physical health outcomes. This literature has been characterized by heterogeneity in sample composition, measures of R/S, and measures of physical health. In an effort to synthesize previous findings, we conducted a meta-analysis of the relationship between R/S and patient-reported physical health in cancer patients. Methods A search of PubMed, PsycInfo, CINAHL, and Cochrane Library yielded 2,073 abstracts, which were independently evaluated by pairs of raters. Meta-analysis was conducted on 497 effect sizes from 101 unique samples encompassing over 32,000 adult cancer patients. R/S measures were categorized into affective, behavioral, cognitive, and ‘other’ dimensions. Physical health measures were categorized into physical well-being, functional well-being, and physical symptoms. Average estimated correlations (Fisher's z) were calculated using generalized estimating equations with robust variance estimation. Results Overall R/S was associated with overall physical health (z=.153, p<.001); this relationship was not moderated by sociodemographic or clinical variables. Affective R/S was associated with physical well-being (z=.167, p<.001), functional well-being (z=.343, p<.001), and physical symptoms (z=.282, p<.001). Cognitive R/S was associated with physical well-being (z=.079, p<.05) and functional well-being (z=.090, p<.01). ‘Other’ R/S was associated with functional well-being (z=.100, p<.05). Conclusions Results of the current meta-analysis suggest that greater R/S is associated with better patient-reported physical health. These results underscore the importance of attending to patients’ religious and spiritual needs as part of comprehensive cancer care. PMID:26258868
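    The pooling step behind such average estimated correlations can be illustrated in a few lines. This is a deliberately simplified fixed-effect, inverse-variance sketch of Fisher z pooling; the study itself used generalized estimating equations with robust variance estimation, which this does not reproduce.

```python
import numpy as np

def pooled_correlation(rs, ns):
    """Fisher z-transform each correlation, weight by ~1/var(z) = n - 3,
    average, and back-transform to the correlation scale."""
    z = np.arctanh(np.asarray(rs, dtype=float))   # Fisher's z
    w = np.asarray(ns, dtype=float) - 3.0         # inverse-variance weights
    return np.tanh(np.sum(w * z) / np.sum(w))
```

For example, pooling identical correlations returns that correlation, and pooling r = 0.1 and r = 0.3 from equal-sized samples lands between the two.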

  3. Robust power spectral estimation for EEG data

    PubMed Central

    Melman, Tamar; Victor, Jonathan D.

    2016-01-01

Background Typical electroencephalogram (EEG) recordings often contain substantial artifact. These artifacts, often large and intermittent, can interfere with quantification of the EEG via its power spectrum. To reduce the impact of artifact, EEG records are typically cleaned by a preprocessing stage that removes individual segments or components of the recording. However, such preprocessing can introduce bias, discard available signal, and be labor-intensive. With this motivation, we present a method that uses robust statistics to reduce dependence on preprocessing by minimizing the effect of large intermittent outliers on the spectral estimates. New method Using the multitaper method [1] as a starting point, we replaced the final step of the standard power spectrum calculation with a quantile-based estimator, and the jackknife approach to confidence intervals with a Bayesian approach. The method is implemented in provided MATLAB modules, which extend the widely used Chronux toolbox. Results Using both simulated and human data, we show that in the presence of large intermittent outliers, the robust method produces improved estimates of the power spectrum, and that the Bayesian confidence intervals yield close-to-veridical coverage factors. Comparison with existing method The robust method, as compared to the standard method, is less affected by artifact: inclusion of outliers produces fewer changes in the shape of the power spectrum as well as in the coverage factor. Conclusion In the presence of large intermittent outliers, the robust method can reduce dependence on data preprocessing as compared to standard methods of spectral estimation. PMID:27102041
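    A minimal sketch of the core idea, assuming sine tapers and a median in place of the usual mean across eigenspectra; the published method uses the Chronux multitaper machinery, a general quantile-based estimator, and Bayesian confidence intervals, none of which is reproduced here.

```python
import numpy as np

def robust_multitaper_psd(x, n_tapers=5):
    """Median (rather than mean) across sine-tapered periodograms.
    The median downweights large intermittent outliers; dividing by log(2)
    corrects the bias of the median of a chi-squared(2)-distributed spectrum."""
    n = len(x)
    k = np.arange(1, n_tapers + 1)[:, None]
    t = np.arange(n)[None, :]
    tapers = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * k * (t + 1) / (n + 1))
    eigenspectra = np.abs(np.fft.rfft(tapers * x[None, :], axis=1)) ** 2
    return np.median(eigenspectra, axis=0) / np.log(2.0)
```

Replacing the mean by a quantile is what buys robustness: a single huge artifact inflates at most a minority of the tapered estimates at each frequency, leaving the median nearly unchanged.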

  4. Non-Hermitian bidirectional robust transport

    NASA Astrophysics Data System (ADS)

    Longhi, Stefano

    2017-01-01

Transport of quantum or classical waves in open systems is known to be strongly affected by non-Hermitian terms that arise from an effective description of system-environment interaction. A simple and paradigmatic example of non-Hermitian transport, originally introduced by Hatano and Nelson two decades ago [N. Hatano and D. R. Nelson, Phys. Rev. Lett. 77, 570 (1996), 10.1103/PhysRevLett.77.570], is the hopping dynamics of a quantum particle on a one-dimensional tight-binding lattice in the presence of an imaginary vector potential. The imaginary gauge field can prevent Anderson localization via non-Hermitian delocalization, opening up a mobility region and realizing robust transport immune to disorder and backscattering. As with the robust transport of topologically protected edge states in quantum Hall and topological insulator systems, non-Hermitian robust transport in the Hatano-Nelson model is unidirectional. However, there is no physical impediment to observing robust bidirectional non-Hermitian transport. Here it is shown that in a quasi-one-dimensional zigzag lattice, with non-Hermitian (imaginary) hopping amplitudes and a synthetic gauge field, robust transport immune to backscattering can occur bidirectionally along the lattice.
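    A small numerical sketch of the original Hatano-Nelson mechanism (not the zigzag lattice of this paper): asymmetric hoppings t·exp(±g) encode the imaginary gauge field. On a clean ring the spectrum becomes complex, a signature of delocalized, current-carrying states, while on an open chain a gauge transformation removes g and the spectrum stays real.

```python
import numpy as np

def hatano_nelson(n, t=1.0, g=0.3, disorder=0.0, periodic=True, seed=0):
    """Hatano-Nelson chain: hoppings t*exp(+g) one way and t*exp(-g) the other
    encode an imaginary gauge field; optional on-site disorder."""
    rng = np.random.default_rng(seed)
    h = np.diag(disorder * rng.uniform(-1.0, 1.0, n))
    for i in range(n - 1):
        h[i + 1, i] = t * np.exp(g)    # amplified hop to the right
        h[i, i + 1] = t * np.exp(-g)   # suppressed hop to the left
    if periodic:
        h[0, n - 1] = t * np.exp(g)
        h[n - 1, 0] = t * np.exp(-g)
    return h

# Clean ring: the band bends into an ellipse in the complex plane (delocalization).
ev_ring = np.linalg.eigvals(hatano_nelson(14))
# Open chain: a similarity (gauge) transformation removes g, so eigenvalues are real.
ev_open = np.linalg.eigvals(hatano_nelson(14, periodic=False))
```

The complex ring spectrum versus the real open-chain spectrum is the standard numerical fingerprint of non-Hermitian delocalization under an imaginary gauge field.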

  5. Robust Online Monitoring for Calibration Assessment of Transmitters and Instrumentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramuhalli, Pradeep; Coble, Jamie B.; Shumaker, Brent

Robust online monitoring (OLM) technologies are expected to enable the extension or elimination of periodic sensor calibration intervals in operating and new reactors. These advances in OLM technologies will improve the safety and reliability of current and planned nuclear power systems through improved accuracy and increased reliability of sensors used to monitor key parameters. In this article, we discuss an overview of research being performed within the Nuclear Energy Enabling Technologies (NEET)/Advanced Sensors and Instrumentation (ASI) program for the development of OLM algorithms that use sensor outputs, in combination with other available information, to 1) determine whether one or more sensors are out of calibration or failing and 2) replace a failing sensor's output with reliable, accurate estimates. Algorithm development is focused on the following OLM functions: signal validation, virtual sensing, and sensor response-time assessment. These algorithms incorporate, at their base, a Gaussian process-based uncertainty quantification (UQ) method. Various plant models (using kernel regression, GP, or hierarchical models) may be used to predict sensor responses under various plant conditions. These predicted responses can then be applied in fault detection (sensor output and response time) and in computing the correct value (virtual sensing) of a failing physical sensor. The methods being evaluated in this work can compute confidence levels along with the predicted sensor responses and, as a result, may have the potential for compensating for sensor drift in real time (online recalibration). Evaluation was conducted using data from multiple sources (laboratory flow loops and plant data).
Ongoing research in this project is focused on further evaluation of the algorithms, optimization for accuracy and computational efficiency, and integration into a suite of tools for robust OLM that are applicable to monitoring sensor calibration state in nuclear power plants.

  6. The significance and robustness of a plasma free amino acid (PFAA) profile-based multiplex function for detecting lung cancer

    PubMed Central

    2013-01-01

Background We have recently reported on the changes in plasma free amino acid (PFAA) profiles in lung cancer patients and the efficacy of a PFAA-based, multivariate discrimination index for the early detection of lung cancer. In this study, we aimed to verify the usefulness and robustness of PFAA profiling for detecting lung cancer using new test samples. Methods Plasma samples were collected from 171 lung cancer patients and 3849 controls without apparent cancer. PFAA levels were measured by high-performance liquid chromatography (HPLC)–electrospray ionization (ESI)–mass spectrometry (MS). Results High reproducibility was observed both in the changes in the PFAA profiles of the lung cancer patients and in the discriminating performance for lung cancer patients, compared with previously reported results. Furthermore, the multivariate discriminating functions obtained in previous studies clearly distinguished the lung cancer patients from the controls based on the area under the receiver-operator characteristic curve (AUC of ROC = 0.731 to 0.806), strongly suggesting the robustness of the methodology for clinical use. Moreover, the results suggested that combining this classifier with existing tumor markers improves their clinical performance. Conclusions These findings suggest that PFAA profiling, which involves a relatively simple plasma assay and imposes a low physical burden on subjects, has great potential for improving the early detection of lung cancer. PMID:23409863

  7. Tuning Monotonic Basin Hopping: Improving the Efficiency of Stochastic Search as Applied to Low-Thrust Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Englander, Arnold C.

    2014-01-01

Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade [1, 2, 3, 4, 5, 6]. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen for their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by J. Englander [3, 6]) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness. Efficiency is finding better solutions in less time. Robustness is efficiency that is undiminished by (a) the boundary conditions and internal constraints of the optimization problem being solved and (b) variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
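    The core loop can be sketched as follows. A toy 2-D Rastrigin objective and a crude fixed-step local search stand in for the trajectory-optimization cost and its inner NLP solver; only the Cauchy-hop structure reflects the method discussed here.

```python
import numpy as np

def rastrigin(x):
    """Toy multimodal objective standing in for a trajectory-optimization cost."""
    return 10.0 * len(x) + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def local_descent(f, x, step=0.01, sweeps=200):
    """Crude fixed-step coordinate descent standing in for the inner local solver."""
    for _ in range(sweeps):
        for i in range(len(x)):
            for d in (step, -step):
                trial = x.copy()
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
    return x

def mbh(f, dim=2, hops=60, scale=0.5, seed=1):
    """Monotonic basin hopping: perturb the incumbent with heavy-tailed
    Cauchy-distributed hops, locally optimize, and keep only improvements."""
    rng = np.random.default_rng(seed)
    best = local_descent(f, rng.uniform(-2.0, 2.0, dim))
    for _ in range(hops):
        hop = scale * rng.standard_cauchy(dim)  # occasional very long hops
        candidate = local_descent(f, best + hop)
        if f(candidate) < f(best):              # monotonic acceptance rule
            best = candidate
    return best
```

The Cauchy draws give mostly short hops within the current basin plus occasional very long ones, which is exactly the super-diffusive search behavior the paper credits for the improvement over uniform draws.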

  8. Ab Initio Determinations of Photoelectron Spectra Including Vibronic Features: An Upper-Level Undergraduate Physical Chemistry Laboratory

    ERIC Educational Resources Information Center

    Lord, Richard L.; Davis, Lisa; Millam, Evan L.; Brown, Eric; Offerman, Chad; Wray, Paul; Green, Susan M. E.

    2008-01-01

    We present a first-principles determination of the photoelectron spectra of water and hypochlorous acid as a laboratory exercise accessible to students in an undergraduate physical chemistry course. This paper demonstrates the robustness and user-friendliness of software developed for the Franck-Condon factor calculation. While the calculator is…

  9. Mathematical study on robust tissue pattern formation in growing epididymal tubule.

    PubMed

    Hirashima, Tsuyoshi

    2016-10-21

Tissue pattern formation during development is a reproducible morphogenetic process organized by a series of kinetic cellular activities, leading to the building of functional and stable organs. Recent studies focusing on mechanical aspects have revealed physical mechanisms by which cellular activities contribute to the formation of reproducible tissue patterns; however, our understanding of what factors achieve the reproducibility of such patterning, and how, is far from complete. Here, I focus on tube pattern formation during murine epididymal development and show, using a mathematical model based on experimental data, that two factors in the physical design of the patterning, the proliferative zone within the tubule and the viscosity of the tissues surrounding the tubule, control the reproducibility of the epididymal tubule pattern. Extensive numerical simulation of the simple mathematical model revealed that a spatially localized proliferative zone within the tubule, observed in experiments, results in a more reproducible tubule pattern. Moreover, I found that the viscosity of the tissues surrounding the tubule imposes a trade-off between pattern reproducibility and the spatial accuracy of the region where the tubule pattern is formed. This indicates the existence of an optimum in the material properties of tissues for robust patterning of the epididymal tubule. The results, obtained by numerical analysis based on experimental observations, provide general insight into how physical design realizes robust tissue pattern formation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Recovery Schemes for Primitive Variables in General-relativistic Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Siegel, Daniel M.; Mösta, Philipp; Desai, Dhruv; Wu, Samantha

    2018-05-01

    General-relativistic magnetohydrodynamic (GRMHD) simulations are an important tool to study a variety of astrophysical systems such as neutron star mergers, core-collapse supernovae, and accretion onto compact objects. A conservative GRMHD scheme numerically evolves a set of conservation equations for “conserved” quantities and requires the computation of certain primitive variables at every time step. This recovery procedure constitutes a core part of any conservative GRMHD scheme and it is closely tied to the equation of state (EOS) of the fluid. In the quest to include nuclear physics, weak interactions, and neutrino physics, state-of-the-art GRMHD simulations employ finite-temperature, composition-dependent EOSs. While different schemes have individually been proposed, the recovery problem still remains a major source of error, failure, and inefficiency in GRMHD simulations with advanced microphysics. The strengths and weaknesses of the different schemes when compared to each other remain unclear. Here we present the first systematic comparison of various recovery schemes used in different dynamical spacetime GRMHD codes for both analytic and tabulated microphysical EOSs. We assess the schemes in terms of (i) speed, (ii) accuracy, and (iii) robustness. We find large variations among the different schemes and that there is not a single ideal scheme. While the computationally most efficient schemes are less robust, the most robust schemes are computationally less efficient. More robust schemes may require an order of magnitude more calls to the EOS, which are computationally expensive. We propose an optimal strategy of an efficient three-dimensional Newton–Raphson scheme and a slower but more robust one-dimensional scheme as a fall-back.
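    The flavor of such a recovery can be shown in a stripped-down setting: special-relativistic hydrodynamics (no magnetic field) with an analytic ideal-gas EOS, solved by a 1D Newton-Raphson iteration on the pressure. This is only a sketch of the scheme family the paper benchmarks, not any of its actual tabulated-EOS GRMHD recovery schemes.

```python
import numpy as np

GAMMA = 5.0 / 3.0  # ideal-gas adiabatic index (stand-in for a tabulated EOS)

def prims_from_pressure(p, D, S, tau):
    """Primitives (rho, v, eps) implied by conserved (D, S, tau) at trial pressure p."""
    v = S / (tau + D + p)
    W = 1.0 / np.sqrt(1.0 - v * v)   # Lorentz factor
    rho = D / W
    eps = (tau + D * (1.0 - W) + p * (1.0 - W * W)) / (D * W)
    return rho, v, eps

def recover(D, S, tau, p0=1.0, tol=1e-12, max_iter=50):
    """1D Newton-Raphson on the pressure residual f(p) = p_EOS(rho, eps) - p."""
    p = p0
    for _ in range(max_iter):
        rho, v, eps = prims_from_pressure(p, D, S, tau)
        f = (GAMMA - 1.0) * rho * eps - p
        dp = 1e-8 * max(abs(p), 1.0)  # finite-difference derivative of f
        rho2, _, eps2 = prims_from_pressure(p + dp, D, S, tau)
        df = ((GAMMA - 1.0) * rho2 * eps2 - (p + dp) - f) / dp
        step = f / df
        p -= step
        if abs(step) < tol * max(abs(p), 1.0):
            break
    rho, v, eps = prims_from_pressure(p, D, S, tau)
    return rho, v, eps, p
```

Even in this toy version the trade-offs the paper quantifies are visible: each Newton iteration costs EOS evaluations, and convergence depends on the initial guess, which is why robustness and efficiency pull in opposite directions for the production schemes.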

  11. Efficient robust doubly adaptive regularized regression with applications.

    PubMed

    Karunamuni, Rohana J; Kong, Linglong; Tu, Wei

    2018-01-01

    We consider the problem of estimation and variable selection for general linear regression models. Regularized regression procedures have been widely used for variable selection, but most existing methods perform poorly in the presence of outliers. We construct a new penalized procedure that simultaneously attains full efficiency and maximum robustness. Furthermore, the proposed procedure satisfies the oracle properties. The new procedure is designed to achieve sparse and robust solutions by imposing adaptive weights on both the decision loss and the penalty function. The proposed method of estimation and variable selection attains full efficiency when the model is correct and, at the same time, achieves maximum robustness when outliers are present. We examine the robustness properties using the finite-sample breakdown point and an influence function. We show that the proposed estimator attains the maximum breakdown point. Furthermore, there is no loss in efficiency when there are no outliers or the error distribution is normal. For practical implementation of the proposed method, we present a computational algorithm. We examine the finite-sample and robustness properties using Monte Carlo studies. Two datasets are also analyzed.
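    The two ingredients, a bounded-influence loss and adaptively weighted penalties, can be imitated in a few lines. This proximal-gradient sketch uses a Huber loss and adaptive-lasso weights from a pilot least-squares fit; it does not reproduce the paper's actual estimator, its weighting scheme, or its oracle-property construction.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def huber_grad(r, c=1.345):
    """Derivative of the Huber loss: linear beyond |r| = c, so gross outliers
    have bounded influence on the fit."""
    return np.clip(r, -c, c)

def adaptive_robust_lasso(X, y, lam=0.1, n_iter=500):
    """Proximal gradient: Huber loss (robustness) plus an L1 penalty with
    adaptive weights 1/|beta_pilot| from a pilot least-squares fit (sparsity)."""
    n, p = X.shape
    pilot = np.linalg.lstsq(X, y, rcond=None)[0]
    w = 1.0 / (np.abs(pilot) + 1e-6)        # adaptive penalty weights
    lr = n / np.linalg.norm(X, 2) ** 2      # step ~ 1 / Lipschitz constant
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = -X.T @ huber_grad(y - X @ beta) / n
        beta = soft_threshold(beta - lr * grad, lr * lam * w)
    return beta
```

On data with a gross outlier, clipping the residual keeps the outlier from dragging the fit, while the adaptive weights penalize coefficients with small pilot estimates more heavily, pushing them to exact zero.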

  12. Towards quantifying uncertainty in predictions of Amazon 'dieback'.

    PubMed

    Huntingford, Chris; Fisher, Rosie A; Mercado, Lina; Booth, Ben B B; Sitch, Stephen; Harris, Phil P; Cox, Peter M; Jones, Chris D; Betts, Richard A; Malhi, Yadvinder; Harris, Glen R; Collins, Mat; Moorcroft, Paul

    2008-05-27

Simulations with the Hadley Centre general circulation model (HadCM3), including a carbon cycle model and forced by a 'business-as-usual' emissions scenario, predict a rapid loss of Amazonian rainforest from the middle of this century onwards. The robustness of this projection to both uncertainty in physical climate drivers and the formulation of the land surface scheme is investigated. We analyse how the modelled vegetation cover in Amazonia responds to (i) uncertainty in the parameters specified in the atmosphere component of HadCM3 and their associated influence on predicted surface climate. We then enhance the land surface description and (ii) implement a multilayer canopy light interception model, comparing it with the simple 'big-leaf' approach used in the original simulations. Finally, (iii) we investigate the effect of changing the method of simulating vegetation dynamics from an area-based model (TRIFFID) to a more complex size- and age-structured approximation of an individual-based model (ecosystem demography). We find that the loss of Amazonian rainforest is robust across the climate uncertainty explored by perturbed-physics simulations covering a wide range of global climate sensitivity. The introduction of the refined light interception model leads to an increase in simulated gross plant carbon uptake for the present day, but, with altered respiration, the net effect is a decrease in net primary productivity. However, this does not significantly affect the carbon loss from vegetation and soil as a consequence of the future simulated depletion in soil moisture; the Amazon forest is still lost. The introduction of the more sophisticated dynamic vegetation model reduces but does not halt the rate of forest dieback. The potential for human-induced climate change to trigger the loss of Amazon rainforest appears robust within the context of the uncertainties explored in this paper.
Some further uncertainties should be explored, particularly with respect to the representation of rooting depth.

  13. A simple, robust and efficient high-order accurate shock-capturing scheme for compressible flows: Towards minimalism

    NASA Astrophysics Data System (ADS)

    Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi

    2018-06-01

Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial, and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a reasonable resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (the carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.

  14. Multi-criteria multi-stakeholder decision analysis using a fuzzy-stochastic approach for hydrosystem management

    NASA Astrophysics Data System (ADS)

    Subagadis, Y. H.; Schütze, N.; Grundmann, J.

    2014-09-01

The conventional methods used to solve multi-criteria multi-stakeholder problems are weakly formulated: they normally incorporate only homogeneous information at a time and aggregate the objectives of different decision-makers while neglecting water-society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach is proposed to rank a set of alternatives in water management decisions incorporating heterogeneous information under uncertainty. The decision-making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, while fuzzy linguistic quantifiers are used to evaluate subjective criteria and to assess stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that the MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision makers' degree of optimism.

  15. Robust nanogenerators based on graft copolymers via control of dielectrics for remarkable output power enhancement

    PubMed Central

    Lee, Jae Won; Cho, Hye Jin; Chun, Jinsung; Kim, Kyeong Nam; Kim, Seongsu; Ahn, Chang Won; Kim, Ill Won; Kim, Ju-Young; Kim, Sang-Woo; Yang, Changduk; Baik, Jeong Min

    2017-01-01

    A robust nanogenerator based on poly(tert-butyl acrylate) (PtBA)–grafted polyvinylidene difluoride (PVDF) copolymers via dielectric constant control through an atom-transfer radical polymerization technique, which can markedly increase the output power, is demonstrated. The copolymer is mainly composed of α phases with enhanced dipole moments due to the π-bonding and polar characteristics of the ester functional groups in the PtBA, resulting in the increase of dielectric constant values by approximately twice, supported by Kelvin probe force microscopy measurements. This increase in the dielectric constant significantly increased the density of the charges that can be accumulated on the copolymer during physical contact. The nanogenerator generates output signals of 105 V and 25 μA/cm2, a 20-fold enhancement in output power, compared to pristine PVDF–based nanogenerator after tuning the surface potential using a poling method. The markedly enhanced output performance is quite stable and reliable in harsh mechanical environments due to the high flexibility of the films. On the basis of these results, a much faster charging characteristic is demonstrated in this study. PMID:28560339

  16. Approximating natural connectivity of scale-free networks based on largest eigenvalue

    NASA Astrophysics Data System (ADS)

    Tan, S.-Y.; Wu, J.; Li, M.-J.; Lu, X.

    2016-06-01

It has recently been proposed that natural connectivity can be used to efficiently characterize the robustness of complex networks. Natural connectivity has an intuitive physical meaning and a simple mathematical formulation, corresponding to an average eigenvalue calculated from the graph spectrum. However, for scale-free networks, a model close to many widely occurring real-world systems, the spectrum is difficult to obtain analytically. In this article, we investigate the approximation of natural connectivity based on the largest eigenvalue in both random and correlated scale-free networks. We demonstrate that the natural connectivity of scale-free networks can be dominated by the largest eigenvalue, which can be expressed asymptotically and analytically to approximate natural connectivity with small errors. We then show that the natural connectivity of random scale-free networks increases linearly with the average degree for a given scaling exponent and decreases monotonically with the scaling exponent for a given average degree. Moreover, we find that, for a given degree distribution, the more assortative a scale-free network is, the more robust it is. Experiments on real networks validate our methods and results.
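    Both the quantity itself and the largest-eigenvalue approximation are easy to state in code (a toy 8-node comparison, not the paper's scale-free ensembles): natural connectivity is the log of the average eigenvalue exponential of the adjacency matrix, and the approximation keeps only the largest eigenvalue.

```python
import numpy as np

def natural_connectivity(A):
    """ln of the average eigenvalue exponential of the adjacency matrix A."""
    lam = np.linalg.eigvalsh(A)
    return np.log(np.mean(np.exp(lam)))

def largest_eig_approx(A):
    """Keep only the largest eigenvalue: lambda_1 - ln N."""
    lam = np.linalg.eigvalsh(A)
    return lam[-1] - np.log(A.shape[0])

n = 8
complete = np.ones((n, n)) - np.eye(n)   # maximally redundant routes
ring = np.zeros((n, n))                  # a single cycle, few alternatives
for i in range(n):
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1.0
```

The complete graph, with far more alternative routes, scores much higher than the ring, and its largest eigenvalue (n - 1, well separated from the rest of the spectrum) already determines the value almost exactly, which is the mechanism the paper exploits for scale-free networks.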

  17. Biologically-inspired data decorrelation for hyper-spectral imaging

    NASA Astrophysics Data System (ADS)

    Picon, Artzai; Ghita, Ovidiu; Rodriguez-Vaamonde, Sergio; Iriondo, Pedro Ma; Whelan, Paul F.

    2011-12-01

Hyper-spectral data allow the construction of more robust statistical models to sample the material properties than the standard tri-chromatic color representation. However, because of the large dimensionality and complexity of the hyper-spectral data, the extraction of robust features (image descriptors) is not a trivial issue. Thus, to facilitate efficient feature extraction, decorrelation techniques are commonly applied to reduce the dimensionality of the hyper-spectral data with the aim of generating compact and highly discriminative image descriptors. Current methodologies for data decorrelation, such as principal component analysis (PCA), linear discriminant analysis (LDA), wavelet decomposition (WD), or band selection methods, require complex and subjective training procedures, and in addition the compressed spectral information is not directly related to the physical (spectral) characteristics associated with the analyzed materials. The major objective of this article is to introduce and evaluate a new data decorrelation methodology using an approach that closely emulates the human vision. The proposed data decorrelation scheme has been employed to optimally minimize the amount of redundant information contained in the highly correlated hyper-spectral bands and has been comprehensively evaluated in the context of non-ferrous material classification.
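    For contrast with the proposed biologically-inspired scheme, the standard PCA baseline it is compared against can be written compactly. This sketch assumes a (rows, cols, bands) cube layout and uses a plain SVD; it is not the authors' method.

```python
import numpy as np

def pca_decorrelate(cube, n_components=3):
    """Project a (rows, cols, bands) hyper-spectral cube onto its leading
    principal components, decorrelating the highly correlated bands."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)                       # center each band
    # SVD of the centered pixels-by-bands matrix; rows of Vt are band loadings
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_components].T
    return scores.reshape(rows, cols, n_components)
```

By construction the output channels are mutually uncorrelated, which illustrates the article's criticism: the decorrelation is purely statistical, so the resulting components need not align with any physical spectral signature of the materials.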

  18. Observed Parent-Child Relationship Quality Predicts Antibody Response to Vaccination in Children

    PubMed Central

    O'Connor, Thomas G; Wang, Hongyue; Moynihan, Jan A; Wyman, Peter A.; Carnahan, Jennifer; Lofthus, Gerry; Quataert, Sally A.; Bowman, Melissa; Burke, Anne S.; Caserta, Mary T

    2015-01-01

Background Quality of the parent-child relationship is a robust predictor of behavioral and emotional health for children and adolescents; the application to physical health is less clear. Methods We investigated the links between observed parent-child relationship quality in an interaction task and antibody response to meningococcal conjugate vaccine in a longitudinal study of 164 ambulatory 10-11 year-old children; additional analyses examined associations with cortisol reactivity, BMI, and somatic illness. Results Observed negative/conflict behavior in the interaction task predicted a less robust antibody response to meningococcal serotype C vaccine in the child over a 6-month period, after controlling for socio-economic and other covariates. Observer-rated interaction conflict also predicted increased cortisol reactivity following the interaction task and higher BMI, but these factors did not account for the link between relationship quality and antibody response. Conclusions The results begin to document the degree to which a major source of child stress exposure, parent-child relationship conflict, is associated with altered immune system development in children, and may constitute an important public health consideration. PMID:25862953

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Kuang; Libisch, Florian; Carter, Emily A., E-mail: eac@princeton.edu

    We report a new implementation of the density functional embedding theory (DFET) in the VASP code, using the projector-augmented-wave (PAW) formalism. Newly developed algorithms allow us to efficiently perform optimized effective potential optimizations within PAW. The new algorithm generates robust and physically correct embedding potentials, as we verified using several test systems including a covalently bound molecule, a metal surface, and bulk semiconductors. We show that with the resulting embedding potential, embedded cluster models can reproduce the electronic structure of point defects in bulk semiconductors, thereby demonstrating the validity of DFET in semiconductors for the first time. Compared to our previous version, the new implementation of DFET within VASP affords use of all features of VASP (e.g., a systematic PAW library, a wide selection of functionals, a more flexible choice of U correction formalisms, and faster computational speed) with DFET. Furthermore, our results are fairly robust with respect to both plane-wave and Gaussian type orbital basis sets in the embedded cluster calculations. This suggests that the density functional embedding method is potentially an accurate and efficient way to study properties of isolated defects in semiconductors.

  20. Controlling Tensegrity Robots Through Evolution

    NASA Technical Reports Server (NTRS)

    Iscen, Atil; Agogino, Adrian; SunSpiral, Vytas; Tumer, Kagan

    2013-01-01

    Tensegrity structures (built from interconnected rods and cables) have the potential to offer a revolutionary new robotic design that is light-weight, energy-efficient, robust to failures, capable of unique modes of locomotion, impact-tolerant, and compliant (reducing damage between the robot and its environment). Unfortunately, robots built from tensegrity structures are difficult to control with traditional methods due to their oscillatory nature, the nonlinear coupling between their components, and their overall complexity. Fortunately, this formidable control challenge can be overcome through the use of evolutionary algorithms. In this paper we show that evolutionary algorithms can be used to efficiently control a ball-shaped tensegrity robot. Experimental results performed with a variety of evolutionary algorithms in a detailed soft-body physics simulator show that a centralized evolutionary algorithm performs 400 percent better than a hand-coded solution, while multi-agent evolution performs 800 percent better. In addition, evolution is able to discover diverse control solutions (both crawling and rolling) that are robust against structural failures and can be adapted to a wide range of energy and actuation constraints. These successful controllers will form the basis for building high-performance tensegrity robots in the near future.
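
    A minimal sketch of the idea, under stated assumptions: the paper evolves controllers inside a soft-body physics simulator, which cannot be reproduced here, so the fitness function below is a toy stand-in, and the (1+lambda) evolution strategy is just one simple member of the family of evolutionary algorithms mentioned:

```python
import random

def evolve(fitness, dim, generations=100, lam=20, sigma=0.3, seed=1):
    """Minimal (1+lambda) evolution strategy: each generation, keep the
    best of the parent and lam Gaussian-mutated offspring, and slowly
    anneal the mutation step size."""
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(dim)]
    best = fitness(parent)
    for _ in range(generations):
        for _ in range(lam):
            child = [p + rng.gauss(0, sigma) for p in parent]
            f = fitness(child)
            if f > best:
                parent, best = child, f
        sigma *= 0.97
    return parent, best

# Toy stand-in for a simulator score: controller parameters score best
# near an arbitrary optimum of 0.5 in every dimension.
def toy_fitness(params):
    return -sum((p - 0.5) ** 2 for p in params)

params, score = evolve(toy_fitness, dim=4)
```

    In the paper's setting, `toy_fitness` would be replaced by a full simulator rollout (e.g., distance traveled by the robot), which is where almost all the computational cost lies.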

  1. Hierarchical design of an electro-hydraulic actuator based on robust LPV methods

    NASA Astrophysics Data System (ADS)

    Németh, Balázs; Varga, Balázs; Gáspár, Péter

    2015-08-01

    The paper proposes a hierarchical control design of an electro-hydraulic actuator, which is used to improve the roll stability of vehicles. The purpose of the control system is to generate a reference torque, which is required by the vehicle dynamic control. The control-oriented model of the actuator is formulated in two subsystems. The high-level hydromotor is described in a linear form, while the low-level spool valve is a polynomial system. These subsystems require different control strategies. At the high level, a linear parameter-varying control is used to guarantee performance specifications. At the low level, a control Lyapunov-function-based algorithm, which generates discrete control input values for the valve, is proposed. The interaction between the two subsystems is guaranteed by the spool displacement, which is the control input at the high level and must be tracked by the low-level control. The spool displacement has physical constraints, which must also be incorporated into the control design. The robust design of the high-level control incorporates the imprecision of the low-level control as an uncertainty of the system.

  2. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Goebel, Kai

    2011-01-01

    Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction; therefore, algorithms are employed that help manage it. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
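
    The particle-filter machinery can be sketched generically. The scalar drift model below is a hypothetical stand-in for the paper's detailed valve physics, not the authors' model; only the bootstrap filter structure (propagate, weight, resample) is the point:

```python
import math
import random

def particle_filter(observations, n=500, drift=0.1, q=0.05, r=0.2, seed=2):
    """Bootstrap particle filter for the scalar degradation model
    x[t+1] = x[t] + drift + N(0, q^2), observed as y = x + N(0, r^2)."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 0.5) for _ in range(n)]
    estimate = 0.0
    for y in observations:
        # propagate each particle through the process model
        particles = [x + drift + rng.gauss(0, q) for x in particles]
        # weight by the (un-normalized) Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimate = sum(w * x for w, x in zip(weights, particles))
        # multinomial resampling to avoid weight degeneracy
        particles = rng.choices(particles, weights=weights, k=n)
    return estimate

# Simulated run-to-failure data: a hidden wear state drifts upward.
rng = random.Random(3)
true_x, ys = 0.0, []
for _ in range(30):
    true_x += 0.1 + rng.gauss(0, 0.05)
    ys.append(true_x + rng.gauss(0, 0.2))
estimate = particle_filter(ys)
```

    In a prognostics setting, the filtered state estimate would then be propagated forward without observations until it crosses a failure threshold, yielding the remaining-useful-life prediction.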

  3. An improved method of measuring heart rate using a webcam

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Ouyang, Jianfei; Yan, Yonggang

    2014-09-01

    Measuring heart rate traditionally requires special equipment and physical contact with the subject. Reliable non-contact and low-cost measurements are highly desirable for convenient and comfortable physiological self-assessment. Previous work has shown that consumer-grade cameras can provide useful signals for remote heart rate measurements. In this paper a simple and robust method of measuring heart rate using a low-cost webcam is proposed. Blood volume pulse is extracted through proper region of interest (ROI) and color channel selection from image sequences of human faces, without complex computation. Heart rate is subsequently quantified by spectrum analysis. The method is successfully applied under natural lighting conditions. Experimental results show that the method takes less time, is much simpler, and has accuracy similar to the previously published and widely used independent component analysis (ICA) method. Being non-contact, convenient, and low-cost, it holds great promise for the popularization of home healthcare and can further be applied in biomedical research.
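
    A hedged sketch of the spectrum-analysis step only (face detection and ROI selection are omitted; the synthetic trace below stands in for a mean green-channel signal, and the band limits are common physiological assumptions, not the authors' values):

```python
import math

def estimate_heart_rate(signal, fs, lo=0.75, hi=4.0):
    """Pick the dominant frequency of `signal` in the plausible
    heart-rate band [lo, hi] Hz with a naive scanned DFT, and
    return it in beats per minute."""
    n = len(signal)
    mean = sum(signal) / n
    best_f, best_p = lo, -1.0
    f = lo
    while f <= hi:
        re = sum((s - mean) * math.cos(2 * math.pi * f * i / fs)
                 for i, s in enumerate(signal))
        im = sum((s - mean) * math.sin(2 * math.pi * f * i / fs)
                 for i, s in enumerate(signal))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = f, p
        f += 0.01
    return best_f * 60.0

# Synthetic mean-green-channel trace: a 72 bpm pulse (1.2 Hz) riding
# on a larger slow illumination drift, sampled at 30 fps for 10 s.
fs = 30.0
sig = [0.05 * math.sin(2 * math.pi * 1.2 * i / fs)
       + 0.3 * math.sin(2 * math.pi * 0.1 * i / fs)
       for i in range(int(fs * 10))]
bpm = estimate_heart_rate(sig, fs)
```

    Restricting the search to the physiological band is what makes the estimate robust to the slow drift, which is far stronger than the pulse signal itself.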

  4. Radiation-Tolerance Assessment of a Redundant Wireless Device

    NASA Astrophysics Data System (ADS)

    Huang, Q.; Jiang, J.

    2018-01-01

    This paper presents a method to evaluate radiation tolerance, without physical tests, of a commercial off-the-shelf (COTS)-based monitoring device for high-level radiation fields, such as those found in post-accident conditions in a nuclear power plant (NPP). The paper specifically describes the analysis of the radiation environment in a severe accident, radiation damage in electronics, and the redundant solution used to prolong the life of the system, as well as the evaluation method for radiation protection and the analysis method for system reliability. As a case study, a wireless monitoring device with redundant and diversified channels is evaluated using the developed method. The study results and system assessment data show that, under the given radiation conditions, the redundant device is more reliable and more robust than non-redundant devices. The developed redundant wireless monitoring device can therefore be applied under the conditions (up to 10 Mrad(Si)) of a severe accident in an NPP.

  5. A self-taught artificial agent for multi-physics computational model personalization.

    PubMed

    Neumann, Dominik; Mansi, Tommaso; Itu, Lucian; Georgescu, Bogdan; Kayvanpour, Elham; Sedaghat-Hamedani, Farbod; Amr, Ali; Haas, Jan; Katus, Hugo; Meder, Benjamin; Steidl, Stefan; Hornegger, Joachim; Comaniciu, Dorin

    2016-12-01

    Personalization is the process of fitting a model to patient data, a critical step towards application of multi-physics computational models in clinical practice. Designing robust personalization algorithms is often a tedious, time-consuming, model- and data-specific process. We propose to use artificial intelligence concepts to learn this task, inspired by how human experts manually perform it. The problem is reformulated in terms of reinforcement learning. In an off-line phase, Vito, our self-taught artificial agent, learns a representative decision process model through exploration of the computational model: it learns how the model behaves under change of parameters. The agent then automatically learns an optimal strategy for on-line personalization. The algorithm is model-independent; applying it to a new model requires only adjusting a few hyper-parameters of the agent and defining the observations to match. Full knowledge of the model itself is not required. Vito was tested in a synthetic scenario, showing that it could learn how to optimize cost functions generically. Then Vito was applied to the inverse problem of cardiac electrophysiology and the personalization of a whole-body circulation model. The obtained results suggested that Vito could achieve equivalent, if not better, goodness of fit than standard methods, while being more robust (up to 11% higher success rates) and converging faster (up to seven times). Our artificial intelligence approach could thus make personalization algorithms generalizable and self-adaptable to any patient and any model. Copyright © 2016. Published by Elsevier B.V.
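
    As an illustrative toy only (the paper's agent learns a decision process over a multi-physics model; none of that is reproduced here), tabular Q-learning on a one-parameter "personalization" task shows the reinforcement-learning reformulation in miniature: the agent learns, from the sign of the model-vs-data mismatch, which way to adjust the parameter.

```python
import random

rng = random.Random(6)
ACTIONS = (-0.1, +0.1)                 # nudge the parameter down / up

def state(err):
    """Coarse observation: model output below, near, or above the data."""
    return 0 if err < -0.05 else (2 if err > 0.05 else 1)

# Tabular Q-learning over 3 states x 2 actions.
Q = [[0.0, 0.0] for _ in range(3)]
for _ in range(300):
    target = rng.uniform(-1.0, 1.0)    # hidden "patient" parameter
    p = 0.0
    for _ in range(40):
        s = state(p - target)
        if s == 1:                     # personalized: mismatch is small
            break
        if rng.random() < 0.2:         # epsilon-greedy exploration
            a = rng.randrange(2)
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        p += ACTIONS[a]
        s2 = state(p - target)
        reward = 1.0 if s2 == 1 else -0.1
        Q[s][a] += 0.1 * (reward + 0.9 * max(Q[s2]) - Q[s][a])
```

    After training, the greedy policy is the obvious one: increase the parameter when the model undershoots (state 0) and decrease it when it overshoots (state 2). The point of the paper's approach is that such strategies are learned, not hand-coded, even for models where the right adjustment is far from obvious.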

  6. Robust statistical methods for impulse noise suppressing of spread spectrum induced polarization data, with application to a mine site, Gansu province, China

    NASA Astrophysics Data System (ADS)

    Liu, Weiqiang; Chen, Rujun; Cai, Hongzhu; Luo, Weibin

    2016-12-01

    In this paper, we investigate the robust processing of noisy spread spectrum induced polarization (SSIP) data. SSIP is a new frequency-domain induced polarization method that transmits a pseudo-random m-sequence as the source current, where the m-sequence is a broadband signal. Potential information at multiple frequencies can be obtained through measurement. Removing noise is a crucial problem in SSIP data processing. If ordinary mean stacking and digital filtering cannot reduce the impulse noise effectively, its impact remains in the complex resistivity spectrum and affects the interpretation of profile anomalies. We therefore applied a robust statistical scheme to SSIP data processing: robust least-squares regression is used to fit and remove the linear trend from the original data before stacking; a robust M-estimate is used to stack the data of all periods; and a robust smoothing filter is used to suppress the residual noise after stacking. For the robust statistical scheme, the most appropriate influence function and iterative algorithm are chosen by tests on simulated data to suppress the influence of outliers. We demonstrate the benefits of robust SSIP data processing using examples of SSIP data recorded at a test site beside a mine in Gansu province, China.
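
    The M-estimation stacking step can be sketched with a Huber location estimate. The data below are synthetic, and the influence function and tuning constant are textbook Huber defaults rather than the authors' choices:

```python
def huber_location(xs, k=1.345, iters=50):
    """Iteratively reweighted Huber M-estimate of location:
    observations beyond k*scale are down-weighted instead of
    being averaged in at full weight."""
    mu = sorted(xs)[len(xs) // 2]            # start from the median
    for _ in range(iters):
        resid = [x - mu for x in xs]
        # robust scale from the median absolute deviation
        scale = sorted(abs(r) for r in resid)[len(xs) // 2] / 0.6745
        if scale == 0.0:
            break                            # majority already at mu
        w = [1.0 if abs(r) <= k * scale else k * scale / abs(r)
             for r in resid]
        mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return mu

# Fifty well-behaved repetitions near 10 plus three impulse spikes,
# mimicking a stack contaminated by impulse noise.
data = [10.0 + 0.1 * ((7 * i) % 11 - 5) for i in range(50)]
data += [80.0, -60.0, 95.0]
robust = huber_location(data)
naive = sum(data) / len(data)
```

    The plain mean is pulled well away from 10 by the three spikes, while the M-estimate stays put; this is precisely the behavior that motivates replacing mean stacking in the presence of impulse noise.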

  7. Primal-dual convex optimization in large deformation diffeomorphic metric mapping: LDDMM meets robust regularizers

    NASA Astrophysics Data System (ADS)

    Hernandez, Monica

    2017-12-01

    This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on the Chambolle and Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers likely to provide acceptable results in diffeomorphic registration: Huber, V-Huber, and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented to run on the GPU using CUDA. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study in the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of the objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.

  8. Mathematics of the total alkalinity-pH equation - pathway to robust and universal solution algorithms: the SolveSAPHE package v1.0.1

    NASA Astrophysics Data System (ADS)

    Munhoven, G.

    2013-08-01

    The total alkalinity-pH equation, which relates total alkalinity and pH for a given set of total concentrations of the acid-base systems that contribute to total alkalinity in a given water sample, is reviewed and its mathematical properties established. We prove that the equation function is strictly monotone and always has exactly one positive root. Different commonly used approximations are discussed and compared. An original method to derive appropriate initial values for the iterative solution of the cubic polynomial equation based upon carbonate-borate-alkalinity is presented. We then review different methods that have been used to solve the total alkalinity-pH equation, with a main focus on biogeochemical models. The shortcomings and limitations of these methods are identified and discussed. We then present two variants of a new, robust and universally convergent algorithm to solve the total alkalinity-pH equation. This algorithm does not require any a priori knowledge of the solution. SolveSAPHE (Solver Suite for Alkalinity-PH Equations) provides reference implementations of several variants of the new algorithm in Fortran 90, together with new implementations of other, previously published solvers. The new iterative procedure is shown to converge from any starting value to the physical solution. The extra computational cost of guaranteed convergence is only 10-15% compared to the fastest algorithm in our test series.
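
    The "universally convergent" flavor of algorithm can be illustrated with a generic safeguarded Newton iteration: Newton steps are taken only while they stay inside a sign-change bracket, with bisection as a fallback. The rational function below is a toy stand-in for the alkalinity-pH function, chosen only because it is likewise strictly monotone with exactly one positive root; it is not the SolveSAPHE algorithm itself.

```python
def solve_monotone(f, fprime, lo, hi, tol=1e-12, max_iter=200):
    """Safeguarded Newton iteration: maintain a bracket [lo, hi] with
    f(lo) < 0 < f(hi); take the Newton step when it stays inside the
    bracket, otherwise bisect.  For a strictly increasing f this
    converges regardless of how wide the starting bracket is."""
    x = 0.5 * (lo + hi)
    for _ in range(max_iter):
        fx = f(x)
        if fx < 0.0:
            lo = x
        else:
            hi = x
        x_new = x - fx / fprime(x)
        if not (lo < x_new < hi):        # Newton step left the bracket
            x_new = 0.5 * (lo + hi)      # fall back to bisection
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy stand-in: strictly increasing for h > 0, with exactly one
# positive root at h = 2 (since 2 - 3/2 - 1/2 = 0).
f = lambda h: h - 3.0 / h - 0.5
fp = lambda h: 1.0 + 3.0 / (h * h)
root = solve_monotone(f, fp, 1e-8, 1e8)
```

    The bisection safeguard is what buys guaranteed convergence from an extremely wide starting bracket, at the modest extra cost the abstract quantifies.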

  9. A Data-driven Study of RR Lyrae Near-IR Light Curves: Principal Component Analysis, Robust Fits, and Metallicity Estimates

    NASA Astrophysics Data System (ADS)

    Hajdu, Gergely; Dékány, István; Catelan, Márcio; Grebel, Eva K.; Jurcsik, Johanna

    2018-04-01

    RR Lyrae variables are widely used tracers of Galactic halo structure and kinematics, but they can also serve to constrain the distribution of the old stellar population in the Galactic bulge. With the aim of improving their near-infrared photometric characterization, we investigate their near-infrared light curves, as well as the empirical relationships between their light curves and metallicities, using machine learning methods. We introduce a new, robust method for the estimation of the light-curve shapes, hence the average magnitudes, of RR Lyrae variables in the K_S band, by utilizing the first few principal components (PCs) as basis vectors, obtained from the PC analysis of a training set of light curves. Furthermore, we use the amplitudes of these PCs to predict the light-curve shape of each star in the J band, allowing us to precisely determine their average magnitudes (hence colors), even in cases where only one J measurement is available. Finally, we demonstrate that the K_S-band light-curve parameters of RR Lyrae variables, together with the period, allow the estimation of the metallicity of individual stars with an accuracy of ∼0.2-0.25 dex, providing valuable chemical information about old stellar populations bearing RR Lyrae variables. The methods presented here can be straightforwardly adopted for other classes of variable stars, bands, or for the estimation of other physical quantities.

  10. Efficient and Robust Optimization for Building Energy Simulation

    PubMed Central

    Pourarian, Shokouh; Kearsley, Anthony; Wen, Jin; Pertzborn, Amanda

    2016-01-01

    Efficiently, robustly and accurately solving large sets of structured, non-linear algebraic and differential equations is one of the most computationally expensive steps in the dynamic simulation of building energy systems. Here, the efficiency, robustness and accuracy of two commonly employed solution methods are compared. The comparison is conducted using the HVACSIM+ software package, a component-based building system simulation tool. The HVACSIM+ software presently employs Powell's Hybrid method to solve systems of nonlinear algebraic equations that model the dynamics of energy states and interactions within buildings. It is shown here that Powell's method does not always converge to a solution. Since a myriad of other numerical methods are available, the question arises as to which method is most appropriate for building energy simulation. This paper finds considerable computational benefits result from replacing the Powell's Hybrid method solver in HVACSIM+ with a solver more appropriate for the challenges particular to numerical simulations of buildings. Evidence is provided that a variant of the Levenberg-Marquardt solver has superior accuracy and robustness compared to Powell's Hybrid method presently used in HVACSIM+. PMID:27325907
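
    A hedged sketch of why a Levenberg-Marquardt-style solver is robust: a damping term interpolates between Gauss-Newton and gradient-descent behavior, and the damping is raised whenever a step fails to reduce the cost. This tiny two-parameter version is illustrative only and is unrelated to the HVACSIM+ code:

```python
import math

def levenberg_marquardt(resid, jac, p, iters=100, lam=1e-3):
    """Tiny Levenberg-Marquardt for a 2-parameter least-squares fit:
    damped Gauss-Newton steps; lam is raised when a step fails to
    reduce the cost and lowered when it succeeds."""
    def cost(q):
        return sum(r * r for r in resid(q))
    c = cost(p)
    for _ in range(iters):
        r, J = resid(p), jac(p)
        # damped normal equations (J^T J + lam*I) dp = -J^T r, 2x2 case
        a11 = sum(j[0] * j[0] for j in J) + lam
        a22 = sum(j[1] * j[1] for j in J) + lam
        a12 = sum(j[0] * j[1] for j in J)
        g1 = -sum(j[0] * ri for j, ri in zip(J, r))
        g2 = -sum(j[1] * ri for j, ri in zip(J, r))
        det = a11 * a22 - a12 * a12
        trial = (p[0] + (a22 * g1 - a12 * g2) / det,
                 p[1] + (a11 * g2 - a12 * g1) / det)
        ct = cost(trial)
        if ct < c:
            p, c, lam = trial, ct, lam * 0.5     # accept, trust more
        else:
            lam = min(lam * 10.0, 1e10)          # reject, damp harder
    return p

# Recover a=2, b=-0.7 of y = a*exp(b*x) from noiseless samples.
xs = [0.1 * i for i in range(20)]
ys = [2.0 * math.exp(-0.7 * x) for x in xs]
resid = lambda q: [q[0] * math.exp(q[1] * x) - y for x, y in zip(xs, ys)]
jac = lambda q: [(math.exp(q[1] * x), q[0] * x * math.exp(q[1] * x))
                 for x in xs]
a, b = levenberg_marquardt(resid, jac, (1.0, 0.0))
```

    The reject-and-increase-damping rule is the robustness mechanism: far from the solution the solver degrades gracefully toward small gradient steps instead of diverging, which is the failure mode the abstract reports for the undamped hybrid method on some building models.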

  12. Reader reaction to "a robust method for estimating optimal treatment regimes" by Zhang et al. (2012).

    PubMed

    Taylor, Jeremy M G; Cheng, Wenting; Foster, Jared C

    2015-03-01

    A recent article (Zhang et al., 2012, Biometrics 68, 1010-1018) compares regression-based and inverse-probability-based methods of estimating an optimal treatment regime and shows, for a small number of covariates, that inverse probability weighted methods are more robust to model misspecification than regression methods. We demonstrate that using models that fit the data better reduces the concern about non-robustness for the regression methods. We extend the simulation study of Zhang et al. (2012), also considering the situation of a larger number of covariates, and show that incorporating random forests into both regression-based and inverse probability weighted methods improves their properties. © 2014, The International Biometric Society.
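
    The regression-based approach under discussion can be sketched in miniature, with simple per-arm linear fits standing in for the random forests and a synthetic two-arm trial standing in for real data:

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = b0 + b1 * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - b1 * mx, b1

rng = random.Random(4)
# Synthetic trial: treatment (a=1) helps only when the covariate x is
# positive, so the optimal regime is "treat iff x > 0".
data = []
for _ in range(400):
    x, a = rng.uniform(-1.0, 1.0), rng.randint(0, 1)
    data.append((x, a, x * a + rng.gauss(0, 0.1)))

# Regression-based regime: model E[Y | x, a] separately per arm and
# recommend whichever arm the fitted models predict to be better.
models = {}
for arm in (0, 1):
    xs = [x for x, a, _ in data if a == arm]
    ys = [y for x, a, y in data if a == arm]
    models[arm] = fit_line(xs, ys)

def regime(x):
    pred = {arm: b0 + b1 * x for arm, (b0, b1) in models.items()}
    return 1 if pred[1] > pred[0] else 0
```

    The reader reaction's point is that the quality of this regime hinges entirely on how well the outcome models fit; replacing the linear fits with flexible learners such as random forests reduces the misspecification risk that motivated the inverse-probability alternative.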

  13. Determination of Charge-Carrier Mobility in Disordered Thin-Film Solar Cells as a Function of Current Density

    NASA Astrophysics Data System (ADS)

    Mäckel, Helmut; MacKenzie, Roderick C. I.

    2018-03-01

    Charge-carrier mobility is a fundamental material parameter, which plays an important role in determining solar-cell efficiency. The higher the mobility, the less time a charge carrier will spend in a device and the less likely it is that it will be lost to recombination. Despite the importance of this physical property, it is notoriously difficult to measure accurately in disordered thin-film solar cells under operating conditions. We therefore investigate a method previously proposed in the literature for the determination of mobility as a function of current density. The method is based on a simple analytical model that relates the mobility to carrier density and transport resistance. By revising the theoretical background of the method, we clearly demonstrate what type of mobility can be extracted (constant mobility or effective mobility of electrons and holes). We generalize the method to any combination of measurements that is able to determine the mean electron and hole carrier density, and the transport resistance at a given current density. We explore the robustness of the method by simulating typical organic solar-cell structures with a variety of physical properties, including unbalanced mobilities, unbalanced carrier densities, and high or low carrier trapping rates. The simulations reveal that near V_OC and J_SC, the method fails due to the limitation of determining the transport resistance. However, away from these regions (and, importantly, around the maximum power point), the method can accurately determine charge-carrier mobility. In the presence of strong carrier trapping, the method overestimates the effective mobility due to an underestimation of the carrier density.

  14. Robust Neural Sliding Mode Control of Robot Manipulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen Tran Hiep; Pham Thuong Cat

    2009-03-05

    This paper proposes a robust neural sliding mode control method for the robot tracking problem, to overcome noise and large uncertainties in robot dynamics. The Lyapunov direct method is used to prove the stability of the overall system. Simulation results are given to illustrate the applicability of the proposed method.

  15. Robust Eye Center Localization through Face Alignment and Invariant Isocentric Patterns

    PubMed Central

    Teng, Dongdong; Chen, Dihu; Tan, Hongzhou

    2015-01-01

    The localization of eye centers is a very useful cue for numerous applications like face recognition, facial expression recognition, and the early screening of neurological pathologies. Several methods relying on available light for accurate eye-center localization have been exploited. However, despite the considerable improvements that eye-center localization systems have undergone in recent years, only a few of these developments deal with the challenges posed by the profile (non-frontal face). In this paper, we first use the explicit shape regression method to obtain the rough location of the eye centers. Because this method extracts global information from the human face, it is robust against any changes in the eye region. We exploit this robustness and utilize it as a constraint. To locate the eye centers accurately, we employ isophote curvature features, the accuracy of which has been demonstrated in a previous study. By applying these features, we obtain a series of candidate locations for the actual position of the eye center. Among these locations, the estimates which minimize the reconstruction error between the two methods mentioned above are taken as the closest approximation of the eye-center locations. Therefore, we combine explicit shape regression and isophote curvature feature analysis to achieve robustness and accuracy, respectively. In practical experiments, we use the BioID and FERET datasets to test our approach to obtaining an accurate eye-center location while retaining robustness against changes in scale and pose. In addition, we apply our method to non-frontal faces to test its robustness and accuracy, which are essential in gaze estimation but have seldom been addressed in previous works. Through extensive experimentation, we show that the proposed method can achieve a significant improvement in accuracy and robustness over state-of-the-art techniques, with our method ranking second in terms of accuracy. In our implementation on a PC with a 2.5 GHz Xeon CPU, the eye-tracking process runs at 38 Hz. PMID:26426929

  16. Impact of mobility structure on optimization of small-world networks of mobile agents

    NASA Astrophysics Data System (ADS)

    Lee, Eun; Holme, Petter

    2016-06-01

    In ad hoc wireless networking, units are connected to each other rather than to a central, fixed infrastructure. Constructing and maintaining such networks creates several trade-off problems among robustness, communication speed, power consumption, etc., that bridge engineering, computer science, and the physics of complex systems. In this work, we address the role of the mobility patterns of the agents in the optimal tuning of a small-world type network construction method. By this method, the network is updated periodically and held static between the updates. We investigate the optimal updating times for different scenarios of agent movement (modeling, for example, the fat-tailed trip distances and periodicities of human travel). We find that these mobility patterns affect the power consumption in non-trivial ways and discuss how these effects can best be handled.

  17. Integral approximations to classical diffusion and smoothed particle hydrodynamics

    DOE PAGES

    Du, Qiang; Lehoucq, R. B.; Tartakovsky, A. M.

    2014-12-31

    The contribution of the paper is the approximation of a classical diffusion operator by an integral equation with a volume constraint. A particular focus is on classical diffusion problems associated with Neumann boundary conditions. By exploiting this approximation, we can also approximate other quantities such as the flux out of a domain. Our analysis of the model equation on the continuum level is closely related to the recent work on nonlocal diffusion and peridynamic mechanics. In particular, we elucidate the role of a volumetric constraint as an approximation to a classical Neumann boundary condition in the presence of a physical boundary. The volume-constrained integral equation then provides the basis for accurate and robust discretization methods. As a result, an immediate application is to the understanding and improvement of the Smoothed Particle Hydrodynamics (SPH) method.

  18. TU-AB-BRB-02: Stochastic Programming Methods for Handling Uncertainty and Motion in IMRT Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unkelbach, J.

    The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g., >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation in the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead, robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques, including the use of probability-weighted dose distributions, probability-weighted objective functions, and coverage-optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined, as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning.
    Learning Objectives: To understand robust planning as a clinical alternative to margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.

  19. TU-AB-BRB-00: New Methods to Ensure Target Coverage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2015-06-15

    The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. Themore » treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP-evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust-plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. 
Learning Objectives: To understand robust-planning as a clinical alternative to using margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.

  20. Robust Mediation Analysis Based on Median Regression

    PubMed Central

    Yuan, Ying; MacKinnon, David P.

    2014-01-01

    Mediation analysis has many applications in psychology and the social sciences. The most prevalent methods typically assume that the error distribution is normal and homoscedastic. However, this assumption may rarely be met in practice, which can affect the validity of the mediation analysis. To address this problem, we propose robust mediation analysis based on median regression. Our approach is robust to various departures from the assumption of homoscedasticity and normality, including heavy-tailed, skewed, contaminated, and heteroscedastic distributions. Simulation studies show that under these circumstances, the proposed method is more efficient and powerful than standard mediation analysis. We further extend the proposed robust method to multilevel mediation analysis, and demonstrate through simulation studies that the new approach outperforms the standard multilevel mediation analysis. We illustrate the proposed method using data from a program designed to increase reemployment and enhance mental health of job seekers. PMID:24079925
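
    The core computation can be sketched with ordinary tools. Below is a minimal, hypothetical illustration (not the authors' implementation): median (L1) regression fitted by iteratively reweighted least squares, used to estimate the a path (X to M) and b path (M to Y, controlling X) of a single-mediator model with heavy-tailed errors; the indirect effect is the product a*b (true value 0.5 * 0.7 = 0.35 in this simulation).

```python
import numpy as np

def lad_fit(X, y, n_iter=100, eps=1e-6):
    """Median (L1) regression via iteratively reweighted least squares."""
    X = np.asarray(X, dtype=float)
    if X.ndim == 1:
        X = X[:, None]
    X1 = np.column_stack([np.ones(len(y)), X])          # add intercept
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]        # OLS start
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(y - X1 @ beta), eps)  # weights ~ 1/|residual|
        Xw = X1 * w[:, None]
        beta = np.linalg.solve(X1.T @ Xw, Xw.T @ y)     # weighted normal equations
    return beta

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
# True paths: a = 0.5 (X -> M), b = 0.7 (M -> Y); heavy-tailed t(2) errors.
m = 0.5 * x + rng.standard_t(df=2, size=n)
y = 0.7 * m + 0.2 * x + rng.standard_t(df=2, size=n)

a = lad_fit(x, m)[1]                        # slope of M on X
b = lad_fit(np.column_stack([m, x]), y)[1]  # slope of Y on M, controlling X
print(a * b)  # robust estimate of the indirect effect
```

    Ordinary least squares would be inefficient here because the t(2) errors have infinite variance; the median-regression fit is much less affected.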

  1. A robust ridge regression approach in the presence of both multicollinearity and outliers in the data

    NASA Astrophysics Data System (ADS)

    Shariff, Nurul Sima Mohamad; Ferdaos, Nur Aqilah

    2017-08-01

    Multicollinearity often leads to inconsistent and unreliable parameter estimates in regression analysis. The situation becomes more severe in the presence of outliers, which produce fatter tails in the error distribution than the normal distribution. A well-known procedure that is robust to the multicollinearity problem is the ridge regression method. This method, however, is expected to be affected by the presence of outliers due to assumptions imposed in the modeling procedure. Thus, a robust version of the existing ridge method, with modifications to the inverse matrix and the estimated response value, is introduced. The performance of the proposed method is discussed, and comparisons are made with several existing estimators, namely Ordinary Least Squares (OLS), ridge regression, and robust ridge regression based on GM-estimates. The proposed method is able to produce reliable parameter estimates in the presence of both multicollinearity and outliers in the data.
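
    The abstract does not give the paper's exact modification, so the following is an illustrative sketch only: a ridge estimator combined with Huber-type IRLS weights, applied to nearly collinear predictors whose errors are contaminated by gross outliers. The penalty k, tuning constant c, and data-generating setup are assumptions for the demonstration.

```python
import numpy as np

def robust_ridge(X, y, k=5.0, c=1.345, n_iter=30):
    """Illustrative robust ridge: Huber IRLS weights plus an L2 (ridge) penalty."""
    n, p = X.shape
    beta = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)  # plain ridge start
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12             # robust MAD scale
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)                      # Huber weights
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw + k * np.eye(p), Xw.T @ y)
    return beta

rng = np.random.default_rng(1)
n = 200
z = rng.normal(size=n)
# Two nearly collinear predictors; errors contaminated by gross outliers.
X = np.column_stack([z + 0.01 * rng.normal(size=n),
                     z + 0.01 * rng.normal(size=n)])
e = rng.normal(size=n)
e[:10] += 20.0                                # 5% gross outliers
y = X @ np.array([1.0, 1.0]) + e
beta = robust_ridge(X, y)
print(beta)  # both coefficients near 1 despite collinearity and outliers
```

    The ridge term stabilizes the ill-conditioned direction (the near-collinear contrast), while the Huber weights downweight the outliers that would otherwise bias the fit.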

  2. Method for auto-alignment of digital optical phase conjugation systems based on digital propagation

    PubMed Central

    Jang, Mooseok; Ruan, Haowen; Zhou, Haojiang; Judkewitz, Benjamin; Yang, Changhuei

    2014-01-01

    Optical phase conjugation (OPC) has enabled many optical applications such as aberration correction and image transmission through fiber. In recent years, implementation of digital optical phase conjugation (DOPC) has opened up the possibility of its use in biomedical optics (e.g. deep-tissue optical focusing) due to its ability to provide greater-than-unity OPC reflectivity (the power ratio of the phase conjugated beam and input beam to the OPC system) and its flexibility to accommodate additional wavefront manipulations. However, the requirement for precise (pixel-to-pixel matching) alignment of the wavefront sensor and the spatial light modulator (SLM) limits the practical usability of DOPC systems. Here, we report a method for auto-alignment of a DOPC system by which the misalignment between the sensor and the SLM is auto-corrected through digital light propagation. With this method, we were able to accomplish OPC playback with a DOPC system with gross sensor-SLM misalignment by an axial displacement of up to ~1.5 cm, rotation and tip/tilt of ~5°, and in-plane displacement of ~5 mm (dependent on the physical size of the sensor and the SLM). Our auto-alignment method robustly achieved a DOPC playback peak-to-background ratio (PBR) corresponding to more than ~30% of the theoretical maximum. As an additional advantage, the auto-alignment procedure can be easily performed at will and, as such, allows us to correct for small mechanical drifts within the DOPC systems, thus overcoming a previously major DOPC system vulnerability. We believe that this reported method for implementing robust DOPC systems will broaden the practical utility of DOPC systems. PMID:24977504
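
    Digital propagation of a recorded field, the ingredient that makes this auto-alignment possible, can be sketched with the angular-spectrum method. The parameters below (wavelength, pixel pitch, propagation distance) are assumed for illustration and are not taken from the paper; the round trip (propagate forward, then back-propagate) recovers the original field, which is the principle used to digitally undo an axial misalignment.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z via the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                       # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)                            # free-space transfer function
    H[arg < 0] = 0.0                                   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Assumed SLM-like parameters: 128x128 grid, 8 um pitch, 532 nm, 1.5 cm distance.
n, dx, lam, z = 128, 8e-6, 532e-9, 1.5e-2
xs = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(xs, xs)
field = np.exp(-(X**2 + Y**2) / (2 * (50e-6) ** 2)).astype(complex)  # Gaussian spot
out = angular_spectrum_propagate(field, lam, dx, z)
back = angular_spectrum_propagate(out, lam, dx, -z)    # digital back-propagation
err = np.max(np.abs(back - field)) / np.max(np.abs(field))
print(err)  # near zero: back-propagation undoes propagation
```

    Because the propagating-wave transfer function is unitary, forward and backward propagation over the same distance cancel exactly (up to floating-point error), which is what allows a numerically propagated field to stand in for a physically realigned one.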

  4. Adaptive Critic Nonlinear Robust Control: A Survey.

    PubMed

    Wang, Ding; He, Haibo; Liu, Derong

    2017-10-01

    Adaptive dynamic programming (ADP) and reinforcement learning are closely related approaches to intelligent optimization. Both are regarded as promising methods involving the key components of evaluation and improvement, against the background of information technology such as artificial intelligence, big data, and deep learning. Although great progress has been achieved and surveyed in addressing nonlinear optimal control problems, research on the robustness of ADP-based control strategies in uncertain environments has not been fully summarized. Hence, this survey reviews the recent main results of adaptive-critic-based robust control design for continuous-time nonlinear systems. ADP-based nonlinear optimal regulation is reviewed, followed by robust stabilization of nonlinear systems with matched uncertainties, guaranteed cost control design of unmatched plants, and decentralized stabilization of interconnected systems. Additionally, further comprehensive discussions are presented, including event-based robust control design, improvement of the critic learning rule, nonlinear H∞ control design, and several notes on future perspectives. By applying the ADP-based optimal and robust control methods to a practical power system and an overhead crane plant, two typical examples are provided to verify the effectiveness of the theoretical results. Overall, this survey should promote the development of adaptive critic control methods with robustness guarantees and the construction of higher-level intelligent systems.

  5. Cardiovascular Outcomes and the Physical and Chemical Properties of Metal Ions Found in Particulate Matter Air Pollution: A QICAR Study

    PubMed Central

    Meng, Qingyu; Lu, Shou-En; Buckley, Barbara; Welsh, William J.; Whitsel, Eric A.; Hanna, Adel; Yeatts, Karin B.; Warren, Joshua; Herring, Amy H.; Xiu, Aijun

    2013-01-01

    Background: This paper presents an application of quantitative ion character–activity relationships (QICAR) to estimate associations of human cardiovascular (CV) diseases (CVDs) with a set of metal ion properties commonly observed in ambient air pollutants. QICAR has previously been used to predict ecotoxicity of inorganic metal ions based on ion properties. Objectives: The objective of this work was to examine potential associations of biological end points with a set of physical and chemical properties describing inorganic metal ions present in exposures using QICAR. Methods: Chemical and physical properties of 17 metal ions were obtained from peer-reviewed publications. Associations of cardiac arrhythmia, myocardial ischemia, myocardial infarction, stroke, and thrombosis with exposures to metal ions (measured as inference scores) were obtained from the Comparative Toxicogenomics Database (CTD). Robust regressions were applied to estimate the associations of CVDs with ion properties. Results: CVD was statistically significantly associated (Bonferroni-adjusted significance level of 0.003) with many ion properties reflecting ion size, solubility, oxidation potential, and abilities to form covalent and ionic bonds. The properties are relevant for reactive oxygen species (ROS) generation, which has been identified as a possible mechanism leading to CVDs. Conclusion: QICAR has the potential to complement existing epidemiologic methods for estimating associations between CVDs and air pollutant exposures by providing clues about the underlying mechanisms that may explain these associations. PMID:23462649

  6. Measuring the emulsification dynamics and stability of self-emulsifying drug delivery systems.

    PubMed

    Vasconcelos, Teófilo; Marques, Sara; Sarmento, Bruno

    2018-02-01

    Self-emulsifying drug delivery systems (SEDDS) are one of the most promising technologies in the drug delivery field, particularly for addressing solubility and bioavailability issues of drugs. The development of these drug carriers relies excessively on visual observations and indirect determinations. The present manuscript describes a method able to measure the emulsification of SEDDS, both micro- and nano-emulsions, to measure the droplet size, and to evaluate the physical stability of these formulations. Additionally, a new process to evaluate the physical stability of SEDDS after emulsification is proposed, based on a cycle of mechanical stress followed by a resting period. The use of a multiparameter continuous evaluation during the emulsification process and stability testing was of utmost value in understanding the SEDDS emulsification process. Based on this method, SEDDS were classified as fast or slow emulsifiers. Moreover, the emulsification process and the stabilization of the emulsion were considered with respect to the composition of SEDDS as a major factor affecting stability under physical stress, and to the use of multiple components with different properties to develop a stable and robust SEDDS formulation. Drug loading level is suggested to impact the droplet size of SEDDS after dispersion and SEDDS stability under stress conditions. The proposed protocol allows an online measurement of SEDDS droplet size during emulsification and a rational selection of excipients based on their emulsification and stabilization performance. Copyright © 2017. Published by Elsevier B.V.

  7. A Lagrangian discontinuous Galerkin hydrodynamic method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xiaodong; Morgan, Nathaniel Ray; Burton, Donald E.

    Here, we present a new Lagrangian discontinuous Galerkin (DG) hydrodynamic method for solving the two-dimensional gas dynamic equations on unstructured hybrid meshes. The physical conservation laws for the momentum and total energy are discretized using a DG method based on linear Taylor expansions. Three different approaches are investigated for calculating the density variation over the element. The first approach evolves a Taylor expansion of the specific volume field. The second approach follows certain finite element methods and uses the strong mass conservation to calculate the density field at a location inside the element or on the element surface. The third approach evolves a Taylor expansion of the density field. The nodal velocity, and the corresponding forces, are explicitly calculated by solving a multidirectional approximate Riemann problem. An effective limiting strategy is presented that ensures monotonicity of the primitive variables. This new Lagrangian DG hydrodynamic method conserves mass, momentum, and total energy. Results from a suite of test problems are presented to demonstrate the robustness and expected second-order accuracy of this new method.

  9. Performance of thigh-mounted triaxial accelerometer algorithms in objective quantification of sedentary behaviour and physical activity in older adults

    PubMed Central

    Wullems, Jorgen A.; Verschueren, Sabine M. P.; Degens, Hans; Morse, Christopher I.; Onambélé, Gladys L.

    2017-01-01

    Accurate monitoring of sedentary behaviour and physical activity is key to investigating their exact role in healthy ageing. To date, accelerometers using cut-off point models are the most preferred for this; however, machine learning seems a highly promising future alternative. Hence, the current study compared cut-off point and machine learning algorithms for optimal quantification of sedentary behaviour and physical activity intensities in the elderly. In a heterogeneous sample of forty participants (aged ≥60 years, 50% female), energy expenditure during laboratory-based activities (ranging from sedentary behaviour through to moderate-to-vigorous physical activity) was estimated by indirect calorimetry, whilst wearing triaxial thigh-mounted accelerometers. Three cut-off point algorithms and a Random Forest machine learning model were developed and cross-validated using the collected data. Detailed analyses were performed to check algorithm robustness, and to examine and benchmark both overall and participant-specific balanced accuracies. This revealed that the four models can at least be used to confidently monitor sedentary behaviour and moderate-to-vigorous physical activity. Nevertheless, the machine learning algorithm outperformed the cut-off point models by being robust to all individuals' physiological and non-physiological characteristics and by showing acceptable performance more consistently over the whole range of physical activity intensities. Therefore, we propose that Random Forest machine learning may be optimal for objective assessment of sedentary behaviour and physical activity in older adults using thigh-mounted triaxial accelerometry. PMID:29155839

  11. A Robust Shape Reconstruction Method for Facial Feature Point Detection.

    PubMed

    Tan, Shuqiu; Chen, Dongyi; Guo, Chenggang; Huang, Zhiqi

    2017-01-01

    Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expression and gestures and the existence of occlusions in real-world photo shoots. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best-matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than the state-of-the-art methods.

  12. A Robust Image Watermarking in the Joint Time-Frequency Domain

    NASA Astrophysics Data System (ADS)

    Öztürk, Mahmut; Akan, Aydın; Çekiç, Yalçın

    2010-12-01

    With the rapid development of computers and internet applications, copyright protection of multimedia data has become an important problem. Watermarking techniques are proposed as a solution to copyright protection of digital media files. In this paper, a new, robust, and high-capacity watermarking method that is based on spatiofrequency (SF) representation is presented. We use the discrete evolutionary transform (DET) calculated by the Gabor expansion to represent an image in the joint SF domain. The watermark is embedded onto selected coefficients in the joint SF domain. Hence, by combining the advantages of spatial and spectral domain watermarking methods, a robust, invisible, secure, and high-capacity watermarking method is presented. A correlation-based detector is also proposed to detect and extract any possible watermarks on an image. The proposed watermarking method was tested on some commonly used test images under different signal processing attacks like additive noise, Wiener and Median filtering, JPEG compression, rotation, and cropping. Simulation results show that our method is robust against all of the attacks.
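
    The embed-then-correlate structure of such schemes can be illustrated in a plain DFT domain; this is a simplification, since the paper embeds in the joint spatiofrequency domain via the discrete evolutionary transform. The band location, image size, and embedding strength below are all assumed for the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)
img = rng.uniform(0.0, 255.0, size=(64, 64))       # stand-in host image

# Embed: add a bipolar pseudorandom pattern to a mid-frequency DFT band.
F = np.fft.fft2(img)
band = (slice(8, 16), slice(8, 16))                # assumed mid-frequency band
wm = rng.choice([-1.0, 1.0], size=(8, 8))          # watermark known to the owner
alpha = 50.0                                       # embedding strength
Fw = F.copy()
Fw[band] += alpha * wm
img_w = np.real(np.fft.ifft2(Fw))                  # watermarked image

# Detect: correlate the same band of a test image with the known pattern.
def detect(test_img):
    return np.real(np.sum(np.fft.fft2(test_img)[band] * wm)) / wm.size

score_marked = detect(img_w)
score_clean = detect(img)
# The detector response rises by alpha/2 (taking the real part of the
# inverse FFT halves the one-sided embedding); a threshold between the
# two scores separates marked from unmarked images.
print(score_marked - score_clean)
```

    A joint spatiofrequency embedding, as in the paper, additionally localizes the watermark in space, which is what improves resistance to cropping-type attacks relative to this purely spectral sketch.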

  13. Accurate and Robust Unitary Transformations of a High-Dimensional Quantum System

    NASA Astrophysics Data System (ADS)

    Anderson, B. E.; Sosa-Martinez, H.; Riofrío, C. A.; Deutsch, Ivan H.; Jessen, Poul S.

    2015-06-01

    Unitary transformations are the most general input-output maps available in closed quantum systems. Good control protocols have been developed for qubits, but questions remain about the use of optimal control theory to design unitary maps in high-dimensional Hilbert spaces, and about the feasibility of their robust implementation in the laboratory. Here we design and implement unitary maps in a 16-dimensional Hilbert space associated with the 6S1/2 ground state of 133Cs, achieving fidelities >0.98 with built-in robustness to static and dynamic perturbations. Our work has relevance for quantum information processing and provides a template for similar advances on other physical platforms.
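
    A small numerical sketch of the figure of merit involved: the fidelity |Tr(U†V)|/d between a target unitary U and an implemented map V, here in a d = 16 space as in the paper. The error model below (a small random Hermitian generator) is an assumption for illustration, not the paper's perturbation model.

```python
import numpy as np

def random_unitary(d, rng):
    """Haar-random unitary via QR of a complex Gaussian matrix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # normalize column phases

def fidelity(U, V):
    """Fidelity |Tr(U^dag V)| / d between target U and realized V."""
    return abs(np.trace(U.conj().T @ V)) / U.shape[0]

rng = np.random.default_rng(0)
d = 16                                    # 16-dimensional space, as in the 133Cs work
U = random_unitary(d, rng)

# Model a small coherent control error as the unitary e^{i*eps*H}, H Hermitian.
H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (H + H.conj().T) / 2
w, P = np.linalg.eigh(H)
eps = 1e-2
E = P @ np.diag(np.exp(1j * eps * w)) @ P.conj().T
V = U @ E
f_same = fidelity(U, U)
f_err = fidelity(U, V)
print(f_same, f_err)  # 1.0, and slightly below 1 for the perturbed map
```

    The fidelity degrades only quadratically in the perturbation strength eps, which is one reason small coherent errors can be tolerated in high-fidelity control.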

  14. A Comprehensive Study of Gridding Methods for GPS Horizontal Velocity Fields

    NASA Astrophysics Data System (ADS)

    Wu, Yanqiang; Jiang, Zaisen; Liu, Xiaoxia; Wei, Wenxin; Zhu, Shuang; Zhang, Long; Zou, Zhenyu; Xiong, Xiaohui; Wang, Qixin; Du, Jiliang

    2017-03-01

    Four gridding methods for GPS velocities are compared in terms of their precision, applicability and robustness by analyzing simulated data with uncertainties from 0.0 to ±3.0 mm/a. When the input data are 1° × 1° grid sampled and the uncertainty of the additional error is greater than ±1.0 mm/a, the gridding results show that the least-squares collocation method is highly robust while the robustness of the Kriging method is low. In contrast, the spherical harmonics and the multi-surface function are moderately robust, and the regional singular values for the multi-surface function method and the edge effects for the spherical harmonics method become more significant with increasing uncertainty of the input data. When the input data (with additional errors of ±2.0 mm/a) are decimated by 50% from the 1° × 1° grid data and then erased in three 6° × 12° regions, the gridding results in these three regions indicate that the least-squares collocation and the spherical harmonics methods have good performances, while the multi-surface function and the Kriging methods may lead to singular values. The gridding techniques are also applied to GPS horizontal velocities with an average error of ±0.8 mm/a over the Chinese mainland and the surrounding areas, and the results show that the least-squares collocation method has the best performance, followed by the Kriging and multi-surface function methods. Furthermore, the edge effects of the spherical harmonics method are significantly affected by the sparseness and geometric distribution of the input data. In general, the least-squares collocation method is superior in terms of its robustness, edge effect, error distribution and stability, while the other methods have several positive features.

  15. Robust Dynamic Multi-objective Vehicle Routing Optimization Method.

    PubMed

    Guo, Yi-Nan; Cheng, Jian; Luo, Sha; Gong, Dun-Wei

    2017-03-21

    For dynamic multi-objective vehicle routing problems, the waiting time of vehicles, the number of serving vehicles, and the total distance of routes are normally considered as the optimization objectives. Beyond these objectives, this paper focuses on fuel consumption, which leads to environmental pollution and energy consumption. Considering the vehicles' load and driving distance, a corresponding carbon emission model was built and set as an optimization objective. Dynamic multi-objective vehicle routing problems with hard time windows and randomly appearing dynamic customers were subsequently modeled. In existing planning methods, when a new service demand comes up, a global vehicle routing optimization method is triggered to find the optimal routes for non-served customers, which is time-consuming. Therefore, a two-phase robust dynamic multi-objective vehicle routing method is proposed. Three highlights of the novel method are: (i) After finding optimal robust virtual routes for all customers by adopting multi-objective particle swarm optimization in the first phase, static vehicle routes for static customers are formed by removing all dynamic customers from the robust virtual routes in the next phase. (ii) The dynamically appearing customers are appended to be served according to their service time and the vehicles' statuses. Global vehicle routing optimization is triggered only when no suitable locations can be found for dynamic customers. (iii) A metric measuring the algorithms' robustness is given. The statistical results indicate that the routes obtained by the proposed method have better stability and robustness, but may be suboptimal. Moreover, time-consuming global vehicle routing optimization is avoided as dynamic customers appear.

  16. Automated Design of Complex Dynamic Systems

    PubMed Central

    Hermans, Michiel; Schrauwen, Benjamin; Bienstman, Peter; Dambre, Joni

    2014-01-01

    Several fields of study are concerned with uniting the concept of computation with that of the design of physical systems. For example, a recent trend in robotics is to design robots in such a way that they require a minimal control effort. Another example is found in the domain of photonics, where recent efforts try to benefit directly from complex nonlinear dynamics to achieve more efficient signal processing. The underlying goal of these and similar research efforts is to internalize a large part of the necessary computations within the physical system itself by exploiting its inherent nonlinear dynamics. This, however, often requires the optimization of large numbers of system parameters, related both to the system's structure and to its material properties. In addition, many of these parameters are subject to fabrication variability or to variations through time. In this paper we apply a machine learning algorithm to optimize physical dynamic systems. We show that such algorithms, which are normally applied to abstract computational entities, can be extended to the field of differential equations and used to optimize an associated set of parameters which determine their behavior. We show that machine learning training methodologies are highly useful in designing robust systems, and we provide a set of both simple and complex examples using models of physical dynamical systems. Interestingly, the derived optimization method is intimately related to direct collocation, a method known in the field of optimal control. Our work suggests that the application domains of both machine learning and optimal control have a largely unexplored overlapping area which envelops a novel design methodology for smart and highly complex physical systems. PMID:24497969

  17. Robust power spectral estimation for EEG data.

    PubMed

    Melman, Tamar; Victor, Jonathan D

    2016-08-01

    Typical electroencephalogram (EEG) recordings often contain substantial artifact. These artifacts, often large and intermittent, can interfere with quantification of the EEG via its power spectrum. To reduce the impact of artifact, EEG records are typically cleaned by a preprocessing stage that removes individual segments or components of the recording. However, such preprocessing can introduce bias, discard available signal, and be labor-intensive. With this motivation, we present a method that uses robust statistics to reduce dependence on preprocessing by minimizing the effect of large intermittent outliers on the spectral estimates. Using the multitaper method (Thomson, 1982) as a starting point, we replaced the final step of the standard power spectrum calculation with a quantile-based estimator, and the Jackknife approach to confidence intervals with a Bayesian approach. The method is implemented in provided MATLAB modules, which extend the widely used Chronux toolbox. Using both simulated and human data, we show that in the presence of large intermittent outliers, the robust method produces improved estimates of the power spectrum, and that the Bayesian confidence intervals yield close-to-veridical coverage factors. The robust method, as compared to the standard method, is less affected by artifact: inclusion of outliers produces fewer changes in the shape of the power spectrum as well as in the coverage factor. In the presence of large intermittent outliers, the robust method can reduce dependence on data preprocessing as compared to standard methods of spectral estimation. Copyright © 2016 Elsevier B.V. All rights reserved.
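
    The key idea, replacing the mean over eigenspectra with a quantile, can be sketched as follows. This simplified version uses sine tapers and a plain median over segment-taper eigenspectra, rather than the paper's Slepian tapers, Chronux integration, and bias-corrected quantile estimator; the signal and artifact below are synthetic assumptions.

```python
import numpy as np

def sine_tapers(n, k):
    """First k sine tapers of length n (a simple orthogonal taper family)."""
    j = np.arange(1, k + 1)[:, None]
    t = np.arange(1, n + 1)[None, :]
    return np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * j * t / (n + 1))

def robust_psd(x, seg_len=256, n_tapers=4):
    """Median vs. mean over all (segment, taper) eigenspectra; the median
    resists a large artifact confined to one segment."""
    tapers = sine_tapers(seg_len, n_tapers)
    eig = []
    for i in range(0, len(x) - seg_len + 1, seg_len):
        seg = x[i:i + seg_len]
        eig.append(np.abs(np.fft.rfft(tapers * seg[None, :], axis=1)) ** 2)
    eig = np.concatenate(eig, axis=0)              # (n_segs * n_tapers, n_freqs)
    return np.median(eig, axis=0), np.mean(eig, axis=0)

rng = np.random.default_rng(7)
n = 2048
t = np.arange(n)
x = np.sin(2 * np.pi * 0.1 * t) + rng.normal(0.0, 0.1, n)
x[500:505] += 50.0                                 # large intermittent artifact
med, mn = robust_psd(x)
freq = np.argmax(med) / 256                        # bin spacing = 1 / seg_len
# The mean estimate is inflated at artifact-dominated low-frequency bins,
# while the median estimate is not, and the spectral peak stays at ~0.1.
print(freq)
```

    Because the artifact contaminates only one of the eight segments, fewer than half of the eigenspectra are affected at any frequency, so the median simply ignores them; the mean, by contrast, inherits the artifact's broadband power.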

  18. A study of the temporal robustness of the growing global container-shipping network

    PubMed Central

    Wang, Nuo; Wu, Nuan; Dong, Ling-ling; Yan, Hua-kun; Wu, Di

    2016-01-01

    For any constantly expanding network, it must be determined whether the network continues to thrive as it grows. However, few studies have focused on this important network feature or on the development of quantitative analytical methods. Given the formation and growth of the global container-shipping network, we proposed the concept of network temporal robustness and a quantitative method for measuring it. As an example, we collected container liner companies' data at two time points (2004 and 2014) and built a shipping network with ports as nodes and routes as links. We thus obtained a quantitative value of the temporal robustness. The temporal robustness is a significant network property because, for the first time, we can clearly recognize that the shipping network has become more vulnerable to damage over the last decade: when the node failure scale reached 50% of the entire network, the temporal robustness was approximately −0.51% for random errors and −12.63% for intentional attacks. The proposed concept and analytical method described in this paper are significant for other network studies. PMID:27713549
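
    A toy version of the underlying robustness-under-failure analysis (simpler than the paper's temporal metric, which compares two network snapshots): measure the mean largest-component fraction as nodes are removed randomly versus hub-first. The random graph below is an assumption standing in for a port-route network.

```python
import random
from collections import defaultdict

def largest_component(nodes, edges):
    """Size of the largest connected component among surviving nodes (DFS)."""
    adj = defaultdict(set)
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for s in nodes:
        if s not in seen:
            stack, comp = [s], 0
            seen.add(s)
            while stack:
                u = stack.pop()
                comp += 1
                for w in adj[u]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            best = max(best, comp)
    return best

def robustness(nodes, edges, order):
    """Mean largest-component fraction as nodes are removed in the given order
    (higher = more robust to that failure sequence)."""
    nodes = set(nodes)
    n0 = len(nodes)
    total, steps = 0.0, 0
    for v in order:
        nodes.discard(v)
        if nodes:
            total += largest_component(nodes, edges) / n0
            steps += 1
    return total / steps

random.seed(1)
nodes = list(range(60))
edges = {(random.randrange(60), random.randrange(60)) for _ in range(150)}
degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
targeted = sorted(nodes, key=lambda v: -degree[v])   # attack hubs first
shuffled = nodes[:]
random.shuffle(shuffled)                             # random failures
r_targeted = robustness(nodes, edges, targeted)
r_random = robustness(nodes, edges, shuffled)
print(r_targeted, r_random)  # hub attacks degrade the network faster
```

    The gap between the two curves is the usual signature of hub-dependent networks; the paper's temporal robustness asks how this gap changes as the network grows between two observation dates.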

  19. Dissociative conceptual and quantitative problem solving outcomes across interactive engagement and traditional format introductory physics

    NASA Astrophysics Data System (ADS)

    McDaniel, Mark A.; Stoen, Siera M.; Frey, Regina F.; Markow, Zachary E.; Hynes, K. Mairin; Zhao, Jiuqing; Cahill, Michael J.

    2016-12-01

    The existing literature indicates that interactive-engagement (IE) based general physics classes improve conceptual learning relative to more traditional lecture-oriented classrooms. Very little research, however, has examined quantitative problem-solving outcomes from IE based relative to traditional lecture-based physics classes. The present study included both pre- and post-course conceptual-learning assessments and a new quantitative physics problem-solving assessment that included three representative conservation of energy problems from a first-semester calculus-based college physics course. Scores for problem translation, plan coherence, solution execution, and evaluation of solution plausibility were extracted for each problem. Over 450 students in three IE-based sections and two traditional lecture sections taught at the same university during the same semester participated. As expected, the IE-based course produced more robust gains on a Force Concept Inventory than did the lecture course. By contrast, when the full sample was considered, gains in quantitative problem solving were significantly greater for lecture than IE-based physics; when students were matched on pre-test scores, there was still no advantage for IE-based physics on gains in quantitative problem solving. Further, the association between performance on the concept inventory and quantitative problem solving was minimal. These results highlight that improved conceptual understanding does not necessarily support improved quantitative physics problem solving, and that the instructional method appears to have less bearing on gains in quantitative problem solving than does the kinds of problems emphasized in the courses and homework and the overlap of these problems to those on the assessment.

  20. Sex Differences in Concomitant Trajectories of Self-Reported Disability and Measured Physical Capacity in Older Adults

    PubMed Central

    Allore, Heather G.; Mendes de Leon, Carlos F.; Gahbauer, Evelyne A.; Gill, Thomas M.

    2016-01-01

    Background: Despite documented age-related declines in self-reported functional status and measured physical capacity, it is unclear whether these functional indicators follow similar trajectories over time or whether the patterns of change differ by sex. Methods: We used longitudinal data from 687 initially nondisabled adults, aged 70 or older, from the Precipitating Events Project, who were evaluated every 18 months for nearly 14 years. Self-reported disability was assessed with a 12-item disability scale. Physical capacity was measured using grip strength and a modified version of the Short Physical Performance Battery. Hierarchical linear models estimated the intra-individual trajectory of each functional indicator and differences in the trajectories’ intercepts and slopes by sex. Results: Self-reported disability, grip strength, and Short Physical Performance Battery score declined over 13.5 years following nonlinear trajectories. Women experienced faster accumulation of self-reported disability, but slower declines in measured physical capacity, compared with men. Trajectory intercepts revealed that women had significantly weaker grip strength and reported higher levels of disability compared with men, with no differences in starting Short Physical Performance Battery scores. These findings were robust to adjustments for differences in sociodemographic characteristics, length of survival, health risk factors, and chronic-disease status. Conclusions: Despite the female disadvantage in self-reported disability, older women preserve measured physical capacity better than men over time. Self-reported and measured indicators should be viewed as complementary rather than interchangeable assessments of functional status for both clinical and research purposes, especially for sex-specific comparisons. PMID:27071781

  1. Assessment of physical function and participation in chronic pain clinical trials: IMMPACT/OMERACT recommendations.

    PubMed

    Taylor, Ann M; Phillips, Kristine; Patel, Kushang V; Turk, Dennis C; Dworkin, Robert H; Beaton, Dorcas; Clauw, Daniel J; Gignac, Monique A M; Markman, John D; Williams, David A; Bujanover, Shay; Burke, Laurie B; Carr, Daniel B; Choy, Ernest H; Conaghan, Philip G; Cowan, Penney; Farrar, John T; Freeman, Roy; Gewandter, Jennifer; Gilron, Ian; Goli, Veeraindar; Gover, Tony D; Haddox, J David; Kerns, Robert D; Kopecky, Ernest A; Lee, David A; Malamut, Richard; Mease, Philip; Rappaport, Bob A; Simon, Lee S; Singh, Jasvinder A; Smith, Shannon M; Strand, Vibeke; Tugwell, Peter; Vanhove, Gertrude F; Veasley, Christin; Walco, Gary A; Wasan, Ajay D; Witter, James

    2016-09-01

    Although pain reduction is commonly the primary outcome in chronic pain clinical trials, physical functioning is also important. A challenge in designing chronic pain trials to determine the efficacy and effectiveness of therapies is obtaining appropriate information about the impact of an intervention on physical function. The Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials (IMMPACT) and Outcome Measures in Rheumatology (OMERACT) convened a meeting to consider the assessment of physical functioning and participation in research on chronic pain. The primary purpose of this article is to synthesize evidence on the scope of physical functioning to inform work on refining physical function outcome measurement. We address issues in assessing this broad construct and provide examples of frequently used measures of relevant concepts. Investigators can assess physical functioning using patient-reported outcome (PRO), performance-based, and objective measures of activity. This article aims to provide support for the use of these measures, covering broad aspects of functioning, including work participation, social participation, and caregiver burden, which researchers should consider when designing chronic pain clinical trials. Investigators should consider including both PROs and performance-based measures, as they provide distinct but complementary information. The development and use of reliable and valid PROs and performance-based measures of physical functioning may expedite the development of treatments, and standardization of these measures has the potential to facilitate comparison across studies. We provide recommendations regarding important domains to stimulate research to develop tools that are more robust, address consistency and standardization, and engage patients early in tool development.

  2. Velocity-curvature patterns limit human-robot physical interaction

    PubMed Central

    Maurice, Pauline; Huber, Meghan E.; Hogan, Neville; Sternad, Dagmar

    2018-01-01

    Physical human-robot collaboration is becoming more common, both in industrial and service robotics. Cooperative execution of a task requires intuitive and efficient interaction between both actors. For humans, this means being able to predict and adapt to robot movements. Given that natural human movement exhibits several robust features, we examined whether human-robot physical interaction is facilitated when these features are considered in robot control. The present study investigated how humans adapt to biological and non-biological velocity patterns in robot movements. Participants held the end-effector of a robot that traced an elliptic path with either biological (two-thirds power law) or non-biological velocity profiles. Participants were instructed to minimize the force applied on the robot end-effector. Results showed that the applied force was significantly lower when the robot moved with a biological velocity pattern. With extensive practice and enhanced feedback, participants were able to decrease their force when following a non-biological velocity pattern, but never reached forces below those obtained with the 2/3 power law profile. These results suggest that some robust features observed in natural human movements are also a strong preference in guided movements. Therefore, such features should be considered in human-robot physical collaboration. PMID:29744380

  3. Velocity-curvature patterns limit human-robot physical interaction.

    PubMed

    Maurice, Pauline; Huber, Meghan E; Hogan, Neville; Sternad, Dagmar

    2018-01-01

    Physical human-robot collaboration is becoming more common, both in industrial and service robotics. Cooperative execution of a task requires intuitive and efficient interaction between both actors. For humans, this means being able to predict and adapt to robot movements. Given that natural human movement exhibits several robust features, we examined whether human-robot physical interaction is facilitated when these features are considered in robot control. The present study investigated how humans adapt to biological and non-biological velocity patterns in robot movements. Participants held the end-effector of a robot that traced an elliptic path with either biological (two-thirds power law) or non-biological velocity profiles. Participants were instructed to minimize the force applied on the robot end-effector. Results showed that the applied force was significantly lower when the robot moved with a biological velocity pattern. With extensive practice and enhanced feedback, participants were able to decrease their force when following a non-biological velocity pattern, but never reached forces below those obtained with the 2/3 power law profile. These results suggest that some robust features observed in natural human movements are also a strong preference in guided movements. Therefore, such features should be considered in human-robot physical collaboration.
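
    The two-thirds power law referenced above relates tangential speed to path curvature, v = γ·κ^(-1/3). As an illustration only (the gain and ellipse axes below are arbitrary choices, not the study's robot parameters), such a biological speed profile along an elliptic path can be generated as follows:

```python
import numpy as np

def two_thirds_power_speed(a=0.2, b=0.1, gamma=1.0, n=1000):
    """Speed along an ellipse under the two-thirds power law,
    v = gamma * curvature**(-1/3) (gain and axes are illustrative)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dx, dy = -a * np.sin(t), b * np.cos(t)        # first derivatives
    ddx, ddy = -a * np.cos(t), -b * np.sin(t)     # second derivatives
    curvature = np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
    return gamma * curvature ** (-1.0 / 3.0)

v = two_thirds_power_speed()
# The profile is fastest on the flat sides of the ellipse and slowest
# at its sharply curved ends, as natural hand movements are.
```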

  4. Including robustness in multi-criteria optimization for intensity-modulated proton therapy

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Unkelbach, Jan; Trofimov, Alexei; Madden, Thomas; Kooy, Hanne; Bortfeld, Thomas; Craft, David

    2012-02-01

    We present a method to include robustness in a multi-criteria optimization (MCO) framework for intensity-modulated proton therapy (IMPT). The approach allows one to simultaneously explore the trade-off between different objectives as well as the trade-off between robustness and nominal plan quality. In MCO, a database of plans, each emphasizing different treatment planning objectives, is pre-computed to approximate the Pareto surface. An IMPT treatment plan that strikes the best balance between the different objectives can be selected by navigating on the Pareto surface. In our approach, robustness is integrated into MCO by adding robustified objectives and constraints to the MCO problem. Uncertainties (or errors) of the robust problem are modeled by pre-calculated dose-influence matrices for a nominal scenario and a number of pre-defined error scenarios (shifted patient positions, proton beam undershoot and overshoot). Objectives and constraints can be defined for the nominal scenario, thus characterizing nominal plan quality. A robustified objective represents the worst objective function value that can be realized for any of the error scenarios and thus provides a measure of plan robustness. The optimization method is based on a linear projection solver and is capable of handling large problem sizes resulting from a fine dose grid resolution, many scenarios, and a large number of proton pencil beams. A base-of-skull case is used to demonstrate the robust optimization method. It is demonstrated that the robust optimization method reduces the sensitivity of the treatment plan to setup and range errors to a degree that is not achieved by a safety margin approach. A chordoma case is analyzed in more detail to demonstrate the involved trade-offs between target underdose and brainstem sparing as well as between robustness and nominal plan quality. The latter illustrates the advantage of MCO in the context of robust planning. For all cases examined, the robust optimization for each Pareto-optimal plan takes less than 5 min on a standard computer, making a computationally friendly interface to the planner possible. In conclusion, the uncertainty pertinent to the IMPT procedure can be reduced during treatment planning by optimizing plans that emphasize different treatment objectives, including robustness, and then interactively seeking a most-preferred one from the Pareto surface of solutions.
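
    The robustified objective described above, the worst objective value over a nominal and several error scenarios each represented by a pre-computed dose-influence matrix, amounts to a min-max problem. A toy sketch follows: the matrices and prescription are random stand-ins, and a general-purpose derivative-free optimizer replaces the authors' linear projection solver.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_vox, n_beams, n_scen = 20, 8, 3

# Pre-computed dose-influence matrices: one nominal scenario plus
# perturbed error scenarios (random stand-ins, not clinical data).
D_nom = rng.uniform(0.0, 1.0, (n_vox, n_beams))
scenarios = [D_nom] + [D_nom + rng.normal(0.0, 0.05, D_nom.shape)
                       for _ in range(n_scen)]
d_target = np.full(n_vox, 1.0)  # prescribed dose per voxel

def scenario_objective(w, D):
    """Quadratic deviation from the prescription in one scenario."""
    return float(np.mean((D @ w - d_target) ** 2))

def robustified_objective(w):
    """Worst (maximum) objective value over all scenarios, i.e. the
    robustness measure described in the abstract."""
    return max(scenario_objective(w, D) for D in scenarios)

w0 = np.full(n_beams, 0.5)  # initial pencil-beam weights
res = minimize(robustified_objective, w0, method="Powell",
               bounds=[(0.0, None)] * n_beams)
```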

  5. Detecting outliers when fitting data with nonlinear regression – a new method based on robust nonlinear regression and the false discovery rate

    PubMed Central

    Motulsky, Harvey J; Brown, Ronald E

    2006-01-01

    Background: Nonlinear regression, like linear regression, assumes that the scatter of data around the ideal curve follows a Gaussian or normal distribution. This assumption leads to the familiar goal of regression: to minimize the sum of the squares of the vertical or Y-value distances between the points and the curve. Outliers can dominate the sum-of-the-squares calculation, and lead to misleading results. However, we know of no practical method for routinely identifying outliers when fitting curves with nonlinear regression. Results: We describe a new method for identifying outliers when fitting data with nonlinear regression. We first fit the data using a robust form of nonlinear regression, based on the assumption that scatter follows a Lorentzian distribution. We devised a new adaptive method that gradually becomes more robust as the method proceeds. To define outliers, we adapted the false discovery rate approach to handling multiple comparisons. We then remove the outliers, and analyze the data using ordinary least-squares regression. Because the method combines robust regression and outlier removal, we call it the ROUT method. When analyzing simulated data, where all scatter is Gaussian, our method detects (falsely) one or more outliers in only about 1–3% of experiments. When analyzing data contaminated with one or several outliers, the ROUT method performs well at outlier identification, with an average False Discovery Rate less than 1%. Conclusion: Our method, which combines a new method of robust nonlinear regression with a new method of outlier identification, identifies outliers from nonlinear curve fits with reasonable power and few false positives. PMID:16526949
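
    The two stages (a robust fit assuming Lorentzian scatter, then FDR-based outlier flagging) can be sketched as below. This is not the authors' adaptive implementation: scipy's fixed Cauchy loss stands in for their gradually-robust method, a plain Benjamini-Hochberg step stands in for their FDR adaptation, and the `f_scale` and scale-estimate choices are illustrative.

```python
import numpy as np
from scipy import stats
from scipy.optimize import least_squares

def rout_like_fit(x, y, model, p0, q=0.01):
    """Two-stage sketch: (1) robust fit with a Lorentzian (Cauchy) loss,
    (2) flag outliers with a Benjamini-Hochberg FDR test at level q."""
    fit = least_squares(lambda p: model(x, p) - y, p0,
                        loss="cauchy", f_scale=0.1)
    resid = model(x, fit.x) - y
    # Robust scale estimate: the 68.27th percentile of |residuals|
    # matches one standard deviation for Gaussian scatter.
    scale = np.percentile(np.abs(resid), 68.27)
    pvals = 2.0 * stats.norm.sf(np.abs(resid) / scale)
    # Benjamini-Hochberg step-up procedure.
    order = np.argsort(pvals)
    n = len(pvals)
    passed = pvals[order] <= q * np.arange(1, n + 1) / n
    outliers = np.zeros(n, dtype=bool)
    if passed.any():
        outliers[order[: np.max(np.nonzero(passed)[0]) + 1]] = True
    return fit.x, outliers

# Exponential decay with two gross outliers injected.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 50)
y = 3.0 * np.exp(-0.8 * x) + rng.normal(0.0, 0.05, x.size)
y[[10, 30]] += 2.0
params, flagged = rout_like_fit(
    x, y, lambda x, p: p[0] * np.exp(-p[1] * x), p0=[1.0, 1.0])
```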

  6. Longitudinal Associations between Physical Activity and Educational Outcomes

    PubMed Central

    KARI, JAANA T.; PEHKONEN, JAAKKO; HUTRI-KÄHÖNEN, NINA; RAITAKARI, OLLI T.; TAMMELIN, TUIJA H.

    2017-01-01

    Purpose: This longitudinal study examined the role of leisure-time physical activity in academic achievement at the end of compulsory basic education and educational attainment in adulthood. Methods: The data were drawn from the ongoing longitudinal Cardiovascular Risk in Young Finns Study, which was combined with register-based data from Statistics Finland. The study consisted of children who were 12 yr (n = 1723, 49% boys) and 15 yr (n = 2445, 48% boys) of age at the time when physical activity was measured. The children were followed up until 2010, when their mean age was 40 yr. Physical activity was self-reported and included several measurements: overall leisure-time physical activity outside school hours, participation in sports club training sessions, and participation in sports competitions. Individuals’ educational outcomes were measured with the self-reported grade point average at age 15 yr and register-based information on the years of completed postcompulsory education in adulthood. Ordinary least squares models and the instrumental variable approach were used to analyze the relationship between physical activity and educational outcomes. Results: Physical activity in adolescence was positively associated with educational outcomes. Both the physical activity level at age 15 yr and an increase in the physical activity level between the ages of 12 and 15 yr were positively related to the grade point average at age 15 yr and the years of postcompulsory education in adulthood. The results were robust to the inclusion of several individual and family background factors, including health endowments, family income, and parents’ education. Conclusion: The results provide evidence that physical activity in adolescence may not only predict academic success during compulsory basic education but also boost educational outcomes later in life. PMID:29045322

  7. Semi-supervised anomaly detection - towards model-independent searches of new physics

    NASA Astrophysics Data System (ADS)

    Kuusela, Mikael; Vatanen, Tommi; Malmi, Eric; Raiko, Tapani; Aaltonen, Timo; Nagai, Yoshikazu

    2012-06-01

    Most classification algorithms used in high energy physics fall under the category of supervised machine learning. Such methods require a training set containing both signal and background events and are prone to classification errors should this training data be systematically inaccurate, for example due to the assumed MC model. To complement such model-dependent searches, we propose an algorithm based on semi-supervised anomaly detection techniques, which does not require a MC training sample for the signal data. We first model the background using a multivariate Gaussian mixture model. We then search for deviations from this model by fitting to the observations a mixture of the background model and a number of additional Gaussians. This allows us to perform pattern recognition of any anomalous excess over the background. We show by a comparison to neural network classifiers that such an approach is considerably more robust against misspecification of the signal MC than supervised classification. In cases where there is an unexpected signal, a neural network might fail to correctly identify it, while anomaly detection does not suffer from such a limitation. On the other hand, when there are no systematic errors in the training data, both methods perform comparably.
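
    The background-modeling stage described (a multivariate Gaussian mixture fit to background events, with anomalies showing up as observations the model explains poorly) might look like the following sketch. The data, the percentile thresholding rule, and the use of scikit-learn are illustrative assumptions, not the authors' code, which additionally fits extra Gaussians to the deviations.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# "Background" events: a known two-component mixture in a 2-D feature space.
background = np.vstack([rng.normal([0.0, 0.0], 0.5, (500, 2)),
                        rng.normal([3.0, 3.0], 0.5, (500, 2))])
gmm = GaussianMixture(n_components=2, random_state=0).fit(background)

# New observations: mostly background-like, plus a small anomalous excess.
new_events = np.vstack([rng.normal([0.0, 0.0], 0.5, (50, 2)),
                        rng.normal([8.0, -5.0], 0.3, (5, 2))])
scores = gmm.score_samples(new_events)  # log-likelihood under background
# Flag events that the background model considers highly unlikely.
threshold = np.percentile(gmm.score_samples(background), 1)
anomalous = scores < threshold
```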

  8. Fusion of Optimized Indicators from Advanced Driver Assistance Systems (ADAS) for Driver Drowsiness Detection

    PubMed Central

    Daza, Iván G.; Bergasa, Luis M.; Bronte, Sebastián; Yebes, J. Javier; Almazán, Javier; Arroyo, Roberto

    2014-01-01

    This paper presents a non-intrusive approach for monitoring driver drowsiness using the fusion of several optimized indicators based on driver physical and driving performance measures, obtained from ADAS (Advanced Driver Assistance Systems) in simulated conditions. The paper is focused on real-time drowsiness detection technology rather than on long-term sleep/awake regulation prediction technology. We have developed our own vision system in order to obtain robust and optimized driver indicators able to be used in simulators and future real environments. These indicators are principally based on driver physical and driving performance skills. The fusion of several indicators, proposed in the literature, is evaluated using a neural network and a stochastic optimization method to obtain the best combination. We propose a new method for ground-truth generation based on a supervised Karolinska Sleepiness Scale (KSS). An extensive evaluation of indicators, derived from trials on a third-generation simulator with several test subjects during different driving sessions, was performed. The main conclusions about the performance of single indicators and of their best combinations are included, as well as the future work derived from this study. PMID:24412904

  9. Compositional data analysis for physical activity, sedentary time and sleep research.

    PubMed

    Dumuid, Dorothea; Stanford, Tyman E; Martin-Fernández, Josep-Antoni; Pedišić, Željko; Maher, Carol A; Lewis, Lucy K; Hron, Karel; Katzmarzyk, Peter T; Chaput, Jean-Philippe; Fogelholm, Mikael; Hu, Gang; Lambert, Estelle V; Maia, José; Sarmiento, Olga L; Standage, Martyn; Barreira, Tiago V; Broyles, Stephanie T; Tudor-Locke, Catrine; Tremblay, Mark S; Olds, Timothy

    2017-01-01

    The health effects of daily activity behaviours (physical activity, sedentary time and sleep) are widely studied. While previous research has largely examined activity behaviours in isolation, recent studies have adjusted for multiple behaviours. However, the inclusion of all activity behaviours in traditional multivariate analyses has not been possible due to the perfect multicollinearity of 24-h time budget data. The ensuing lack of adjustment for known effects on the outcome undermines the validity of study findings. We describe a statistical approach that enables the inclusion of all daily activity behaviours, based on the principles of compositional data analysis. Using data from the International Study of Childhood Obesity, Lifestyle and the Environment, we demonstrate the application of compositional multiple linear regression to estimate adiposity from children's daily activity behaviours expressed as isometric log-ratio coordinates. We present a novel method for predicting change in a continuous outcome based on relative changes within a composition, and for calculating associated confidence intervals to allow for statistical inference. The compositional data analysis presented overcomes the lack of adjustment that has plagued traditional statistical methods in the field, and provides robust and reliable insights into the health effects of daily activity behaviours.
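
    The isometric log-ratio (ilr) coordinates mentioned above can be computed with so-called pivot coordinates. A minimal sketch, assuming one common sequential basis (the study's actual partition of behaviours may differ):

```python
import numpy as np

def ilr(parts):
    """Isometric log-ratio (pivot) coordinates of a D-part composition:
    z_i = sqrt(i/(i+1)) * ln(geomean(x_1..x_i) / x_{i+1}), i = 1..D-1."""
    x = np.asarray(parts, dtype=float)
    x = x / x.sum()              # close the composition to the unit simplex
    logx = np.log(x)
    d = len(x)
    z = np.empty(d - 1)
    for i in range(1, d):
        gm_log = logx[:i].mean()  # log of the geometric mean of x_1..x_i
        z[i - 1] = np.sqrt(i / (i + 1.0)) * (gm_log - logx[i])
    return z

# Daily minutes of sleep, sedentary time, light PA, moderate-to-vigorous PA
# (made-up values): the 4-part day maps to 3 unconstrained coordinates that
# can enter an ordinary multiple linear regression.
coords = ilr([500.0, 600.0, 300.0, 40.0])
```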

  10. Advances in mechanistic understanding of release rate control mechanisms of extended-release hydrophilic matrix tablets.

    PubMed

    Timmins, Peter; Desai, Divyakant; Chen, Wei; Wray, Patrick; Brown, Jonathan; Hanley, Sarah

    2016-08-01

    Approaches to characterizing and developing understanding of the mechanisms that control the release of drugs from hydrophilic matrix tablets are reviewed. While historical context is provided and direct physical characterization methods are described, recent advances, including the role of percolation thresholds and the application of magnetic resonance and other spectroscopic imaging techniques, are considered. The influence of polymer and dosage form characteristics is reviewed, and the utility of mathematical modeling is described. Finally, it is proposed how the information derived from all of these tools, together with the mechanistic understanding they provide, can be brought together to develop a robust and reliable hydrophilic matrix extended-release tablet formulation.

  11. Critical issues in sensor science to aid food and water safety.

    PubMed

    Farahi, R H; Passian, A; Tetard, L; Thundat, T

    2012-06-26

    The stability of food and water supplies is widely recognized as a global issue of fundamental importance. Sensor development for food and water safety by nonconventional assays continues to overcome technological challenges. The delicate balance between attaining adequate limits of detection, chemical fingerprinting of the target species, dealing with the complex food matrix, and operating in difficult environments is still the focus of current efforts. While the traditional pursuit of robust recognition methods remains important, emerging engineered nanomaterials and nanotechnology promise better sensor performance but also bring about new challenges. Both advanced receptor-based sensors and emerging non-receptor-based physical sensors are evaluated for their critical challenges toward out-of-laboratory applications.

  12. Monitoring outcomes for the Medicare Advantage program: methods and application of the VR-12 for evaluation of plans.

    PubMed

    Kazis, Lewis E; Selim, Alfredo J; Rogers, William; Qian, Shirley X; Brazier, John

    2012-01-01

    The Veterans RAND 12-Item Health Survey (VR-12) is one of the major patient-reported outcomes for ranking the Medicare Advantage (MA) plans in the Health Outcomes Survey (HOS). Approaches for scoring physical and mental health are given using contemporary norms and regression estimators. A new metric approach for the VR-12 called the "VR-6D" is presented with case-mix adjustments for monitoring plans that combine utilities and mortality. Results show that the models for ranking health outcomes of the plans are robust and credible. Future directions include the use of utilities for evaluating and ranking of MA plans.

  13. Contrast-based sensorless adaptive optics for retinal imaging.

    PubMed

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T O; He, Zheng; Metha, Andrew

    2015-09-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes.
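
    The abstract does not specify the metric itself; as a hedged illustration only, a simple normalized RMS contrast metric of the kind often used in image-based (sensorless) optimization might look like this:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def image_quality(image):
    """Normalized RMS contrast (std/mean of intensities): higher values
    generally indicate a sharper, better aberration-corrected image."""
    img = np.asarray(image, dtype=float)
    return img.std() / img.mean()

# A sharp test pattern scores higher than an optically blurred version of
# itself, so maximizing the metric drives the correction.
pattern = np.zeros((64, 64))
pattern[28:36, 28:36] = 1.0                  # bright square feature
blurred = gaussian_filter(pattern, sigma=4.0)
sharp_score = image_quality(pattern + 0.1)   # +0.1 adds a nonzero background
blur_score = image_quality(blurred + 0.1)
```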

  14. Fast generating Greenberger-Horne-Zeilinger state via iterative interaction pictures

    NASA Astrophysics Data System (ADS)

    Huang, Bi-Hua; Chen, Ye-Hong; Wu, Qi-Cheng; Song, Jie; Xia, Yan

    2016-10-01

    We delve a little deeper into the construction of shortcuts to adiabatic passage for three-level systems via iterative interaction pictures (multiple Schrödinger dynamics). As an application example, we use the deduced iterative shortcuts to rapidly generate the Greenberger-Horne-Zeilinger (GHZ) state in a three-atom system with the help of quantum Zeno dynamics. Numerical simulation shows that the dynamics designed by the iterative-picture method are physically feasible and that the shortcut scheme performs much better than the conventional adiabatic passage techniques. Also, the influences of various decoherence processes are discussed by numerical simulation, and the results prove that the scheme is fast and robust against decoherence and operational imperfection.

  15. Permanent Rabi oscillations in coupled exciton-photon systems with PT -symmetry

    PubMed Central

    Chestnov, Igor Yu.; Demirchyan, Sevak S.; Alodjants, Alexander P.; Rubo, Yuri G.; Kavokin, Alexey V.

    2016-01-01

    We propose a physical mechanism which enables permanent Rabi oscillations in driven-dissipative condensates of exciton-polaritons in semiconductor microcavities subjected to external magnetic fields. The method is based on stimulated scattering of excitons from the incoherent reservoir. We demonstrate that permanent non-decaying oscillations may appear due to the parity-time symmetry of the coupled exciton-photon system realized in a specific regime of pumping to the exciton state and depletion of the reservoir. At non-zero exciton-photon detuning, robust permanent Rabi oscillations occur with unequal amplitudes of exciton and photon components. Our predictions pave the way to the realization of integrated circuits based on exciton-polariton Rabi oscillators. PMID:26790534

  16. Permanent Rabi oscillations in coupled exciton-photon systems with PT-symmetry.

    PubMed

    Chestnov, Igor Yu; Demirchyan, Sevak S; Alodjants, Alexander P; Rubo, Yuri G; Kavokin, Alexey V

    2016-01-21

    We propose a physical mechanism which enables permanent Rabi oscillations in driven-dissipative condensates of exciton-polaritons in semiconductor microcavities subjected to external magnetic fields. The method is based on stimulated scattering of excitons from the incoherent reservoir. We demonstrate that permanent non-decaying oscillations may appear due to the parity-time symmetry of the coupled exciton-photon system realized in a specific regime of pumping to the exciton state and depletion of the reservoir. At non-zero exciton-photon detuning, robust permanent Rabi oscillations occur with unequal amplitudes of exciton and photon components. Our predictions pave the way to the realization of integrated circuits based on exciton-polariton Rabi oscillators.

  17. Fitting of the Thomson scattering density and temperature profiles on the COMPASS tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stefanikova, E.; Division of Fusion Plasma Physics, KTH Royal Institute of Technology, SE-10691 Stockholm; Peterka, M.

    2016-11-15

    A new technique for fitting the full radial profiles of electron density and temperature obtained by the Thomson scattering diagnostic in H-mode discharges on the COMPASS tokamak is described. The technique combines the conventionally used modified hyperbolic tangent function for fitting the edge transport barrier (pedestal) with a modified Gaussian function for fitting the core plasma. The low number of parameters of this combined function, and their straightforward interpretability and controllability, provides a robust method for obtaining physically reasonable profile fits. Deconvolution with the diagnostic instrument function is applied to the profile fit, taking into account the dependence on the actual magnetic configuration.
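
    The modified hyperbolic tangent (mtanh) pedestal fit can be sketched as below. The exact parameterization varies between groups, so this is one common mtanh form with synthetic data, not the COMPASS implementation (which also adds the modified-Gaussian core term and the instrument-function deconvolution):

```python
import numpy as np
from scipy.optimize import curve_fit

def mtanh(x, slope):
    """Modified hyperbolic tangent: a tanh step whose core side carries a
    linear slope. The argument is clipped to avoid overflow while fitting."""
    x = np.clip(x, -50.0, 50.0)
    ex, emx = np.exp(x), np.exp(-x)
    return ((1.0 + slope * x) * ex - emx) / (ex + emx)

def pedestal(r, height, r_ped, width, offset, slope):
    """Pedestal profile: offset + (height/2)*[mtanh((r_ped-r)/(2w), s) + 1]."""
    return offset + 0.5 * height * (mtanh((r_ped - r) / (2.0 * width),
                                          slope) + 1.0)

# Synthetic electron-temperature profile (eV-scale toy values) with noise.
rng = np.random.default_rng(3)
r = np.linspace(0.7, 1.1, 80)                   # normalized radius
true_params = (800.0, 0.95, 0.02, 20.0, 0.1)    # height, r_ped, w, offset, s
y = pedestal(r, *true_params) + rng.normal(0.0, 10.0, r.size)
popt, _ = curve_fit(pedestal, r, y, p0=(700.0, 0.9, 0.03, 0.0, 0.0))
```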

  18. Introduction to COFFE: The Next-Generation HPCMP CREATE-AV CFD Solver

    NASA Technical Reports Server (NTRS)

    Glasby, Ryan S.; Erwin, J. Taylor; Stefanski, Douglas L.; Allmaras, Steven R.; Galbraith, Marshall C.; Anderson, W. Kyle; Nichols, Robert H.

    2016-01-01

    HPCMP CREATE-AV Conservative Field Finite Element (COFFE) is a modular, extensible, robust numerical solver for the Navier-Stokes equations, built around modularity and extensibility as first principles. COFFE employs a flexible, class-based hierarchy that provides a modular approach consisting of discretization, physics, parallelization, and linear algebra components. These components are developed with modern software engineering principles to ensure ease of uptake from a user's or developer's perspective. The Streamwise Upwind/Petrov-Galerkin (SU/PG) method is utilized to discretize the compressible Reynolds-Averaged Navier-Stokes (RANS) equations tightly coupled with a variety of turbulence models. The mathematics and the philosophy of the methodology that makes up COFFE are presented.

  19. Distinguishing Majorana bound states and Andreev bound states with microwave spectra

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen-Tao

    2018-04-01

    Majorana fermions are fascinating and not yet confirmed quasiparticles in condensed matter physics. Here we propose using microwave spectra to distinguish Majorana bound states (MBSs) from topologically trivial Andreev bound states. By numerically calculating the transmission and Zeeman-field dependence of the many-body excitation spectrum of a 1D Josephson junction, we find that the two kinds of bound states have distinct responses to variations in the related parameters. Furthermore, the singular behaviors of the MBS spectrum can be attributed to the robust fractional Josephson coupling and the nonlocality of MBSs. Our results provide a feasible method to verify the existence of MBSs and could accelerate their application to topological quantum computation.

  20. Analysis and improvements of Adaptive Particle Refinement (APR) through CPU time, accuracy and robustness considerations

    NASA Astrophysics Data System (ADS)

    Chiron, L.; Oger, G.; de Leffe, M.; Le Touzé, D.

    2018-02-01

    While smoothed-particle hydrodynamics (SPH) simulations are usually performed using uniform particle distributions, local particle refinement techniques have been developed to concentrate fine spatial resolutions in identified areas of interest. Although the formalism of this method is relatively easy to implement, its robustness at coarse/fine interfaces can be problematic. Analysis performed in [16] shows that the radius of refined particles should be greater than half the radius of unrefined particles to ensure robustness. In this article, the basics of an Adaptive Particle Refinement (APR) technique, inspired by AMR in mesh-based methods, are presented. This approach ensures robustness with alleviated constraints. Simulations applying the new formalism proposed achieve accuracy comparable to fully refined spatial resolutions, together with robustness, low CPU times and maintained parallel efficiency.
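
    The basic refinement step in particle splitting (one coarse SPH particle replaced by daughters that conserve mass, with a scaled smoothing length) can be sketched as follows. The four-daughter square pattern and the scaling factors are illustrative choices, though the smoothing-length ratio is deliberately kept above the one-half bound noted in the abstract:

```python
import numpy as np

def split_particle(pos, mass, h, alpha=0.6, eps=0.4):
    """Replace one coarse 2-D SPH particle by four daughters on a square
    pattern. Mass is shared equally (exactly conserved); the smoothing
    length is scaled by alpha > 0.5, consistent with the robustness
    constraint that refined radii exceed half the unrefined radius.
    alpha and eps are illustrative tuning factors, not values from [16]."""
    unit = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
    daughter_pos = pos + eps * h * unit / np.sqrt(2.0)  # centered on parent
    daughter_mass = np.full(4, mass / 4.0)
    daughter_h = np.full(4, alpha * h)
    return daughter_pos, daughter_mass, daughter_h

dpos, dmass, dh = split_particle(np.array([0.0, 0.0]), mass=1.0, h=0.1)
```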

  1. Robust design of microchannel cooler

    NASA Astrophysics Data System (ADS)

    He, Ye; Yang, Tao; Hu, Li; Li, Leimin

    2005-12-01

    A microchannel cooler offers a new method for cooling high-power diode lasers, with the advantages of small volume, high thermal-dissipation efficiency, and low cost when mass-produced. To reduce the sensitivity of the design to manufacturing errors and other disturbances, the Taguchi method, a robust design technique, was chosen to optimize three parameters important to the cooling performance of a roof-like microchannel cooler. The hydromechanical and thermal mathematical model of the varying-section microchannel was calculated with the finite volume method in FLUENT. A special program was written to automate the design process and improve efficiency. The resulting optimal design compromises between optimal cooling performance and its robustness. This design method proves to be effective.
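
    Taguchi robust design ranks parameter settings by a signal-to-noise ratio rather than by mean performance alone; for a smaller-is-better response such as peak temperature rise, S/N = -10·log10(mean(y²)). A minimal sketch with made-up trial data (not the paper's FLUENT results):

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi signal-to-noise ratio for a smaller-is-better response:
    S/N = -10 * log10(mean(y^2)); a higher S/N means a more robust setting."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Peak temperature rises (K) for two hypothetical channel geometries,
# each evaluated under repeated noise conditions (values are made up).
design_a = [12.1, 12.4, 11.9, 12.3]   # consistent across noise conditions
design_b = [10.8, 14.6, 9.5, 15.1]    # sometimes cooler, but erratic
best = max(("A", "B"), key=lambda d: sn_smaller_is_better(
    design_a if d == "A" else design_b))
```

    The S/N ratio rewards the consistent design even though the erratic one occasionally performs better, which is exactly the robustness trade-off the abstract describes.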

  2. A Comparison of Different Methods for Evaluating Diet, Physical Activity, and Long-Term Weight Gain in 3 Prospective Cohort Studies

    PubMed Central

    Smith, Jessica D; Hou, Tao; Hu, Frank B; Rimm, Eric B; Spiegelman, Donna; Willett, Walter C; Mozaffarian, Dariush

    2015-01-01

    Background: The insidious pace of long-term weight gain (∼1 lb/y or 0.45 kg/y) makes it difficult to study in trials; long-term prospective cohorts provide crucial evidence on its key contributors. Most previous studies have evaluated how prevalent lifestyle habits relate to future weight gain rather than to lifestyle changes, which may be more temporally and physiologically relevant. Objective: Our objective was to evaluate and compare different methodological approaches for investigating diet, physical activity (PA), and long-term weight gain. Methods: In 3 prospective cohorts (total n = 117,992), we assessed how lifestyle relates to long-term weight change (up to 24 y of follow-up) in 4-y periods by comparing 3 analytic approaches: 1) prevalent diet and PA and 4-y weight change (prevalent analysis); 2) 4-y changes in diet and PA with a 4-y weight change (change analysis); and 3) 4-y change in diet and PA with weight change in the subsequent 4 y (lagged-change analysis). We compared these approaches and evaluated the consistency across cohorts, magnitudes of associations, and biological plausibility of findings. Results: Across the 3 methods, consistent, robust, and biologically plausible associations were seen only for the change analysis. Results for prevalent or lagged-change analyses were less consistent across cohorts, smaller in magnitude, and biologically implausible. For example, for each serving of a sugar-sweetened beverage, the observed weight gain was 0.01 lb (95% CI: −0.08, 0.10) [0.005 kg (95% CI: −0.04, 0.05)] based on prevalent analysis; 0.99 lb (95% CI: 0.83, 1.16) [0.45 kg (95% CI: 0.38, 0.53)] based on change analysis; and 0.05 lb (95% CI: −0.10, 0.21) [0.02 kg (95% CI: −0.05, 0.10)] based on lagged-change analysis. Findings were similar for other foods and PA. 
Conclusions: Robust, consistent, and biologically plausible relations between lifestyle and long-term weight gain are seen when evaluating lifestyle changes and weight changes in discrete periods rather than in prevalent lifestyle or lagged changes. These findings inform the optimal methods for evaluating lifestyle and long-term weight gain and the potential for bias when other methods are used. PMID:26377763
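
The three analytic designs compared above can be sketched on synthetic participant-period data (all numbers hypothetical; this only illustrates the prevalent, change, and lagged-change regressions, not the cohort models):

```python
# Sketch of the three analytic designs: (1) prevalent intake vs same-period
# weight change, (2) intake change vs same-period weight change, and
# (3) intake change vs next-period weight change. Data are made up.

def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

# Per participant-period: food intake (servings/d) at period start and end,
# weight change (lb) in the same period, and in the subsequent period.
intake_start   = [1.0, 2.0, 0.5, 3.0, 1.5]
intake_end     = [1.5, 1.0, 2.0, 3.5, 0.5]
wt_change      = [1.2, -0.8, 1.5, 1.0, -1.1]
wt_change_next = [0.3, 0.1, 0.4, 0.2, 0.0]

delta = [e - s for s, e in zip(intake_start, intake_end)]
prevalent = slope(intake_start, wt_change)   # design 1: prevalent analysis
change    = slope(delta, wt_change)          # design 2: change analysis
lagged    = slope(delta, wt_change_next)     # design 3: lagged-change analysis
```

In the real cohorts each design was additionally adjusted for covariates; the point here is only that the three designs regress different exposure and outcome windows.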

  3. Physical frailty in late-life depression is associated with deficits in speed-dependent executive functions

    PubMed Central

    Potter, Guy G.; McQuoid, Douglas R.; Whitson, Heather E.; Steffens, David C.

    2015-01-01

    Objective To examine the association between physical frailty and neurocognitive performance in late-life depression (LLD). Methods Cross-sectional design using baseline data from a treatment study of late-life depression. Individuals aged 60 and older diagnosed with Major Depressive Disorder at time of assessment (N = 173). All participants received clinical assessment of depression and completed neuropsychological testing during a depressive episode. Physical frailty was assessed using an adaptation of the FRAIL scale. Neuropsychological domains were derived from a factor analysis that yielded three factors: 1) Speeded Executive and Fluency, 2) Episodic Memory, and 3) Working Memory. Associations were examined with bivariate tests and multivariate models. Results Depressed individuals with a FRAIL score >1 had worse performance than nonfrail depressed across all three factors; however, Speeded Executive and Fluency was the only factor that remained significant after controlling for depression symptom severity and demographic characteristics. Conclusions Although physical frailty is associated with broad neurocognitive deficits in LLD, it is most robustly associated with deficits in speeded executive functions and verbal fluency. Causal inferences are limited by the cross-sectional design, and future research would benefit from a comparison group of nondepressed older adults with similar levels of frailty. Research is needed to understand the mechanisms underlying associations among depression symptoms, physical frailty, and executive dysfunction, and how they are related to the cognitive and symptomatic course of LLD. PMID:26313370

  4. Stability characterization and modeling of robust distributed benthic microbial fuel cell (DBMFC) system.

    PubMed

    Karra, Udayarka; Huang, Guoxian; Umaz, Ridvan; Tenaglier, Christopher; Wang, Lei; Li, Baikun

    2013-09-01

    A novel and robust distributed benthic microbial fuel cell (DBMFC) was developed to address energy supply issues for oceanographic sensor network applications, especially under scouring and bioturbation by aquatic life. A multi-anode/cathode configuration was employed in the DBMFC system for enhanced robustness and stability in the harsh ocean environment. The results showed that the DBMFC system achieved peak power and current densities of 190 mW/m² and 125 mA/m², respectively. Stability characterization tests indicated that the DBMFC with multiple anodes achieved higher power generation than systems with a single anode. A computational model integrating the physical, electrochemical, and biological factors of MFCs was developed to validate the overall performance of the DBMFC system. The model simulation corresponded well with the experimental results and confirmed the hypothesis that a multi-anode/cathode MFC configuration results in reliable and robust power generation.

  5. An Analysis of Plan Robustness for Esophageal Tumors: Comparing Volumetric Modulated Arc Therapy Plans and Spot Scanning Proton Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, Samantha, E-mail: samantha.warren@oncology.ox.ac.uk; Partridge, Mike; Bolsi, Alessandra

    Purpose: Planning studies to compare x-ray and proton techniques and to select the most suitable technique for each patient have been hampered by the nonequivalence of several aspects of treatment planning and delivery. A fair comparison should compare similarly advanced delivery techniques from current clinical practice and also assess the robustness of each technique. The present study therefore compared volumetric modulated arc therapy (VMAT) and single-field optimization (SFO) spot scanning proton therapy plans created using a simultaneous integrated boost (SIB) for dose escalation in midesophageal cancer and analyzed the effect of setup and range uncertainties on these plans. Methods and Materials: For 21 patients, SIB plans with a physical dose prescription of 2 Gy or 2.5 Gy/fraction in 25 fractions to planning target volume PTV50Gy or PTV62.5Gy (primary tumor with 0.5 cm margins) were created and evaluated for robustness to random setup errors and proton range errors. Dose–volume metrics were compared for the optimal and uncertainty plans, with P<.05 (Wilcoxon) considered significant. Results: SFO reduced the mean lung dose by 51.4% (range 35.1%-76.1%) and the mean heart dose by 40.9% (range 15.0%-57.4%) compared with VMAT. Proton plan robustness to a 3.5% range error was acceptable. For all patients, the clinical target volume D98 was 95.0% to 100.4% of the prescribed dose and the gross tumor volume (GTV) D98 was 98.8% to 101%. Setup error robustness was patient anatomy dependent, and the potential minimum dose per fraction was always lower with SFO than with VMAT. The clinical target volume D98 was lower by 0.6% to 7.8% of the prescribed dose, and the GTV D98 was lower by 0.3% to 2.2% of the prescribed GTV dose. Conclusions: The SFO plans achieved significant sparing of normal tissue compared with the VMAT plans for midesophageal cancer. The target dose coverage in the SIB proton plans was less robust to random setup errors and might be unacceptable for certain patients. Robust optimization to ensure adequate target coverage of SIB proton plans might be beneficial.

  6. An Analysis of Plan Robustness for Esophageal Tumors: Comparing Volumetric Modulated Arc Therapy Plans and Spot Scanning Proton Planning

    PubMed Central

    Warren, Samantha; Partridge, Mike; Bolsi, Alessandra; Lomax, Anthony J.; Hurt, Chris; Crosby, Thomas; Hawkins, Maria A.

    2016-01-01

    Purpose Planning studies to compare x-ray and proton techniques and to select the most suitable technique for each patient have been hampered by the nonequivalence of several aspects of treatment planning and delivery. A fair comparison should compare similarly advanced delivery techniques from current clinical practice and also assess the robustness of each technique. The present study therefore compared volumetric modulated arc therapy (VMAT) and single-field optimization (SFO) spot scanning proton therapy plans created using a simultaneous integrated boost (SIB) for dose escalation in midesophageal cancer and analyzed the effect of setup and range uncertainties on these plans. Methods and Materials For 21 patients, SIB plans with a physical dose prescription of 2 Gy or 2.5 Gy/fraction in 25 fractions to planning target volume (PTV)50Gy or PTV62.5Gy (primary tumor with 0.5 cm margins) were created and evaluated for robustness to random setup errors and proton range errors. Dose–volume metrics were compared for the optimal and uncertainty plans, with P<.05 (Wilcoxon) considered significant. Results SFO reduced the mean lung dose by 51.4% (range 35.1%-76.1%) and the mean heart dose by 40.9% (range 15.0%-57.4%) compared with VMAT. Proton plan robustness to a 3.5% range error was acceptable. For all patients, the clinical target volume D98 was 95.0% to 100.4% of the prescribed dose and gross tumor volume (GTV) D98 was 98.8% to 101%. Setup error robustness was patient anatomy dependent, and the potential minimum dose per fraction was always lower with SFO than with VMAT. The clinical target volume D98 was lower by 0.6% to 7.8% of the prescribed dose, and the GTV D98 was lower by 0.3% to 2.2% of the prescribed GTV dose. Conclusions The SFO plans achieved significant sparing of normal tissue compared with the VMAT plans for midesophageal cancer. 
The target dose coverage in the SIB proton plans was less robust to random setup errors and might be unacceptable for certain patients. Robust optimization to ensure adequate target coverage of SIB proton plans might be beneficial. PMID:27084641

  7. Safe Maneuvering Envelope Estimation Based on a Physical Approach

    NASA Technical Reports Server (NTRS)

    Lombaerts, Thomas J. J.; Schuet, Stefan R.; Wheeler, Kevin R.; Acosta, Diana; Kaneshige, John T.

    2013-01-01

    This paper discusses a computationally efficient algorithm for estimating the safe maneuvering envelope of damaged aircraft. The algorithm performs a robust reachability analysis through an optimal control formulation while making use of time scale separation and taking into account uncertainties in the aerodynamic derivatives. This approach differs from others in that it is physically inspired. This more transparent approach allows the data to be interpreted at each step, and it is assumed that these physical models, based on flight dynamics theory, will therefore facilitate certification for future real-life applications.

  8. Robust extraction of the aorta and pulmonary artery from 3D MDCT image data

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2010-03-01

    Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast-enhanced and no-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. In the occasional event that more precise vascular extraction is desired or the automatic method fails, an alternative semi-automatic fail-safe method is available. The semi-automatic method extracts the vasculature by extending the medial axes into a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.

  9. Robust image matching via ORB feature and VFC for mismatch removal

    NASA Astrophysics Data System (ADS)

    Ma, Tao; Fu, Wenxing; Fang, Bin; Hu, Fangyu; Quan, Siwen; Ma, Jie

    2018-03-01

    Image matching lies at the base of many image processing and computer vision problems, such as object recognition or structure from motion. Current methods rely on good feature descriptors and mismatch-removal strategies for detection and matching. In this paper, we propose a robust image-matching approach based on the ORB feature and VFC for mismatch removal. ORB (Oriented FAST and Rotated BRIEF) is an outstanding feature descriptor, offering performance comparable to SIFT at lower computational cost. VFC (Vector Field Consensus) is a state-of-the-art mismatch-removal method. The experimental results demonstrate that our method is efficient and robust.

  10. Risk of poisoning in children and adolescents with ADHD: a systematic review and meta-analysis.

    PubMed

    Ruiz-Goikoetxea, Maite; Cortese, Samuele; Magallón, Sara; Aznárez-Sanado, Maite; Álvarez Zallo, Noelia; Luis, Elkin O; de Castro-Manglano, Pilar; Soutullo, Cesar; Arrondo, Gonzalo

    2018-05-15

    Poisoning, a subtype of physical injury, is an important hazard in children and youth. Individuals with ADHD may be at higher risk of poisoning. Here, we conducted a systematic review and meta-analysis to quantify this risk. Furthermore, since physical injuries likely share causal mechanisms with poisoning, we compared the relative risks of poisoning and injuries by pooling studies reporting both. As per our pre-registered protocol (PROSPERO ID CRD42017079911), we searched 114 databases through November 2017. From a pool of 826 potentially relevant references, screened independently by two researchers, nine studies (84,756 individuals with and 1,398,946 without the disorder) were retained. We pooled hazard and odds ratios using Robust Variance Estimation, a meta-analytic method designed to deal with non-independence of outcomes. We found that ADHD is associated with a significantly higher risk of poisoning (Relative Risk = 3.14, 95% Confidence Interval = 2.23 to 4.42). Results also indicated that the relative risk of poisoning is significantly higher than that of physical injuries when comparing individuals with and without ADHD (Beta coefficient = 0.686, 95% Confidence Interval = 0.166 to 1.206). These findings should inform clinical guidelines and public health programs aimed at reducing physical risks in children and adolescents with ADHD.
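
The basic pooling step underlying such a meta-analysis can be sketched as inverse-variance pooling of log relative risks (Robust Variance Estimation additionally handles dependent effect sizes; the study values below are hypothetical):

```python
import math

# Sketch of fixed-effect inverse-variance pooling of relative risks on the
# log scale, the building block of meta-analytic pooling. The RRs and
# standard errors below are made-up illustration values, not the paper's data.

def pool_log_rr(rrs, ses):
    """Pooled RR and 95% CI from per-study RRs and standard errors of log(RR)."""
    weights = [1.0 / se ** 2 for se in ses]
    log_pooled = sum(w * math.log(rr) for w, rr in zip(weights, rrs)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    lo = math.exp(log_pooled - 1.96 * se_pooled)
    hi = math.exp(log_pooled + 1.96 * se_pooled)
    return math.exp(log_pooled), (lo, hi)

rr, ci = pool_log_rr([2.8, 3.4, 3.1], [0.15, 0.20, 0.10])
```

A fixed-effect pool like this assumes independent studies; the paper's Robust Variance Estimation relaxes exactly that assumption.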

  11. A positivity-preserving, implicit defect-correction multigrid method for turbulent combustion

    NASA Astrophysics Data System (ADS)

    Wasserman, M.; Mor-Yossef, Y.; Greenberg, J. B.

    2016-07-01

    A novel, robust multigrid method for the simulation of turbulent and chemically reacting flows is developed. A survey of previous attempts at implementing multigrid for the problems at hand indicated extensive use of artificial stabilization to overcome numerical instability arising from non-linearity of turbulence and chemistry model source-terms, small-scale physics of combustion, and loss of positivity. These issues are addressed in the current work. The highly stiff Reynolds-averaged Navier-Stokes (RANS) equations, coupled with turbulence and finite-rate chemical kinetics models, are integrated in time using the unconditionally positive-convergent (UPC) implicit method. The scheme is successfully extended in this work for use with chemical kinetics models, in a fully-coupled multigrid (FC-MG) framework. To tackle the degraded performance of multigrid methods for chemically reacting flows, two major modifications are introduced with respect to the basic, Full Approximation Storage (FAS) approach. First, a novel prolongation operator that is based on logarithmic variables is proposed to prevent loss of positivity due to coarse-grid corrections. Together with the extended UPC implicit scheme, the positivity-preserving prolongation operator guarantees unconditional positivity of turbulence quantities and species mass fractions throughout the multigrid cycle. Second, to improve the coarse-grid-correction obtained in localized regions of high chemical activity, a modified defect correction procedure is devised, and successfully applied for the first time to simulate turbulent, combusting flows. The proposed modifications to the standard multigrid algorithm create a well-rounded and robust numerical method that provides accelerated convergence, while unconditionally preserving the positivity of model equation variables. 
Numerical simulations of various flows involving premixed combustion demonstrate that the proposed MG method accelerates convergence by a factor of up to eight relative to an equivalent single-grid method, and by a factor of two relative to an artificially-stabilized MG method.
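
The positivity-preserving idea behind the logarithmic prolongation operator can be sketched in one dimension (an assumed form based on the description above, not the authors' code):

```python
import math

# Sketch: prolongating a strictly positive quantity (e.g. a turbulence
# variable or species mass fraction) from a coarse to a fine grid by
# interpolating in log space. Linear interpolation of log(q) followed by
# exponentiation yields fine-grid values that are positive by construction,
# which plain linear interpolation of q cannot guarantee after a negative
# coarse-grid correction. The midpoint-refinement pattern is an assumption.

def prolongate_log(q_coarse):
    """Midpoint-refine a 1D array of positive values via log-space interpolation."""
    logs = [math.log(q) for q in q_coarse]
    fine = []
    for i in range(len(logs) - 1):
        fine.append(math.exp(logs[i]))
        fine.append(math.exp(0.5 * (logs[i] + logs[i + 1])))  # geometric mean
    fine.append(math.exp(logs[-1]))
    return fine

q_fine = prolongate_log([1e-8, 1.0, 3.0])  # positive even across 8 decades
```

Note that the interpolated midpoint is the geometric mean of its neighbors, so near-zero values never overshoot into negative territory.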

  12. Robust Economic Control Decision Method of Uncertain System on Urban Domestic Water Supply.

    PubMed

    Li, Kebai; Ma, Tianyi; Wei, Guo

    2018-03-31

    As China urbanizes rapidly, urban domestic water demand generally exhibits both a rising trend and seasonal cyclical fluctuation. A robust economic control decision method for dynamic uncertain systems is proposed in this paper. It is developed from the internal model principle and the pole allocation method, and it is applied to an urban domestic water supply system with a rising trend and seasonal cyclical fluctuation. To achieve this goal, a multiplicative model is first used to describe urban domestic water demand. Then, capital stock and labor stock are selected as the state vector, and investment and labor are designed as the control vector. Next, the compensator subsystem is devised in light of the internal model principle. Finally, using the state feedback control strategy and the pole allocation method, the multivariable robust economic control decision method is implemented. The implementation of this model can accomplish the urban domestic water supply control goal while remaining robust to parameter variations. The methodology presented in this study may be applied to water management systems in other parts of the world, provided the data used in this study are available. The robust control decision method in this paper is also applicable to tracking control problems as well as stabilization control problems of other general dynamic uncertain systems.
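
The state-feedback pole-allocation step can be sketched for a 2-state, 2-input system of the kind described above (all numbers hypothetical, not the paper's calibrated model):

```python
# Sketch: pole allocation for x[k+1] = A x[k] + B u[k] with a 2-dimensional
# state (e.g. capital stock, labor stock) and 2 controls (investment, labor).
# With B invertible, choosing K = B^{-1} (A - A_d) makes the closed loop
# A - B K equal to A_d, whose diagonal holds the desired stable poles.
# A, B, and the desired poles below are illustration values only.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]

A = [[1.05, 0.10], [0.02, 0.98]]    # open-loop dynamics (slightly unstable growth)
B = [[1.00, 0.00], [0.10, 1.00]]    # control effectiveness
A_d = [[0.60, 0.00], [0.00, 0.50]]  # desired closed-loop poles: 0.6 and 0.5

K = mat_mul(inv2(B), mat_sub(A, A_d))
closed = mat_sub(A, mat_mul(B, K))  # equals A_d up to rounding
```

The paper's method additionally builds a compensator from the internal model principle to track the rising-plus-seasonal demand; the sketch shows only the pole-placement core.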

  13. Robust Economic Control Decision Method of Uncertain System on Urban Domestic Water Supply

    PubMed Central

    Li, Kebai; Ma, Tianyi; Wei, Guo

    2018-01-01

    As China urbanizes rapidly, urban domestic water demand generally exhibits both a rising trend and seasonal cyclical fluctuation. A robust economic control decision method for dynamic uncertain systems is proposed in this paper. It is developed from the internal model principle and the pole allocation method, and it is applied to an urban domestic water supply system with a rising trend and seasonal cyclical fluctuation. To achieve this goal, a multiplicative model is first used to describe urban domestic water demand. Then, capital stock and labor stock are selected as the state vector, and investment and labor are designed as the control vector. Next, the compensator subsystem is devised in light of the internal model principle. Finally, using the state feedback control strategy and the pole allocation method, the multivariable robust economic control decision method is implemented. The implementation of this model can accomplish the urban domestic water supply control goal while remaining robust to parameter variations. The methodology presented in this study may be applied to water management systems in other parts of the world, provided the data used in this study are available. The robust control decision method in this paper is also applicable to tracking control problems as well as stabilization control problems of other general dynamic uncertain systems. PMID:29614749

  14. Simulation of process identification and controller tuning for flow control system

    NASA Astrophysics Data System (ADS)

    Chew, I. M.; Wong, F.; Bono, A.; Wong, K. I.

    2017-06-01

    The PID controller is undeniably the most popular method for controlling various industrial processes. The ability to tune its three elements has allowed the controller to meet the specific needs of industrial processes. This paper discusses the three control actions and the improvement of controller robustness through combining these actions in various forms. A plant model is simulated using the Process Control Simulator in order to evaluate controller performance. First, the open-loop response of the plant is studied by applying a step input and collecting the output data. A first-order-plus-dead-time (FOPDT) model of the physical plant is then formed using both Matlab-Simulink and the PRC method. Controller settings are then calculated to find the values of Kc and τi that give satisfactory closed-loop control. The closed-loop performance is analyzed through set-point tracking and disturbance rejection tests. To optimize overall system performance, the PID tuning is further refined (detuned) to ensure a consistent closed-loop response to set-point changes and disturbances. As a result, PB = 100 (%) and τi = 2.0 (s) are preferred for set-point tracking, while PB = 100 (%) and τi = 2.5 (s) are selected for rejecting the imposed disturbance. In short, the choice of tuning values likewise depends on the control objective for the stability performance of the overall physical model.
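
The closed-loop setup described above can be sketched as a PI controller acting on an FOPDT plant (plant gain, time constant, and dead time below are hypothetical; PB = 100 (%) is taken to map to Kc = 1 for a normalized span):

```python
# Sketch: explicit-Euler simulation of a PI loop around a
# first-order-plus-dead-time (FOPDT) plant. All numeric values are
# illustration assumptions, not the paper's identified model.

def simulate(Kc=1.0, tau_i=2.0, Kp=1.0, tau=3.0, theta=1.0, dt=0.1, t_end=60.0, sp=1.0):
    """Return the final process value after t_end seconds of closed-loop operation."""
    buf = [0.0] * int(theta / dt)   # dead-time buffer
    y = integ = 0.0
    for _ in range(int(t_end / dt)):
        e = sp - y
        integ += e * dt
        u = Kc * (e + integ / tau_i)          # PI control law
        buf.append(u)
        u_delayed = buf.pop(0)                # apply dead time
        y += dt * (Kp * u_delayed - y) / tau  # first-order lag
    return y

y_final = simulate(Kc=1.0, tau_i=2.0)  # settles near the setpoint of 1.0
```

Rerunning with τi = 2.5 s mimics the paper's disturbance-rejection tuning; integral action drives the steady-state error to zero in either case.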

  15. Black Hole Mergers as Probes of Structure Formation

    NASA Technical Reports Server (NTRS)

    Alicea-Munoz, E.; Miller, M. Coleman

    2008-01-01

    Intense structure formation and reionization occur at high redshift, yet there is currently little observational information about this very important epoch. Observations of gravitational waves from massive black hole (MBH) mergers can provide us with important clues about the formation of structures in the early universe. Past efforts have been limited to calculating merger rates using different models in which many assumptions are made about the specific values of physical parameters of the mergers, resulting in merger rate estimates that span a very wide range (0.1-10^4 mergers/year). Here we develop a semi-analytical, phenomenological model of MBH mergers that includes plausible combinations of several physical parameters, which we then turn around to determine how well observations with the Laser Interferometer Space Antenna (LISA) will be able to enhance our understanding of the universe during the critical z ∼ 5-30 structure formation era. We do this by generating synthetic LISA observable data (total BH mass, BH mass ratio, redshift, merger rates), which are then analyzed using a Markov Chain Monte Carlo method. This allows us to constrain the physical parameters of the mergers. We find that our methodology works well at estimating merger parameters, consistently giving results within 1σ of the input parameter values. We also discover that the number of merger events is a key discriminant among models. This helps our method be robust against observational uncertainties. Our approach, which at this stage constitutes a proof of principle, can be readily extended to physical models and to more general problems in cosmology and gravitational wave astrophysics.
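
The Markov Chain Monte Carlo step can be sketched with a generic Metropolis sampler on a toy one-parameter merger-rate likelihood (the model, counts, and tuning below are illustrative assumptions, not the authors' pipeline):

```python
import math
import random

# Sketch: Metropolis sampling of a constant merger rate from Poisson-counted
# events per observation window. Data and model are hypothetical.

random.seed(1)
observed = [3, 5, 4, 6, 2]   # made-up merger counts per window

def log_likelihood(rate, counts):
    """Poisson log-likelihood for a constant rate."""
    return sum(k * math.log(rate) - rate - math.lgamma(k + 1) for k in counts)

def metropolis(n_steps=5000, step=0.5):
    rate = 1.0
    ll = log_likelihood(rate, observed)
    samples = []
    for _ in range(n_steps):
        prop = rate + random.gauss(0, step)
        if prop > 0:
            ll_prop = log_likelihood(prop, observed)
            if math.log(random.random()) < ll_prop - ll:   # accept/reject
                rate, ll = prop, ll_prop
        samples.append(rate)
    return samples

samples = metropolis()
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # near the mean observed count
```

The real analysis samples several merger parameters jointly from synthetic LISA observables; the accept/reject mechanics are the same.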

  16. Close-range laser scanning in forests: towards physically based semantics across scales.

    PubMed

    Morsdorf, F; Kükenbrink, D; Schneider, F D; Abegg, M; Schaepman, M E

    2018-04-06

    Laser scanning with its unique measurement concept holds the potential to revolutionize the way we assess and quantify three-dimensional vegetation structure. Modern laser systems used at close range, be it on terrestrial, mobile or unmanned aerial platforms, provide dense and accurate three-dimensional data whose information just waits to be harvested. However, the transformation of such data to information is not as straightforward as for airborne and space-borne approaches, where typically empirical models are built using ground truth of target variables. Simpler variables, such as diameter at breast height, can be readily derived and validated. More complex variables, e.g. leaf area index, need a thorough understanding and consideration of the physical particularities of the measurement process and semantic labelling of the point cloud. Quantified structural models provide a framework for such labelling by deriving stem and branch architecture, a basis for many of the more complex structural variables. The physical information of the laser scanning process is still underused and we show how it could play a vital role in conjunction with three-dimensional radiative transfer models to shape the information retrieval methods of the future. Using such a combined forward and physically based approach will make methods robust and transferable. In addition, it avoids replacing observer bias from field inventories with instrument bias from different laser instruments. Still, an intensive dialogue with the users of the derived information is mandatory to potentially re-design structural concepts and variables so that they profit most of the rich data that close-range laser scanning provides.

  17. Robust phase retrieval of complex-valued object in phase modulation by hybrid Wirtinger flow method

    NASA Astrophysics Data System (ADS)

    Wei, Zhun; Chen, Wen; Yin, Tiantian; Chen, Xudong

    2017-09-01

    This paper presents a robust iterative algorithm, known as hybrid Wirtinger flow (HWF), for phase retrieval (PR) of complex objects from noisy diffraction intensities. Numerical simulations indicate that the HWF method consistently outperforms conventional PR methods in terms of both accuracy and convergence rate in multiple phase modulations. The proposed algorithm is also more robust to low oversampling ratios, loose constraints, and noisy environments. Furthermore, compared with traditional Wirtinger flow, sample complexity is largely reduced. It is expected that the proposed HWF method will find applications in the rapidly growing coherent diffractive imaging field for high-quality image reconstruction with multiple modulations, as well as other disciplines where PR is needed.
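
The plain Wirtinger-flow gradient step that HWF builds on can be sketched as follows (the hybrid method in the abstract adds further ingredients; problem sizes, the Gaussian sensing model, and the step size here are illustrative assumptions):

```python
import random

# Sketch: gradient descent on the intensity-mismatch loss
# f(z) = (1/2m) * sum_k (|<a_k, z>|^2 - y_k)^2 using the Wirtinger gradient.
# Sizes and step size are assumptions for illustration.

random.seed(0)
n, m = 4, 32
cg = lambda: complex(random.gauss(0, 1), random.gauss(0, 1))
x_true = [cg() for _ in range(n)]                       # unknown complex object
A = [[cg() for _ in range(n)] for _ in range(m)]        # random sensing vectors
y = [abs(sum(a[i] * x_true[i] for i in range(n))) ** 2 for a in A]  # intensities

def loss(z):
    return sum((abs(sum(a[i] * z[i] for i in range(n))) ** 2 - yk) ** 2
               for a, yk in zip(A, y)) / (2 * m)

def grad(z):
    """Wirtinger gradient of the intensity-mismatch loss."""
    g = [0j] * n
    for a, yk in zip(A, y):
        inner = sum(a[i] * z[i] for i in range(n))
        c = (abs(inner) ** 2 - yk) * inner
        for i in range(n):
            g[i] += c * a[i].conjugate()
    return [gi / m for gi in g]

z = [cg() for _ in range(n)]   # random initialization
loss_init = loss(z)
mu = 1e-4                      # small fixed step size
for _ in range(200):
    z = [zi - mu * gi for zi, gi in zip(z, grad(z))]
loss_final = loss(z)           # decreases from loss_init
```

A practical implementation would use spectral initialization and an adaptive step; the sketch shows only the descent direction that phase retrieval iterates on.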

  18. RSRE: RNA structural robustness evaluator

    PubMed Central

    Shu, Wenjie; Zheng, Zhiqiang; Wang, Shengqi

    2007-01-01

    Biological robustness, defined as the ability to maintain stable functioning in the face of various perturbations, is an important and fundamental topic in current biology, and has become a focus of numerous studies in recent years. Although structural robustness has been explored in several types of RNA molecules, the origins of robustness are still controversial. Computational analysis results are needed to make up for the lack of evidence of robustness in natural biological systems. The RNA structural robustness evaluator (RSRE) web server presented here provides a freely available online tool to quantitatively evaluate the structural robustness of RNA based on the widely accepted definition of neutrality. Several classical structure comparison methods are employed; five randomization methods are implemented to generate control sequences; sub-optimal predicted structures can be optionally utilized to mitigate the uncertainty of secondary structure prediction. With a user-friendly interface, the web application is easy to use. Intuitive illustrations are provided along with the original computational results to facilitate analysis. The RSRE will be helpful in the wide exploration of RNA structural robustness and will catalyze our understanding of RNA evolution. The RSRE web server is freely available at http://biosrv1.bmi.ac.cn/RSRE/ or http://biotech.bmi.ac.cn/RSRE/. PMID:17567615

  19. Guaranteeing robustness of structural condition monitoring to environmental variability

    NASA Astrophysics Data System (ADS)

    Van Buren, Kendra; Reilly, Jack; Neal, Kyle; Edwards, Harry; Hemez, François

    2017-01-01

    Advances in sensor deployment and computational modeling have allowed significant strides to be recently made in the field of Structural Health Monitoring (SHM). One widely used SHM strategy is to perform a vibration analysis where a model of the structure's pristine (undamaged) condition is compared with vibration response data collected from the physical structure. Discrepancies between model predictions and monitoring data can be interpreted as structural damage. Unfortunately, multiple sources of uncertainty must also be considered in the analysis, including environmental variability, unknown model functional forms, and unknown values of model parameters. Not accounting for these sources of uncertainty can lead to false-positives or false-negatives in the structural condition assessment. To manage the uncertainty, we propose a robust SHM methodology that combines three technologies. A time series algorithm is trained using "baseline" data to predict the vibration response, compare predictions to actual measurements collected on a potentially damaged structure, and calculate a user-defined damage indicator. The second technology handles the uncertainty present in the problem. An analysis of robustness is performed to propagate this uncertainty through the time series algorithm and obtain the corresponding bounds of variation of the damage indicator. The uncertainty description and robustness analysis are both inspired by the theory of info-gap decision-making. Lastly, an appropriate "size" of the uncertainty space is determined through physical experiments performed in laboratory conditions. Our hypothesis is that examining how the uncertainty space changes throughout time might lead to superior diagnostics of structural damage as compared to only monitoring the damage indicator. This methodology is applied to a portal frame structure to assess if the strategy holds promise for robust SHM. 
(Publication approved for unlimited, public release on October-28-2015, LA-UR-15-28442, unclassified.)
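
The info-gap robustness calculation described above can be sketched with a toy uncertainty model (an interval bound on the damage-indicator residual; the model and threshold are illustrative assumptions, not the authors' analysis):

```python
# Sketch of info-gap robustness: the robustness of a damage-alarm threshold is
# the largest uncertainty horizon alpha for which the worst-case damage
# indicator still stays below the threshold. The interval-bound uncertainty
# model and the numbers are hypothetical.

def worst_case_indicator(nominal_residual, alpha):
    """Worst-case indicator when the residual may deviate by up to alpha."""
    return abs(nominal_residual) + alpha

def robustness(nominal_residual, threshold, d_alpha=0.001, alpha_max=10.0):
    """Grow the horizon until the worst case first violates the threshold."""
    alpha = 0.0
    while (alpha < alpha_max and
           worst_case_indicator(nominal_residual, alpha + d_alpha) <= threshold):
        alpha += d_alpha
    return alpha

alpha_hat = robustness(nominal_residual=0.2, threshold=1.0)  # about 0.8
```

Tracking how this horizon shrinks or grows over time is the diagnostic idea proposed in the entry above.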

  20. A Robust In-Situ Warp-Correction Algorithm For VISAR Streak Camera Data at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.

    2015-01-12

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
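
Thin-plate-spline interpolation, the model class named above, can be sketched as follows (control points and displacements are hypothetical; a full warp correction would fit one TPS per image coordinate over the comb points):

```python
import math

# Sketch of thin-plate-spline (TPS) interpolation: fit
# f(x, y) = a0 + a1*x + a2*y + sum_i w_i * U(|p - p_i|), U(r) = r^2 log r,
# subject to the standard side conditions sum w = sum w*x = sum w*y = 0.
# The comb-point coordinates and displacements below are made up.

def tps_kernel(r):
    return 0.0 if r == 0.0 else r * r * math.log(r)

def solve(M, b):
    """Gaussian elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def tps_fit(points, values):
    n = len(points)
    M = [[0.0] * (n + 3) for _ in range(n + 3)]
    b = [0.0] * (n + 3)
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            M[i][j] = tps_kernel(math.hypot(xi - xj, yi - yj))
        M[i][n], M[i][n + 1], M[i][n + 2] = 1.0, xi, yi
        M[n][i], M[n + 1][i], M[n + 2][i] = 1.0, xi, yi
        b[i] = values[i]
    return solve(M, b)

def tps_eval(points, coef, x, y):
    n = len(points)
    s = coef[n] + coef[n + 1] * x + coef[n + 2] * y
    return s + sum(coef[i] * tps_kernel(math.hypot(x - xi, y - yi))
                   for i, (xi, yi) in enumerate(points))

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dx = [0.02, -0.01, 0.00, 0.03]   # measured x-displacements at comb points
coef = tps_fit(pts, dx)          # fitted spline reproduces dx at pts
```

The fitted surface passes exactly through the control points, which is what makes TPS attractive for modeling smooth streak-camera distortions from a sparse calibration comb.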

  1. Anxiety Sensitivity and Cannabis Use-Related Problems: The Impact of Race

    PubMed Central

    Dean, Kimberlye E.; Ecker, Anthony H.; Buckner, Julia D.

    2017-01-01

    Background and Objectives Cannabis is the most widely used illicit substance among young adults. Anxiety sensitivity (AS; i.e., fear of anxiety-related symptoms) is positively related to coping motives for cannabis use (which are robustly positively linked to cannabis-related problems). However, AS is unrelated to cannabis use-related problems. Yet, extant studies have been conducted on primarily White samples. It may be that among Black students, AS-physical concerns (i.e., fear of physical anxiety-related sensations) are related to cannabis problems given that Black individuals are more likely than White individuals to report experiencing greater and more intense somatic symptoms when experiencing anxiety. Black individuals may rely on cannabis to cope with fear of these somatic symptoms, continuing to use despite cannabis-related problems. Methods The current study tested whether race moderated the relation between AS-physical concerns and cannabis problems among 102 (85.3% female) current cannabis-using undergraduates who were either non-Hispanic Black (n = 51) or non-Hispanic White (n = 51). Results After controlling for frequency of cannabis use, income, and gender, race significantly moderated the relation between AS-physical concerns and cannabis use-related problems such that AS-physical concerns significantly predicted cannabis-related problems among Black and not White individuals. Discussion and Conclusions Findings highlight the importance of considering race in identifying psychosocial predictors of cannabis-related problems. Scientific Significance Intervention strategies for Black cannabis users may benefit from examining and targeting AS-physical concerns. PMID:28295843

  2. Impact of Physical Activity Interventions on Blood Pressure in Brazilian Populations

    PubMed Central

    Bento, Vivian Freitas Rezende; Albino, Flávia Barbizan; de Moura, Karen Fernandes; Maftum, Gustavo Jorge; dos Santos, Mauro de Castro; Guarita-Souza, Luiz César; Faria Neto, José Rocha; Baena, Cristina Pellegrino

    2015-01-01

    Background High blood pressure is associated with cardiovascular disease, which is the leading cause of mortality in the Brazilian population. Lifestyle changes, including physical activity, are important for lowering blood pressure levels and decreasing the costs associated with outcomes. Objective Assess the impact of physical activity interventions on blood pressure in Brazilian individuals. Methods Meta-analysis and systematic review of studies published until May 2014, retrieved from several health sciences databases. Seven studies with 493 participants were included. The analysis included parallel studies of physical activity interventions in adult populations in Brazil with a description of blood pressure (mmHg) before and after the intervention in the control and intervention groups. Results Of 390 retrieved studies, eight matched the proposed inclusion criteria for the systematic review and seven randomized clinical trials were included in the meta-analysis. Physical activity interventions included aerobic and resistance exercises. There was a reduction of -10.09 (95% CI: -18.76 to -1.43 mmHg) in the systolic and -7.47 (95% CI: -11.30 to -3.63 mmHg) in the diastolic blood pressure. Conclusions Available evidence on the effects of physical activity on blood pressure in the Brazilian population shows a homogeneous and significant effect on both systolic and diastolic blood pressure. However, the strength of the included studies was low and the methodological quality was also low and/or fair. Larger studies with more rigorous methodology are necessary to build robust evidence. PMID:26016783

  3. Robust estimation for ordinary differential equation models.

    PubMed

    Cao, J; Wang, L; Xu, J

    2011-12-01

    Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.
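    A heavily simplified sketch of the robust-versus-ordinary contrast (not the paper's nested penalized-smoothing estimator): estimate the rate constant of dx/dt = -θx from noisy data containing gross outliers, once with a squared loss and once with a Huber loss. All data and constants below are invented.

```python
import numpy as np
from scipy.optimize import least_squares

theta_true, amp = 0.8, 5.0              # x(t) = amp * exp(-theta * t)
t = np.linspace(0.0, 5.0, 60)
rng = np.random.default_rng(1)
y = amp * np.exp(-theta_true * t) + 0.05 * rng.standard_normal(t.size)
y[::15] += 2.0                          # inject a few gross outliers

def residuals(theta):
    return amp * np.exp(-theta[0] * t) - y

# The squared loss is pulled toward the outliers; the Huber loss downweights
# residuals larger than f_scale, limiting their influence on the estimate.
ols = least_squares(residuals, x0=[0.3]).x[0]
rob = least_squares(residuals, x0=[0.3], loss="huber", f_scale=0.1).x[0]
print(ols, rob)   # the robust estimate sits closer to theta_true = 0.8
```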

  4. Closed-loop and robust control of quantum systems.

    PubMed

    Chen, Chunlin; Wang, Lin-Cheng; Wang, Yuanlong

    2013-01-01

    For most practical quantum control systems, it is important and difficult to attain robustness and reliability due to unavoidable uncertainties in the system dynamics or models. Three typical approaches, namely closed-loop learning control, feedback control, and robust control, have proved effective for addressing these problems. This work presents a self-contained survey on the closed-loop and robust control of quantum systems, as well as a brief introduction to a selection of basic theories and methods in this research area, to provide interested readers with a general idea for further studies. In the area of closed-loop learning control of quantum systems, we survey and introduce such learning control methods as gradient-based methods, genetic algorithms (GA), and reinforcement learning (RL) methods from a unified point of view of exploring the quantum control landscapes. For the feedback control approach, the paper surveys three control strategies including Lyapunov control, measurement-based control, and coherent-feedback control. Then such topics in the field of quantum robust control as H(∞) control, sliding mode control, quantum risk-sensitive control, and quantum ensemble control are reviewed. The paper concludes with a perspective of future research directions that are likely to attract more attention.

  5. Robust EM Continual Reassessment Method in Oncology Dose Finding

    PubMed Central

    Yuan, Ying; Yin, Guosheng

    2012-01-01

    The continual reassessment method (CRM) is a commonly used dose-finding design for phase I clinical trials. Practical applications of this method have been restricted by two limitations: (1) the requirement that the toxicity outcome needs to be observed shortly after the initiation of the treatment; and (2) the potential sensitivity to the prespecified toxicity probability at each dose. To overcome these limitations, we naturally treat the unobserved toxicity outcomes as missing data, and use the expectation-maximization (EM) algorithm to estimate the dose toxicity probabilities based on the incomplete data to direct dose assignment. To enhance the robustness of the design, we propose prespecifying multiple sets of toxicity probabilities, each set corresponding to an individual CRM model. We carry out these multiple CRMs in parallel, across which model selection and model averaging procedures are used to make more robust inference. We evaluate the operating characteristics of the proposed robust EM-CRM designs through simulation studies and show that the proposed methods satisfactorily resolve both limitations of the CRM. Besides improving the maximum tolerated dose (MTD) selection percentage, the new designs dramatically shorten the duration of the trial, and are robust to the prespecification of the toxicity probabilities. PMID:22375092
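    The multiple-skeleton, model-averaging idea can be sketched with a standard power-model CRM whose posterior is computed on a parameter grid; the EM handling of not-yet-observed toxicities is omitted, and the skeletons and trial data below are made up.

```python
import numpy as np

# Two candidate skeletons (prior guesses of the toxicity probability per dose).
skeletons = [np.array([0.05, 0.10, 0.20, 0.30, 0.50]),
             np.array([0.10, 0.20, 0.30, 0.40, 0.50])]
target = 0.30                         # target toxicity probability

# Hypothetical accrued data per dose: patients treated and toxicities seen.
n_pat = np.array([3, 3, 6, 3, 0])
n_tox = np.array([0, 0, 1, 2, 0])

a = np.linspace(-3.0, 3.0, 601)       # grid over the power-model parameter
da = a[1] - a[0]
prior = np.exp(-a**2 / 4.0) / np.sqrt(4.0 * np.pi)   # a ~ N(0, 2)

evidence, post_tox = [], []
for skel in skeletons:
    p = skel[None, :] ** np.exp(a)[:, None]           # p_d(a) = skeleton_d^exp(a)
    lik = np.prod(p**n_tox * (1 - p)**(n_pat - n_tox), axis=1)
    w = lik * prior
    z = w.sum() * da                                  # marginal likelihood
    evidence.append(z)
    post_tox.append((w[:, None] * p).sum(axis=0) * da / z)

# Average the per-skeleton posterior toxicity curves by model weight, then
# recommend the dose whose averaged toxicity is closest to the target.
mw = np.array(evidence) / sum(evidence)
p_avg = mw[0] * post_tox[0] + mw[1] * post_tox[1]
mtd = int(np.argmin(np.abs(p_avg - target)))
print(mtd, np.round(p_avg, 3))
```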

  6. Robust and Blind 3D Mesh Watermarking in Spatial Domain Based on Faces Categorization and Sorting

    NASA Astrophysics Data System (ADS)

    Molaei, Amir Masoud; Ebrahimnezhad, Hossein; Sedaaghi, Mohammad Hossein

    2016-06-01

    In this paper, a 3D watermarking algorithm in spatial domain is presented with blind detection. In the proposed method, a negligible visual distortion is observed in the host model. Initially, a preprocessing step is applied to the 3D model to make it robust against geometric transformation attacks. Then, a number of triangle faces are determined as mark triangles using a novel systematic approach in which faces are categorized and sorted robustly. To enhance watermark recovery after attacks, the block watermarks are encoded using a Reed-Solomon block error-correcting code before being embedded into the mark triangles. Next, the encoded watermarks are embedded in spherical coordinates. The proposed method is robust against additive noise, mesh smoothing, and quantization attacks. It is also robust against geometric transformation and vertex and face reordering attacks. Moreover, the proposed algorithm is designed so that it is robust against the cropping attack. Simulation results confirm that the watermarked models exhibit very low distortion if the control parameters are selected properly. Comparison with other methods demonstrates that the proposed method performs well against mesh smoothing attacks.

  7. Robustness of fit indices to outliers and leverage observations in structural equation modeling.

    PubMed

    Yuan, Ke-Hai; Zhong, Xiaoling

    2013-06-01

    Normal-distribution-based maximum likelihood (NML) is the most widely used method in structural equation modeling (SEM), although practical data tend to be nonnormally distributed. The effect of nonnormally distributed data or data contamination on the normal-distribution-based likelihood ratio (LR) statistic is well understood due to many analytical and empirical studies. In SEM, fit indices are used as widely as the LR statistic. In addition to NML, robust procedures have been developed for more efficient and less biased parameter estimates with practical data. This article studies the effect of outliers and leverage observations on fit indices following NML and two robust methods. Analysis and empirical results indicate that good leverage observations following NML and one of the robust methods lead most fit indices to give more support to the substantive model. While outliers tend to make a good model superficially bad according to many fit indices following NML, they have little effect on those following the two robust procedures. Implications of the results to data analysis are discussed, and recommendations are provided regarding the use of estimation methods and interpretation of fit indices. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  8. Robust numerical solution of the reservoir routing equation

    NASA Astrophysics Data System (ADS)

    Fiorentini, Marcello; Orlandini, Stefano

    2013-09-01

    The robustness of numerical methods for the solution of the reservoir routing equation is evaluated. The methods considered in this study are: (1) the Laurenson-Pilgrim method, (2) the fourth-order Runge-Kutta method, and (3) the fixed order Cash-Karp method. Method (1) is unable to handle nonmonotonic outflow rating curves. Method (2) is found to fail under critical conditions, especially at the end of inflow recession limbs, when large time steps (greater than 12 min in this application) are used. Method (3) is computationally intensive and does not overcome the limitations of method (2). The limitations of method (2) can be efficiently overcome by reducing the time step in the critical phases of the simulation so as to ensure that the water level remains inside the domains of the storage function and the outflow rating curve. Incorporating a simple backstepping procedure that implements this control into method (2) yields a robust and accurate reservoir routing method that can be safely used in distributed time-continuous catchment models.
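    The backstepping control described above (retry a step with a smaller time step whenever the state leaves its admissible domain) can be sketched around a standard RK4 step for dS/dt = I(t) - O(S). The inflow hydrograph and the linear rating curve O = S/k below are invented; a different rating curve would simply replace `outflow`.

```python
import numpy as np

def inflow(t):                        # synthetic inflow hydrograph (m^3/s)
    return 20.0 * np.exp(-((t - 3600.0) / 1200.0) ** 2)

def outflow(s):                       # hypothetical linear rating: O = S / k
    return s / 1800.0

def rk4_step(s, t, dt):
    f = lambda si, ti: inflow(ti) - outflow(si)
    k1 = f(s, t)
    k2 = f(s + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(s + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(s + dt * k3, t + dt)
    return s + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def route(s0, t_end, dt):
    """RK4 with backstepping: if a step drives storage outside its admissible
    domain (here simply s >= 0), retry the step with a halved time step."""
    t, s, trace = 0.0, s0, []
    while t < t_end:
        h = min(dt, t_end - t)
        s_new = rk4_step(s, t, h)
        while s_new < 0.0 and h > 1e-6:
            h *= 0.5                  # backstep: reduce dt in the critical phase
            s_new = rk4_step(s, t, h)
        s, t = s_new, t + h
        trace.append((t, s))
    return trace

trace = route(s0=1000.0, t_end=7200.0, dt=600.0)
print(trace[-1])
```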

  9. Curvilinear immersed-boundary method for simulating unsteady flows in shallow natural streams with arbitrarily complex obstacles

    NASA Astrophysics Data System (ADS)

    Kang, Seokkoo; Borazjani, Iman; Sotiropoulos, Fotis

    2008-11-01

    Unsteady 3D simulation of flows in natural streams is a challenging task due to the complexity of the bathymetry, the shallowness of the flow, and the presence of multiple natural and man-made obstacles. This work is motivated by the need to develop a powerful numerical method for simulating such flows using coherent-structure-resolving turbulence models. We employ the curvilinear immersed boundary method of Ge and Sotiropoulos (Journal of Computational Physics, 2007) and address the critical issue of numerical efficiency in large aspect ratio computational domains and grids such as those encountered in long and shallow open channels. We show that the matrix-free Newton-Krylov method for solving the momentum equations coupled with an algebraic multigrid method with incomplete LU preconditioner for solving the Poisson equation yields a robust and efficient procedure for obtaining time-accurate solutions in such problems. We demonstrate the potential of the numerical approach by carrying out a direct numerical simulation of flow in a long and shallow meandering stream with multiple hydraulic structures.

  10. Profile Optimization Method for Robust Airfoil Shape Optimization in Viscous Flow

    NASA Technical Reports Server (NTRS)

    Li, Wu

    2003-01-01

    Simulation results obtained by using FUN2D for robust airfoil shape optimization in transonic viscous flow are included to show the potential of the profile optimization method for generating fairly smooth optimal airfoils with no off-design performance degradation.

  11. TEMPLATES: Targeting Extremely Magnified Panchromatic Lensed Arcs and Their Extended Star Formation

    NASA Astrophysics Data System (ADS)

    Rigby, Jane; Vieira, Joaquin; Bayliss, M.; Fischer, T.; Florian, M.; Gladders, M.; Gonzalez, A.; Law, D.; Marrone, D.; Phadke, K.; Sharon, K.; Spilker, J.

    2017-11-01

    We propose high signal-to-noise NIRSpec and MIRI IFU spectroscopy, with accompanying imaging, for 4 gravitationally lensed galaxies at 1

  12. A Novel Face-on-Face Contact Method for Nonlinear Solid Mechanics

    NASA Astrophysics Data System (ADS)

    Wopschall, Steven Robert

    The implicit solution to contact problems in nonlinear solid mechanics poses many difficulties. Traditional node-to-segment methods may suffer from locking and experience contact force chatter in the presence of sliding. More recent developments include mortar based methods, which resolve local contact interactions over face-pairs and feature a kinematic constraint in integral form that smoothes contact behavior, especially in the presence of sliding. These methods have been shown to perform well in the presence of geometric nonlinearities and are demonstrably more robust than node-to-segment methods. These methods are typically biased, however, interpolating contact tractions and gap equations on a designated non-mortar face, which leads to an asymmetry in the formulation. Another challenge is constraint enforcement. The general selection of the active set of constraints is fraught with difficulty, often leading to non-physical solutions and easily resulting in missed face-pair interactions. Details on reliable constraint enforcement methods are lacking in the greater contact literature. This work presents an unbiased contact formulation utilizing a median-plane methodology. Up to linear polynomials are used for the discrete pressure representation, and integral gap constraints are enforced using a novel subcycling procedure. This procedure reliably determines the active set of contact constraints, leading to physical and kinematically admissible solutions devoid of heuristics and user action. The contact method presented herein successfully solves difficult quasi-static contact problems in the implicit computational setting. These problems feature finite deformations, material nonlinearity, and complex interface geometries, all of which are challenging characteristics for contact implementations and constraint enforcement algorithms. The subcycling procedure is a key feature of this method, handling active constraint selection for complex interfaces and mesh geometries.

  13. Spectrodirectional Investigation of a Geometric-Optical Canopy Reflectance Model by Laboratory Simulation

    NASA Astrophysics Data System (ADS)

    Stanford, Adam Christopher

    Canopy reflectance models (CRMs) can accurately estimate vegetation canopy biophysical-structural information such as Leaf Area Index (LAI) inexpensively using satellite imagery. The strict physical basis which geometric-optical CRMs employ to mathematically link canopy bidirectional reflectance and structure allows for the tangible replication of a CRM's geometric abstraction of a canopy in the laboratory, enabling robust CRM validation studies. To this end, the ULGS-2 goniometer was used to obtain multiangle, hyperspectral (Spectrodirectional) measurements of a specially-designed tangible physical model forest, developed based upon the Geometric-Optical Mutual Shadowing (GOMS) CRM, at three different canopy cover densities. GOMS forward-modelled reflectance values had high levels of agreement with ULGS-2 measurements, with obtained reflectance RMSE values ranging from 0.03% to 0.1%. Canopy structure modelled via GOMS Multiple-Forward-Mode (MFM) inversion had varying levels of success. The methods developed in this thesis can potentially be extended to more complex CRMs through the implementation of 3D printing.

  14. SAFSIM theory manual: A computer program for the engineering simulation of flow systems

    NASA Astrophysics Data System (ADS)

    Dobranich, Dean

    1993-12-01

    SAFSIM (System Analysis Flow SIMulator) is a FORTRAN computer program for simulating the integrated performance of complex flow systems. SAFSIM provides sufficient versatility to allow the engineering simulation of almost any system, from a backyard sprinkler system to a clustered nuclear reactor propulsion system. In addition to versatility, speed and robustness are primary SAFSIM development goals. SAFSIM contains three basic physics modules: (1) a fluid mechanics module with flow network capability; (2) a structure heat transfer module with multiple convection and radiation exchange surface capability; and (3) a point reactor dynamics module with reactivity feedback and decay heat capability. Any or all of the physics modules can be implemented, as the problem dictates. SAFSIM can be used for compressible and incompressible, single-phase, multicomponent flow systems. Both the fluid mechanics and structure heat transfer modules employ a one-dimensional finite element modeling approach. This document contains a description of the theory incorporated in SAFSIM, including the governing equations, the numerical methods, and the overall system solution strategies.

  15. Preparation of Cobalt-Based Electrodes by Physical Vapor Deposition on Various Nonconductive Substrates for Electrocatalytic Water Oxidation.

    PubMed

    Wu, Yizhen; Wang, Le; Chen, Mingxing; Jin, Zhaoxia; Zhang, Wei; Cao, Rui

    2017-12-08

    Artificial photosynthesis requires efficient anodic electrode materials for water oxidation. Cobalt metal thin films are prepared through facile physical vapor deposition (PVD) on various nonconductive substrates, including regular and quartz glass, mica sheet, polyimide, and polyethylene terephthalate (PET). Subsequent surface electrochemical modification by cyclic voltammetry (CV) renders these films active for electrocatalytic water oxidation, reaching a current density of 10 mA cm⁻² at a low overpotential of 330 mV in 1.0 M KOH solution. These electrodes are robust, with unchanged activity throughout prolonged chronopotentiometry measurements. This work thus shows that the combination of PVD and CV is a valuable and convenient route to fabricate active electrodes on various nonconductive substrates, particularly flexible polyimide and PET substrates. This efficient, safe, and convenient method can potentially be expanded to many other electrochemical applications. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Physics and Robotic Sensing -- the good, the bad, and approaches to making it work

    NASA Astrophysics Data System (ADS)

    Huff, Brian

    2011-03-01

    All of the technological advances that have benefited consumer electronics have direct application to robotics. Technological advances have resulted in the dramatic reduction in size, cost, and weight of computing systems, while simultaneously doubling computational speed every eighteen months. The same manufacturing advancements that have enabled this rapid increase in computational power are now being leveraged to produce small, powerful and cost-effective sensing technologies applicable for use in mobile robotics applications. Despite the increase in computing and sensing resources available to today's robotic systems developers, there are sensing problems typically found in unstructured environments that continue to frustrate the widespread use of robotics and unmanned systems. This talk presents how physics has contributed to the creation of the technologies that are making modern robotics possible. The talk discusses theoretical approaches to robotic sensing that appear to suffer when they are deployed in the real world. Finally the author presents methods being used to make robotic sensing more robust.

  17. Secure communications using nonlinear silicon photonic keys.

    PubMed

    Grubel, Brian C; Bosworth, Bryan T; Kossey, Michael R; Cooper, A Brinton; Foster, Mark A; Foster, Amy C

    2018-02-19

    We present a secure communication system constructed using pairs of nonlinear photonic physical unclonable functions (PUFs) that harness physical chaos in integrated silicon micro-cavities. Compared to a large, electronically stored one-time pad, our method provisions large amounts of information within the intrinsically complex nanostructure of the micro-cavities. By probing a micro-cavity with a rapid sequence of spectrally-encoded ultrafast optical pulses and measuring the lightwave responses, we experimentally demonstrate the ability to extract 2.4 Gb of key material from a single micro-cavity device. Subsequently, in a secure communication experiment with pairs of devices, we achieve bit error rates below 10⁻⁵ at code rates of up to 0.1. The PUFs' responses are never transmitted over the channel or stored in digital memory, thus enhancing the security of the system. Additionally, the micro-cavity PUFs are extremely small, inexpensive, robust, and fully compatible with telecommunications infrastructure, components, and electronic fabrication. This approach can serve one-time pad or public key exchange applications where high security is required.

  18. Stand alone, low current measurements on possible sensing platforms via Arduino Uno microcontroller with modified commercially available sensors

    NASA Astrophysics Data System (ADS)

    Tanner, Meghan; Henson, Gabriel; Senevirathne, Indrajith

    The advent of cost-effective solid-state sensors has spurred an immense interest in microcontrollers, in particular Arduino microcontrollers, including for serious engineering and physical science applications, owing to their versatility and robustness. An Arduino microcontroller coupled with a commercially available sensor has been used to methodically measure, record, and explore low currents, low voltages, and corresponding dissipated power towards assessing secondary physical properties in a select set of engineered systems. The system was assembled on a breadboard with wire and simple soldering, using an Arduino Uno with an ATmega328P microcontroller connected to a PC. The microcontroller was programmed with the Arduino software, while the bootloader was used to upload the code. A high-side INA169 current shunt monitor was used to measure the corresponding low to ultra-low currents and voltages. A collection of measurements was obtained via the sensor and compared with measurements from standardized devices to assess reliability and uncertainty. Some sensors were modified/hacked to improve the sensitivity of the measurements.
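    For the INA169 monitor mentioned here, the datasheet transfer function is V_out = I·R_S·R_L / 1 kΩ, so converting an ADC reading to a current is a one-line calculation. The resistor values below are assumptions matching a common breakout-board configuration (R_S = 0.1 Ω, R_L = 10 kΩ, giving 1 V per ampere), not values stated in the record.

```python
def ina169_current(adc_counts, vref=5.0, adc_max=1023, r_shunt=0.1, r_load=10_000.0):
    """Convert a 10-bit Arduino ADC reading of the INA169 output to amperes.

    Datasheet transfer: V_out = I * r_shunt * r_load / 1000 ohm, so with the
    assumed breakout defaults the scale works out to exactly 1 V per ampere.
    """
    v_out = adc_counts * vref / adc_max
    return v_out * 1000.0 / (r_shunt * r_load)

print(ina169_current(512))   # mid-scale reading -> about 2.5 A with the defaults
```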

  19. Galaxy interactions and strength of nuclear activity

    NASA Technical Reports Server (NTRS)

    Simkin, S. M.

    1990-01-01

    Analysis of data in the literature for differential velocities and projected separations of nearby Seyfert galaxies with possible companions shows a clear difference in projected separations between type 1's and type 2's. This kinematic difference between the two activity classes reinforces other independent evidence that their different nuclear characteristics are related to a non-nuclear physical distinction between the two classes. The differential velocities and projected separations of the galaxy pairs in this sample yield mean galaxy masses, sizes, and mass to light ratios which are consistent with those found by the statistical methods of Karachentsev. Although the galaxy sample discussed here is too small and too poorly defined to provide robust support for these conclusions, the results strongly suggest that nuclear activity in Seyfert galaxies is associated with gravitational perturbations from companion galaxies, and that there are physical distinctions between the host companions of Seyfert 1 and Seyfert 2 nuclei which may depend both on the environment and the structure of the host galaxy itself.

  20. The cost of changing physical activity behaviour: evidence from a "physical activity pathway" in the primary care setting

    PubMed Central

    2011-01-01

    Background The 'Physical Activity Care Pathway' (a pilot for the 'Let's Get Moving' policy) is a systematic approach to integrating physical activity promotion into the primary care setting. It combines several methods reported to support behavioural change, including brief interventions, motivational interviewing, goal setting, providing written resources, and follow-up support. This paper compares costs falling on the UK National Health Service (NHS) of implementing the care pathway using two different recruitment strategies and provides initial insights into the cost of changing physical activity behaviour. Methods A combination of a time driven variant of activity based costing, audit data through EMIS and a survey of practice managers provided patient-level cost data for 411 screened individuals. Self reported physical activity data of 70 people completing the care pathway at three months was compared with baseline using a regression based 'difference in differences' approach. Deterministic and probabilistic sensitivity analyses in combination with hypothesis testing were used to judge how robust the findings are to key assumptions and to assess the uncertainty around estimates of the cost of changing physical activity behaviour. Results It cost £53 (SD 7.8) per patient completing the PACP in opportunistic centres and £191 (SD 39) at disease register sites. The completer rate was higher in disease register centres (27.3% vs. 16.2%) and the difference in differences in time spent on physical activity was 81.32 (SE 17.16) minutes/week in patients completing the PACP; so that the incremental cost of converting one sedentary adult to an 'active state' of 150 minutes of moderate intensity physical activity per week amounts to £886.50 in disease register practices, compared to opportunistic screening. Conclusions Disease register screening is more costly than opportunistic patient recruitment. However, the additional costs come with a higher completion rate and better outcomes in terms of behavioural change in patients completing the care pathway. Further research is needed to rigorously evaluate intervention efficiency and to assess the link between behavioural change and changes in quality adjusted life years (QALYs). PMID:21605400
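    The 'difference in differences' quantity used in this record reduces to arithmetic on group means: the pre/post change in the intervention group net of the change in the comparison group. The minutes-per-week numbers below are invented for illustration, not the study's data.

```python
import numpy as np

# Hypothetical minutes/week of physical activity before and after the pathway.
treat_pre  = np.array([60.0, 90.0, 45.0, 70.0])
treat_post = np.array([150.0, 160.0, 120.0, 140.0])
ctrl_pre   = np.array([55.0, 80.0, 50.0, 75.0])
ctrl_post  = np.array([60.0, 85.0, 52.0, 88.0])

# Change in the treated group net of the secular change seen in controls.
did = (treat_post.mean() - treat_pre.mean()) - (ctrl_post.mean() - ctrl_pre.mean())
print(did)   # 70.0 extra minutes/week attributable to the intervention
```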

  1. Numerical solution of the Saint-Venant equations by an efficient hybrid finite-volume/finite-difference method

    NASA Astrophysics Data System (ADS)

    Lai, Wencong; Khan, Abdul A.

    2018-04-01

    A computationally efficient hybrid finite-volume/finite-difference method is proposed for the numerical solution of the Saint-Venant equations in one-dimensional open channel flows. The method adopts a mass-conservative finite volume discretization for the continuity equation and a semi-implicit finite difference discretization for the dynamic-wave momentum equation. The spatial discretization of the convective flux term in the momentum equation employs an upwind scheme, and the water-surface gradient term is discretized using three different schemes. The performance of the numerical method is investigated in terms of efficiency and accuracy using various examples, including steady flow over a bump, dam-break flow over wet and dry downstream channels, wetting and drying in a parabolic bowl, and dam-break floods in laboratory physical models. Numerical solutions from the hybrid method are compared with solutions from a finite volume method along with analytic solutions or experimental measurements. The comparisons demonstrate that the hybrid method is efficient, accurate, and robust in modeling various flow scenarios, including subcritical, supercritical, and transcritical flows. In this method, the QUICK scheme for the surface slope discretization is more accurate and less diffusive than the center difference and the weighted average schemes.
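    As a minimal stand-in for the upwind treatment of the convective flux term (shown here on scalar advection rather than the full momentum equation), a first-order upwind update differences against the flow direction; the grid, speed, and initial hump are invented.

```python
import numpy as np

def upwind_step(q, u, dx, dt):
    """First-order upwind update: difference against the flow direction."""
    if u >= 0.0:
        dq = q - np.roll(q, 1)        # backward difference for positive speed
    else:
        dq = np.roll(q, -1) - q       # forward difference for negative speed
    return q - u * dt / dx * dq

nx, dx, u = 200, 1.0, 1.0
dt = 0.5 * dx / abs(u)                # Courant number 0.5 keeps the scheme stable
x = np.arange(nx) * dx
q = np.exp(-((x - 50.0) / 5.0) ** 2)  # initial hump centred at x = 50

for _ in range(100):                  # advect 50 length units to the right
    q = upwind_step(q, u, dx, dt)

print(x[np.argmax(q)], q.max())       # peak near x = 100, smeared by diffusion
```

The first-order scheme is robust but diffusive, which is exactly the trade-off motivating the higher-order QUICK discretization mentioned in the abstract.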

  2. Some conservation issues for the dynamical cores of NWP and climate models

    NASA Astrophysics Data System (ADS)

    Thuburn, J.

    2008-03-01

    The rationale for designing atmospheric numerical model dynamical cores with certain conservation properties is reviewed. The conceptual difficulties associated with the multiscale nature of realistic atmospheric flow, and its lack of time-reversibility, are highlighted. A distinction is made between robust invariants, which are conserved or nearly conserved in the adiabatic and frictionless limit, and non-robust invariants, which are not conserved in the limit even though they are conserved by exactly adiabatic frictionless flow. For non-robust invariants, a further distinction is made between processes that directly transfer some quantity from large to small scales, and processes involving a cascade through a continuous range of scales; such cascades may either be explicitly parameterized, or handled implicitly by the dynamical core numerics, accepting the implied non-conservation. An attempt is made to estimate the relative importance of different conservation laws. It is argued that satisfactory model performance requires spurious sources of a conservable quantity to be much smaller than any true physical sources; for several conservable quantities the magnitudes of the physical sources are estimated in order to provide benchmarks against which any spurious sources may be measured.

  3. Analysis of Infrared Signature Variation and Robust Filter-Based Supersonic Target Detection

    PubMed Central

    Sun, Sun-Gu; Kim, Kyung-Tae

    2014-01-01

    The difficulty of small infrared target detection originates from the variations of infrared signatures. This paper presents the fundamental physics of infrared target variations and reports the results of variation analysis of infrared images acquired using a long wave infrared camera over a 24-hour period for different types of backgrounds. The detection parameters, such as the signal-to-clutter ratio, were compared according to the recording time, temperature, and humidity. Through variation analysis, robust target detection methodologies are derived by controlling thresholds and designing a temporal contrast filter to achieve a high detection rate and a low false alarm rate. Experimental results validate the robustness of the proposed scheme by applying it to synthetic and real infrared sequences. PMID:24672290
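    The signal-to-clutter ratio (SCR) mentioned above is commonly computed as the target-versus-background mean contrast divided by the background (clutter) standard deviation; a synthetic-frame sketch, with all image parameters invented:

```python
import numpy as np

def scr(frame, target_mask):
    """Signal-to-clutter ratio: target/background contrast over clutter std."""
    target = frame[target_mask]
    clutter = frame[~target_mask]
    return float((target.mean() - clutter.mean()) / clutter.std())

rng = np.random.default_rng(2)
frame = 100.0 + 5.0 * rng.standard_normal((64, 64))   # clutter background
mask = np.zeros((64, 64), dtype=bool)
mask[30:33, 30:33] = True
frame[mask] += 40.0                                   # small hot target

print(scr(frame, mask))   # roughly 40 / 5 = 8 for this synthetic frame
```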

  4. Robust Classification and Segmentation of Planar and Linear Features for Construction Site Progress Monitoring and Structural Dimension Compliance Control

    NASA Astrophysics Data System (ADS)

    Maalek, R.; Lichti, D. D.; Ruwanpura, J.

    2015-08-01

    The application of terrestrial laser scanners (TLSs) on construction sites for automating construction progress monitoring and controlling structural dimension compliance is growing markedly. However, current research in construction management relies on the planned building information model (BIM) to assign the accumulated point clouds to their corresponding structural elements, which may not be reliable in cases where the dimensions of the as-built structure differ from those of the planned model and/or the planned model is not available with sufficient detail. In addition, outliers exist in construction site datasets due to data artefacts caused by moving objects, occlusions and dust. In order to overcome the aforementioned limitations, a novel method for robust classification and segmentation of planar and linear features is proposed to reduce the effects of outliers present in the LiDAR data collected from construction sites. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a robust clustering method. A method is also proposed to robustly extract the points belonging to the flat-slab floors and/or ceilings without performing the aforementioned stages in order to preserve computational efficiency. The applicability of the proposed method is investigated in two scenarios, namely, a laboratory with 30 million points and an actual construction site with over 150 million points. The results obtained by the two experiments validate the suitability of the proposed method for robust segmentation of planar and linear features in contaminated datasets, such as those collected from construction sites.
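    A minimal sketch of the classification stage: local PCA of a 3D point neighbourhood, with eigenvalue ratios deciding planar vs. linear vs. volumetric. The paper uses a robust PCA procedure to resist outliers; the plain covariance PCA and the tolerance values below are stand-ins.

```python
import numpy as np

def pca_eigenvalues(points):
    """Eigenvalues (ascending) of the covariance of a 3D point neighbourhood."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    return np.sort(np.linalg.eigvalsh(cov))

def classify_neighbourhood(points, planar_tol=0.01, linear_tol=0.01):
    """Label a neighbourhood 'planar', 'linear', or 'volumetric'.

    Planar: one near-zero eigenvalue; linear: two near-zero eigenvalues.
    Tolerances are relative to the largest eigenvalue and are illustrative.
    """
    l1, l2, l3 = pca_eigenvalues(points)
    if l2 / l3 < linear_tol:
        return "linear"
    if l1 / l3 < planar_tol:
        return "planar"
    return "volumetric"
```

    A robust variant would replace the sample covariance with a high-breakdown estimator so that outliers from dust or moving objects do not inflate the smallest eigenvalue.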

  5. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter

    PubMed Central

    Huang, Lei

    2015-01-01

    To solve the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of the observation noise are used to obtain the estimated mean and variance of the observation noise. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy, so the required sample size is reduced. It can be applied in modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409
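    The idea of treating model coefficients as the Kalman filter state can be illustrated for a pure AR model, where the measurement equation is linear in the parameters. This sketch uses a standard (non-robust) Kalman filter with assumed noise variances q and r; the paper's contribution, the adaptive robust estimation of the observation-noise statistics, is not reproduced here.

```python
import numpy as np

def kalman_ar_fit(y, order=2, q=1e-6, r=1.0):
    """Estimate AR coefficients with a Kalman filter.

    State = AR coefficients, modelled as a random walk with variance q;
    measurement y[k] = phi_k . theta + noise with variance r.
    """
    theta = np.zeros(order)            # coefficient estimate
    P = np.eye(order) * 10.0           # estimate covariance
    for k in range(order, len(y)):
        phi = y[k - order:k][::-1]     # regressors: most recent sample first
        P = P + q * np.eye(order)      # predict step for random-walk state
        S = phi @ P @ phi + r          # innovation variance
        K = P @ phi / S                # Kalman gain
        theta = theta + K * (y[k] - phi @ theta)
        P = P - np.outer(K, phi) @ P
    return theta
```

    The recursion converges much faster than batch re-fitting as samples arrive, which is the practical appeal for gyro noise modeling.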

  6. Low cost and efficient kurtosis-based deflationary ICA method: application to MRS sources separation problem.

    PubMed

    Saleh, M; Karfoul, A; Kachenoura, A; Senhadji, L; Albera, L

    2016-08-01

    Improving the execution time and reducing the numerical complexity of the well-known kurtosis-based maximization method, RobustICA, is investigated in this paper. A Newton-based scheme is proposed and compared to the conventional RobustICA method. A new implementation using the nonlinear conjugate gradient method is also investigated. For the Newton approach, an exact computation of the Hessian of the considered cost function is provided. The proposed approaches and the considered implementations inherit the global plane search of the original RobustICA method, so that good convergence speed along a given direction is still guaranteed. Numerical results on Magnetic Resonance Spectroscopy (MRS) source separation show the efficiency of the proposed approaches, notably the quasi-Newton one using the BFGS method.

  7. An improved probabilistic approach for linking progenitor and descendant galaxy populations using comoving number density

    NASA Astrophysics Data System (ADS)

    Wellons, Sarah; Torrey, Paul

    2017-06-01

    Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift z_f probabilities of being progenitors/descendants of a galaxy population at another redshift z_0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving number density method slightly better and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.
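    The probabilistic weighting scheme can be illustrated with a toy example: each candidate progenitor receives a weight from a probability distribution in number-density space, and physical properties are then averaged with those weights. The Gaussian weighting function below is a hypothetical stand-in for the calibrated distribution functions of the paper.

```python
import numpy as np

def weighted_property_stats(values, weights):
    """Probability-weighted mean and spread of a progenitor property."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    values = np.asarray(values, float)
    mean = np.sum(w * values)
    var = np.sum(w * (values - mean) ** 2)
    return mean, np.sqrt(var)

def gaussian_number_density_weights(log_n, mu, sigma):
    """Toy weighting: Gaussian PDF in log cumulative number density.

    The real method calibrates these distributions against populations
    tracked through a simulation; a fixed Gaussian is only a stand-in.
    """
    return np.exp(-0.5 * ((log_n - mu) / sigma) ** 2)
```

    The weighted spread, unlike a single number-density bin, reflects how much the tracked population has dispersed between the two redshifts.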

  8. Robust Coefficients Alpha and Omega and Confidence Intervals With Outlying Observations and Missing Data: Methods and Software.

    PubMed

    Zhang, Zhiyong; Yuan, Ke-Hai

    2016-06-01

    Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation methods for alpha and omega often implicitly assume that data are complete and normally distributed. This study proposes robust procedures to estimate both alpha and omega as well as corresponding standard errors and confidence intervals from samples that may contain potential outlying observations and missing values. The influence of outlying observations and missing data on the estimates of alpha and omega is investigated through two simulation studies. Results show that the newly developed robust method yields substantially improved alpha and omega estimates as well as better coverage rates of confidence intervals than the conventional nonrobust method. An R package coefficientalpha is developed and demonstrated to obtain robust estimates of alpha and omega.
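    For reference, the conventional (non-robust, complete-data) estimators can be sketched as follows; the paper's robust procedures replace the sample covariance with an outlier-resistant estimate and add missing-data handling, which this sketch does not attempt.

```python
import numpy as np

def cronbach_alpha(items):
    """Conventional coefficient alpha from an n-by-k item score matrix."""
    S = np.cov(items, rowvar=False)   # k x k item covariance matrix
    k = S.shape[0]
    return k / (k - 1) * (1 - np.trace(S) / S.sum())

def mcdonald_omega(loadings, uniquenesses):
    """Omega from a fitted one-factor model.

    Ratio of common variance, (sum of loadings)^2, to total variance.
    """
    common = np.sum(loadings) ** 2
    return common / (common + np.sum(uniquenesses))
```

    With tau-equivalent items alpha and omega coincide; with unequal loadings omega is the preferred reliability estimate, as the abstract notes.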

  9. A Robust Inner and Outer Loop Control Method for Trajectory Tracking of a Quadrotor

    PubMed Central

    Xia, Dunzhu; Cheng, Limei; Yao, Yanhong

    2017-01-01

    In order to achieve complicated trajectory tracking of a quadrotor, a geometric inner and outer loop control scheme is presented. The outer loop generates the desired rotation matrix for the inner loop. To improve the response speed and robustness, a geometric SMC controller is designed for the inner loop. The outer loop is also designed via sliding mode control (SMC). By Lyapunov theory and cascade theory, the closed-loop system stability is guaranteed. Next, the tracking performance is validated by tracking three representative trajectories. Then, the robustness of the proposed control method is illustrated by trajectory tracking in the presence of model uncertainty and disturbances. Subsequently, experiments are carried out to verify the method. In the experiment, ultra wideband (UWB) is used for indoor positioning. An Extended Kalman Filter (EKF) is used for fusing inertial measurement unit (IMU) and UWB measurements. The experimental results show the feasibility of the designed controller in practice. The comparative experiments with PD and PD loop demonstrate the robustness of the proposed control method. PMID:28925984

  10. Robust estimation procedure in panel data model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shariff, Nurul Sima Mohamad; Hamzah, Nor Aishah

    2014-06-19

    Panel data modeling has received great attention in econometric research recently. This is due to the availability of data sources and the interest in studying cross sections of individuals observed over time. However, problems may arise in modeling the panel in the presence of cross-sectional dependence and outliers. Even though there are a few methods that take into consideration the presence of cross-sectional dependence in the panel, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross-sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.

  11. Robust Control for Microgravity Vibration Isolation using Fixed Order, Mixed H2/Mu Design

    NASA Technical Reports Server (NTRS)

    Whorton, Mark

    2003-01-01

    Many space-science experiments need an active isolation system to provide a sufficiently quiescent microgravity environment. Modern control methods provide the potential for both high performance and robust stability in the presence of the parametric uncertainties that are characteristic of microgravity vibration isolation systems. While H2 and H(infinity) methods are well established, neither alone provides the desired levels of attenuation performance and robust stability in a low-order compensator. Mixed H2/H(infinity) controllers provide a means for maximizing robust stability for a given level of mean-square nominal performance while directly optimizing for controller order constraints. This paper demonstrates the benefit of mixed-norm design from the perspective of robustness to parametric uncertainties and controller order for microgravity vibration isolation. A nominal performance metric, analogous to the mu measure for robust stability assessment, is also introduced in order to define an acceptable trade space from which different control methodologies can be compared.

  12. SU-E-T-625: Robustness Evaluation and Robust Optimization of IMPT Plans Based on Per-Voxel Standard Deviation of Dose Distributions.

    PubMed

    Liu, W; Mohan, R

    2012-06-01

    Proton dose distributions, IMPT in particular, are highly sensitive to setup and range uncertainties. We report a novel method, based on per-voxel standard deviation (SD) of dose distributions, to evaluate the robustness of proton plans and to robustly optimize IMPT plans to render them less sensitive to uncertainties. For each optimization iteration, nine dose distributions are computed - the nominal one, and one each for ± setup uncertainties along the x, y and z axes and for ± range uncertainty. The SD of dose in each voxel is used to create an SD-volume histogram (SVH) for each structure. The SVH may be considered a quantitative representation of the robustness of the dose distribution. For optimization, the desired robustness may be specified in terms of an SD-volume (SV) constraint on the CTV and incorporated as a term in the objective function. Results of optimization with and without this constraint were compared in terms of plan optimality and robustness using the so-called 'worst case' dose distributions, which are obtained by assigning the lowest among the nine doses to each voxel in the clinical target volume (CTV) and the highest to normal tissue voxels outside the CTV. The SVH curve and the area under it for each structure were used as quantitative measures of robustness. The penalty parameter of the SV constraint may be varied to control the tradeoff between robustness and plan optimality. We applied these methods to one case each of H&N and lung. In both cases, we found that imposing the SV constraint improved plan robustness but at the cost of normal tissue sparing. SVH-based optimization and evaluation is an effective tool for robustness evaluation and robust optimization of IMPT plans. Studies need to be conducted to test the methods for larger cohorts of patients and for other sites.
This research is supported by National Cancer Institute (NCI) grant P01CA021239, the University Cancer Foundation via the Institutional Research Grant program at the University of Texas MD Anderson Cancer Center, and MD Anderson’s cancer center support grant CA016672. © 2012 American Association of Physicists in Medicine.
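    The per-voxel SD, the SVH, and the 'worst case' dose composition described above can be sketched as follows. This is a simplified illustration; the actual method computes the nine scenario doses inside the optimizer at every iteration.

```python
import numpy as np

def sd_volume_histogram(dose_scenarios, bins):
    """Per-voxel SD over uncertainty scenarios, summarised as an SVH.

    dose_scenarios: (n_scenarios, n_voxels) doses for one structure
    (nominal plus +/- setup shifts and +/- range shift: nine in total).
    Returns, for each SD threshold in `bins`, the fraction of voxels
    whose dose SD is at least that threshold.
    """
    sd = np.std(dose_scenarios, axis=0)
    return np.array([(sd >= b).mean() for b in bins])

def worst_case_dose(dose_scenarios, target_mask):
    """'Worst case' dose: minimum over scenarios inside the target,
    maximum over scenarios outside it."""
    lo = dose_scenarios.min(axis=0)
    hi = dose_scenarios.max(axis=0)
    return np.where(target_mask, lo, hi)
```

    A plan whose SVH curve falls steeply to zero is robust: few voxels have large dose spread across the uncertainty scenarios.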

  13. A review on the mechanical and thermodynamic robustness of superhydrophobic surfaces.

    PubMed

    Scarratt, Liam R J; Steiner, Ullrich; Neto, Chiara

    2017-08-01

    Advancements in the fabrication and study of superhydrophobic surfaces have been significant over the past 10 years, and some 20 years after the discovery of the lotus effect, the study of special wettability surfaces can be considered mainstream. While the fabrication of superhydrophobic surfaces is well advanced and the physical properties of superhydrophobic surfaces well understood, the robustness of these surfaces, in terms of both mechanical and thermodynamic properties, is only recently getting attention in the literature. In this review we cover publications that appeared over the past ten years on the thermodynamic and mechanical robustness of superhydrophobic surfaces, by which we mean the long-term stability under conditions of wear, shear and pressure. The review is divided into two parts, the first dedicated to thermodynamic robustness and the second dedicated to mechanical robustness of these complex surfaces. Our work is intended as an introductory review for researchers interested in addressing longevity and stability of superhydrophobic surfaces, and provides an outlook on outstanding aspects of investigation. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Evolution of robustness to damage in artificial 3-dimensional development.

    PubMed

    Joachimczak, Michał; Wróbel, Borys

    2012-09-01

    GReaNs is an Artificial Life platform we have built to investigate the general principles that guide the evolution of multicellular development and of artificial gene regulatory networks. Embryos develop in GReaNs in a continuous 3-dimensional (3D) space with simple physics. The developmental trajectories are indirectly encoded in linear genomes. The genomes are not limited in size and determine the topology of gene regulatory networks that are not limited in the number of nodes. The expression of the genes is continuous and can be modified by adding environmental noise. In this paper we evolved development of structures with a specific shape (an ellipsoid) and asymmetrical patterning (a 3D pattern inspired by the French flag problem), and investigated the emergence of robustness to damage in development and the emergence of robustness to noise. Our results indicate that both types of robustness are related, and that including noise during evolution promotes higher robustness to damage. Interestingly, we have observed that some evolved gene regulatory networks rely on noise for proper behaviour. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  15. Superlinearly scalable noise robustness of redundant coupled dynamical systems.

    PubMed

    Kohar, Vivek; Kia, Behnam; Lindner, John F; Ditto, William L

    2016-03-01

    We illustrate through theory and numerical simulations that redundant coupled dynamical systems can be extremely robust against local noise in comparison to uncoupled dynamical systems evolving in the same noisy environment. Previous studies have shown that the noise robustness of redundant coupled dynamical systems is linearly scalable and deviations due to noise can be minimized by increasing the number of coupled units. Here, we demonstrate that the noise robustness can actually be scaled superlinearly if some conditions are met and very high noise robustness can be realized with very few coupled units. We discuss these conditions and show that this superlinear scalability depends on the nonlinearity of the individual dynamical units. The phenomenon is demonstrated in discrete as well as continuous dynamical systems. This superlinear scalability not only provides us an opportunity to exploit the nonlinearity of physical systems without being bogged down by noise but may also help us in understanding the functional role of coupled redundancy found in many biological systems. Moreover, engineers can exploit superlinear noise suppression by starting a coupled system near (not necessarily at) the appropriate initial condition.
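    A toy version of coupled redundancy: N logistic maps coupled through their ensemble mean, each receiving independent additive noise, so the noise partially averages out across units. This is only an illustrative stand-in for the systems studied; the map, the coupling scheme, and the parameter values are assumptions.

```python
import numpy as np

def coupled_logistic(n_units, n_steps, noise, r=3.8, seed=0):
    """N redundant logistic maps, fully coupled through their mean.

    Each unit maps the ensemble mean forward and receives its own
    additive noise; averaging across units suppresses the noise.
    Returns the trajectory of the ensemble mean.
    """
    rng = np.random.default_rng(seed)
    x = np.full(n_units, 0.4)
    traj = []
    for _ in range(n_steps):
        m = x.mean()
        x = r * m * (1 - m) + noise * rng.standard_normal(n_units)
        x = np.clip(x, 0.0, 1.0)        # keep the state in the map's domain
        traj.append(x.mean())
    return np.array(traj)
```

    In this linearly scalable baseline, the effective noise on the mean shrinks like 1/sqrt(N); the paper's point is that with the right nonlinearity and initial condition the suppression can scale even faster.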

  16. Robust spherical direct-drive design for NIF

    NASA Astrophysics Data System (ADS)

    Masse, Laurent; Hurricane, O.; Michel, P.; Nora, R.; Tabak, M.; Lawrence Livermore Natl Lab Team

    2016-10-01

    Achieving ignition in a direct-drive or indirect-drive cryogenic implosion is a tremendous challenge. Both approaches need to deal with physics and technology issues. During the past years, the indirect-drive effort on the National Ignition Facility (NIF) has revealed unpredicted losses of performance that force consideration of more robust designs and deeper study of detailed physics. Encouraging results have been obtained using a strong first shock during the implosion of CH ablator ignition capsules. These ``high-foot'' implosions result in significantly lower ablation Rayleigh-Taylor instability growth than that of the NIC point-design capsule. The trade-off with this design is a higher fuel adiabat that limits both fuel compression and theoretical capsule yield. The purpose of designing this capsule is to recover a more ideal one-dimensional implosion that is in closer agreement with simulation predictions. In the same spirit of spending energy on margin, at the cost of decreased performance, we present here a study of a ``robust'' spherical direct-drive design for NIF. This 2-shock direct-drive pulse shape results in a high-adiabat (>3), low-convergence (<17) implosion designed to produce a near-1D-like implosion. We pay particular attention to designing a robust implosion with respect to long-wavelength nonuniformity seeded by power imbalance and target offset. This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344.

  17. UAV Mission Planning under Uncertainty

    DTIC Science & Technology

    2006-06-01

    algorithm, adapted from [13]. 4-5: Robust Optimization considers only a subset of the feasible region. 5-1: Overview of simulation with parameter... incorporates the robust optimization method suggested by Bertsimas and Sim [12], and is solved with a standard Branch-and-Cut algorithm. The chapter... algorithms, and the heuristic methods of Local Search and Simulated Annealing. With each method, we attempt to give a review of research that has...

  18. Combined analysis of whole human blood parameters by Raman spectroscopy and spectral-domain low-coherence interferometry

    NASA Astrophysics Data System (ADS)

    Gnyba, M.; Wróbel, M. S.; Karpienko, K.; Milewska, D.; Jedrzejewska-Szczerska, M.

    2015-07-01

    In this article the simultaneous investigation of blood parameters by complementary optical methods, Raman spectroscopy and spectral-domain low-coherence interferometry, is presented. Thus, the mutual relationship between chemical and physical properties may be investigated, because low-coherence interferometry measures optical properties of the investigated object, while Raman spectroscopy gives information about its molecular composition. A series of in-vitro measurements were carried out to assess sufficient accuracy for monitoring of blood parameters. A vast number of blood samples with various hematological parameters, collected from different donors, were measured in order to achieve a statistical significance of results and validation of the methods. Preliminary results indicate the benefits in combination of presented complementary methods and form the basis for development of a multimodal system for rapid and accurate optical determination of selected parameters in whole human blood. Future development of optical systems and multivariate calibration models are planned to extend the number of detected blood parameters and provide a robust quantitative multi-component analysis.

  19. Learning free energy landscapes using artificial neural networks.

    PubMed

    Sidky, Hythem; Whitmer, Jonathan K

    2018-03-14

    Existing adaptive bias techniques, which seek to estimate free energies and physical properties from molecular simulations, are limited by their reliance on fixed kernels or basis sets which hinder their ability to efficiently conform to varied free energy landscapes. Further, user-specified parameters are in general non-intuitive yet significantly affect the convergence rate and accuracy of the free energy estimate. Here we propose a novel method, wherein artificial neural networks (ANNs) are used to develop an adaptive biasing potential which learns free energy landscapes. We demonstrate that this method is capable of rapidly adapting to complex free energy landscapes and is not prone to boundary or oscillation problems. The method is made robust to hyperparameters and overfitting through Bayesian regularization which penalizes network weights and auto-regulates the number of effective parameters in the network. ANN sampling represents a promising innovative approach which can resolve complex free energy landscapes in less time than conventional approaches while requiring minimal user input.

  20. A methodology for quadrilateral finite element mesh coarsening

    DOE PAGES

    Staten, Matthew L.; Benzley, Steven; Scott, Michael

    2008-03-27

    High fidelity finite element modeling of continuum mechanics problems often requires using all quadrilateral or all hexahedral meshes. The efficiency of such models is often dependent upon the ability to adapt a mesh to the physics of the phenomena. Adapting a mesh requires the ability to both refine and/or coarsen the mesh. The algorithms available to refine and coarsen triangular and tetrahedral meshes are very robust and efficient. However, the ability to locally and conformally refine or coarsen all quadrilateral and all hexahedral meshes presents many difficulties. Some research has been done on localized conformal refinement of quadrilateral and hexahedral meshes. However, little work has been done on localized conformal coarsening of quadrilateral and hexahedral meshes. A general method which provides both localized conformal coarsening and refinement for quadrilateral meshes is presented in this paper. This method is based on restructuring the mesh with simplex manipulations to the dual of the mesh. Finally, this method appears to be extensible to hexahedral meshes in three dimensions.

  1. Evaluation of Feruloylated and p-Coumaroylated Arabinosyl Units in Grass Arabinoxylans by Acidolysis in Dioxane/Methanol.

    PubMed

    Lapierre, Catherine; Voxeur, Aline; Karlen, Steven D; Helm, Richard F; Ralph, John

    2018-05-30

    The arabinosyl side chains of grass arabinoxylans are partially acylated by p-coumarate (pCA) and ferulate (FA). These aromatic side chains can cross-couple wall polymers, resulting in modulation of cell wall physical properties. The determination of p-coumaroylated and feruloylated arabinose units has been the target of analytical efforts, with trifluoroacetic acid hydrolysis being the standard method to release feruloylated and p-coumaroylated arabinose units from arabinoxylans. Herein, we report on a more robust method to measure these acylated units. Acidolysis of extractive-free grass samples in a dioxane/methanol/aqueous 2 M HCl mixture provided the methyl 5-O-p-coumaroyl- and 5-O-feruloyl-L-arabinofuranoside anomers (pCA-MeAra and FA-MeAra). These conjugates were readily analyzed by liquid chromatography combined with both UV and MS detection. The method revealed the variability of the relative acylation of arabinose units by pCA or FA in grass cell walls. This methodology will permit delineation of hydroxycinnamate acylation patterns in arabinoxylans.

  2. Automated clustering of probe molecules from solvent mapping of protein surfaces: new algorithms applied to hot-spot mapping and structure-based drug design

    NASA Astrophysics Data System (ADS)

    Lerner, Michael G.; Meagher, Kristin L.; Carlson, Heather A.

    2008-10-01

    Use of solvent mapping, based on multiple-copy minimization (MCM) techniques, is common in structure-based drug discovery. The minima of small-molecule probes define locations for complementary interactions within a binding pocket. Here, we present improved methods for MCM. In particular, a Jarvis-Patrick (JP) method is outlined for grouping the final locations of minimized probes into physical clusters. This algorithm has been tested through a study of protein-protein interfaces, showing the process to be robust, deterministic, and fast in the mapping of protein "hot spots." Improvements in the initial placement of probe molecules are also described. A final application to HIV-1 protease shows how our automated technique can be used to partition data too complicated to analyze by hand. These new automated methods may be easily and quickly extended to other protein systems, and our clustering methodology may be readily incorporated into other clustering packages.
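    A compact sketch of Jarvis-Patrick clustering as described: two points join the same cluster when each lies in the other's k-nearest-neighbour list and they share at least kmin common neighbours. The parameter values are illustrative; the paper applies the scheme to minimized probe-molecule positions.

```python
import numpy as np

def jarvis_patrick(points, k=4, kmin=2):
    """Jarvis-Patrick clustering; returns a cluster label per point.

    Points are linked when they are mutual k-nearest neighbours and
    share at least kmin common neighbours; clusters are the connected
    components of the resulting link graph (via union-find).
    """
    pts = np.asarray(points, float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    # k nearest neighbours of each point, excluding the point itself
    nn = [set(np.argsort(d[i])[1:k + 1]) for i in range(n)]
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if j in nn[i] and i in nn[j] and len(nn[i] & nn[j]) >= kmin:
                parent[find(i)] = find(j)
    return np.array([find(i) for i in range(n)])
```

    Because membership depends only on shared-neighbour structure, the result is deterministic for a given k and kmin, matching the abstract's emphasis on a deterministic grouping of probe minima.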

  3. Learning free energy landscapes using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Sidky, Hythem; Whitmer, Jonathan K.

    2018-03-01

    Existing adaptive bias techniques, which seek to estimate free energies and physical properties from molecular simulations, are limited by their reliance on fixed kernels or basis sets which hinder their ability to efficiently conform to varied free energy landscapes. Further, user-specified parameters are in general non-intuitive yet significantly affect the convergence rate and accuracy of the free energy estimate. Here we propose a novel method, wherein artificial neural networks (ANNs) are used to develop an adaptive biasing potential which learns free energy landscapes. We demonstrate that this method is capable of rapidly adapting to complex free energy landscapes and is not prone to boundary or oscillation problems. The method is made robust to hyperparameters and overfitting through Bayesian regularization which penalizes network weights and auto-regulates the number of effective parameters in the network. ANN sampling represents a promising innovative approach which can resolve complex free energy landscapes in less time than conventional approaches while requiring minimal user input.

  4. Computation of incompressible viscous flows through artificial heart devices with moving boundaries

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Rogers, Stuart; Kwak, Dochan; Chang, I.-DEE

    1991-01-01

    The extension of computational fluid dynamics techniques to artificial heart flow simulations is illustrated. Unsteady incompressible Navier-Stokes equations written in 3-D generalized curvilinear coordinates are solved iteratively at each physical time step until the incompressibility condition is satisfied. The solution method is based on the pseudocompressibility approach and uses an implicit upwind differencing scheme together with the Gauss-Seidel line relaxation method. The efficiency and robustness of the time-accurate formulation of the algorithm are tested by computing the flow through model geometries. A channel flow with a moving indentation is computed and validated against experimental measurements and other numerical solutions. In order to handle the geometric complexity and the moving boundary problems, a zonal method and an overlapping grid embedding scheme are used, respectively. Steady-state solutions for the flow through a tilting-disk heart valve were compared against experimental measurements. Good agreement was obtained. The flow computation during valve opening and closing is carried out to illustrate the moving boundary capability.

  5. Novel multimodality segmentation using level sets and Jensen-Rényi divergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markel, Daniel, E-mail: daniel.markel@mail.mcgill.ca; Zaidi, Habib; Geneva Neuroscience Center, Geneva University, CH-1205 Geneva

    2013-12-15

    Purpose: Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. Methods: A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. Results: The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with R2 values of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. Conclusions: The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.
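    The Jensen-Rényi divergence of intensity histograms, which drives the contour evolution, can be computed directly from its definition: the Rényi entropy of the weighted mixture minus the weighted mean of the individual entropies. This is a sketch for discrete histograms; the weights, alpha value, and normalization conventions are assumptions.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy of a discrete distribution (alpha != 1)."""
    p = np.asarray(p, float)
    p = p / p.sum()
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def jensen_renyi_divergence(dists, weights, alpha=2.0):
    """JRD = H_alpha(weighted mixture) - weighted mean of H_alpha."""
    dists = [np.asarray(d, float) / np.sum(d) for d in dists]
    w = np.asarray(weights, float)
    w = w / w.sum()
    mix = sum(wi * d for wi, d in zip(w, dists))
    return renyi_entropy(mix, alpha) - sum(
        wi * renyi_entropy(d, alpha) for wi, d in zip(w, dists))
```

    The divergence is zero for identical histograms and grows as the regions' intensity distributions separate, which is what makes it usable as a region-competition term for the level set.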

  6. Interventional radiology virtual simulator for liver biopsy.

    PubMed

    Villard, P F; Vidal, F P; ap Cenydd, L; Holbrey, R; Pisharody, S; Johnson, S; Bulpitt, A; John, N W; Bello, F; Gould, D

    2014-03-01

    Training in Interventional Radiology currently uses the apprenticeship model, where clinical and technical skills of invasive procedures are learnt during practice in patients. This apprenticeship training method is increasingly limited by regulatory restrictions on working hours, concerns over patient risk through trainees' inexperience and the variable exposure to case mix and emergencies during training. To address this, we have developed a computer-based simulation of visceral needle puncture procedures. A real-time framework has been built that includes: segmentation, physically based modelling, haptics rendering, pseudo-ultrasound generation and the concept of a physical mannequin. It is the result of a close collaboration between different universities, involving computer scientists, clinicians, clinical engineers and occupational psychologists. The technical implementation of the framework is a robust and real-time simulation environment combining a physical platform and an immersive computerized virtual environment. The face, content and construct validation have been previously assessed, showing the reliability and effectiveness of this framework, as well as its potential for teaching visceral needle puncture. A simulator for ultrasound-guided liver biopsy has been developed. It includes functionalities and metrics extracted from cognitive task analysis. This framework can be useful during training, particularly given the known difficulties in gaining significant practice of core skills in patients.

  7. Chaotic attractors and physical measures for some density dependent Leslie population models

    NASA Astrophysics Data System (ADS)

    Ugarcovici, Ilie; Weiss, Howard

    2007-12-01

    Following ecologists' discoveries, mathematicians have begun studying extensions of the ubiquitous age-structured Leslie population model that allow some survival probabilities and/or fertility rates to depend on population densities. These nonlinear extensions commonly exhibit very complicated dynamics: through computer studies, some authors have discovered robust Hénon-like strange attractors in several families. Population biologists and demographers frequently wish to average a function over many generations and conclude that the average is independent of the initial population distribution. This type of 'ergodicity' seems to be a fundamental tenet in population biology. In this paper we develop the first rigorous ergodic-theoretic framework for density-dependent Leslie population models. We study two-generation models with Ricker and Hassell (recruitment-type) fertility terms. We prove that for some parameter regions these models admit a chaotic (ergodic) attractor which supports a unique physical probability measure. This physical measure, having a basin of full Lebesgue measure, satisfies in the strongest possible sense the population biologist's requirement for ergodicity in their population models. Our proofs use the celebrated work of Wang and Young (2001 Commun. Math. Phys. 218 1-97), and our results are the first applications of their method to biology, ecology, or demography.
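    A minimal numerical sketch of the kind of model discussed: a two-class (juvenile/adult) Leslie map with Ricker-type density-dependent fertility. The functional form and parameter values here are illustrative assumptions, not those analyzed in the paper; the time-average at the end is the sort of "ergodic average" whose independence from initial data the paper makes rigorous.

```python
import numpy as np

def ricker_leslie(x, y, b=20.0, c=1.0, s=0.7):
    """One generation of a 2-class Leslie model with Ricker fertility.

    x: juveniles, y: adults.  Fertility b*exp(-c*N) decays with total
    density N = x + y; a fraction s of juveniles survives to adulthood.
    """
    n = x + y
    f = b * np.exp(-c * n)          # density-dependent fertility
    return f * n, s * x             # (new juveniles, new adults)

# Time-average of the total population along one orbit.
x, y = 0.5, 0.5
total, burn, iters = 0.0, 1000, 100000
for t in range(burn + iters):
    x, y = ricker_leslie(x, y)
    if t >= burn:
        total += x + y
avg = total / iters
```

    When the dynamics admit a physical (SRB) measure, this long-run average converges to the spatial average against that measure for Lebesgue-almost every initial condition.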

  8. Kansei engineering as a tool for the design of in-vehicle rubber keypads.

    PubMed

    Vieira, Joana; Osório, Joana Maria A; Mouta, Sandra; Delgado, Pedro; Portinha, Aníbal; Meireles, José Filipe; Santos, Jorge Almeida

    2017-05-01

    Manufacturers are currently adopting a consumer-centered philosophy which poses the challenge of developing differentiating products in a context of constant innovation and competitiveness. To merge both function and experience in a product, it is necessary to understand customers' experience when interacting with interfaces. This paper describes the use of Kansei methodology as a tool to evaluate the subjective perception of rubber keypads. Participants evaluated eleven rubber keys with different values of force, stroke and snap ratio, according to seven Kansei words ranging from "pleasantness" to "clickiness". Evaluation data was collected using the semantic differential technique and compared with data from the physical properties of the keys. Kansei proved to be a robust method to evaluate the qualitative traits of products, and a new physical parameter for the tactile feel of "clickiness" is suggested, having obtained better results than the commonly used Snap Ratio. It was possible to establish very strong relations between Kansei words and all physical properties. This approach will result in guidance to the industry for the design of in-vehicle rubber keypads with user-centered concerns. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Numerical Study of High-Speed Droplet Impact on Surfaces and its Physical Cleaning Effects

    NASA Astrophysics Data System (ADS)

    Kondo, Tomoki; Ando, Keita

    2015-11-01

    Spurred by the demand for cleaning techniques of low environmental impact, one favors physical cleaning that does not rely on any chemicals. One of the promising candidates is based on water jets that often undergo fission into droplet fragments and collide with target surfaces to which contaminant particles (often micron-sized or even smaller) stick. Hydrodynamic forces (e.g., shearing and lifting) arising from the droplet impact will play a role in removing the particles, but the detailed mechanism is still unknown. To explore the role of high-speed droplet impact in physical cleaning, we solve the compressible Navier-Stokes equations with a finite volume method that is designed to capture both shocks and material interfaces in an accurate and robust manner. Water hammer and shear flow accompanying high-speed droplet impact at a rigid wall are simulated to evaluate the lifting force and rotating torque, which are relevant to the application of particle removal. For the simulation, we use the numerical code recently developed by the Computational Flow Group led by Tim Colonius at Caltech. The first author thanks Jomela Meng for her help in handling the code during his stay at Caltech.
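    For orientation on the "water hammer" mentioned above, the classical Joukowsky relation Δp = ρ·c·v gives a back-of-envelope estimate of the impact pressure (density times acoustic speed times impact speed). This is only the textbook scaling, not the paper's full compressible Navier-Stokes treatment, and the impact speed below is an assumed illustrative value.

```python
# Joukowsky water-hammer estimate of droplet impact pressure: dp = rho * c * v.
rho = 1000.0   # water density, kg/m^3
c = 1480.0     # speed of sound in water, m/s
v = 50.0       # droplet impact speed, m/s (illustrative assumption)

dp = rho * c * v        # impact overpressure in Pa
dp_mpa = dp / 1e6       # = 74 MPa for these values
```

    Pressures of this order, far above typical jet stagnation pressures, are why shock-capturing schemes are needed to resolve the impact transient.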

  10. Bandgap profiling in CIGS solar cells via valence electron energy-loss spectroscopy

    NASA Astrophysics Data System (ADS)

    Deitz, Julia I.; Karki, Shankar; Marsillac, Sylvain X.; Grassman, Tyler J.; McComb, David W.

    2018-03-01

    A robust, reproducible method for the extraction of relative bandgap trends from scanning transmission electron microscopy (STEM)-based electron energy-loss spectroscopy (EELS) is described. The effectiveness of the approach is demonstrated by profiling the bandgap through a CuIn1-xGaxSe2 solar cell that possesses intentional Ga/(In + Ga) composition variation. The EELS-determined bandgap profile is compared to the nominal profile calculated from compositional data collected via STEM-based energy dispersive X-ray spectroscopy. The EELS-based profile is found to closely track the calculated bandgap trends, with only a small, fixed offset. This method, which is particularly advantageous for relatively narrow-bandgap materials and/or STEM systems with modest resolution capabilities (i.e., >100 meV), compromises absolute accuracy to provide a straightforward route for correlating local electronic structure trends with nanoscale chemical and physical structure/microstructure within semiconductor materials and devices.
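    One common way to pull a bandgap out of a loss-spectrum onset is to fit the direct-gap model I(E) ∝ √(E − Eg) above the gap: squaring the signal makes I² linear in E, and the E-axis intercept estimates Eg. The sketch below applies this to noiseless synthetic data; it is a generic onset-fitting illustration under that assumed model, not the paper's actual extraction procedure.

```python
import numpy as np

# Synthetic EELS onset: I(E) = A*sqrt(E - Eg) above the gap, zero below.
Eg_true, A = 1.15, 3.0
E = np.linspace(0.5, 2.5, 400)                      # energy loss axis, eV
I = np.where(E > Eg_true, A * np.sqrt(np.clip(E - Eg_true, 0, None)), 0.0)

# Linear fit of I^2 vs E on the region well above onset:
# I^2 = A^2*(E - Eg), so Eg = -intercept/slope.
mask = I > 0.2 * I.max()
slope, intercept = np.polyfit(E[mask], I[mask] ** 2, 1)
Eg_est = -intercept / slope
```

    With real spectra the zero-loss tail and noise complicate the onset region, which is one reason relative trends are more robust than absolute values.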

  11. Verification of low-Mach number combustion codes using the method of manufactured solutions

    NASA Astrophysics Data System (ADS)

    Shunn, Lee; Ham, Frank; Knupp, Patrick; Moin, Parviz

    2007-11-01

    Many computational combustion models rely on tabulated constitutive relations to close the system of equations. As these reactive state-equations are typically multi-dimensional and highly non-linear, their implications on the convergence and accuracy of simulation codes are not well understood. In this presentation, the effects of tabulated state-relationships on the computational performance of low-Mach number combustion codes are explored using the method of manufactured solutions (MMS). Several MMS examples are developed and applied, progressing from simple one-dimensional configurations to problems involving higher dimensionality and solution-complexity. The manufactured solutions are implemented in two multi-physics hydrodynamics codes: CDP developed at Stanford University and FUEGO developed at Sandia National Laboratories. In addition to verifying the order-of-accuracy of the codes, the MMS problems help highlight certain robustness issues in existing variable-density flow-solvers. Strategies to overcome these issues are briefly discussed.
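    The essence of MMS is easy to show on a toy problem: choose an exact solution, derive the source term it implies, solve numerically with that source, and check that the error shrinks at the scheme's formal order. The sketch below verifies a second-order central-difference Poisson solver against the manufactured solution u = sin(πx); it is a minimal stand-in for the multi-dimensional low-Mach verification described above.

```python
import numpy as np

def solve_poisson(n):
    """Solve -u'' = f on (0,1) with u(0)=u(1)=0 via 2nd-order central differences,
    using the source manufactured from u = sin(pi x); return the max error."""
    h = 1.0 / n
    x = np.linspace(0, 1, n + 1)[1:-1]          # interior nodes
    f = np.pi ** 2 * np.sin(np.pi * x)          # f = -u'' for the chosen u
    A = (np.diag(2 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h ** 2
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))

e1, e2 = solve_poisson(16), solve_poisson(32)
order = np.log2(e1 / e2)    # observed order of accuracy; expect ~2
```

    Departures of the observed order from the formal order are exactly the kind of symptom that exposed the robustness issues in the variable-density solvers mentioned above.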

  12. A novel double loop control model design for chemical unstable processes.

    PubMed

    Cong, Er-Ding; Hu, Ming-Hui; Tu, Shan-Tung; Xuan, Fu-Zhen; Shao, Hui-He

    2014-03-01

    In this manuscript, based on the Smith predictor control scheme for unstable processes in industry, an improved double-loop control model is proposed for chemical unstable processes. The inner loop stabilizes the integrating unstable process and transforms the original process into a stable first-order plus dead-time process. The outer loop enhances the set-point response performance, and a disturbance controller is designed to enhance the disturbance response performance. The improved control system is simple, has an exact physical meaning, and its characteristic equation is easy to stabilize. The three controllers in the improved scheme are designed separately; each is easy to design and gives good control performance for its respective closed-loop transfer function. The robust stability of the proposed control scheme is analyzed. Finally, case studies illustrate that the improved method can give better system performance than existing design methods. © 2013 ISA. Published by ISA. All rights reserved.
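    The inner-loop idea, stabilizing an unstable plant before shaping the response, can be seen in a minimal simulation. Below, an unstable first-order plant dx/dt = a·x + u (a > 0) is closed with proportional feedback u = k·(r − x); the loop is stable whenever k > a. This is only a one-line stand-in for the inner loop, with assumed gains, and omits the dead time, outer loop, and disturbance controller of the paper's scheme.

```python
# Unstable plant dx/dt = a*x + u; proportional inner loop u = k*(r - x).
# Closed loop: dx/dt = (a - k)*x + k*r, stable for k > a,
# with steady state x_ss = k*r / (k - a).
a, k, r = 1.0, 5.0, 1.0
dt, T = 1e-3, 10.0

x = 0.0
for _ in range(int(T / dt)):
    u = k * (r - x)
    x += dt * (a * x + u)      # explicit Euler step

x_ss = k * r / (k - a)         # analytic steady state = 1.25
```

    Note the stabilized loop has a steady-state offset (x_ss ≠ r); removing it and shaping the transient is the job of the outer set-point controller.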

  13. Radiological Characterization Methodology of INEEL Stored RH-TRU Waste from ANL-E

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajiv N. Bhatt

    2003-02-01

    An Acceptable Knowledge (AK)-based radiological characterization methodology is being developed for RH TRU waste generated from ANL-E hot cell operations performed on fuel elements irradiated in the EBR-II reactor. The methodology relies on AK for the composition of the fresh fuel elements, their irradiation history, and the waste generation and collection processes. Radiological characterization of the waste involves estimating the quantities of significant fission products and transuranic isotopes in the waste. Methods based on reactor and physics principles are used to achieve these estimates. Because of the availability of AK and the robustness of the calculation methods, the AK-based characterization methodology offers a superior alternative to traditional waste assay techniques. Using this methodology, it is shown that the radiological parameters of a test batch of ANL-E waste are well within the proposed WIPP Waste Acceptance Criteria limits.

  14. Radiological Characterization Methodology for INEEL-Stored Remote-Handled Transuranic (RH TRU) Waste from Argonne National Laboratory-East

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuan, P.; Bhatt, R.N.

    2003-01-14

    An Acceptable Knowledge (AK)-based radiological characterization methodology is being developed for RH TRU waste generated from ANL-E hot cell operations performed on fuel elements irradiated in the EBR-II reactor. The methodology relies on AK for the composition of the fresh fuel elements, their irradiation history, and the waste generation and collection processes. Radiological characterization of the waste involves estimating the quantities of significant fission products and transuranic isotopes in the waste. Methods based on reactor and physics principles are used to achieve these estimates. Because of the availability of AK and the robustness of the calculation methods, the AK-based characterization methodology offers a superior alternative to traditional waste assay techniques. Using the methodology, it is shown that the radiological parameters of a test batch of ANL-E waste are well within the proposed WIPP Waste Acceptance Criteria limits.

  15. Quantum-Sequencing: Biophysics of quantum tunneling through nucleic acids

    NASA Astrophysics Data System (ADS)

    Casamada Ribot, Josep; Chatterjee, Anushree; Nagpal, Prashant

    2014-03-01

    Tunneling microscopy and spectroscopy have been used extensively in physical surface science to study quantum tunneling, to measure the electronic local density of states of nanomaterials, and to characterize adsorbed species. Quantum-Sequencing (Q-Seq) is a new method based on tunneling microscopy for electronic sequencing of single molecules of nucleic acids. A major goal of third-generation sequencing technologies is to develop a fast, reliable, enzyme-free single-molecule sequencing method. Here, we present unique "electronic fingerprints" for all nucleotides of DNA and RNA using Q-Seq, along with their intrinsic biophysical parameters. We have analyzed tunneling spectra for the nucleotides at different pH conditions and extracted the HOMO, LUMO, and energy gap for each. In addition, we report a number of biophysical parameters that further characterize the nucleobases (electron and hole transition voltages and energy barriers). These results highlight the robustness of Q-Seq as a technique for next-generation sequencing.

  16. Physical stability and resistance to peroxidation of a range of liquid-fill hard gelatin capsule products on extreme long-term storage.

    PubMed

    Bowtle, William; Kanyowa, Lionel; Mackenzie, Mark; Higgins, Paul

    2011-06-01

    The industrial take-up of liquid-fill hard capsule technology is limited in part by a lack of published long-term physical and chemical stability data demonstrating the robustness of the system. Our aim was to assess the effects of extreme long-term storage on liquid-fill capsule product quality and integrity, with respect to both the capsules per se and a standard blister-pack type (foil-film blister). Fourteen sets of stored peroxidation-sensitive liquid-fill hard gelatin capsule product samples, originating ~20 years before the current study, were examined with respect to physical and selected chemical properties, together with microbiological evaluation. All sets retained the physical integrity of capsules and blister-packs. Capsules were free of leaks, gelatin cross-linking, and microbiological growth. Eight samples met a limit (anisidine value, 20) commonly used as an index of peroxidation for lipid-based products with shelf lives of 2-3 years. Foil-film blister-packs using PVC or PVC-PVdC as the thermoforming film were well-suited packaging components for the liquid-fill capsule format. The study confirms the long-term physical robustness of the liquid-fill hard capsule format, together with its manufacturing and banding processes. It also indicates that various peroxidation-sensitive products using the capsule format may be maintained satisfactorily over very prolonged storage periods.

  17. Inferring the photometric and size evolution of galaxies from image simulations. I. Method

    NASA Astrophysics Data System (ADS)

    Carassou, Sébastien; de Lapparent, Valérie; Bertin, Emmanuel; Le Borgne, Damien

    2017-09-01

    Context. Current constraints on models of galaxy evolution rely on morphometric catalogs extracted from multi-band photometric surveys. However, these catalogs are altered by selection effects that are difficult to model, that correlate in non-trivial ways, and that can lead to contradictory predictions if not taken into account carefully. Aims: To address this issue, we have developed a new approach combining parametric Bayesian indirect likelihood (pBIL) techniques and empirical modeling with realistic image simulations that reproduce a large fraction of these selection effects. This allows us to perform a direct comparison between observed and simulated images and to infer robust constraints on model parameters. Methods: We use a semi-empirical forward model to generate a distribution of mock galaxies from a set of physical parameters. These galaxies are passed through an image simulator reproducing the instrumental characteristics of any survey and are then extracted in the same way as the observed data. The discrepancy between the simulated and observed data is quantified, and minimized with a custom sampling process based on adaptive Markov chain Monte Carlo methods. Results: Using synthetic data matching most of the properties of a Canada-France-Hawaii Telescope Legacy Survey Deep field, we demonstrate the robustness and internal consistency of our approach by inferring the parameters governing the size and luminosity functions and their evolutions for different realistic populations of galaxies. We also compare the results of our approach with those obtained from the classical spectral energy distribution fitting and photometric redshift approach. Conclusions: Our pipeline efficiently infers the luminosity and size distributions and their evolution parameters with a very limited number of observables (three photometric bands). 
When compared to SED fitting based on the same set of observables, our method yields results that are more accurate and free from systematic biases.
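    The sampling engine behind such a pipeline is, at its core, a random-walk Metropolis chain on the model parameters, with the proposal scale adapted during burn-in. The sketch below shows that skeleton on a deliberately simple target (the posterior mean of a Gaussian under a flat prior); the data, adaptation rule, and target are illustrative assumptions standing in for the paper's image-discrepancy likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, 200)        # stand-in "observations"

def log_post(mu):
    # flat prior + Gaussian likelihood with known unit variance
    return -0.5 * np.sum((data - mu) ** 2)

mu, step, accepts = 0.0, 1.0, 0
chain = []
for i in range(6000):
    prop = mu + step * rng.normal()     # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu, accepts = prop, accepts + 1
    if i < 1000 and (i + 1) % 100 == 0:
        # crude burn-in adaptation toward a moderate acceptance rate
        step *= 1.2 if accepts / (i + 1) > 0.44 else 0.8
    if i >= 1000:
        chain.append(mu)

post_mean = np.mean(chain)              # should track the sample mean of data
```

    In the pBIL setting the Gaussian log-likelihood above is replaced by a discrepancy between binned statistics of observed and simulated images, but the accept/reject and adaptation machinery is the same.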

  18. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water conditions, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
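    The ingredients named above, a Godunov-type finite volume update with an approximate Riemann flux and a wet/dry guard, can be shown in one dimension. The sketch below runs a short wet-bed dam break with an HLL flux on a uniform grid; the HLL solver, CFL number, and dry threshold are standard textbook choices, not the paper's 2D unstructured scheme.

```python
import numpy as np

g, DRY = 9.81, 1e-6

def hll_flux(hL, uL, hR, uR):
    """HLL approximate Riemann flux for the 1-D shallow water equations."""
    FL = np.array([hL * uL, hL * uL ** 2 + 0.5 * g * hL ** 2])
    FR = np.array([hR * uR, hR * uR ** 2 + 0.5 * g * hR ** 2])
    cL, cR = np.sqrt(g * hL), np.sqrt(g * hR)
    sL, sR = min(uL - cL, uR - cR), max(uL + cL, uR + cR)
    if sL >= 0: return FL
    if sR <= 0: return FR
    UL, UR = np.array([hL, hL * uL]), np.array([hR, hR * uR])
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)

# Wet-bed dam break; run short enough that waves stay off the boundaries.
n, dx = 200, 0.01
h = np.where(np.arange(n) < n // 2, 2.0, 1.0)
hu = np.zeros(n)
for _ in range(40):
    u = np.where(h > DRY, hu / np.maximum(h, DRY), 0.0)   # wet/dry guard
    dt = 0.4 * dx / np.max(np.abs(u) + np.sqrt(g * h))    # CFL time step
    F = np.array([hll_flux(h[i], u[i], h[i + 1], u[i + 1]) for i in range(n - 1)])
    h[1:-1] -= dt / dx * (F[1:, 0] - F[:-1, 0])           # conservative update
    hu[1:-1] -= dt / dx * (F[1:, 1] - F[:-1, 1])

mass = h.sum() * dx    # conserved while waves remain interior
```

    The conservative flux-difference form is what makes mass conservation automatic, one of the robustness properties the paper relies on at large scale.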

  19. Neural network uncertainty assessment using Bayesian statistics: a remote sensing application

    NASA Technical Reports Server (NTRS)

    Aires, F.; Prigent, C.; Rossow, W. B.

    2004-01-01

    Neural network (NN) techniques have proved successful for many regression problems, in particular for remote sensing; however, uncertainty estimates are rarely provided. In this article, a Bayesian technique to evaluate uncertainties of the NN parameters (i.e., synaptic weights) is first presented. In contrast to more traditional approaches based on point estimation of the NN weights, we assess uncertainties on such estimates to monitor the robustness of the NN model. These theoretical developments are illustrated by applying them to the problem of retrieving surface skin temperature, microwave surface emissivities, and integrated water vapor content from a combined analysis of satellite microwave and infrared observations over land. The weight uncertainty estimates are then used to compute analytically the uncertainties in the network outputs (i.e., error bars and correlation structure of these errors). Such quantities are very important for evaluating any application of an NN model. The uncertainties on the NN Jacobians are then considered in the third part of this article. Used for regression fitting, NN models can be used effectively to represent highly nonlinear, multivariate functions. In this situation, most emphasis is put on estimating the output errors, but almost no attention has been given to errors associated with the internal structure of the regression model. The complex structure of dependency inside the NN is the essence of the model, and assessing its quality, coherency, and physical character makes all the difference between a blackbox model with small output errors and a reliable, robust, and physically coherent model. Such dependency structures are described to the first order by the NN Jacobians: they indicate the sensitivity of one output with respect to the inputs of the model for given input data. We use a Monte Carlo integration procedure to estimate the robustness of the NN Jacobians. 
A regularization strategy based on principal component analysis is proposed to suppress the multicollinearities in order to make these Jacobians robust and physically meaningful.
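    The two objects discussed, the NN Jacobian and its Monte Carlo spread under weight uncertainty, are concrete and small enough to sketch directly. Below, a tiny 2-3-1 tanh network gets its input Jacobian via the chain rule, then the Jacobian is re-evaluated under Gaussian weight perturbations; the architecture, perturbation scale, and sample count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Tiny 2-3-1 MLP: y = w2 . tanh(W1 @ x + b1)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)
w2 = rng.normal(size=3)

def jacobian(x, W1, b1, w2):
    """dy/dx by the chain rule: (w2 * sech^2 of hidden preactivations) @ W1."""
    s = np.tanh(W1 @ x + b1)
    return (w2 * (1 - s ** 2)) @ W1      # shape (2,): sensitivity to each input

x = np.array([0.3, -0.7])
J = jacobian(x, W1, b1, w2)

# Monte Carlo over weight perturbations: the spread of the Jacobian entries
# indicates how robust the input sensitivities are to weight uncertainty.
samples = np.array([jacobian(x, W1 + 0.01 * rng.normal(size=W1.shape), b1, w2)
                    for _ in range(500)])
J_mean, J_std = samples.mean(axis=0), samples.std(axis=0)
```

    Large J_std relative to J_mean for some input flags the kind of physically incoherent sensitivity that the article's PCA-based regularization is designed to suppress.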

  20. Multi-point objective-oriented sequential sampling strategy for constrained robust design

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Zhang, Siliang; Chen, Wei

    2015-03-01

    Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
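    The multi-point idea, selecting several infill samples per iteration rather than one, can be shown with a deliberately simple criterion: greedily pick candidates that maximize distance to the current design, updating the design after each pick. This maximin-distance rule is only a stand-in for the paper's objective-oriented criterion (which also weighs the predicted objective and constraints); the design, candidate pool, and batch size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(6, 2))        # existing design points in [0,1]^2
cand = rng.uniform(0, 1, size=(500, 2))   # candidate pool

# Single-point infill: candidate farthest from its nearest design point.
d = np.min(np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2), axis=1)
x_next = cand[np.argmax(d)]

# Multi-point variant: pick k points greedily, updating distances after
# each pick so the batch spreads out instead of clustering.
k, picks, Xa = 3, [], X.copy()
for _ in range(k):
    d = np.min(np.linalg.norm(cand[:, None, :] - Xa[None, :, :], axis=2), axis=1)
    j = int(np.argmax(d))
    picks.append(cand[j])
    Xa = np.vstack([Xa, cand[j]])
```

    The greedy distance update is what prevents a batch from collapsing onto one promising region, the same concern the double-loop strategy addresses with its objective-oriented criterion.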
