A Tomographic Method for the Reconstruction of Local Probability Density Functions
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A method of obtaining the probability density function (PDF) of local properties from path-integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path-integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames, and the two sets of PDFs are in good agreement.
Probability techniques for reliability analysis of composite materials
NASA Technical Reports Server (NTRS)
Wetherhold, Robert C.; Ucci, Anthony M.
1994-01-01
Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which will be evaluated are: the first order, second moment FPI methods; second order, second moment FPI methods; the simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservativism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
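A minimal sketch of the sampling-based side of this comparison may help fix ideas: reliability is posed as P[g(X) < 0] for an explicit limit-state function g, estimated by plain Monte Carlo and by an importance-sampling variant. The lognormal/normal strength-stress limit state and all parameters below are invented for illustration and are not the report's composite failure criteria.

```python
# Hedged sketch: reliability as P[g(X) < 0] for a simple strength-stress
# limit state, estimated by plain Monte Carlo and by importance sampling.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000

def g(strength, stress):
    # Limit state: failure when g < 0 (stress exceeds strength)
    return strength - stress

# Illustrative random variables (lognormal strength, normal stress), in MPa
strength = rng.lognormal(mean=np.log(600.0), sigma=0.08, size=n)
stress = rng.normal(loc=400.0, scale=40.0, size=n)
p_mc = np.mean(g(strength, stress) < 0.0)

# Importance sampling: draw stress from a distribution shifted toward failure,
# then re-weight each sample by the ratio of true to sampling density.
stress_is = rng.normal(loc=520.0, scale=40.0, size=n)
strength_is = rng.lognormal(mean=np.log(600.0), sigma=0.08, size=n)
w = norm.pdf(stress_is, loc=400.0, scale=40.0) / norm.pdf(stress_is, loc=520.0, scale=40.0)
p_is = np.mean(w * (g(strength_is, stress_is) < 0.0))

print(f"plain Monte Carlo:   Pf ~ {p_mc:.2e}")
print(f"importance sampling: Pf ~ {p_is:.2e}")
```

The importance-sampling estimate has far lower variance at small failure probabilities, which is the regime where the report compares FPI against Monte Carlo.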
Approximating Integrals Using Probability
ERIC Educational Resources Information Center
Maruszewski, Richard F., Jr.; Caudle, Kyle A.
2005-01-01
As part of a discussion of Monte Carlo methods, this article outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
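The idea fits in a few lines; the sketch below uses Python rather than the paper's Visual Basic, and the integrand is an arbitrary example.

```python
# Minimal sketch: a definite integral written as an expectation,
# (b - a) * E[f(U)] for U ~ Uniform(a, b), approximated by averaging draws.
import numpy as np

def mc_integral(f, a, b, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.uniform(a, b, size=n)
    return (b - a) * np.mean(f(u))

# Example: integral of x**2 from 0 to 1 equals 1/3
print(mc_integral(lambda x: x**2, 0.0, 1.0))   # ~ 0.333
```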
An information diffusion technique to assess integrated hazard risks.
Huang, Chongfu; Huang, Yundong
2018-02-01
An integrated risk is a scene in the future associated with some adverse incident caused by multiple hazards. An integrated probability risk is the expected value of disaster. Due to the difficulty of assessing an integrated probability risk with a small sample, weighting methods and copulas are employed to avoid this obstacle. To resolve the problem, in this paper, we develop the information diffusion technique to construct a joint probability distribution and a vulnerability surface. Then, an integrated risk can be directly assessed by using a small sample. A case of an integrated risk caused by flood and earthquake is given to show how the suggested technique is used to assess the integrated risk of annual property loss.
Fusion of Scores in a Detection Context Based on Alpha Integration.
Soriano, Antonio; Vergara, Luis; Ahmed, Bouziane; Salazar, Addisson
2015-09-01
We present a new method for fusing scores corresponding to different detectors (two-hypotheses case). It is based on alpha integration, which we have adapted to the detection context. Three optimization methods are presented: least mean square error, maximization of the area under the ROC curve, and minimization of the probability of error. Gradient algorithms are proposed for the three methods. Different experiments with simulated and real data are included. Simulated data consider the two-detector case to illustrate the factors influencing alpha integration and demonstrate the improvements obtained by score fusion with respect to individual detector performance. Two real data cases have been considered. In the first, multimodal biometric data have been processed. This case is representative of scenarios in which the probability of detection is to be maximized for a given probability of false alarm. The second case is the automatic analysis of electroencephalogram and electrocardiogram records with the aim of reproducing the medical expert detections of arousal during sleeping. This case is representative of scenarios in which probability of error is to be minimized. The general superior performance of alpha integration verifies the interest of optimizing the fusing parameters.
NASA Astrophysics Data System (ADS)
Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia
2016-10-01
Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin i, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, the Lucy estimate lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin i data directly without the need for any convergence criteria.
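As an illustration of the inversion step (not the authors' code), the discretized Fredholm problem p = K f can be regularized and solved in closed form; the kernel, noise level, and regularization parameter below are placeholders, not the actual v sin i projection kernel, and the paper proposes its own procedure for choosing the Tikhonov parameter.

```python
# Hedged sketch: Tikhonov-regularized solution of a discretized Fredholm
# integral p = K f,  f_hat = (K^T K + lam I)^(-1) K^T p.
import numpy as np

def tikhonov_solve(K, p, lam):
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ p)

# Synthetic test: a smooth "true" distribution blurred by K plus noise
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 80)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.05) ** 2)   # illustrative blurring kernel
K /= K.sum(axis=1, keepdims=True)
f_true = np.exp(-0.5 * ((x - 0.5) / 0.1) ** 2)
p_obs = K @ f_true + rng.normal(0.0, 0.01, size=x.size)

f_hat = tikhonov_solve(K, p_obs, lam=1e-3)   # lam chosen ad hoc here
print("max reconstruction error:", np.max(np.abs(f_hat - f_true)))
```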
Option volatility and the acceleration Lagrangian
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Cao, Yang
2014-01-01
This paper develops a volatility formula for an option on an asset from an acceleration Lagrangian model, and the formula is calibrated with market data. The Black-Scholes model is a simpler case that has a velocity-dependent Lagrangian. The acceleration Lagrangian is defined, and the classical solution of the system in Euclidean time is obtained by choosing proper boundary conditions. The conditional probability distribution of the final position given the initial position is obtained from the transition amplitude. The volatility is the standard deviation of the conditional probability distribution. Using the conditional probability and the path integral method, the martingale condition is applied, and one of the parameters in the Lagrangian is fixed. The call option price is obtained using the conditional probability and the path integral method.
Advanced reliability methods for structural evaluation
NASA Technical Reports Server (NTRS)
Wirsching, P. H.; Wu, Y.-T.
1985-01-01
Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.
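A hedged sketch of the surrogate strategy described above: k perturbed runs of an expensive model, a polynomial response surface fitted to those runs, and a probability evaluated on the explicit surrogate. The toy model and the use of Monte Carlo (rather than FPI) on the final surrogate are assumptions for illustration only.

```python
# Hedged sketch: response surface fit to k deterministic runs, then the
# exceedance probability evaluated cheaply on the explicit surrogate.
import numpy as np

rng = np.random.default_rng(2)

def expensive_model(x1, x2):                 # stand-in for the real analysis code
    return 3.0 + 1.5 * x1 - 0.8 * x2 + 0.3 * x1 * x2

# k deterministic runs at perturbed values of the inputs
k_pts = np.array([(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)], dtype=float)
y_runs = np.array([expensive_model(a, b) for a, b in k_pts])

# Fit a quadratic response surface Y ~ c0 + c1*x1 + c2*x2 + c3*x1*x2 + ...
def design(pts):
    x1, x2 = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(design(k_pts), y_runs, rcond=None)
surrogate = lambda pts: design(pts) @ coef

# Probability that Y exceeds a critical value, from the explicit surrogate
samples = rng.normal(size=(1_000_000, 2))        # inputs assumed standard normal
p_exceed = np.mean(surrogate(samples) > 6.0)
print(f"P(Y > 6) ~ {p_exceed:.2e}")
```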
Predicting the probability of slip in gait: methodology and distribution study.
Gragg, Jared; Yang, James
2016-01-01
The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
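For concreteness, the single-integral form and its trapezoidal evaluation can be sketched as follows; the friction distributions are illustrative placeholders chosen to show that non-normal (here lognormal) distributions are handled the same way.

```python
# Hedged illustration of the single-integral form,
#   P(slip) = P(available < required) = integral of f_req(x) * F_avail(x) dx,
# evaluated with the trapezoidal rule.
import numpy as np
from scipy import stats

required = stats.norm(loc=0.20, scale=0.05)        # required friction (illustrative)
available = stats.lognorm(s=0.25, scale=0.35)      # available friction (illustrative)

x = np.linspace(0.0, 1.0, 2001)
integrand = required.pdf(x) * available.cdf(x)

# Trapezoidal rule written out explicitly
dx = x[1] - x[0]
p_slip = dx * (0.5 * integrand[0] + integrand[1:-1].sum() + 0.5 * integrand[-1])
print(f"P(slip) ~ {p_slip:.3e}")
```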
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
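A minimal sketch of the MPP search in standard normal space, with FORM's approximation Pf ~ Phi(-beta); the limit-state function below is an arbitrary example, not one from the paper.

```python
# Hedged sketch: the MPP is the point on g(u) = 0 closest to the origin in
# standard normal space; beta = ||u*|| and FORM gives Pf ~ Phi(-beta).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def g(u):                       # illustrative limit state in u-space
    return 3.0 - u[0] - 0.5 * u[1] ** 2

res = minimize(fun=lambda u: np.dot(u, u),          # squared distance to origin
               x0=np.array([1.0, 1.0]),
               constraints=[{"type": "eq", "fun": g}],
               method="SLSQP")

u_mpp = res.x
beta = np.linalg.norm(u_mpp)
print("MPP:", u_mpp, " beta:", beta, " FORM Pf:", norm.cdf(-beta))
```

The derivatives of this optimal solution with respect to design parameters are exactly the quantities the paper uses for reliability sensitivity analysis and RBDO.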
Student Solution Manual for Essential Mathematical Methods for the Physical Sciences
NASA Astrophysics Data System (ADS)
Riley, K. F.; Hobson, M. P.
2011-02-01
1. Matrices and vector spaces; 2. Vector calculus; 3. Line, surface and volume integrals; 4. Fourier series; 5. Integral transforms; 6. Higher-order ODEs; 7. Series solutions of ODEs; 8. Eigenfunction methods; 9. Special functions; 10. Partial differential equations; 11. Solution methods for PDEs; 12. Calculus of variations; 13. Integral equations; 14. Complex variables; 15. Applications of complex variables; 16. Probability; 17. Statistics.
Essential Mathematical Methods for the Physical Sciences
NASA Astrophysics Data System (ADS)
Riley, K. F.; Hobson, M. P.
2011-02-01
1. Matrices and vector spaces; 2. Vector calculus; 3. Line, surface and volume integrals; 4. Fourier series; 5. Integral transforms; 6. Higher-order ODEs; 7. Series solutions of ODEs; 8. Eigenfunction methods; 9. Special functions; 10. Partial differential equations; 11. Solution methods for PDEs; 12. Calculus of variations; 13. Integral equations; 14. Complex variables; 15. Applications of complex variables; 16. Probability; 17. Statistics; Appendices; Index.
The description of two-photon Rabi oscillations in the path integral approach
NASA Astrophysics Data System (ADS)
Biryukov, A. A.; Degtyareva, Ya. V.; Shleenkov, M. A.
2018-04-01
The probability of quantum transitions of a molecule between its states under the action of an electromagnetic field is represented as an integral over trajectories of a real alternating functional. A method is proposed for computing the integral using recurrence relations. The method is applied to describe two-photon Rabi oscillations.
Structural reliability methods: Code development status
NASA Astrophysics Data System (ADS)
Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.
1991-05-01
The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
A Discrete Probability Function Method for the Equation of Radiative Transfer
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A discrete probability function (DPF) method for the equation of radiative transfer is derived. The DPF is defined as the integral of the probability density function (PDF) over a discrete interval. The derivation allows the evaluation of the PDF of intensities leaving desired radiation paths including turbulence-radiation interactions without the use of computer intensive stochastic methods. The DPF method has a distinct advantage over conventional PDF methods since the creation of a partial differential equation from the equation of transfer is avoided. Further, convergence of all moments of intensity is guaranteed at the basic level of simulation unlike the stochastic method where the number of realizations for convergence of higher order moments increases rapidly. The DPF method is described for a representative path with approximately integral-length scale-sized spatial discretization. The results show good agreement with measurements in a propylene/air flame except for the effects of intermittency resulting from highly correlated realizations. The method can be extended to the treatment of spatial correlations as described in the Appendix. However, information regarding spatial correlations in turbulent flames is needed prior to the execution of this extension.
The Laplace method for probability measures in Banach spaces
NASA Astrophysics Data System (ADS)
Piterbarg, V. I.; Fatalov, V. R.
1995-12-01
Contents
§1. Introduction
Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
§2. The large deviation principle and logarithmic asymptotics of continual integrals
§3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
3.4. Exact asymptotics of large deviations of Gaussian norms
§4. The Laplace method for distributions of sums of independent random elements with values in Banach space
4.1. The case of a non-degenerate minimum point ([137], I)
4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
§5. Further examples
5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
§6. Pickands' method of double sums
6.1. General situations
6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
§7. Probabilities of large deviations of trajectories of Gaussian fields
7.1. Homogeneous fields and fields with constant dispersion
7.2. Finitely many maximum points of dispersion
7.3. Manifold of maximum points of dispersion
7.4. Asymptotics of distributions of maxima of Wiener fields
§8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type \chi^2
8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]
8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes
8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process
Bibliography
Probabilistic methods for rotordynamics analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.
1991-01-01
This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
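The instability criterion can be illustrated with a small Monte Carlo sketch; the paper's point is precisely that FPI and adaptive importance sampling are far more efficient than this brute-force loop. The 2-DOF system and its random parameters are invented for illustration.

```python
# Hedged sketch: for M x'' + C x' + K x = 0 with random parameters, the system
# is unstable if any eigenvalue of the first-order state matrix has a positive
# real part; the instability probability is estimated by plain Monte Carlo.
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_unstable = 20_000, 0

for _ in range(n_samples):
    # Illustrative 2-DOF system: uncertain damping and an uncertain
    # non-symmetric (circulatory) coupling term that can destabilize it.
    M = np.eye(2)
    c = rng.normal(0.4, 0.1)
    q = rng.normal(0.3, 0.15)
    C = c * np.eye(2)
    K = np.array([[1.0, q],
                  [-q, 1.5]])
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
    if np.max(np.linalg.eigvals(A).real) > 0.0:
        n_unstable += 1

print(f"P(instability) ~ {n_unstable / n_samples:.3f}")
```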
NASA Astrophysics Data System (ADS)
Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide
2016-12-01
We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
Robust approaches to quantification of margin and uncertainty for sparse data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hund, Lauren; Schroeder, Benjamin B.; Rumsey, Kelin
Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low probability, high consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
Improved Measures of Integrated Information
Tegmark, Max
2016-01-01
Although there is growing interest in measuring integrated information in computational and cognitive systems, current methods for doing so in practice are computationally infeasible. Existing and novel integration measures are investigated and classified by various desirable properties. A simple taxonomy of Φ-measures is presented where they are each characterized by their choice of factorization method (5 options), choice of probability distributions to compare (3 × 4 options) and choice of measure for comparing probability distributions (7 options). When requiring the Φ-measures to satisfy a minimum of attractive properties, these hundreds of options reduce to a mere handful, some of which turn out to be identical. Useful exact and approximate formulas are derived that can be applied to real-world data from laboratory experiments without posing unreasonable computational demands.
Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M
2011-07-01
Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers from limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high-scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improved sensitivity in differential expression analyses.
Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I.; Marcotte, Edward M.
2011-01-01
Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers from limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high-scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for all possible PSMs and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for all detected proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improved sensitivity in differential expression analyses.
Integrating Future Information through Scenarios. AIR 1985 Annual Forum Paper.
ERIC Educational Resources Information Center
Zentner, Rene D.
The way that higher education planners can take into account changes in the post-industrial society is discussed. The scenario method is proposed as a method of integrating futures information. The planner can be provided with several probable futures, each of which can be incorporated in a scenario. An effective scenario provides the planner…
Cuba: Multidimensional numerical integration library
NASA Astrophysics Data System (ADS)
Hahn, Thomas
2016-08-01
The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods, yet all can integrate vector integrands and have very similar Fortran, C/C++, and Mathematica interfaces. Their invocation is very similar, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability that quantifies the reliability of the error estimate.
Storm-based Cloud-to-Ground Lightning Probabilities and Warnings
NASA Astrophysics Data System (ADS)
Calhoun, K. M.; Meyer, T.; Kingfield, D.
2017-12-01
A new cloud-to-ground (CG) lightning probability algorithm has been developed using machine-learning methods. With storm-based inputs of Earth Networks' in-cloud lightning, Vaisala's CG lightning, multi-radar/multi-sensor (MRMS) radar-derived products including the Maximum Expected Size of Hail (MESH) and Vertically Integrated Liquid (VIL), and near-storm environmental data including lapse rate and CAPE, a random forest algorithm was trained to produce probabilities of CG lightning up to one hour in advance. As part of the Prototype Probabilistic Hazard Information experiment in the Hazardous Weather Testbed in 2016 and 2017, National Weather Service forecasters were asked to use this CG lightning probability guidance to create rapidly updating probability grids and warnings for the threat of CG lightning for 0-60 minutes. The output from forecasters was shared with end-users, including emergency managers and broadcast meteorologists, as part of an integrated warning team.
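A hedged sketch of the learning step described above; the feature set, the synthetic training data, and the scikit-learn random forest stand in for the operational MRMS/environmental inputs and model, which are not reproduced here.

```python
# Hedged sketch: a random forest trained on storm-based predictors to output
# a 0-60 minute CG-lightning probability.  Features and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
n = 5000
X = np.column_stack([
    rng.gamma(2.0, 8.0, n),        # mesh (mm)
    rng.gamma(2.0, 10.0, n),       # vil (kg m^-2)
    rng.normal(6.5, 1.0, n),       # lapse_rate (K km^-1)
    rng.gamma(2.0, 600.0, n),      # cape (J kg^-1)
])
# Synthetic labels: storms with larger MESH/VIL/CAPE more often produce CG
logit = 0.05 * X[:, 0] + 0.03 * X[:, 1] + 0.4 * (X[:, 2] - 6.5) + 0.001 * X[:, 3] - 3.0
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
new_storm = np.array([[30.0, 25.0, 7.2, 1500.0]])
print("P(CG lightning within 1 h) ~", model.predict_proba(new_storm)[0, 1])
```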
Probabilistic structural analysis methods for select space propulsion system components
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Cruse, T. A.
1989-01-01
The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The NESSUS software system is shown. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library is present.
Integration of Geophysical Methods By A Generalised Probability Tomography Approach
NASA Astrophysics Data System (ADS)
Mauriello, P.; Patella, D.
In modern science, the propensity interpretative approach stands on the assumption that any physical system consists of two kinds of reality: actual and potential. Also geophysical data systems have potentialities that extend far beyond the few actual models normally attributed to them. Indeed, any geophysical data set is in itself quite inherently ambiguous. Classical deterministic inversion, including tomography, usually forces a measured data set to collapse into a few rather subjective models based on some available a priori information. Classical interpretation is thus an intrinsically limited approach requiring a very deep logical extension. We think that a way to highlight a system's full potentiality is to introduce probability as the leading paradigm in dealing with field data systems. Probability tomography has recently been introduced as a completely new approach to data interpretation. Probability tomography was originally formulated for the self-potential method. It has since been extended to geoelectric, natural-source electromagnetic induction, gravity and magnetic methods. Following the same rationale, in this paper we generalize the probability tomography theory to a generic geophysical anomaly vector field, including the treatment for scalar fields as a particular case. This generalization then makes it possible to address for the first time the problem of the integration of different methods by a conjoint probability tomography imaging procedure. The aim is to infer the existence of an unknown buried object through the analysis of an ad hoc occurrence probability function, blending the physical messages brought forth by a set of singularly observed anomalies.
Reply to Efford on ‘Integrating resource selection information with spatial capture-recapture’
Royle, Andy; Chandler, Richard; Sun, Catherine C.; Fuller, Angela K.
2014-01-01
3. A key point of Royle et al. (Methods in Ecology and Evolution, 2013, 4) was that active resource selection induces heterogeneity in encounter probability which, if unaccounted for, should bias estimates of population size or density. The models of Royle et al. (Methods in Ecology and Evolution, 2013, 4) and Efford (Methods in Ecology and Evolution, 2014, 000, 000) merely amount to alternative models of resource selection, and hence varying amounts of heterogeneity in encounter probability.
Zhu, Lin; Dai, Zhenxue; Gong, Huili; ...
2015-06-12
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with optimized different integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
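The transition-probability building block can be sketched in a few lines in the spirit of Markov-chain facies models: a continuous-lag transition probability matrix T(h) = expm(Rh) whose rates follow from assumed mean lengths and volumetric proportions. The facies names and numbers below are invented, and this is not the paper's multi-zone inversion.

```python
# Hedged sketch: continuous-lag transition probability matrix T(h) = expm(R*h);
# diagonal rates come from facies mean lengths, off-diagonal rates are spread
# according to volumetric proportions (illustrative construction only).
import numpy as np
from scipy.linalg import expm

facies = ["gravel", "sand", "clay"]
mean_len = np.array([4.0, 7.0, 12.0])       # mean lengths along the lag direction (m)
proportions = np.array([0.20, 0.35, 0.45])  # volumetric proportions

m = len(facies)
R = np.zeros((m, m))                        # generator: R[i, j] = rate i -> j
for k in range(m):
    R[k, k] = -1.0 / mean_len[k]
    others = [j for j in range(m) if j != k]
    weights = proportions[others] / proportions[others].sum()
    R[k, others] = weights / mean_len[k]    # off-diagonal rates sum to 1/mean_len

for h in (1.0, 5.0, 20.0):                  # lag distances (m)
    T = expm(R * h)                         # T[i, j] = P(facies j at x+h | facies i at x)
    print(f"h = {h:5.1f} m   P(sand -> clay) = {T[1, 2]:.3f}")
```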
NASA Astrophysics Data System (ADS)
Wang, Yu; Jiang, Wenchun; Luo, Yun; Zhang, Yucai; Tu, Shan-Tung
2017-12-01
The reduction and re-oxidation of the anode have significant effects on the integrity of a solid oxide fuel cell (SOFC) sealed by glass-ceramic (GC). The mechanical failure is mainly controlled by the stress distribution. Therefore, a three-dimensional model of the SOFC is established in this paper to investigate the stress evolution during reduction and re-oxidation by the finite element method (FEM), and the failure probability is calculated using the Weibull method. The results demonstrate that the reduction of the anode can decrease the thermal stresses and reduce the failure probability, owing to volumetric contraction and increasing porosity. Re-oxidation can result in a remarkable increase of the thermal stresses, and the failure probabilities of the anode, cathode, electrolyte and GC all increase to 1, mainly because of the large linear strain rather than the decrease in porosity. The cathode and electrolyte fail as soon as their linear strains reach about 0.03% and 0.07%, respectively. Therefore, re-oxidation should be controlled to ensure integrity, and a lower re-oxidation temperature can decrease the stress and failure probability.
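The Weibull step can be illustrated with a weakest-link sketch; the element stresses, volumes, and Weibull parameters below are placeholders rather than values from the paper, but they reproduce the qualitative conclusion that re-oxidation drives the failure probability toward 1.

```python
# Hedged sketch: two-parameter weakest-link Weibull failure probability from
# element-wise FEM stresses,  Pf = 1 - exp(-sum_e (V_e/V0) * (sigma_e/sigma0)^m).
import numpy as np

def weibull_failure_probability(stress, volume, m, sigma0, v0=1.0):
    s = np.clip(stress, 0.0, None)            # only tensile stress contributes
    risk = np.sum((volume / v0) * (s / sigma0) ** m)
    return 1.0 - np.exp(-risk)

# Illustrative "FEM output": element stresses (MPa) and volumes (mm^3)
rng = np.random.default_rng(5)
stress_reduced = rng.normal(35.0, 10.0, size=500)      # after anode reduction
stress_reoxidised = rng.normal(90.0, 20.0, size=500)   # after re-oxidation
volumes = np.full(500, 0.2)

m, sigma0 = 7.0, 120.0    # Weibull modulus and characteristic strength (per reference volume v0)
print("Pf (reduced)     ~", weibull_failure_probability(stress_reduced, volumes, m, sigma0))
print("Pf (re-oxidised) ~", weibull_failure_probability(stress_reoxidised, volumes, m, sigma0))
```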
Paninski, Liam; Haith, Adrian; Szirtes, Gabor
2008-02-01
We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.
Research of the orbital evolution of asteroid 2012 DA14 (in Russian)
NASA Astrophysics Data System (ADS)
Zausaev, A. F.; Denisov, S. S.; Derevyanka, A. E.
The orbital evolution of asteroid 2012 DA14 is studied over the time interval from 1800 to 2206; close approaches of the object to the Earth and the Moon are identified, and the probability of an impact with the Earth is calculated. The mathematical model used is consistent with DE405, the integration was performed using a modified Everhart method of 27th order, and the collision probability was calculated using the Monte Carlo method.
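A toy illustration of the Monte Carlo step only (the paper's actual propagation uses the modified 27th-order Everhart integrator, which is not reproduced here): virtual asteroids are drawn from an assumed target-plane uncertainty, and the impact probability is the fraction that falls inside the Earth's capture radius. All numbers below are invented.

```python
# Hedged sketch: impact probability as the fraction of sampled "virtual
# asteroids" whose encounter distance is below the Earth's capture radius.
import numpy as np

rng = np.random.default_rng(6)
n_clones = 200_000

# Toy encounter model: nominal miss distance and its covariance (km) in
# illustrative target-plane coordinates.
nominal_miss = np.array([27_000.0, 5_000.0])
cov = np.array([[4.0e8, 1.0e8],
                [1.0e8, 9.0e8]])

clones = rng.multivariate_normal(nominal_miss, cov, size=n_clones)
miss_distance = np.linalg.norm(clones, axis=1)

earth_capture_radius = 6_500.0        # km, roughly Earth radius plus a margin
p_impact = np.mean(miss_distance < earth_capture_radius)
print(f"impact probability ~ {p_impact:.1e}")
```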
Probabilistic structural analysis methods and applications
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Wu, Y.-T.; Dias, B.; Rajagopal, K. R.
1988-01-01
An advanced algorithm for simulating the probabilistic distribution of structural responses due to statistical uncertainties in loads, geometry, material properties, and boundary conditions is reported. The method effectively combines an advanced algorithm for calculating probability levels for multivariate problems (fast probability integration) together with a general-purpose finite-element code for stress, vibration, and buckling analysis. Application is made to a space propulsion system turbine blade for which the geometry and material properties are treated as random variables.
Enforcing positivity in intrusive PC-UQ methods for reactive ODE systems
Najm, Habib N.; Valorani, Mauro
2014-04-12
We explore the relation between the development of a non-negligible probability of negative states and the instability of numerical integration of the intrusive Galerkin ordinary differential equation system describing uncertain chemical ignition. To prevent this instability without resorting to either multi-element local polynomial chaos (PC) methods or increasing the order of the PC representation in time, we propose a procedure aimed at modifying the amplitude of the PC modes to bring the probability of negative state values below a user-defined threshold. This modification can be effectively described as a filtering procedure of the spectral PC coefficients, which is applied on-the-fly during the numerical integration when the current value of the probability of negative states exceeds the prescribed threshold. We demonstrate the filtering procedure using a simple model of an ignition process in a batch reactor. This is carried out by comparing different observables and error measures as obtained by non-intrusive Monte Carlo and Gauss-quadrature integration and the filtered intrusive procedure. Lastly, the filtering procedure has been shown to effectively stabilize divergent intrusive solutions, and also to improve the accuracy of stable intrusive solutions which are close to the stability limits.
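A hedged sketch of the filtering idea, not the authors' implementation: estimate the probability of negative states from the PC expansion and damp the non-mean modes until that probability falls below a threshold. A one-dimensional Hermite (Gaussian) chaos and the coefficients below are assumptions for illustration.

```python
# Hedged sketch: damp higher-order PC mode amplitudes until the estimated
# probability of negative state values drops below a user-defined threshold.
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def prob_negative(coeffs, xi):
    # Evaluate u(xi) = sum_k c_k He_k(xi) at samples of the germ and count negatives
    return np.mean(hermeval(xi, coeffs) < 0.0)

def filter_pc_modes(coeffs, xi, threshold=1e-3, damp=0.9):
    c = np.array(coeffs, dtype=float)
    while prob_negative(c, xi) > threshold:
        c[1:] *= damp                 # shrink the non-mean modes; the mean mode is preserved
    return c

rng = np.random.default_rng(7)
xi = rng.normal(size=200_000)                       # samples of the Gaussian germ
coeffs = np.array([0.8, 0.5, 0.15, 0.05])           # illustrative PC modes

print("before:", prob_negative(coeffs, xi))
filtered = filter_pc_modes(coeffs, xi, threshold=1e-3)
print("after: ", prob_negative(filtered, xi), filtered)
```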
Random Numbers and Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally, the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
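A minimal Metropolis sketch in the spirit of this chapter: a symmetric random-walk proposal with the standard acceptance rule samples a target density known only up to normalization, so that chain averages approximate expectation values. The double-well density is an arbitrary example.

```python
# Minimal Metropolis sketch: sample from an unnormalized density and use the
# chain to estimate an expectation value.
import numpy as np

rng = np.random.default_rng(8)

def p_unnormalized(x):                       # target density, up to a constant
    return np.exp(-(x**2 - 1.0) ** 2 / 0.5)

def metropolis(n_steps, step=0.5, x0=0.0):
    x, samples = x0, np.empty(n_steps)
    for i in range(n_steps):
        proposal = x + rng.uniform(-step, step)
        if rng.uniform() < p_unnormalized(proposal) / p_unnormalized(x):
            x = proposal                      # accept; otherwise keep the current x
        samples[i] = x
    return samples

chain = metropolis(200_000)
print("<x^2> ~", np.mean(chain[10_000:] ** 2))   # discard burn-in
```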
Hostetter, Nathan J.; Evans, Allen F.; Cramer, Bradley M.; Collis, Ken; Lyons, Donald E.; Roby, Daniel D.
2015-01-01
Accurate assessment of specific mortality factors is vital to prioritize recovery actions for threatened and endangered species. For decades, tag recovery methods have been used to estimate fish mortality due to avian predation. Predation probabilities derived from fish tag recoveries on piscivorous waterbird colonies typically reflect minimum estimates of predation due to an unknown and unaccounted-for fraction of tags that are consumed but not deposited on-colony (i.e., deposition probability). We applied an integrated tag recovery modeling approach in a Bayesian context to estimate predation probabilities that accounted for predator-specific tag detection and deposition probabilities in a multiple-predator system. Studies of PIT tag deposition were conducted across three bird species nesting at seven different colonies in the Columbia River basin, USA. Tag deposition probabilities differed significantly among predator species (Caspian terns Hydroprogne caspia: deposition probability = 0.71, 95% credible interval [CRI] = 0.51–0.89; double-crested cormorants Phalacrocorax auritus: 0.51, 95% CRI = 0.34–0.70; California gulls Larus californicus: 0.15, 95% CRI = 0.11–0.21) but showed little variation across trials within a species or across years. Data from a 6-year study (2008–2013) of PIT-tagged juvenile Snake River steelhead Oncorhynchus mykiss (listed as threatened under the Endangered Species Act) indicated that colony-specific predation probabilities ranged from less than 0.01 to 0.17 and varied by predator species, colony location, and year. Integrating the predator-specific deposition probabilities increased the predation probabilities by a factor of approximately 1.4 for Caspian terns, 2.0 for double-crested cormorants, and 6.7 for California gulls compared with traditional minimum predation rate methods, which do not account for deposition probabilities. Results supported previous findings on the high predation impacts from strictly piscivorous waterbirds nesting in the Columbia River estuary (i.e., terns and cormorants), but our findings also revealed greater impacts of a generalist predator species (i.e., California gulls) than were previously documented. Approaches used in this study allow for direct comparisons among multiple fish mortality factors and considerably improve the reliability of tag recovery models for estimating predation probabilities in multiple-predator systems.
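A back-of-the-envelope illustration of why deposition matters (the study itself uses a Bayesian integrated tag-recovery model, not this arithmetic): dividing a minimum predation estimate by the species-specific deposition probability scales it up by roughly 1/deposition, which matches the factors of about 1.4, 2.0, and 6.7 reported above. The tag counts and the detection probability below are hypothetical.

```python
# Hedged illustration: minimum predation estimates corrected for detection and
# deposition probabilities.  Deposition values are from the abstract; released
# and recovered tag counts and detection probability are placeholders.
tags_released = 10_000
tags_recovered = {"Caspian tern": 180, "double-crested cormorant": 120, "California gull": 30}
detection_prob = {"Caspian tern": 0.80, "double-crested cormorant": 0.80, "California gull": 0.80}
deposition_prob = {"Caspian tern": 0.71, "double-crested cormorant": 0.51, "California gull": 0.15}

for species, n_rec in tags_recovered.items():
    minimum = n_rec / (tags_released * detection_prob[species])
    corrected = minimum / deposition_prob[species]
    print(f"{species:26s} minimum {minimum:.4f} -> corrected {corrected:.4f} "
          f"(x{corrected / minimum:.1f})")
```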
Xie, Xin-Ping; Xie, Yu-Feng; Wang, Hong-Qiang
2017-08-23
Large-scale accumulation of omics data poses a pressing challenge of integrative analysis of multiple data sets in bioinformatics. An open question of such integrative analysis is how to pinpoint consistent but subtle gene activity patterns across studies. Study heterogeneity needs to be addressed carefully for this goal. This paper proposes a regulation probability model-based meta-analysis, jGRP, for identifying differentially expressed genes (DEGs). The method integrates multiple transcriptomics data sets in a gene regulatory space instead of in a gene expression space, which makes it easy to capture and manage data heterogeneity across studies from different laboratories or platforms. Specifically, we transform gene expression profiles into a united gene regulation profile across studies by mathematically defining two gene regulation events between two conditions and estimating their occurring probabilities in a sample. Finally, a novel differential expression statistic is established based on the gene regulation profiles, realizing accurate and flexible identification of DEGs in gene regulation space. We evaluated the proposed method on simulation data and real-world cancer datasets and showed the effectiveness and efficiency of jGRP in identifying DEGs in the context of meta-analysis. Data heterogeneity largely influences the performance of meta-analysis of DEG identification. Existing meta-analysis methods were found to exhibit very different degrees of sensitivity to study heterogeneity. The proposed method, jGRP, can be a standalone tool due to its united framework and controllable way to deal with study heterogeneity.
Utility of inverse probability weighting in molecular pathological epidemiology.
Liu, Li; Nevo, Daniel; Nishihara, Reiko; Cao, Yin; Song, Mingyang; Twombly, Tyler S; Chan, Andrew T; Giovannucci, Edward L; VanderWeele, Tyler J; Wang, Molin; Ogino, Shuji
2018-04-01
As one of the causal inference methodologies, the inverse probability weighting (IPW) method has been utilized to address confounding and account for missing data when subjects with missing data cannot be included in a primary analysis. The transdisciplinary field of molecular pathological epidemiology (MPE) integrates molecular pathological and epidemiological methods, and takes advantage of improved understanding of pathogenesis to generate stronger biological evidence of causality and optimize strategies for precision medicine and prevention. Disease subtyping based on biomarker analysis of biospecimens is essential in MPE research. However, there are nearly always cases that lack subtype information due to the unavailability or insufficiency of biospecimens. To address this missing subtype data issue, we incorporated inverse probability weights into Cox proportional cause-specific hazards regression. The weight was the inverse of the probability of biomarker data availability estimated based on a model for biomarker data availability status. The strategy was illustrated in two example studies; each assessed alcohol intake or family history of colorectal cancer in relation to the risk of developing colorectal carcinoma subtypes classified by tumor microsatellite instability (MSI) status, using a prospective cohort study, the Nurses' Health Study. Logistic regression was used to estimate the probability of MSI data availability for each cancer case with covariates of clinical features and family history of colorectal cancer. This application of IPW can reduce selection bias caused by nonrandom variation in biospecimen data availability. The integration of causal inference methods into the MPE approach will likely have substantial potential to advance the field of epidemiology.
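A hedged sketch of the IPW construction described above (not the authors' code): model the probability that MSI data are available with logistic regression, then weight each case with available data by the inverse of that probability before the subtype-specific survival analysis. The column names, covariates, and data-generating mechanism are hypothetical.

```python
# Hedged sketch: inverse probability weights for biomarker (MSI) data
# availability, estimated by logistic regression on hypothetical covariates.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 2_000
df = pd.DataFrame({
    "age": rng.normal(62, 8, n),
    "stage": rng.integers(1, 5, n),
    "family_history": rng.integers(0, 2, n),
})
# Availability of MSI data depends on covariates (illustrative mechanism)
lin = -0.5 + 0.02 * (df["age"] - 62) + 0.3 * (df["stage"] == 1) + 0.2 * df["family_history"]
df["msi_available"] = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-lin))

X = df[["age", "stage", "family_history"]]
availability_model = LogisticRegression(max_iter=1000).fit(X, df["msi_available"])
p_avail = availability_model.predict_proba(X)[:, 1]

# Cases without MSI data drop out of the subtype analysis; the weights on the
# remaining cases compensate so the weighted sample resembles the full cohort.
df["ipw"] = np.where(df["msi_available"], 1.0 / p_avail, 0.0)
print(df.loc[df["msi_available"], "ipw"].describe())
```

These weights could then be passed to a weighted cause-specific Cox regression, as in the strategy the abstract describes.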
Master equations and the theory of stochastic path integrals
NASA Astrophysics Data System (ADS)
Weber, Markus F.; Frey, Erwin
2017-04-01
This review provides a pedagogic and self-contained introduction to master equations and to their representation by path integrals. Since the 1930s, master equations have served as a fundamental tool to understand the role of fluctuations in complex biological, chemical, and physical systems. Despite their simple appearance, analyses of master equations most often rely on low-noise approximations such as the Kramers-Moyal or the system size expansion, or require ad-hoc closure schemes for the derivation of low-order moment equations. We focus on numerical and analytical methods going beyond the low-noise limit and provide a unified framework for the study of master equations. After deriving the forward and backward master equations from the Chapman-Kolmogorov equation, we show how the two master equations can be cast into either of four linear partial differential equations (PDEs). Three of these PDEs are discussed in detail. The first PDE governs the time evolution of a generalized probability generating function whose basis depends on the stochastic process under consideration. Spectral methods, WKB approximations, and a variational approach have been proposed for the analysis of the PDE. The second PDE is novel and is obeyed by a distribution that is marginalized over an initial state. It proves useful for the computation of mean extinction times. The third PDE describes the time evolution of a ‘generating functional’, which generalizes the so-called Poisson representation. Subsequently, the solutions of the PDEs are expressed in terms of two path integrals: a ‘forward’ and a ‘backward’ path integral. Combined with inverse transformations, one obtains two distinct path integral representations of the conditional probability distribution solving the master equations. We exemplify both path integrals in analysing elementary chemical reactions. Moreover, we show how a well-known path integral representation of averaged observables can be recovered from them. Upon expanding the forward and the backward path integrals around stationary paths, we then discuss and extend a recent method for the computation of rare event probabilities. Besides, we also derive path integral representations for processes with continuous state spaces whose forward and backward master equations admit Kramers-Moyal expansions. A truncation of the backward expansion at the level of a diffusion approximation recovers a classic path integral representation of the (backward) Fokker-Planck equation. One can rewrite this path integral in terms of an Onsager-Machlup function and, for purely diffusive Brownian motion, it simplifies to the path integral of Wiener. To make this review accessible to a broad community, we have used the language of probability theory rather than quantum (field) theory and do not assume any knowledge of the latter. The probabilistic structures underpinning various technical concepts, such as coherent states, the Doi-shift, and normal-ordered observables, are thereby made explicit.
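As a small companion example (not taken from the review), the forward master equation for a linear birth-death process can be integrated directly on a truncated state space via the matrix exponential of the generator; the rates and truncation below are illustrative.

```python
# Hedged sketch: forward master equation dP/dt = Q^T P for a birth-death
# process, solved on a truncated state space with the matrix exponential.
import numpy as np
from scipy.linalg import expm

birth, death, n_max = 1.0, 0.1, 60           # birth rate, per-capita death rate
states = np.arange(n_max + 1)

Q = np.zeros((n_max + 1, n_max + 1))         # generator: Q[i, j] = rate i -> j
for n in states:
    if n < n_max:
        Q[n, n + 1] = birth                  # n -> n+1
    if n > 0:
        Q[n, n - 1] = death * n              # n -> n-1
    Q[n, n] = -Q[n].sum()

P0 = np.zeros(n_max + 1)
P0[0] = 1.0                                  # start with zero particles

for t in (1.0, 5.0, 30.0):
    Pt = expm(Q.T * t) @ P0                  # solution of the master equation
    print(f"t = {t:5.1f}   mean = {states @ Pt:6.2f}   P(n=0) = {Pt[0]:.3f}")
```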
NASA Astrophysics Data System (ADS)
Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng
2017-12-01
There is insufficient research relating to offshore wind farm site selection in China, and the current methods for site selection have several shortcomings. First, information is lost in two ways: through the implicit assumption that the probability distribution over an interval number is uniform, and by ignoring the value of the decision makers' (DMs') common opinion when evaluating criteria information. Second, differences in the DMs' utility functions have received little attention. An innovative method is proposed in this article to address these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Second, a new stochastic dominance degree is proposed to quantify interval numbers with a probability distribution. Third, a two-stage method integrating the weighted operator with the stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China demonstrates the effectiveness of the method.
NASA Astrophysics Data System (ADS)
Xiao Yong, Zhao; Xin, Ji Yong; Shuang Ying, Zuo
2018-03-01
To classify tunnel surrounding rock effectively, a multi-factor classification method based on GPR and probability theory is proposed. Geological radar is used to identify the geology of the surrounding rock ahead of the tunnel face and to evaluate the quality of the rock face. Based on previous survey data, the uniaxial compressive strength of the rock, the integrity index, fissuring, and groundwater are selected as classification factors. Probability theory is then used to combine these factors into a multi-factor classification, and the surrounding rock is assigned to the class with the greatest probability. When the method is applied to the surrounding rock of the Ma’anshan tunnel, the rock classes obtained are essentially the same as the actual ones, showing that the method is a simple, efficient, and practical classification approach that can be used in tunnel construction.
Mathematical Methods for Physics and Engineering Third Edition Paperback Set
NASA Astrophysics Data System (ADS)
Riley, Ken F.; Hobson, Mike P.; Bence, Stephen J.
2006-06-01
Prefaces; 1. Preliminary algebra; 2. Preliminary calculus; 3. Complex numbers and hyperbolic functions; 4. Series and limits; 5. Partial differentiation; 6. Multiple integrals; 7. Vector algebra; 8. Matrices and vector spaces; 9. Normal modes; 10. Vector calculus; 11. Line, surface and volume integrals; 12. Fourier series; 13. Integral transforms; 14. First-order ordinary differential equations; 15. Higher-order ordinary differential equations; 16. Series solutions of ordinary differential equations; 17. Eigenfunction methods for differential equations; 18. Special functions; 19. Quantum operators; 20. Partial differential equations: general and particular; 21. Partial differential equations: separation of variables; 22. Calculus of variations; 23. Integral equations; 24. Complex variables; 25. Application of complex variables; 26. Tensors; 27. Numerical methods; 28. Group theory; 29. Representation theory; 30. Probability; 31. Statistics; Index.
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to understanding the underlying geological processes and to adequately assessing its mechanical effects on the differential settlement of large continuous structure foundations. Such analyses should be based on an assimilation approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To this end, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method, which rigorously and efficiently integrates geotechnical investigations of different precision and multiple sources of uncertainty. Individual CPT soundings are modeled as probability density curves using maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions are built from the borehole experiments and the potential value at the prediction point; then, by numerical integration over the CPT probability density curves, the posterior probability density curve at the prediction point is calculated within a Bayesian inverse interpolation framework. The results are compared with those of Gaussian sequential stochastic simulation, and the differences between treating single CPT soundings as normally distributed and as maximum entropy probability density curves are also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are obtained by accounting for CPT uncertainty at the estimation points. The calculations illustrate the significance of stochastic Es characterization within a stratum and identify limitations of inadequate geostatistical interpolation techniques. The resulting characterization provides a multi-precision information assimilation approach for other geotechnical parameters.
Integrating resource selection information with spatial capture--recapture
Royle, J. Andrew; Chandler, Richard B.; Sun, Catherine C.; Fuller, Angela K.
2013-01-01
4. Finally, we find that SCR models using standard symmetric and stationary encounter probability models may not fully explain variation in encounter probability due to space usage, and therefore produce biased estimates of density when animal space usage is related to resource selection. Consequently, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture–recapture methods.
Prospect evaluation as a function of numeracy and probability denominator.
Millroth, Philip; Juslin, Peter
2015-05-01
This study examines how numeracy and probability denominator (a direct-ratio probability, a relative frequency with denominator 100, a relative frequency with denominator 10,000) affect the evaluation of prospects in an expected-value based pricing task. We expected that numeracy would affect the results due to differences in the linearity of number perception and the susceptibility to denominator neglect with different probability formats. An analysis with functional measurement verified that participants integrated value and probability into an expected value. However, a significant interaction between numeracy and probability format and subsequent analyses of the parameters of cumulative prospect theory showed that the manipulation of probability denominator changed participants' psychophysical response to probability and value. Standard methods in decision research may thus confound people's genuine risk attitude with their numerical capacities and the probability format used. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Haris, A.; Novriyani, M.; Suparno, S.; Hidayat, R.; Riyanto, A.
2017-07-01
This study presents the integration of seismic stochastic inversion and multi-attribute analysis for delineating the reservoir distribution, in terms of lithology and porosity, within the depth interval between the Top Sihapas and the Top Pematang. The approach is a stochastic inversion integrated with seismic multi-attribute analysis through a probabilistic neural network (PNN). The stochastic method is used to predict the probability of sandstone from impedance realizations; 50 realizations are generated to produce a robust probability estimate. Stochastic seismic inversion is more interpretive because it directly yields property values. Our experiment shows that the AI (acoustic impedance) from stochastic inversion captures a more diverse range of uncertainty, so the resulting probability values are closer to the actual values. The inverted AI is then used as input for a multi-attribute analysis that predicts the gamma ray, density and porosity logs. A stepwise regression algorithm is applied to select the attributes used in the PNN; the PNN is chosen because it gives the best correlation among the neural network methods tested. Finally, we interpret the products of the multi-attribute analysis, namely the pseudo-gamma ray, density and pseudo-porosity volumes, to delineate the reservoir distribution. Our interpretation shows that the structural trap is identified in the southeastern part of the study area, along the anticline.
ERIC Educational Resources Information Center
Clinton, Virginia; Morsanyi, Kinga; Alibali, Martha W.; Nathan, Mitchell J.
2016-01-01
Learning from visual representations is enhanced when learners appropriately integrate corresponding visual and verbal information. This study examined the effects of two methods of promoting integration, color coding and labeling, on learning about probabilistic reasoning from a table and text. Undergraduate students (N = 98) were randomly…
NASA Astrophysics Data System (ADS)
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
This annual report summarizes the work completed during the third year of technical effort on the referenced contract. Principal developments continue to focus on the Probabilistic Finite Element Method (PFEM), which has been under development for three years. Essentially all of the linear capabilities within the PFEM code are in place. Major progress was achieved in the application and verification phase. An EXPERT module architecture was designed and partially implemented. EXPERT is a user interface module incorporating an expert system shell for the implementation of a rule-based interface that draws on the experience and expertise of the user community. The Fast Probability Integration (FPI) algorithm continues to demonstrate outstanding performance for the integration of probability density functions of multiple variables. Additionally, an enhanced Monte Carlo simulation algorithm was developed and demonstrated for a variety of numerical strategies.
NASA Astrophysics Data System (ADS)
Vandromme, Rosalie; Thiéry, Yannick; Sedan, Olivier; Bernardie, Séverine
2016-04-01
Landslide hazard assessment is the estimation of a target area where landslides of a particular type, volume, runout and intensity may occur within a given period. The first step in analyzing landslide hazard is to assess the spatial and temporal failure probability (when the information is available, i.e. susceptibility assessment). Two types of approach are generally recommended to achieve this goal: (i) qualitative approaches (i.e. inventory-based and knowledge-driven methods) and (ii) quantitative approaches (i.e. data-driven methods or deterministic, physically based methods). Among quantitative approaches, deterministic physically based methods (PBM) are generally used at local and/or site-specific scales (1:5,000-1:25,000 and >1:5,000, respectively). The main advantage of these methods is the calculation of the probability of failure (safety factor) under specific environmental conditions, and some models can also integrate land-use and climatic change. Their major drawbacks are the large amount of reliable and detailed data required (especially material types, their thickness and the heterogeneity of geotechnical parameters over a large area) and the fact that only shallow landslides are taken into account; this is why they are often used at site-specific scales (>1:5,000). Thus, to take into account (i) material heterogeneity, (ii) the spatial variation of physical parameters and (iii) different landslide types, the French Geological Survey (BRGM) has developed a physically based model implemented in a GIS environment. This PBM couples a global hydrological model (GARDENIA®), which includes a transient unsaturated/saturated hydrological component, with a physically based slope stability model (ALICE®, Assessment of Landslides Induced by Climatic Events) based on the Morgenstern-Price method for any slip surface. The variability of the mechanical parameters is handled by a Monte Carlo approach. The probability of obtaining a safety factor below 1 represents the probability of occurrence of a landslide for a given triggering event, and the dispersion of the distribution gives the uncertainty of the result. Finally, a map is created displaying a probability of occurrence for each computing cell of the studied area. To take land-use change into account, a complementary module integrating the effects of vegetation on soil properties has recently been developed. In recent years the model has been applied at different scales and in different geomorphological environments: (i) at regional scale (1:50,000-1:25,000) in the French West Indies and French Polynesian islands; (ii) at local scale (1:10,000) for two complex mountainous areas; and (iii) at site-specific scale (1:2,000) for one landslide. For each study the 3D geotechnical model was adapted. The different studies made it possible: (i) to discuss the different factors included in the model, especially the initial 3D geotechnical models; (ii) to specify the location of probable failures under different hydrological scenarios; and (iii) to test the effects of climatic change and land use on slopes for two cases. In this way, future changes in temperature, precipitation and vegetation cover can be analyzed, allowing the impacts of global change on landslides to be addressed. Finally, the results show that, with an integrated approach, reliable information about future slope failures can be obtained at different working scales and for different scenarios.
The final information about landslide susceptibility (i.e. probability of failure) can be integrated into landslide hazard assessment and could be an essential source of information for future land planning. As performed in the ANR project SAMCO (Society Adaptation for coping with Mountain risks in a global change COntext), this analysis constitutes a first step in the risk assessment chain for different climate and economic development scenarios, to evaluate the resilience of mountainous areas.
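The probability-of-failure step described above (sampling mechanical parameters and counting how often the safety factor drops below 1) can be illustrated with a minimal Monte Carlo sketch. This is not the ALICE®/GARDENIA® chain; it uses a simple infinite-slope safety factor, and the slope geometry, groundwater assumption and parameter distributions are all illustrative values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative slope geometry and soil properties (assumed values, not from the study)
beta = np.radians(30.0)   # slope angle
z = 2.0                   # depth of the potential slip surface (m)
gamma = 19.0              # unit weight of soil (kN/m^3)
m = 0.5                   # fraction of the soil column that is saturated
gamma_w = 9.81            # unit weight of water (kN/m^3)

n = 100_000
# Variability of the mechanical parameters, handled by Monte Carlo sampling
c = rng.lognormal(mean=np.log(5.0), sigma=0.4, size=n)        # effective cohesion (kPa)
phi = np.radians(rng.normal(loc=32.0, scale=3.0, size=n))     # friction angle (rad)

# Infinite-slope factor of safety with a simple pore-pressure term
fs = (c + (gamma * z - m * gamma_w * z) * np.cos(beta) ** 2 * np.tan(phi)) / (
    gamma * z * np.sin(beta) * np.cos(beta)
)

p_failure = np.mean(fs < 1.0)     # probability of occurrence for this cell/triggering event
print(f"P(FS < 1) = {p_failure:.3f}, spread of FS (std) = {fs.std():.2f}")
```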
Rapid extraction of image texture by co-occurrence using a hybrid data structure
NASA Astrophysics Data System (ADS)
Clausi, David A.; Zhao, Yongping
2002-07-01
Calculation of co-occurrence probabilities is a popular method for determining texture features within remotely sensed digital imagery. Typically, the co-occurrence features are calculated by using a grey level co-occurrence matrix (GLCM) to store the co-occurring probabilities, and statistics are applied to the probabilities in the GLCM to generate the texture features. This method is computationally intensive since the matrix is usually sparse, leading to many unnecessary calculations involving zero probabilities when the statistics are applied. An improvement on the GLCM method is to use a grey level co-occurrence linked list (GLCLL) to store only the non-zero co-occurring probabilities; however, to achieve the preferred computational speed the list must be kept sorted. A further improvement is a grey level co-occurrence hybrid structure (GLCHS) based on an integrated hash table and linked list. Texture features obtained using this technique are identical to those obtained using the GLCM and GLCLL. The GLCHS method is implemented in C in a Unix environment. Based on a Brodatz test image, the GLCHS method is shown to be superior across various window sizes and grey level quantizations, requiring on average 33.4% (σ = 3.08%) of the computational time required by the GLCLL. Significant computational gains are thus made using the GLCHS method.
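A minimal sketch of the hash-based idea is shown below, with a Python dictionary standing in for the hash table: only non-zero co-occurring pairs are stored, and the texture statistics are computed by iterating over those entries alone. This is not the authors' C implementation, and the window, displacement and quantization settings are arbitrary.

```python
import numpy as np
from collections import defaultdict

def cooccurrence_stats(window, dx=1, dy=0, levels=16):
    """Sparse co-occurrence statistics: a hash map (dict) stores only the
    non-zero co-occurring counts, so zero probabilities are never visited."""
    img = window.astype(np.int64) * levels // (int(window.max()) + 1)  # quantize grey levels
    counts = defaultdict(int)
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            counts[(img[i, j], img[i + dy, j + dx])] += 1
    total = sum(counts.values())
    contrast = entropy = 0.0
    for (g1, g2), c in counts.items():        # iterate over non-zero entries only
        p = c / total
        contrast += (g1 - g2) ** 2 * p
        entropy -= p * np.log(p)
    return contrast, entropy

rng = np.random.default_rng(0)
print(cooccurrence_stats(rng.integers(0, 256, size=(32, 32))))
```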
Student Solution Manual for Mathematical Methods for Physics and Engineering Third Edition
NASA Astrophysics Data System (ADS)
Riley, K. F.; Hobson, M. P.
2006-03-01
Preface; 1. Preliminary algebra; 2. Preliminary calculus; 3. Complex numbers and hyperbolic functions; 4. Series and limits; 5. Partial differentiation; 6. Multiple integrals; 7. Vector algebra; 8. Matrices and vector spaces; 9. Normal modes; 10. Vector calculus; 11. Line, surface and volume integrals; 12. Fourier series; 13. Integral transforms; 14. First-order ordinary differential equations; 15. Higher-order ordinary differential equations; 16. Series solutions of ordinary differential equations; 17. Eigenfunction methods for differential equations; 18. Special functions; 19. Quantum operators; 20. Partial differential equations: general and particular; 21. Partial differential equations: separation of variables; 22. Calculus of variations; 23. Integral equations; 24. Complex variables; 25. Application of complex variables; 26. Tensors; 27. Numerical methods; 28. Group theory; 29. Representation theory; 30. Probability; 31. Statistics.
Xu, Jason; Minin, Vladimir N
2015-07-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.
2010-01-01
A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station.
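The underlying computation is an integral of a bivariate Gaussian (defined by the error ellipse's semi-axes and orientation) over a circle centred on an arbitrary point of interest. The report adapts a closed-form debris-collision method; the sketch below instead uses plain Monte Carlo sampling to estimate the same probability, and all numerical values are illustrative.

```python
import numpy as np

def prob_within_radius(stroke_xy, sigma_major, sigma_minor, theta,
                       point_xy, radius, n=1_000_000, seed=1):
    """Probability that the true stroke location lies within `radius` of
    `point_xy`, given a bivariate Gaussian error ellipse (semi-axis sigmas
    and orientation) centred on the reported stroke location."""
    rng = np.random.default_rng(seed)
    # Build the covariance matrix from the ellipse axes and orientation
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    cov = rot @ np.diag([sigma_major**2, sigma_minor**2]) @ rot.T
    samples = rng.multivariate_normal(stroke_xy, cov, size=n)
    dist = np.linalg.norm(samples - np.asarray(point_xy), axis=1)
    return np.mean(dist <= radius)

# The point of interest is NOT centred on (or even inside) the error ellipse
p = prob_within_radius(stroke_xy=(0.0, 0.0), sigma_major=0.8, sigma_minor=0.4,
                       theta=np.radians(30), point_xy=(1.5, 0.5), radius=1.0)
print(f"P(stroke within 1.0 km of the facility) ~ {p:.3f}")
```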
Mines Systems Safety Improvement Using an Integrated Event Tree and Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Kumar, Ranjan; Ghosh, Achyuta Krishna
2017-04-01
Mine systems such as the ventilation system, strata support system and flameproof safety equipment are exposed to dynamic operational conditions such as stress, humidity, dust and temperature, and safety improvement of such systems is preferably carried out during the planning and design stage. However, existing safety analysis methods do not explicitly handle accident initiation and progression in mine systems. To bridge this gap, this paper presents an integrated Event Tree (ET) and Fault Tree (FT) approach for safety analysis and improvement of mine system design. The approach couples ET and FT modeling with a redundancy allocation technique. A concept of top hazard probability is introduced for identifying the system failure probability, and redundancy is allocated to the system at either the component or the system level. A case study on mine methane explosion safety with two initiating events is performed. The results demonstrate that the presented method can reveal the accident scenarios and improve the safety of complex mine systems simultaneously.
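The top-hazard-probability idea can be sketched with elementary AND/OR gate algebra and a simple event-tree branch, assuming independent basic events. The gate structure, probabilities and redundancy allocation below are hypothetical and only illustrate the kind of calculation the paper describes.

```python
import numpy as np

def p_and(*ps):  # all basic events must occur (independence assumed)
    return float(np.prod(ps))

def p_or(*ps):   # at least one basic event occurs
    return 1.0 - float(np.prod([1.0 - p for p in ps]))

# --- Fault tree: top hazard = ventilation failure OR (sensor fails AND alarm fails)
p_vent_fail   = 1e-3
p_sensor_fail = 5e-3
p_alarm_fail  = 2e-2
p_top = p_or(p_vent_fail, p_and(p_sensor_fail, p_alarm_fail))

# --- Event tree: initiating event (methane accumulation) through two barriers
p_initiating = 1e-2
p_ignition_given_accumulation = 0.1
p_no_suppression = 0.2
p_explosion = p_initiating * p_ignition_given_accumulation * p_no_suppression

# --- Redundancy allocation at the component level: duplicate the sensor
p_top_redundant = p_or(p_vent_fail, p_and(p_sensor_fail**2, p_alarm_fail))

print(f"top hazard probability        : {p_top:.2e}")
print(f"with a redundant sensor       : {p_top_redundant:.2e}")
print(f"explosion scenario probability: {p_explosion:.2e}")
```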
Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method
NASA Technical Reports Server (NTRS)
Kowal, Michael T.
1997-01-01
The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between the variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability-of-failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.
Huang, Guangzao; Yuan, Mingshun; Chen, Moliang; Li, Lei; You, Wenjie; Li, Hanjie; Cai, James J; Ji, Guoli
2017-10-07
The application of machine learning in cancer diagnostics has shown great promise and is of importance in clinical settings. Here we consider applying machine learning methods to transcriptomic data derived from tumor-educated platelets (TEPs) from individuals with different types of cancer. We aim to define a reliability measure for diagnostic purposes to increase the potential for facilitating personalized treatments. To this end, we present a novel classification method called MFRB (for Multiple Fitting Regression and Bayes decision), which integrates the process of multiple fitting regression (MFR) with Bayes decision theory. MFR is first used to map the multidimensional features of the transcriptomic data into a one-dimensional feature. The probability density function of each class in the mapped space is then fitted using a Gaussian probability density function. Finally, Bayes decision theory is used to build a probabilistic classifier from the estimated probability density functions. The output of MFRB can be used to determine which class a sample belongs to, as well as to assign a reliability measure for a given class. The classical support vector machine (SVM) and probabilistic SVM (PSVM) are used to evaluate the performance of the proposed method with simulated and real TEP datasets. Our results indicate that MFRB achieves the best performance compared to SVM and PSVM, mainly due to its strong generalization ability for limited, imbalanced, and noisy data.
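A rough sketch of this pipeline on synthetic data is given below: a least-squares multiple regression maps the feature vector to a one-dimensional score, a Gaussian density is fitted to each class in the mapped space, and Bayes decision theory yields both a predicted class and a posterior that serves as the reliability measure. This illustrates the general idea only, not the published MFRB code, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "transcriptomic" features for two classes (toy data)
n, d = 200, 20
X = np.vstack([rng.normal(0.0, 1.0, size=(n, d)),
               rng.normal(0.4, 1.0, size=(n, d))])
y = np.r_[np.zeros(n), np.ones(n)]

# 1) Multiple fitting regression: least-squares map from d features to one score
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
z = A @ w                                            # one-dimensional mapped feature

# 2) Gaussian density of the mapped score for each class
def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

params = {c: (z[y == c].mean(), z[y == c].std(ddof=1)) for c in (0.0, 1.0)}
priors = {c: np.mean(y == c) for c in (0.0, 1.0)}

# 3) Bayes decision: the posterior probability doubles as a reliability measure
def classify(x_new):
    s = np.r_[x_new, 1.0] @ w
    post = {c: priors[c] * gauss(s, *params[c]) for c in params}
    norm = sum(post.values())
    post = {c: p / norm for c, p in post.items()}
    label = max(post, key=post.get)
    return label, post[label]

print(classify(rng.normal(0.4, 1.0, size=d)))        # (predicted class, reliability)
```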
NASA Technical Reports Server (NTRS)
Kim, H.; Swain, P. H.
1991-01-01
A method for classifying multisource data in remote sensing is presented. The proposed method treats each data source as an information source providing a body of evidence, represents statistical evidence by interval-valued probabilities, and uses Dempster's rule to integrate information from the multiple data sources. The method is applied to ground-cover classification of multispectral data combined with digital terrain data such as elevation, slope, and aspect. It is then applied to simulated 201-band High Resolution Imaging Spectrometer (HIRIS) data by dividing the dimensionally huge data source into smaller, more manageable pieces based on global statistical correlation information. The method produces higher classification accuracy than the maximum likelihood (ML) classifier when the Hughes phenomenon is apparent.
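Dempster's rule itself is straightforward to state in code. The sketch below combines two basic probability assignments over a small frame of ground-cover classes; the mass values and class names are hypothetical, and the interval-valued representation used in the paper is not reproduced.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping
    frozenset hypotheses to masses) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                       # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Hypothetical evidence from two sources (e.g. spectral data vs terrain data)
FOREST, CROP, WATER = "forest", "crop", "water"
m_spectral = {frozenset({FOREST}): 0.6, frozenset({FOREST, CROP}): 0.3,
              frozenset({FOREST, CROP, WATER}): 0.1}
m_terrain  = {frozenset({FOREST}): 0.4, frozenset({CROP}): 0.3,
              frozenset({FOREST, CROP, WATER}): 0.3}

for hypothesis, mass in dempster_combine(m_spectral, m_terrain).items():
    print(set(hypothesis), round(mass, 3))
```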
Probabilistic structural analysis using a general purpose finite element program
NASA Astrophysics Data System (ADS)
Riha, D. S.; Millwater, H. R.; Thacker, B. H.
1992-07-01
This paper presents an accurate and efficient method to predict the probabilistic response for structural response quantities, such as stress, displacement, natural frequencies, and buckling loads, by combining the capabilities of MSC/NASTRAN, including design sensitivity analysis and fast probability integration. Two probabilistic structural analysis examples have been performed and verified by comparison with Monte Carlo simulation of the analytical solution. The first example consists of a cantilevered plate with several point loads. The second example is a probabilistic buckling analysis of a simply supported composite plate under in-plane loading. The coupling of MSC/NASTRAN and fast probability integration is shown to be orders of magnitude more efficient than Monte Carlo simulation with excellent accuracy.
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William; Merceret, Francis J.
2010-01-01
A technique has been developed to calculate the probability that any nearby lightning stroke is within any radius of any point of interest. In practice, this provides the probability that a nearby lightning stroke was within a key distance of a facility, rather than the error ellipses centered on the stroke. This process takes the current bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to get the probability that the stroke is inside any specified radius. This new facility-centric technique will be much more useful to the space launch customers and may supersede the lightning error ellipse approach discussed in [5], [6].
Poisson statistics of PageRank probabilities of Twitter and Wikipedia networks
NASA Astrophysics Data System (ADS)
Frahm, Klaus M.; Shepelyansky, Dima L.
2014-04-01
We use the methods of quantum chaos and Random Matrix Theory for analysis of statistical fluctuations of PageRank probabilities in directed networks. In this approach the effective energy levels are given by a logarithm of PageRank probability at a given node. After the standard energy level unfolding procedure we establish that the nearest spacing distribution of PageRank probabilities is described by the Poisson law typical for integrable quantum systems. Our studies are done for the Twitter network and three networks of Wikipedia editions in English, French and German. We argue that due to absence of level repulsion the PageRank order of nearby nodes can be easily interchanged. The obtained Poisson law implies that the nearby PageRank probabilities fluctuate as random independent variables.
Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit
NASA Astrophysics Data System (ADS)
Vittaldev, Vivek; Russell, Ryan P.
2017-09-01
Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but most computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in the literature as the benchmark; an optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used, and the collision probability is automatically computed as a function of the RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge-Kutta step ensures that no close approaches are missed. Speedups of two orders of magnitude over a serial CPU implementation are shown, and the speedups improve moderately with higher fidelity dynamics. The tool makes the MC approach tractable on a single workstation and can be used as a final product or for verifying surrogate and analytical collision probability methods.
Characteristics of individuals with integrated pensions.
Bender, K A
1999-01-01
Employer pensions that integrate benefits with Social Security have been the focus of relatively little research. Since changes in Social Security benefit levels and other program characteristics can affect the benefit levels and other features of integrated pension plans, it is important to know who is covered by these plans. This article examines the characteristics of workers covered by integrated pension plans, compared to those with nonintegrated plans and those with no pension coverage. Integrated pension plans are those that explicitly adjust their benefit structure to help compensate for the employer's contributions to the Social Security program. There are two basic integration methods used by defined benefit (DB) plans. The offset method causes a reduction in employer pension benefits by up to half of the Social Security retirement benefit; the excess rate method is characterized by an accrual rate that is lower for earnings below the Social Security taxable maximum than above it. Defined contribution (DC) pension plans can be integrated along the lines of the excess rate method. To date, research on integrated pensions has focused on plan characteristics, as reported to the Bureau of Labor Statistics (BLS) through its Employee Benefits Survey (EBS). This research has examined the prevalence of integration among full-time, private sector workers by industry, firm size, and broad occupational categories. However, because the EBS provides virtually no data on worker characteristics, analyses of the effects of pension integration on retirement benefits have used hypothetical workers, varying according to assumed levels of earnings and job tenure. This kind of analysis is not particularly helpful in examining the potential effects of changes in the Social Security program on workers' pension benefits. However, data on pension integration at the individual level are available, most recently from the Health and Retirement Study (HRS), a nationally representative survey of individuals aged 51-61 in 1992. This dataset provides the basis for the analysis presented here. The following are some of the major findings from this analysis. The incidence of pension integration in the HRS sample is 32 percent of all workers with a pension (14 percent of all workers). The HRS can also identify integrated DC plans, a statistic that is not available from BLS data. The rate of integration for workers with only DC plans is 8 percent. After controlling for other variables, several socio-demographic characteristics are significantly related to the incidence of integration. The probability of having an integrated pension is 4.6 percentage points less for men compared to women. Non-Hispanic blacks are 6.4 percentage points less likely than non-Hispanic whites to have integrated pensions. Union members are 14 percentage points less likely to have integrated pensions, while workers with less than a graduate level education are at least 15 percentage points more likely to have a pension that is integrated. Some earnings and pension characteristics are also significantly correlated with pension integration. Earnings are positively related, with the probability of having an integrated pension increasing by 2 percentage points for an increase of $1,000 in annual pay. An even larger effect comes from earning at or above the Social Security taxable maximum. 
Workers at or above this income level are 10 percentage points more likely to have an integrated plan, but for those with more than one plan the probability of pension integration goes up by 13 percentage points.
Inverse modeling methods for indoor airborne pollutant tracking: literature review and fundamentals.
Liu, X; Zhai, Z
2007-12-01
Reduction in indoor environment quality calls for effective control and improvement measures. Accurate and prompt identification of contaminant sources ensures that they can be quickly removed and contaminated spaces isolated and cleaned. This paper discusses the use of inverse modeling to identify potential indoor pollutant sources with limited pollutant sensor data. The study reviews various inverse modeling methods for advection-dispersion problems and summarizes the methods into three major categories: forward, backward, and probability inverse modeling methods. The adjoint probability inverse modeling method is indicated as an appropriate model for indoor air pollutant tracking because it can quickly find source location, strength and release time without prior information. The paper introduces the principles of the adjoint probability method and establishes the corresponding adjoint equations for both multi-zone airflow models and computational fluid dynamics (CFD) models. The study proposes a two-stage inverse modeling approach integrating both multi-zone and CFD models, which can provide a rapid estimate of indoor pollution status and history for a whole building. Preliminary case study results indicate that the adjoint probability method is feasible for indoor pollutant inverse modeling. The proposed method can help identify contaminant source characteristics (location and release time) with limited sensor outputs. This will ensure an effective and prompt execution of building management strategies and thus achieve a healthy and safe indoor environment. The method can also help design optimal sensor networks.
NASA Astrophysics Data System (ADS)
Chen, Tzikang J.; Shiao, Michael
2016-04-01
This paper verifies a generic and efficient assessment concept for probabilistic fatigue life management. The concept is based on an integration of damage tolerance methodology, simulation methods [1, 2], and the probabilistic algorithm RPI (recursive probability integration) [3-9], considering maintenance for damage tolerance and risk-based fatigue life management. RPI is an efficient semi-analytical probabilistic method for risk assessment subject to various uncertainties, such as the variability in material properties (including crack growth rate, initial flaw size and repair quality), random-process modeling of flight loads for failure analysis, and inspection reliability represented by the probability of detection (POD). In addition, unlike traditional Monte Carlo simulation (MCS), which requires a rerun of the MCS whenever the maintenance plan is changed, RPI can repeatedly use a small set of baseline random crack growth histories, excluding maintenance-related parameters, from a single MCS for various maintenance plans. To fully assess the RPI method, a verification procedure was performed. In this study, MC simulations on the order of several hundred billion samples were conducted for various flight conditions, material properties, inspection schedules, PODs and repair/replacement strategies. Because these MC simulations are time-consuming, they were run in parallel on DoD High Performance Computing (HPC) resources using a specialized random number generator for parallel computing. The study shows that the RPI method is several orders of magnitude more efficient than traditional Monte Carlo simulation.
Fuzzy-logic detection and probability of hail exploiting short-range X-band weather radar
NASA Astrophysics Data System (ADS)
Capozzi, Vincenzo; Picciotti, Errico; Mazzarella, Vincenzo; Marzano, Frank Silvio; Budillon, Giorgio
2018-03-01
This work proposes a new method for hail precipitation detection and probability estimation based on single-polarization X-band radar measurements. Using a dataset consisting of reflectivity volumes, ground truth observations and atmospheric sounding data, a probability-of-hail index, which provides a simple estimate of the hail potential, has been trained and adapted to the Naples metropolitan study area. The probability of hail is calculated starting from four different hail detection methods. The first two, based on (1) reflectivity data and temperature measurements and (2) the vertically integrated liquid density product, were selected from the available literature. The other two techniques combine criteria from the above methods: one (3) uses linear discriminant analysis, whereas the other (4) relies on a fuzzy-logic approach. The latter is an innovative criterion based on a fuzzification step performed through ramp membership functions. The performance of the four methods was tested on an independent dataset: the results highlight that the fuzzy-oriented combined method performs slightly better in terms of false alarm ratio, critical success index and area under the relative operating characteristic. An example of the application of the proposed hail detection and probability products is also presented for a relevant hail event that occurred on 21 July 2014.
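The fuzzification step can be illustrated with a ramp membership function applied to two radar-derived indicators and a simple weighted aggregation into a probability-of-hail index. The indicators, thresholds and weights below are placeholders, not the values trained on the Naples dataset.

```python
import numpy as np

def ramp(x, lo, hi):
    """Ramp membership function: 0 below `lo`, 1 above `hi`, linear in between."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def hail_probability(refl_above_freezing_dbz, vil_density_g_m3):
    # Fuzzify each indicator (thresholds are illustrative, not the trained ones)
    mu_refl = ramp(refl_above_freezing_dbz, 45.0, 60.0)   # reflectivity above the 0 degC level
    mu_vild = ramp(vil_density_g_m3, 2.0, 4.0)            # VIL density criterion
    # Simple fuzzy aggregation: equally weighted mean of the memberships
    return 0.5 * mu_refl + 0.5 * mu_vild

for dbz, vild in [(40, 1.5), (50, 2.5), (62, 4.5)]:
    print(f"Z = {dbz} dBZ, VIL density = {vild} g/m^3 -> POH ~ {hail_probability(dbz, vild):.2f}")
```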
Bayesian functional integral method for inferring continuous data from discrete measurements.
Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul
2012-02-08
Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". An inferred value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Fault tree analysis for integrated and probabilistic risk analysis of drinking water systems.
Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof
2009-04-01
Drinking water systems are vulnerable and subject to a wide range of risks. To avoid sub-optimisation of risk-reduction options, risk analyses need to include the entire drinking water system, from source to tap. Such an integrated approach demands tools that are able to model interactions between different events. Fault tree analysis is a risk estimation tool with the ability to model interactions between events. Using fault tree analysis on an integrated level, a probabilistic risk analysis of a large drinking water system in Sweden was carried out. The primary aims of the study were: (1) to develop a method for integrated and probabilistic risk analysis of entire drinking water systems; and (2) to evaluate the applicability of Customer Minutes Lost (CML) as a measure of risk. The analysis included situations where no water is delivered to the consumer (quantity failure) and situations where water is delivered but does not comply with water quality standards (quality failure). Hard data as well as expert judgements were used to estimate probabilities of events and uncertainties in the estimates. The calculations were performed using Monte Carlo simulations. CML is shown to be a useful measure of risks associated with drinking water systems. The method presented provides information on risk levels, probabilities of failure, failure rates and downtimes of the system. This information is available for the entire system as well as its different sub-systems. Furthermore, the method enables comparison of the results with performance targets and acceptable levels of risk. The method thus facilitates integrated risk analysis and consequently helps decision-makers to minimise sub-optimisation of risk-reduction options.
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...
2016-02-05
Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
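A minimal sketch of thermodynamic integration, assuming a one-dimensional conjugate normal-mean model so that the exact marginal likelihood is available for comparison: power posteriors p(θ)L(θ)^β are sampled with a simple Metropolis chain for a ladder of β values, and the mean log-likelihood is integrated over β. The model, ladder and tuning choices are illustrative, not those of the groundwater application.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(3)
y = rng.normal(1.0, 1.0, size=20)            # synthetic data
sigma, tau = 1.0, 2.0                        # known noise sd and prior sd (assumed)

def log_like(theta):
    return norm.logpdf(y, loc=theta, scale=sigma).sum()

def log_prior(theta):
    return norm.logpdf(theta, loc=0.0, scale=tau)

def mean_loglike_at(beta, n_iter=5000, step=0.5):
    """Metropolis sampling of the power posterior p(theta) * L(theta)**beta."""
    theta, ll = 0.0, log_like(0.0)
    trace = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        ll_prop = log_like(prop)
        log_accept = (log_prior(prop) + beta * ll_prop) - (log_prior(theta) + beta * ll)
        if np.log(rng.random()) < log_accept:
            theta, ll = prop, ll_prop
        trace.append(ll)
    return np.mean(trace[n_iter // 2:])       # discard burn-in

betas = np.linspace(0.0, 1.0, 21) ** 3       # ladder denser near beta = 0
e_ll = np.array([mean_loglike_at(b) for b in betas])
log_z_ti = np.sum(np.diff(betas) * (e_ll[:-1] + e_ll[1:]) / 2.0)    # trapezoid rule

# Exact log marginal likelihood for this conjugate model, for comparison
n = len(y)
cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
log_z_exact = multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(y)
print(f"thermodynamic integration: {log_z_ti:.2f}   exact: {log_z_exact:.2f}")
```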
Probability theory, not the very guide of life.
Juslin, Peter; Nilsson, Håkan; Winman, Anders
2009-10-01
Probability theory has long been taken as the self-evident norm against which to evaluate inductive reasoning, and classical demonstrations of violations of this norm include the conjunction error and base-rate neglect. Many of these phenomena require multiplicative probability integration, whereas people seem more inclined to linear additive integration, in part, at least, because of well-known capacity constraints on controlled thought. In this article, the authors show with computer simulations that when based on approximate knowledge of probabilities, as is routinely the case in natural environments, linear additive integration can yield as accurate estimates, and as good average decision returns, as estimates based on probability theory. It is proposed that in natural environments people have little opportunity or incentive to induce the normative rules of probability theory and, given their cognitive constraints, linear additive integration may often offer superior bounded rationality.
Calculation of Radar Probability of Detection in K-Distributed Sea Clutter and Noise
2011-04-01
Laguerre polynomials are generated from a recurrence relation, and the nodes and weights are calculated from the eigenvalues and eigenvectors of a ... the integration, with the nodes and weights calculated using matrix methods, so that a general purpose numerical integration routine is not required.
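For context, Gauss-Laguerre nodes and weights of this kind are available from standard libraries; a small NumPy sketch is shown below, applied to integrals with known values. The degree of the rule is an arbitrary choice, and the example integrands are illustrative rather than the clutter-specific integrals of the report.

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

# Nodes and weights for Gauss-Laguerre quadrature (weight function exp(-x) on [0, inf))
nodes, weights = laggauss(40)

# Test integral: int_0^inf exp(-x) x^2 dx = Gamma(3) = 2
print(np.sum(weights * nodes**2))            # ~2.0

# Generic pattern: int_0^inf exp(-x) f(x) dx ~ sum_i w_i f(x_i)
def gauss_laguerre_integral(f, n=40):
    x, w = laggauss(n)
    return np.sum(w * f(x))

print(gauss_laguerre_integral(np.sin))       # exact value is 0.5
```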
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.
2011-01-01
A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
NASA Technical Reports Server (NTRS)
Huddleston, Lisa; Roeder, WIlliam P.; Merceret, Francis J.
2011-01-01
A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station. Future applications could include forensic meteorology.
Biased Metropolis Sampling for Rugged Free Energy Landscapes
NASA Astrophysics Data System (ADS)
Berg, Bernd A.
2003-11-01
Metropolis simulations of all-atom models of peptides (i.e. small proteins) are considered. Inspired by the funnel picture of Bryngelson and Wolynes, a transformation of the updating probabilities of the dihedral angles is defined, which uses probability densities from a higher temperature to improve the algorithmic performance at a lower temperature. The method is suitable for canonical as well as for generalized ensemble simulations. A simple approximation to the full transformation is tested at room temperature for Met-Enkephalin in vacuum. Integrated autocorrelation times are found to be reduced by factors close to two, and a similar improvement due to generalized ensemble methods enters multiplicatively.
Wald Sequential Probability Ratio Test for Space Object Conjunction Assessment
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F Landis
2014-01-01
This paper shows how satellite owner/operators may use sequential estimates of collision probability, along with a prior assessment of the base risk of collision, in a compound hypothesis ratio test to inform decisions concerning collision risk mitigation maneuvers. The compound hypothesis test reduces to a simple probability ratio test, which appears to be a novel result. The test satisfies tolerances related to targeted false alarm and missed detection rates. This result is independent of the method one uses to compute the probability density that one integrates to compute collision probability. A well-established test case from the literature shows that this test yields acceptable results within the constraints of a typical operational conjunction assessment decision timeline. Another example illustrates the use of the test in a practical conjunction assessment scenario based on operations of the International Space Station.
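A generic Wald sequential probability ratio test is easy to sketch. The version below tests two simple hypotheses with thresholds set from target false-alarm and missed-detection rates; the Gaussian observation model and parameter values are illustrative assumptions, not the paper's compound-hypothesis formulation for collision probability.

```python
import numpy as np
from scipy.stats import norm

def wald_sprt(samples, alpha=0.01, beta=0.05, h0=(0.0, 1.0), h1=(1.0, 1.0)):
    """Sequential probability ratio test between two simple hypotheses.
    alpha: target false-alarm rate, beta: target missed-detection rate.
    h0, h1: (mean, sd) of the observation model under each hypothesis."""
    upper = np.log((1 - beta) / alpha)      # accept H1 (e.g. maneuver warranted) above this
    lower = np.log(beta / (1 - alpha))      # accept H0 below this
    llr = 0.0
    for k, x in enumerate(samples, start=1):
        llr += norm.logpdf(x, *h1) - norm.logpdf(x, *h0)
        if llr >= upper:
            return "accept H1", k
        if llr <= lower:
            return "accept H0", k
    return "continue sampling", len(samples)

rng = np.random.default_rng(7)
print(wald_sprt(rng.normal(1.0, 1.0, size=50)))   # data generated under H1
print(wald_sprt(rng.normal(0.0, 1.0, size=50)))   # data generated under H0
```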
NASA Astrophysics Data System (ADS)
Xu, Jun; Dang, Chao; Kong, Fan
2017-10-01
This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
A transformed path integral approach for solution of the Fokker-Planck equation
NASA Astrophysics Data System (ADS)
Subramaniam, Gnana M.; Vedula, Prakash
2017-10-01
A novel path integral (PI) based method for solution of the Fokker-Planck equation is presented. The proposed method, termed the transformed path integral (TPI) method, utilizes a new formulation for the underlying short-time propagator to perform the evolution of the probability density function (PDF) in a transformed computational domain where a more accurate representation of the PDF can be ensured. The new formulation, based on a dynamic transformation of the original state space with the statistics of the PDF as parameters, preserves the non-negativity of the PDF and incorporates short-time properties of the underlying stochastic process. New update equations for the state PDF in a transformed space and the parameters of the transformation (including mean and covariance) that better accommodate nonlinearities in drift and non-Gaussian behavior in distributions are proposed (based on properties of the SDE). Owing to the choice of transformation considered, the proposed method maps a fixed grid in transformed space to a dynamically adaptive grid in the original state space. The TPI method, in contrast to conventional methods such as Monte Carlo simulations and fixed grid approaches, is able to better represent the distributions (especially the tail information) and better address challenges in processes with large diffusion, large drift and large concentration of PDF. Additionally, in the proposed TPI method, error bounds on the probability in the computational domain can be obtained using the Chebyshev's inequality. The benefits of the TPI method over conventional methods are illustrated through simulations of linear and nonlinear drift processes in one-dimensional and multidimensional state spaces. The effects of spatial and temporal grid resolutions as well as that of the diffusion coefficient on the error in the PDF are also characterized.
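For orientation, the conventional fixed-grid path-integral evolution that the TPI method refines can be sketched in a few lines: a Gaussian short-time propagator advances the PDF of an Ornstein-Uhlenbeck process through repeated Chapman-Kolmogorov updates on a static grid. The dynamic transformation and adaptive grid of the TPI method itself are not reproduced, and the process parameters are arbitrary.

```python
import numpy as np

# Ornstein-Uhlenbeck process: dX = -theta * X dt + sqrt(2 D) dW
theta, D = 1.0, 0.5
dt, n_steps = 0.01, 500

# Fixed grid in the original state space (the TPI method would adapt this instead)
x = np.linspace(-5.0, 5.0, 401)
dx = x[1] - x[0]

def gaussian(z, mu, var):
    return np.exp(-(z - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

# Short-time propagator K[i, j] ~ p(x_i, t + dt | x_j, t)
K = gaussian(x[:, None], x[None, :] - theta * x[None, :] * dt, 2.0 * D * dt)

# Initial condition: a narrow Gaussian centred at x = 2
p = gaussian(x, 2.0, 0.05)

for _ in range(n_steps):
    p = K @ p * dx               # discretized Chapman-Kolmogorov update
    p /= np.sum(p) * dx          # renormalize (grid truncation loses a little mass)

mean = np.sum(x * p) * dx
var = np.sum((x - mean) ** 2 * p) * dx
print(f"after t = {n_steps * dt}: mean = {mean:.3f}, variance = {var:.3f}")
print(f"stationary OU values    : mean = 0.000, variance = {D / theta:.3f}")
```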
Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.
1985-01-01
Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.
Challenges in leveraging existing human performance data for quantifying the IDHEAS HRA method
Liao, Huafei N.; Groth, Katrina; Stevens-Adams, Susan
2015-07-29
Our article documents an exploratory study for collecting and using human performance data to inform human error probability (HEP) estimates for a new human reliability analysis (HRA) method, the IntegrateD Human Event Analysis System (IDHEAS). The method is based on cognitive models and mechanisms underlying human behaviour and employs a framework of 14 crew failure modes (CFMs) to represent human failures typical of human performance in nuclear power plant (NPP) internal, at-power events [1]. A decision tree (DT) was constructed for each CFM to assess the probability of the CFM occurring in different contexts. Data needs for IDHEAS quantification are discussed. The data collection framework and process are then described, and how the collected data were used to inform HEP estimation is illustrated with two examples. Next, five major technical challenges are identified for leveraging human performance data for IDHEAS quantification. These challenges reflect data needs specific to IDHEAS; more importantly, they also represent general issues with current human performance data and can provide insight into a path forward for HRA data collection, use, and exchange in HRA method development, implementation, and validation.
Integrating Terrain Maps Into a Reactive Navigation Strategy
NASA Technical Reports Server (NTRS)
Howard, Ayanna; Werger, Barry; Seraji, Homayoun
2006-01-01
An improved method of processing information for autonomous navigation of a robotic vehicle across rough terrain involves the integration of terrain maps into a reactive navigation strategy. Somewhat more precisely, the method involves the incorporation, into navigation logic, of data equivalent to regional traversability maps. The terrain characteristic is mapped using a fuzzy-logic representation of the difficulty of traversing the terrain. The method is robust in that it integrates a global path-planning strategy with sensor-based regional and local navigation strategies to ensure a high probability of success in reaching a destination and avoiding obstacles along the way. The sensor-based strategies use cameras aboard the vehicle to observe the regional terrain, defined as the area of the terrain that covers the immediate vicinity near the vehicle to a specified distance a few meters away.
Farnsworth, G.L.; Nichols, J.D.; Sauer, J.R.; Fancy, S.G.; Pollock, K.H.; Shriner, S.A.; Simons, T.R.; Ralph, C. John; Rich, Terrell D.
2005-01-01
Point counts are a standard sampling procedure for many bird species, but lingering concerns still exist about the quality of information produced from the method. It is well known that variation in observer ability and environmental conditions can influence the detection probability of birds in point counts, but many biologists have been reluctant to abandon point counts in favor of more intensive approaches to counting. However, over the past few years a variety of statistical and methodological developments have begun to provide practical ways of overcoming some of the problems with point counts. We describe some of these approaches, and show how they can be integrated into standard point count protocols to greatly enhance the quality of the information. Several tools now exist for estimation of detection probability of birds during counts, including distance sampling, double observer methods, time-depletion (removal) methods, and hybrid methods that combine these approaches. Many counts are conducted in habitats that make auditory detection of birds much more likely than visual detection. As a framework for understanding detection probability during such counts, we propose separating two components of the probability a bird is detected during a count into (1) the probability a bird vocalizes during the count and (2) the probability this vocalization is detected by an observer. In addition, we propose that some measure of the area sampled during a count is necessary for valid inferences about bird populations. This can be done by employing fixed-radius counts or more sophisticated distance-sampling models. We recommend any studies employing point counts be designed to estimate detection probability and to include a measure of the area sampled.
On Fitting a Multivariate Two-Part Latent Growth Model
Xu, Shu; Blozis, Shelley A.; Vandewater, Elizabeth A.
2017-01-01
A 2-part latent growth model can be used to analyze semicontinuous data to simultaneously study change in the probability that an individual engages in a behavior, and if engaged, change in the behavior. This article uses a Monte Carlo (MC) integration algorithm to study the interrelationships between the growth factors of 2 variables measured longitudinally where each variable can follow a 2-part latent growth model. A SAS macro implementing Mplus is developed to estimate the model to take into account the sampling uncertainty of this simulation-based computational approach. A sample of time-use data is used to show how maximum likelihood estimates can be obtained using a rectangular numerical integration method and an MC integration method. PMID:29333054
Electrofishing capture probability of smallmouth bass in streams
Dauwalter, D.C.; Fisher, W.L.
2007-01-01
Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © Copyright by the American Fisheries Society 2007.
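To illustrate the abundance-adjustment idea in the abstract above, the sketch below fits a logistic model of capture probability to a few hypothetical pass-level records and divides the total catch by the predicted cumulative capture probability. The covariates, outcomes, and numbers are invented for illustration and are not taken from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical capture records: one row per marked fish and electrofishing pass.
# Columns: fish length (mm), pass number, mean thalweg depth (m).
X = np.array([[150, 1, 0.4], [150, 2, 0.4], [210, 1, 0.4], [210, 2, 0.4],
              [260, 1, 0.7], [260, 2, 0.7], [310, 1, 0.7], [310, 2, 0.7]])
captured = np.array([0, 1, 1, 0, 1, 1, 1, 0])      # 1 = recaptured on that pass

model = LogisticRegression().fit(X, captured)

# Per-pass capture probability for a 250 mm fish in 0.5 m deep water, two passes.
p_pass = model.predict_proba([[250, 1, 0.5], [250, 2, 0.5]])[:, 1]
p_cum = 1.0 - np.prod(1.0 - p_pass)                # cumulative capture probability

catch = 37                                          # fish of this size class actually sampled
print(f"cumulative capture probability: {p_cum:.2f}, adjusted abundance: {catch / p_cum:.0f}")
```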
Polynomial probability distribution estimation using the method of moments
Mattsson, Lars; Rydén, Jesper
2017-01-01
We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is setup algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949
Polynomial probability distribution estimation using the method of moments.
Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper
2017-01-01
We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is setup algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
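As a rough illustration of the method-of-moments idea (not the specific algorithmic setup of the paper above), the sketch below finds the coefficients of a polynomial on a bounded support whose raw moments match a set of sample moments, which reduces to solving a small linear system.

```python
import numpy as np

def polynomial_pdf_from_moments(moments, a, b):
    """Fit p(x) = sum_k c_k x^k on [a, b] so that its first len(moments) raw
    moments match the supplied values (moments[0] should be 1)."""
    n = len(moments)
    A = np.empty((n, n))
    for m in range(n):
        for k in range(n):
            p = m + k + 1
            A[m, k] = (b**p - a**p) / p      # integral of x**(m + k) over [a, b]
    return np.linalg.solve(A, np.asarray(moments, dtype=float))

# Example: approximate a Beta(2, 5) density on [0, 1] from its first four raw moments.
rng = np.random.default_rng(0)
samples = rng.beta(2.0, 5.0, size=100_000)
mom = [np.mean(samples**m) for m in range(4)]       # mom[0] == 1 by construction
coef = polynomial_pdf_from_moments(mom, 0.0, 1.0)

x = np.linspace(0.0, 1.0, 5)
print("cubic PDF approximation at", x, ":", np.polynomial.polynomial.polyval(x, coef))
```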
A Non-Simulation Based Method for Inducing Pearson’s Correlation Between Input Random Variables
2008-04-23
[Table fragment from the original report: a Ship Work Breakdown Structure (SWBS) cost table with columns for Upside, Probable, and Downside estimates across groups 000 Administration, 100 Hull, 200 Propulsion, 300 Electric Plant, 400 Electronics Systems, 500 Auxiliary Systems, 600 Outfit & Furnishings, 700 Weapons, 800 Integration & Engineering, and 900 Ship Assembly & Support; the numerical entries did not survive extraction.]
Integrated Processing in Planning and Understanding.
1986-12-01
to language analysis seemed necessary. The second observation was the rather commonsense one that it is easier to understand a foreign language ...syntactic analysis. Probably the most widely employed method for natural language analysis is augmented transition network parsing, or ATNs (Thorne, Bratley...accomplished. It is for this reason that the programming language Prolog, which implements that general method, has proven so well-suited to writing ATN
Organic First: A Biology-Friendly Chemistry Curriculum
ERIC Educational Resources Information Center
Reingold, I. David
2005-01-01
In this essay, the author describes to biologists the advantages of organic-first curriculum, on the assumption that few biologists are regular readers of "Journal of Chemistry Education" and therefore are probably unaware of the method for integrating chemistry and biology curricula. The author begins with the assumption that the majority of…
In our previous research, we showed that robust Bayesian methods can be used in environmental modeling to define a set of probability distributions for key parameters that captures the effects of expert disagreement, ambiguity, or ignorance. This entire set can then be update...
Controlled Trial Using Computerized Feedback to Improve Physicians' Diagnostic Judgments.
ERIC Educational Resources Information Center
Poses, Roy M.; And Others
1992-01-01
A study involving 14 experienced physicians investigated the effectiveness of a computer program (providing statistical feedback to teach a clinical diagnostic rule that predicts the probability of streptococcal pharyngitis), in conjunction with traditional lecture and periodic disease-prevalence reports. Results suggest the integrated method is a…
Huang, Xiaowei; Zhang, Yanling; Meng, Long; Abbott, Derek; Qian, Ming; Wong, Kelvin K L; Zheng, Rongqing; Zheng, Hairong; Niu, Lili
2017-01-01
Carotid plaque echogenicity is associated with the risk of cardiovascular events. Gray-scale median (GSM) of the ultrasound image of carotid plaques has been widely used as an objective method for evaluation of plaque echogenicity in patients with atherosclerosis. We proposed a computer-aided method to evaluate plaque echogenicity and compared its efficiency with GSM. One hundred and twenty-five carotid plaques (43 echo-rich, 35 intermediate, 47 echolucent) were collected from 72 patients in this study. The cumulative probability distribution curves were obtained based on statistics of the pixels in the gray-level images of plaques. The area under the cumulative probability distribution curve (AUCPDC) was calculated as its integral value to evaluate plaque echogenicity. The classification accuracy for three types of plaques is 78.4% (kappa value, κ = 0.673), when the AUCPDC is used for classifier training, whereas GSM is 64.8% (κ = 0.460). The receiver operating characteristic curves were produced to test the effectiveness of AUCPDC and GSM for the identification of echolucent plaques. The area under the curve (AUC) was 0.817 when AUCPDC was used for training the classifier, which is higher than that achieved using GSM (AUC = 0.746). Compared with GSM, the AUCPDC showed a borderline association with coronary heart disease (Spearman r = 0.234, p = 0.050). Our experimental results suggest that AUCPDC analysis is a promising method for evaluation of plaque echogenicity and predicting cardiovascular events in patients with plaques.
Testing Backwards Integration As A Method Of Age-Determination for KBO Families
NASA Astrophysics Data System (ADS)
Benfell, Nathan; Ragozzine, Darin
2017-10-01
The age of young asteroid collisional families is often determined by using backwards n-body integration of the solar system. This method is not used for discovering young asteroid families and is limited by the unpredictable influence of the Yarkovsky effect on individual specific asteroids over time. Since these limitations are not as important for objects in the Kuiper belt, Marcus et al. (2011) suggested that backwards integration could be used to discover and characterize collisional families in the outer solar system. However, there are some minor effects that may be important to include in the integration to ensure a faithful reproduction of the actual solar system. We have created simulated families of Kuiper Belt objects through a forwards integration of various objects with identical starting locations and velocity distributions, based on the Haumea family. After carrying this integration forwards through ~4 Gyr, backwards integrations are used (1) to investigate which factors are of enough significance to require inclusion in the integration (e.g., terrestrial planets, KBO self-gravity, putative Planet 9, etc.), (2) to test orbital element clustering statistics and identify methods for assessing false alarm probabilities, and (3) to compare the age estimates with the known age of the simulated family to explore the viability of backwards integration for precise age estimates.
The statistical theory of the fracture of fragile bodies. Part 2: The integral equation method
NASA Technical Reports Server (NTRS)
Kittl, P.
1984-01-01
It is demonstrated how with the aid of a bending test, the Weibull fracture risk function can be determined - without postulating its analytical form - by resolving an integral equation. The respective solutions for rectangular and circular section beams are given. In the first case the function is expressed as an algorithm and in the second, in the form of series. Taking into account that the cumulative fracture probability appearing in the solution to the integral equation must be continuous and monotonically increasing, any case of fabrication or selection of samples can be treated.
Bayesian block-diagonal variable selection and model averaging
Papaspiliopoulos, O.; Rossell, D.
2018-01-01
Summary We propose a scalable algorithmic framework for exact Bayesian variable selection and model averaging in linear models under the assumption that the Gram matrix is block-diagonal, and as a heuristic for exploring the model space for general designs. In block-diagonal designs our approach returns the most probable model of any given size without resorting to numerical integration. The algorithm also provides a novel and efficient solution to the frequentist best subset selection problem for block-diagonal designs. Posterior probabilities for any number of models are obtained by evaluating a single one-dimensional integral, and other quantities of interest such as variable inclusion probabilities and model-averaged regression estimates are obtained by an adaptive, deterministic one-dimensional numerical integration. The overall computational cost scales linearly with the number of blocks, which can be processed in parallel, and exponentially with the block size, rendering it most adequate in situations where predictors are organized in many moderately-sized blocks. For general designs, we approximate the Gram matrix by a block-diagonal matrix using spectral clustering and propose an iterative algorithm that capitalizes on the block-diagonal algorithms to explore efficiently the model space. All methods proposed in this paper are implemented in the R library mombf. PMID:29861501
Ly, Cheng
2013-10-01
The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and track the probability density function for each population that encompasses the proportion of neurons with a particular state rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used for both analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Thus, developing principled dimension-reduction methods is essential for the robustness of these powerful methods. As a more pragmatic tool, it would be of great value for the larger theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving external excitatory synaptic input only. We present a dimension-reduction method that reduces a two-dimensional partial differential-integral equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed modified mean-field method, is based entirely on the governing equations and not on any auxiliary variables or parameters, and it does not require fine-tuning. The principles of the modified mean-field method have potential applicability to more realistic (i.e., higher-dimensional) neural networks.
Computing thermal Wigner densities with the phase integration method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beutier, J.; Borgis, D.; Vuilleumier, R.
2014-08-28
We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.
Computing thermal Wigner densities with the phase integration method.
Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S
2014-08-28
We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.
Best Merge Region Growing with Integrated Probabilistic Classification for Hyperspectral Imagery
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Tilton, James C.
2011-01-01
A new method for spectral-spatial classification of hyperspectral images is proposed. The method is based on the integration of probabilistic classification within the hierarchical best merge region growing algorithm. For this purpose, preliminary probabilistic support vector machines classification is performed. Then, hierarchical step-wise optimization algorithm is applied, by iteratively merging regions with the smallest Dissimilarity Criterion (DC). The main novelty of this method consists in defining a DC between regions as a function of region statistical and geometrical features along with classification probabilities. Experimental results are presented on a 200-band AVIRIS image of the Northwestern Indiana s vegetation area and compared with those obtained by recently proposed spectral-spatial classification techniques. The proposed method improves classification accuracies when compared to other classification approaches.
Architecture for Integrated Medical Model Dynamic Probabilistic Risk Assessment
NASA Technical Reports Server (NTRS)
Jaworske, D. A.; Myers, J. G.; Goodenow, D.; Young, M.; Arellano, J. D.
2016-01-01
Probabilistic Risk Assessment (PRA) is a modeling tool used to predict potential outcomes of a complex system based on a statistical understanding of many initiating events. Utilizing a Monte Carlo method, thousands of instances of the model are considered and outcomes are collected. PRA is considered static, utilizing probabilities alone to calculate outcomes. Dynamic Probabilistic Risk Assessment (dPRA) is an advanced concept where modeling predicts the outcomes of a complex system based not only on the probabilities of many initiating events, but also on a progression of dependencies brought about by progressing down a time line. Events are placed in a single time line, adding each event to a queue, as managed by a planner. Progression down the time line is guided by rules, as managed by a scheduler. The recently developed Integrated Medical Model (IMM) summarizes astronaut health as governed by the probabilities of medical events and mitigation strategies. Managing the software architecture process provides a systematic means of creating, documenting, and communicating a software design early in the development process. The software architecture process begins with establishing requirements and the design is then derived from the requirements.
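A minimal sketch of the Monte Carlo idea behind a dynamic PRA is shown below: events are drawn along a mission time line, each simulated instance is walked forward day by day, and outcome statistics emerge from many runs. The event types, probabilities, and mitigation rule are invented placeholders and are not IMM model content.

```python
import random

def simulate_mission(days=180, p_event=0.01, p_evac_given_event=0.2, rng=None):
    """One Monte Carlo instance of a toy mission time line (illustrative numbers only)."""
    rng = rng or random.Random()
    for day in range(days):
        if rng.random() < p_event:                  # an initiating medical event occurs
            if rng.random() < p_evac_given_event:   # mitigation fails on this time line
                return day                          # mission ends early (evacuation)
    return None                                     # mission completes

rng = random.Random(42)
n = 50_000
evacs = sum(simulate_mission(rng=rng) is not None for _ in range(n))
print(f"estimated evacuation probability: {evacs / n:.3f}")
```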
NASA Astrophysics Data System (ADS)
Utama, Briandhika; Purqon, Acep
2016-08-01
The path integral is a method for transforming a function from its initial condition to its final condition by multiplying the initial condition by a transition probability function known as the propagator. Early development of the method focused on problems in quantum mechanics, but the path integral can also be applied to other subjects with suitable modifications of the propagator. In this study, we investigate the application of the path integral method to financial derivatives, specifically stock options. The Black-Scholes model (Nobel Prize, 1997) was a founding result in option pricing; although it does not predict option prices perfectly, largely because of its sensitivity to major market changes, it remains a legitimate pricing equation. Its derivation is demanding because it is a stochastic partial differential equation. The Black-Scholes equation shares a principle with the path integral: the share's initial price is transformed into its final price. The Black-Scholes propagator is then derived by introducing a modified Lagrangian based on the Black-Scholes equation. Finally, we compare the analytical path integral solution with a Monte Carlo numerical solution to examine the agreement between the two methods.
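For context, the sketch below prices a European call two ways: with the standard closed-form Black-Scholes formula and with a plain Monte Carlo simulation of geometric Brownian motion. It only illustrates the analytical-versus-numerical comparison the abstract refers to; it does not implement the authors' path-integral propagator, and the parameter values are arbitrary.

```python
import numpy as np
from scipy.stats import norm

def black_scholes_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def mc_call(S0, K, r, sigma, T, n=500_000, seed=1):
    """Monte Carlo price under risk-neutral geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

print("closed form:", black_scholes_call(100, 105, 0.05, 0.2, 1.0))
print("Monte Carlo:", mc_call(100, 105, 0.05, 0.2, 1.0))
```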
NASA Astrophysics Data System (ADS)
Angraini, Lily Maysari; Suparmi, Variani, Viska Inda
2010-12-01
SUSY quantum mechanics can be applied to solve Schrodinger equation for high dimensional system that can be reduced into one dimensional system and represented in lowering and raising operators. Lowering and raising operators can be obtained using relationship between original Hamiltonian equation and the (super) potential equation. In this paper SUSY quantum mechanics is used as a method to obtain the wave function and the energy level of the Modified Poschl Teller potential. The graph of wave function equation and probability density is simulated by using Delphi 7.0 programming language. Finally, the expectation value of quantum mechanics operator could be calculated analytically using integral form or probability density graph resulted by the programming.
NASA Astrophysics Data System (ADS)
He, Jingjing; Wang, Dengjiang; Zhang, Weifang
2015-03-01
This paper presents an experimental and modeling study for damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate piezoelectric (PZT) ceramic wafer-type sensors are employed to perform in-situ non-destructive testing during fatigue cyclical loading. A multi-feature integration method is developed to quantify the crack size using signal features of correlation coefficient, amplitude change, and phase change. In addition, a probability of detection (POD) model is constructed to quantify the reliability of the developed sizing method. Using the developed crack size quantification method and the resulting POD curve, probabilistic fatigue life prediction can be performed to provide comprehensive information for decision-making. The effectiveness of the overall methodology is demonstrated and validated using several aircraft lap joint specimens from different manufacturers and under different loading conditions.
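The probability-of-detection ingredient can be sketched with a hit/miss log-logistic model, a common convention in non-destructive evaluation (though not necessarily the exact POD formulation used in the study above); the inspection data below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical hit/miss inspection data: crack sizes (mm) and detection outcomes.
sizes = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0])
hits = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

# Log-logistic POD model: logit(POD) linear in log(crack size); C=10 weakens regularization.
model = LogisticRegression(C=10.0).fit(np.log(sizes).reshape(-1, 1), hits)

a = np.linspace(0.5, 5.0, 46).reshape(-1, 1)
pod = model.predict_proba(np.log(a))[:, 1]                    # POD(a) curve
a90 = a[np.argmax(pod >= 0.9), 0] if (pod >= 0.9).any() else None
print("a90 (crack size detected with 90% probability):", a90)
```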
NASA Astrophysics Data System (ADS)
Volkov, Sergey
2017-11-01
This paper presents a new method of numerical computation of the mass-independent QED contributions to the electron anomalous magnetic moment which arise from Feynman graphs without closed electron loops. The method is based on a forestlike subtraction formula that removes all ultraviolet and infrared divergences in each Feynman graph before integration in Feynman-parametric space. The integration is performed by an importance sampling Monte-Carlo algorithm with the probability density function that is constructed for each Feynman graph individually. The method is fully automated at any order of the perturbation series. The results of applying the method to 2-loop, 3-loop, 4-loop Feynman graphs, and to some individual 5-loop graphs are presented, as well as the comparison of this method with other ones with respect to Monte Carlo convergence speed.
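The core numerical ingredient above, importance sampling with a density tailored to the integrand, can be illustrated on a one-dimensional toy integral. The integrand and sampling density below are chosen only to mimic an integrable endpoint peak and have nothing to do with Feynman-parametric integrands.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: I = integral over (0, 1) of f(x) = x**(-0.5) * exp(-x), which peaks at x -> 0.
f = lambda x: x**-0.5 * np.exp(-x)

# Importance density g(x) = 0.5 * x**(-0.5) on (0, 1); sample via inverse CDF x = u**2.
n = 1_000_000
u = rng.random(n)
x = u**2
weights = f(x) / (0.5 * x**-0.5)          # f/g, here simply 2*exp(-x), so low variance
estimate = weights.mean()
stderr = weights.std(ddof=1) / np.sqrt(n)

naive = f(rng.random(n)).mean()           # plain uniform sampling for comparison
print(f"importance sampling: {estimate:.5f} +/- {stderr:.5f}, naive: {naive:.5f}")
```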
Pretest probability estimation in the evaluation of patients with possible deep vein thrombosis.
Vinson, David R; Patel, Jason P; Irving, Cedric S
2011-07-01
An estimation of pretest probability is integral to the proper interpretation of a negative compression ultrasound in the diagnostic assessment of lower-extremity deep vein thrombosis. We sought to determine the rate, method, and predictors of pretest probability estimation in such patients. This cross-sectional study of outpatients was conducted in a suburban community hospital in 2006. Estimation of pretest probability was done by enzyme-linked immunosorbent assay d-dimer, Wells criteria, and unstructured clinical impression. Using logistic regression analysis, we measured predictors of documented risk assessment. A cohort analysis was undertaken to compare 3-month thromboembolic outcomes between risk groups. Among 524 cases, 289 (55.2%) underwent pretest probability estimation using the following methods: enzyme-linked immunosorbent assay d-dimer (228; 43.5%), clinical impression (106; 20.2%), and Wells criteria (24; 4.6%), with 69 (13.2%) patients undergoing a combination of at least two methods. Patient factors were not predictive of pretest probability estimation, but the specialty of the clinician was predictive; emergency physicians (P < .0001) and specialty clinicians (P = .001) were less likely than primary care clinicians to perform risk assessment. Thromboembolic events within 3 months were experienced by 0 of 52 patients in the explicitly low-risk group, 4 (1.8%) of 219 in the explicitly moderate- to high-risk group, and 1 (0.4%) of 226 in the group that did not undergo explicit risk assessment. Negative ultrasounds in the workup of deep vein thrombosis are commonly interpreted in isolation apart from pretest probability estimations. Risk assessments varied by physician specialties. Opportunities exist for improvement in the diagnostic evaluation of these patients. Copyright © 2011 Elsevier Inc. All rights reserved.
Unified framework for information integration based on information geometry
Oizumi, Masafumi; Amari, Shun-ichi
2016-01-01
Assessment of causal influences is a ubiquitous and important subject across diverse research fields. Drawn from consciousness studies, integrated information is a measure that defines integration as the degree of causal influences among elements. Whereas pairwise causal influences between elements can be quantified with existing methods, quantifying multiple influences among many elements poses two major mathematical difficulties. First, overestimation occurs due to interdependence among influences if each influence is separately quantified in a part-based manner and then simply summed over. Second, it is difficult to isolate causal influences while avoiding noncausal confounding influences. To resolve these difficulties, we propose a theoretical framework based on information geometry for the quantification of multiple causal influences with a holistic approach. We derive a measure of integrated information, which is geometrically interpreted as the divergence between the actual probability distribution of a system and an approximated probability distribution where causal influences among elements are statistically disconnected. This framework provides intuitive geometric interpretations harmonizing various information theoretic measures in a unified manner, including mutual information, transfer entropy, stochastic interaction, and integrated information, each of which is characterized by how causal influences are disconnected. In addition to the mathematical assessment of consciousness, our framework should help to analyze causal relationships in complex systems in a complete and hierarchical manner. PMID:27930289
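The geometric picture can be made concrete in its simplest special case: the divergence between a joint distribution and the product of its marginals, which recovers mutual information. The sketch below computes that divergence for a two-element binary system with made-up numbers; the full measure in the paper disconnects causal (cross-time) influences rather than simple marginals.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence between two distributions on the same support."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Joint distribution of two binary elements (rows: X, columns: Y); illustrative numbers.
p_xy = np.array([[0.40, 0.10],
                 [0.10, 0.40]])

# "Disconnected" model here: product of marginals, so the divergence equals mutual information.
p_x = p_xy.sum(axis=1, keepdims=True)
p_y = p_xy.sum(axis=0, keepdims=True)
q_xy = p_x @ p_y

print("divergence from the disconnected model (nats):", kl(p_xy, q_xy))
```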
2017-01-01
Abstract Objectives: This study examined race differences in the probability of belonging to a specific social network typology of family, friends, and church members. Method: Samples of African Americans, Caribbean blacks, and non-Hispanic whites aged 55+ were drawn from the National Survey of American Life. Typology indicators related to social integration and negative interactions with family, friendship, and church networks were used. Latent class analysis was used to identify typologies, and latent class multinomial logistic regression was used to assess the influence of race, and interactions between race and age, and race and education on typology membership. Results: Four network typologies were identified: optimal (high social integration, low negative interaction), family-centered (high social integration within primarily the extended family network, low negative interaction), strained (low social integration, high negative interaction), and ambivalent (high social integration and high negative interaction). Findings for race and age and race and education interactions indicated that the effects of education and age on typology membership varied by race. Discussion: Overall, the findings demonstrate how race interacts with age and education to influence the probability of belonging to particular network types. A better understanding of the influence of race, education, and age on social network typologies will inform future research and theoretical developments in this area. PMID:28329871
Probabilistic data integration and computational complexity
NASA Astrophysics Data System (ADS)
Hansen, T. M.; Cordua, K. S.; Mosegaard, K.
2016-12-01
Inverse problems in Earth Sciences typically refer to the problem of inferring information about properties of the Earth from observations of geophysical data (the result of nature's solution to the `forward' problem). This problem can be formulated more generally as a problem of `integration of information'. A probabilistic formulation of data integration is in principle simple: If all information available (from e.g. geology, geophysics, remote sensing, chemistry…) can be quantified probabilistically, then different algorithms exist that allow solving the data integration problem either through an analytical description of the combined probability function, or sampling the probability function. In practice however, probabilistic based data integration may not be easy to apply successfully. This may be related to the use of sampling methods, which are known to be computationally costly. But, another source of computational complexity is related to how the individual types of information are quantified. In one case a data integration problem is demonstrated where the goal is to determine the existence of buried channels in Denmark, based on multiple sources of geo-information. Due to one type of information being too informative (and hence conflicting), this leads to a difficult sampling problem with unrealistic uncertainty. Resolving this conflict prior to data integration leads to an easy data integration problem, with no biases. In another case it is demonstrated how imperfections in the description of the geophysical forward model (related to solving the wave-equation) can lead to a difficult data integration problem, with severe bias in the results. If the modeling error is accounted for, the data integration problem becomes relatively easy, with no apparent biases. Both examples demonstrate that biased information can have a dramatic effect on the computational efficiency of solving a data integration problem and lead to biased results and under-estimation of uncertainty. However, in both examples, one can also analyze the performance of the sampling methods used to solve the data integration problem to indicate the existence of biased information. This can be used actively to avoid biases in the available information and subsequently in the final uncertainty evaluation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonen, E.P.; Johnson, K.I.; Simonen, F.A.
The Vessel Integrity Simulation Analysis (VISA-II) code was developed to allow calculations of the failure probability of a reactor pressure vessel subject to defined pressure/temperature transients. A version of the code, revised by Pacific Northwest Laboratory for the US Nuclear Regulatory Commission, was used to evaluate the sensitivities of calculated through-wall flaw probability to material, flaw and calculational assumptions. Probabilities were more sensitive to flaw assumptions than to material or calculational assumptions. Alternative flaw assumptions changed the probabilities by one to two orders of magnitude, whereas alternative material assumptions typically changed the probabilities by a factor of two or less. Flaw shape, flaw through-wall position and flaw inspection were sensitivities examined. Material property sensitivities included the assumed distributions in copper content and fracture toughness. Methods of modeling flaw propagation that were evaluated included arrest/reinitiation toughness correlations, multiple toughness values along the length of a flaw, flaw jump distance for each computer simulation and added error in estimating irradiated properties caused by the trend curve correlation error.
An integrated framework for detecting suspicious behaviors in video surveillance
NASA Astrophysics Data System (ADS)
Zin, Thi Thi; Tin, Pyke; Hama, Hiromitsu; Toriu, Takashi
2014-03-01
In this paper, we propose an integrated framework for detecting suspicious behaviors in video surveillance systems established in public places such as railway stations, airports, and shopping malls. In particular, people loitering suspiciously, unattended objects left behind, and suspicious objects exchanged between persons are common security concerns in airports and other transit scenarios. These involve understanding the scene/event, analyzing human movements, recognizing controllable objects, and observing the effect of the human movement on those objects. In the proposed framework, a multiple background modeling technique, a high level motion feature extraction method, and embedded Markov chain models are integrated for detecting suspicious behaviors in real time video surveillance systems. Specifically, the proposed framework employs a probability based multiple backgrounds modeling technique to detect moving objects. Then the velocity and distance measures are computed as the high level motion features of interest. By integrating the computed features with the first passage time probabilities of the embedded Markov chain, the suspicious behaviors in video surveillance are analyzed for detecting loitering persons, objects left behind and human interactions such as fighting. The proposed framework has been tested by using standard public datasets and our own video surveillance scenarios.
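A small sketch of the first-passage-time ingredient: given an embedded Markov chain over coarse scene states, the probability that a tracked person first reaches a designated "loitering" state within k steps can be computed from the transient block of the transition matrix. The chain and its probabilities below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Toy 3-state chain over coarse position zones; state 2 is the "loitering" zone.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.2, 0.7]])

target = 2
Q = np.delete(np.delete(P, target, axis=0), target, axis=1)   # transient-to-transient block

def first_passage_cdf(Q, start, steps):
    """P(first visit to the target state occurs within `steps` steps), starting
    from transient state `start` (index in the reduced chain)."""
    probs, dist = [], np.eye(Q.shape[0])[start]
    for _ in range(steps):
        dist = dist @ Q                  # probability mass still in transient states
        probs.append(1.0 - dist.sum())   # mass already absorbed by the target state
    return np.array(probs)

print(first_passage_cdf(Q, start=0, steps=10))
```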
A fast and high performance multiple data integration algorithm for identifying human disease genes
2015-01-01
Background Integrating multiple data sources is indispensable in improving disease gene identification. It is not only due to the fact that disease genes associated with similar genetic diseases tend to lie close to each other in various biological networks, but also due to the fact that gene-disease associations are complex. Although various algorithms have been proposed to identify disease genes, their prediction performance and computational time still should be further improved. Results In this study, we propose a fast and high performance multiple data integration algorithm for identifying human disease genes. A posterior probability of each candidate gene associated with individual diseases is calculated by using a Bayesian analysis method and a binary logistic regression model. Two prior probability estimation strategies and two feature vector construction methods are developed to test the performance of the proposed algorithm. Conclusions The proposed algorithm not only generates predictions with high AUC scores, but also runs very fast. When only a single PPI network is employed, the AUC score is 0.769 by using F2 as feature vectors. The average running time for each leave-one-out experiment is only around 1.5 seconds. When three biological networks are integrated, the AUC score using F3 as feature vectors increases to 0.830, and the average running time for each leave-one-out experiment takes only about 12.54 seconds. This is better than many existing algorithms. PMID:26399620
NASA Astrophysics Data System (ADS)
Goodman, Joseph W.
2000-07-01
The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I RIchard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research
Finite-time state feedback stabilisation of stochastic high-order nonlinear feedforward systems
NASA Astrophysics Data System (ADS)
Xie, Xue-Jun; Zhang, Xing-Hui; Zhang, Kemei
2016-07-01
This paper studies the finite-time state feedback stabilisation of stochastic high-order nonlinear feedforward systems. Based on the stochastic Lyapunov theorem on finite-time stability, by using the homogeneous domination method, the adding-one-power-integrator and sign function method, constructing a ? Lyapunov function, and verifying the existence and uniqueness of the solution, a continuous state feedback controller is designed to guarantee that the closed-loop system is finite-time stable in probability.
NESSUS/EXPERT - An expert system for probabilistic structural analysis methods
NASA Technical Reports Server (NTRS)
Millwater, H.; Palmer, K.; Fink, P.
1988-01-01
An expert system (NESSUS/EXPERT) is presented which provides assistance in using probabilistic structural analysis methods. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator. NESSUS/EXPERT was developed with a combination of FORTRAN and CLIPS, a C language expert system tool, to exploit the strengths of each language.
Ocampo-Duque, William; Osorio, Carolina; Piamba, Christian; Schuhmacher, Marta; Domingo, José L
2013-02-01
The integration of water quality monitoring variables is essential in environmental decision making. Nowadays, advanced techniques to manage subjectivity, imprecision, uncertainty, vagueness, and variability are required in such a complex evaluation process. We here propose a probabilistic fuzzy hybrid model to assess river water quality. Fuzzy logic reasoning has been used to compute a water quality integrative index. By applying a Monte Carlo technique, based on non-parametric probability distributions, the randomness of model inputs was estimated. Annual histograms of nine water quality variables were built with monitoring data systematically collected in the Colombian Cauca River, and probability density estimations using the kernel smoothing method were applied to fit the data. Several years were assessed, and river sectors upstream and downstream of the city of Santiago de Cali, a big city with basic wastewater treatment and high industrial activity, were analyzed. The probabilistic fuzzy water quality index was able to explain the reduction in water quality as the river receives a larger number of agricultural, domestic, and industrial effluents. The results of the hybrid model were compared to traditional water quality indexes. The main advantage of the proposed method is that it considers flexible boundaries between the linguistic qualifiers used to define the water status, since the membership of water quality in the various output fuzzy sets or classes is provided with percentiles and histograms, which allows a better classification of the real water condition. The results of this study show that fuzzy inference systems integrated with stochastic non-parametric techniques may be used as complementary tools in water quality indexing methodologies. Copyright © 2012 Elsevier Ltd. All rights reserved.
An integrated method for atherosclerotic carotid plaque segmentation in ultrasound image.
Qian, Chunjun; Yang, Xiaoping
2018-01-01
Carotid artery atherosclerosis is an important cause of stroke. Ultrasound imaging has been widely used in the diagnosis of atherosclerosis. Therefore, segmenting atherosclerotic carotid plaque in ultrasound images is an important task. Accurate plaque segmentation is helpful for the measurement of carotid plaque burden. In this paper, we propose and evaluate a novel learning-based integrated framework for plaque segmentation. In our study, four different classification algorithms, along with the auto-context iterative algorithm, were employed to effectively integrate features from ultrasound images, together with the iteratively estimated and refined probability maps, for pixel-wise classification. The four classification algorithms were support vector machine with linear kernel, support vector machine with radial basis function kernel, AdaBoost, and random forest. The plaque segmentation was implemented in the generated probability map. The performance of the four different learning-based plaque segmentation methods was tested on 29 B-mode ultrasound images. The evaluation indices for our proposed methods consisted of sensitivity, specificity, Dice similarity coefficient, overlap index, error of area, absolute error of area, point-to-point distance, and Hausdorff point-to-point distance, along with the area under the ROC curve. The segmentation method integrating the random forest and an auto-context model obtained the best results (sensitivity 80.4 ± 8.4%, specificity 96.5 ± 2.0%, Dice similarity coefficient 81.0 ± 4.1%, overlap index 68.3 ± 5.8%, error of area -1.02 ± 18.3%, absolute error of area 14.7 ± 10.9%, point-to-point distance 0.34 ± 0.10 mm, Hausdorff point-to-point distance 1.75 ± 1.02 mm, and area under the ROC curve 0.897), which were nearly the best compared with those from existing methods. Our proposed learning-based integrated framework investigated in this study could be useful for atherosclerotic carotid plaque segmentation, which will be helpful for the measurement of carotid plaque burden. Copyright © 2017 Elsevier B.V. All rights reserved.
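A minimal sketch of the auto-context loop with a random forest, assuming synthetic pixel features rather than real ultrasound data: the class-probability map produced at each iteration is appended to the feature set for the next iteration. Real auto-context would also include spatial context features sampled around each pixel and separate training and test images, which are omitted here for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in pixel data: rows are pixels, columns are image features (synthetic here).
X = rng.normal(size=(5000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

features = X.copy()
prob = np.full((X.shape[0], 1), 0.5)           # uninformative initial probability map

for it in range(3):                             # auto-context iterations
    clf = RandomForestClassifier(n_estimators=100, random_state=it)
    clf.fit(np.hstack([features, prob]), y)
    prob = clf.predict_proba(np.hstack([features, prob]))[:, [1]]   # refined map

segmentation = (prob[:, 0] > 0.5).astype(int)   # threshold the final probability map
print("training-set accuracy (illustrative only):", (segmentation == y).mean())
```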
Unmanned aircraft system sense and avoid integrity and continuity
NASA Astrophysics Data System (ADS)
Jamoom, Michael B.
This thesis describes new methods to guarantee safety of sense and avoid (SAA) functions for Unmanned Aircraft Systems (UAS) by evaluating integrity and continuity risks. Previous SAA efforts focused on relative safety metrics, such as risk ratios, comparing the risk of using an SAA system versus not using it. The methods in this thesis evaluate integrity and continuity risks as absolute measures of safety, as is the established practice in commercial aircraft terminal area navigation applications. The main contribution of this thesis is a derivation of a new method, based on a standard intruder relative constant velocity assumption, that uses hazard state estimates and estimate error covariances to establish (1) the integrity risk of the SAA system not detecting imminent loss of '"well clear," which is the time and distance required to maintain safe separation from intruder aircraft, and (2) the probability of false alert, the continuity risk. Another contribution is applying these integrity and continuity risk evaluation methods to set quantifiable and certifiable safety requirements on sensors. A sensitivity analysis uses this methodology to evaluate the impact of sensor errors on integrity and continuity risks. The penultimate contribution is an integrity and continuity risk evaluation where the estimation model is refined to address realistic intruder relative linear accelerations, which goes beyond the current constant velocity standard. The final contribution is an integrity and continuity risk evaluation addressing multiple intruders. This evaluation is a new innovation-based method to determine the risk of mis-associating intruder measurements. A mis-association occurs when the SAA system incorrectly associates a measurement to the wrong intruder, causing large errors in the estimated intruder trajectories. The new methods described in this thesis can help ensure safe encounters between aircraft and enable SAA sensor certification for UAS integration into the National Airspace System.
Security Threat Assessment of an Internet Security System Using Attack Tree and Vague Sets
2014-01-01
Security threat assessment of the Internet security system has become a greater concern in recent years because of the progress and diversification of information technology. Traditionally, the failure probabilities of bottom events of an Internet security system are treated as exact values when the failure probability of the entire system is estimated. However, security threat assessment when the malfunction data of the system's elementary event are incomplete—the traditional approach for calculating reliability—is no longer applicable. Moreover, it does not consider the failure probability of the bottom events suffered in the attack, which may bias conclusions. In order to effectively solve the problem above, this paper proposes a novel technique, integrating attack tree and vague sets for security threat assessment. For verification of the proposed approach, a numerical example of an Internet security system security threat assessment is adopted in this paper. The result of the proposed method is compared with the listing approaches of security threat assessment methods. PMID:25405226
Security threat assessment of an Internet security system using attack tree and vague sets.
Chang, Kuei-Hu
2014-01-01
Security threat assessment of the Internet security system has become a greater concern in recent years because of the progress and diversification of information technology. Traditionally, the failure probabilities of bottom events of an Internet security system are treated as exact values when the failure probability of the entire system is estimated. However, security threat assessment when the malfunction data of the system's elementary event are incomplete--the traditional approach for calculating reliability--is no longer applicable. Moreover, it does not consider the failure probability of the bottom events suffered in the attack, which may bias conclusions. In order to effectively solve the problem above, this paper proposes a novel technique, integrating attack tree and vague sets for security threat assessment. For verification of the proposed approach, a numerical example of an Internet security system security threat assessment is adopted in this paper. The result of the proposed method is compared with the listing approaches of security threat assessment methods.
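To illustrate the gate arithmetic, the sketch below propagates probability intervals through a tiny attack tree with AND/OR gates. The intervals stand in loosely for vague-set membership bounds and the leaf values are invented, so this is not the paper's vague-set formulation itself.

```python
# Attack tree: root is OR of (A AND B, C); leaves carry (lower, upper) probability intervals.
leaf = {"A": (0.10, 0.20), "B": (0.30, 0.40), "C": (0.05, 0.10)}

def and_gate(children):
    """All child attacks must succeed: multiply the bounds."""
    lo, hi = 1.0, 1.0
    for l, h in children:
        lo, hi = lo * l, hi * h
    return lo, hi

def or_gate(children):
    """At least one child attack succeeds: complement of all failing."""
    not_lo, not_hi = 1.0, 1.0
    for l, h in children:
        not_lo, not_hi = not_lo * (1 - l), not_hi * (1 - h)
    return 1 - not_lo, 1 - not_hi

root = or_gate([and_gate([leaf["A"], leaf["B"]]), leaf["C"]])
print("system compromise probability interval:", root)
```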
Time Dependence of Collision Probabilities During Satellite Conjunctions
NASA Technical Reports Server (NTRS)
Hall, Doyle T.; Hejduk, Matthew D.; Johnson, Lauren C.
2017-01-01
The NASA Conjunction Assessment Risk Analysis (CARA) team has recently implemented updated software to calculate the probability of collision (P_c) for Earth-orbiting satellites. The algorithm can employ complex dynamical models for orbital motion, and account for the effects of non-linear trajectories as well as both position and velocity uncertainties. This “3D P_c” method entails computing a 3-dimensional numerical integral for each estimated probability. Our analysis indicates that the 3D method provides several new insights over the traditional “2D P_c” method, even when approximating the orbital motion using the relatively simple Keplerian two-body dynamical model. First, the formulation provides the means to estimate variations in the time derivative of the collision probability, or the probability rate, R_c. For close-proximity satellites, such as those orbiting in formations or clusters, R_c variations can show multiple peaks that repeat or blend with one another, providing insight into the ongoing temporal distribution of risk. For single, isolated conjunctions, R_c analysis provides the means to identify and bound the times of peak collision risk. Additionally, analysis of multiple actual archived conjunctions demonstrates that the commonly used “2D P_c” approximation can occasionally provide inaccurate estimates. These include cases in which the 2D method yields negligibly small probabilities (e.g., P_c < 10^-10), but the 3D estimates are sufficiently large to prompt increased monitoring or collision mitigation (e.g., P_c ≥ 10^-5). Finally, the archive analysis indicates that a relatively efficient calculation can be used to identify which conjunctions will have negligibly small probabilities. This small-P_c screening test can significantly speed the overall risk analysis computation for large numbers of conjunctions.
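For intuition, a Monte Carlo estimate of collision probability under the simplest assumptions (straight-line relative motion and position uncertainty only) can be written in a few lines. The relative state, covariance, and hard-body radius below are arbitrary, and this is far simpler than the 3D P_c integral described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal relative position/velocity at closest approach (km, km/s) and position covariance.
r0 = np.array([0.5, 0.2, 0.1])
v0 = np.array([0.0, -7.5, 0.0])
cov_r = np.diag([0.2, 0.3, 0.1]) ** 2
R_hb = 0.02                                    # combined hard-body radius, km

n = 500_000
samples = rng.multivariate_normal(r0, cov_r, size=n)

# Under straight-line relative motion, each sampled offset's closest-approach distance
# is its component perpendicular to the relative velocity direction.
vhat = v0 / np.linalg.norm(v0)
perp = samples - np.outer(samples @ vhat, vhat)
miss = np.linalg.norm(perp, axis=1)

print(f"Monte Carlo collision probability: {np.mean(miss < R_hb):.2e}")
```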
Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics
NASA Technical Reports Server (NTRS)
Pohorille, Andrew
2006-01-01
The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered as a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described by rate constants. These problems are isomorphic with chemical kinetics problems. Recently, several efficient techniques for this purpose have been developed based on the approach originally proposed by Gillespie. Although the utility of the techniques mentioned above for Bayesian problems has not been determined, further research along these lines is warranted.
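Of the sampling strategies listed above, parallel tempering is the easiest to show in a few lines. The sketch below runs three Metropolis chains at different inverse temperatures on a well-separated bimodal density and occasionally swaps neighbouring chains; the target density, temperature ladder, and tuning values are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bimodal target (unnormalized log-density): two well-separated Gaussian modes.
log_p = lambda x: np.logaddexp(-0.5 * ((x - 4) / 0.5)**2, -0.5 * ((x + 4) / 0.5)**2)

betas = [1.0, 0.3, 0.1]                    # temperature ladder (beta = 1/T)
x = np.zeros(len(betas))
samples = []

for step in range(20_000):
    # Metropolis update within each temperature.
    for i, b in enumerate(betas):
        prop = x[i] + rng.normal(scale=1.0)
        if np.log(rng.random()) < b * (log_p(prop) - log_p(x[i])):
            x[i] = prop
    # Occasionally attempt a swap between neighbouring temperatures.
    if step % 10 == 0:
        i = rng.integers(len(betas) - 1)
        delta = (betas[i] - betas[i + 1]) * (log_p(x[i + 1]) - log_p(x[i]))
        if np.log(rng.random()) < delta:
            x[i], x[i + 1] = x[i + 1], x[i]
    samples.append(x[0])                   # keep only the beta = 1 chain

samples = np.array(samples[2000:])         # discard burn-in
print("fraction of samples in the right-hand mode:", np.mean(samples > 0))
```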
Chen, Kevin T; Izquierdo-Garcia, David; Poynton, Clare B; Chonde, Daniel B; Catana, Ciprian
2017-03-01
To propose an MR-based method for generating continuous-valued head attenuation maps and to assess its accuracy and reproducibility. Demonstrating that novel MR-based photon attenuation correction methods are both accurate and reproducible is essential prior to using them routinely in research and clinical studies on integrated PET/MR scanners. Continuous-valued linear attenuation coefficient maps ("μ-maps") were generated by combining atlases that provided the prior probability of voxel positions belonging to a certain tissue class (air, soft tissue, or bone) and an MR intensity-based likelihood classifier to produce posterior probability maps of tissue classes. These probabilities were used as weights to generate the μ-maps. The accuracy of this probabilistic atlas-based continuous-valued μ-map ("PAC-map") generation method was assessed by calculating the voxel-wise absolute relative change (RC) between the MR-based and scaled CT-based attenuation-corrected PET images. To assess reproducibility, we performed pair-wise comparisons of the RC values obtained from the PET images reconstructed using the μ-maps generated from the data acquired at three time points. The proposed method produced continuous-valued μ-maps that qualitatively reflected the variable anatomy in patients with brain tumor and agreed well with the scaled CT-based μ-maps. The absolute RC comparing the resulting PET volumes was 1.76 ± 2.33 %, quantitatively demonstrating that the method is accurate. Additionally, we also showed that the method is highly reproducible, the mean RC value for the PET images reconstructed using the μ-maps obtained at the three visits being 0.65 ± 0.95 %. Accurate and highly reproducible continuous-valued head μ-maps can be generated from MR data using a probabilistic atlas-based approach.
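The weighting step can be sketched for a single voxel: class posteriors formed from atlas priors and MR-intensity likelihoods weight nominal attenuation coefficients. The coefficients below are approximate 511 keV textbook values and the voxel probabilities are invented; the actual method operates on full 3D atlases and intensity models.

```python
# Nominal linear attenuation coefficients at 511 keV (1/cm); approximate textbook values.
MU = {"air": 0.0, "soft": 0.096, "bone": 0.151}

def continuous_mu_value(prior, likelihood):
    """Combine atlas priors and MR-intensity likelihoods for one voxel, then take the
    posterior-probability-weighted average of the class attenuation coefficients."""
    post = {c: prior[c] * likelihood[c] for c in MU}
    norm = sum(post.values())
    post = {c: p / norm for c, p in post.items()}
    return sum(post[c] * MU[c] for c in MU)

# One voxel: the atlas favours bone, but the MR intensity makes soft tissue plausible too.
prior = {"air": 0.05, "soft": 0.25, "bone": 0.70}
likelihood = {"air": 0.01, "soft": 0.60, "bone": 0.39}
print(f"continuous-valued mu: {continuous_mu_value(prior, likelihood):.4f} 1/cm")
```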
Hey, Jody; Nielsen, Rasmus
2007-01-01
In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
Path integration mediated systematic search: a Bayesian model.
Vickerstaff, Robert J; Merkle, Tobias
2012-08-21
The systematic search behaviour is a backup system that increases the chances of desert ants finding their nest entrance after foraging when the path integrator has failed to guide them home accurately enough. Here we present a mathematical model of the systematic search that is based on extensive behavioural studies in North African desert ants Cataglyphis fortis. First, a simple search heuristic utilising Bayesian inference and a probability density function is developed. This model, which optimises the short-term nest detection probability, is then compared to three simpler search heuristics and to recorded search patterns of Cataglyphis ants. To compare the different searches a method to quantify search efficiency is established as well as an estimate of the error rate in the ants' path integrator. We demonstrate that the Bayesian search heuristic is able to automatically adapt to increasing levels of positional uncertainty to produce broader search patterns, just as desert ants do, and that it outperforms the three other search heuristics tested. The searches produced by it are also arguably the most similar in appearance to the ant's searches. Copyright © 2012 Elsevier Ltd. All rights reserved.
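A grid-based sketch of the Bayesian search idea: a Gaussian prior over the nest position (with spread set by the path-integrator error) is updated after each unsuccessful look, and the next search location is wherever the posterior nest probability is currently highest. The grid size, detection model, and numbers are all illustrative assumptions rather than the paper's fitted model.

```python
import numpy as np

# Grid over the ground around the (uncertain) nest position estimate, in metres.
x = np.linspace(-5, 5, 101)
X, Y = np.meshgrid(x, x)
sigma = 1.5                                            # path-integrator error (illustrative)
prior = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
prior /= prior.sum()

def detect_prob(px, py, cx, cy, r=0.3):
    """Probability of spotting the nest from (cx, cy) if it is actually at (px, py)."""
    return np.exp(-((px - cx)**2 + (py - cy)**2) / (2 * r**2))

belief = prior.copy()
path = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1)]      # a short search path so far
for cx, cy in path:
    belief *= 1.0 - detect_prob(X, Y, cx, cy)          # nest was not found at this stop
    belief /= belief.sum()

# Next search location: the grid cell with the highest current nest probability.
iy, ix = np.unravel_index(np.argmax(belief), belief.shape)
print("search next near:", (x[ix], x[iy]))
```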
DCMDN: Deep Convolutional Mixture Density Network
NASA Astrophysics Data System (ADS)
D'Isanto, Antonio; Polsterer, Kai Lars
2017-09-01
Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.
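As a hedged illustration of one of the cited performance criteria, the CRPS can be estimated from samples of a predicted Gaussian mixture using the standard sample estimator CRPS(F, y) ~ E|X - y| - 0.5 E|X - X'|. The mixture parameters below are invented for the example and are not DCMDN outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gmm(weights, means, sigmas, n):
    """Draw n samples from a 1-D Gaussian mixture (stand-in for a predicted redshift PDF)."""
    comp = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[comp], sigmas[comp])

def crps_from_samples(x1, x2, y):
    """Monte Carlo CRPS estimate: E|X - y| - 0.5 * E|X - X'| with X, X' independent draws."""
    return np.mean(np.abs(x1 - y)) - 0.5 * np.mean(np.abs(x1 - x2))

# Made-up two-component mixture standing in for a predicted redshift PDF.
w, m, s = np.array([0.7, 0.3]), np.array([0.80, 1.40]), np.array([0.05, 0.15])
x1, x2 = sample_gmm(w, m, s, 200_000), sample_gmm(w, m, s, 200_000)
print("CRPS at true z = 0.82:", crps_from_samples(x1, x2, 0.82))
```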
Reducing the Risk of Human Space Missions with INTEGRITY
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Dillon-Merill, Robin L.; Tri, Terry O.; Henninger, Donald L.
2003-01-01
The INTEGRITY Program will design and operate a test bed facility to help prepare for future beyond-LEO missions. The purpose of INTEGRITY is to enable future missions by developing, testing, and demonstrating advanced human space systems. INTEGRITY will also implement and validate advanced management techniques including risk analysis and mitigation. One important way INTEGRITY will help enable future missions is by reducing their risk. A risk analysis of human space missions is important in defining the steps that INTEGRITY should take to mitigate risk. This paper describes how a Probabilistic Risk Assessment (PRA) of human space missions will help support the planning and development of INTEGRITY to maximize its benefits to future missions. PRA is a systematic methodology to decompose the system into subsystems and components, to quantify the failure risk as a function of the design elements and their corresponding probability of failure. PRA provides a quantitative estimate of the probability of failure of the system, including an assessment and display of the degree of uncertainty surrounding the probability. PRA provides a basis for understanding the impacts of decisions that affect safety, reliability, performance, and cost. Risks with both high probability and high impact are identified as top priority. The PRA of human missions beyond Earth orbit will help indicate how the risk of future human space missions can be reduced by integrating and testing systems in INTEGRITY.
Chandrasekar, A; Rakkiyappan, R; Cao, Jinde
2015-10-01
This paper studies the impulsive synchronization of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities via a multiple integral approach. The array of neural networks is coupled in a random fashion governed by a Bernoulli random variable. The aim of this paper is to obtain synchronization criteria that are suitable for both exactly known and partly unknown transition probabilities, such that the coupled neural network with mixed time-delay is synchronized. The considered impulsive effects can achieve synchronization under partly unknown transition probabilities. Besides, a multiple integral approach is also proposed to strengthen the results for Markovian jumping randomly coupled neural networks with partly unknown transition probabilities. By making use of the Kronecker product and some useful integral inequalities, a novel Lyapunov-Krasovskii functional is designed for handling the coupled neural network with mixed delay, and the impulsive synchronization criteria are then obtained as a set of linear matrix inequalities. Finally, numerical examples are presented to illustrate the effectiveness and advantages of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
Nested Sampling for Bayesian Model Comparison in the Context of Salmonella Disease Dynamics
Dybowski, Richard; McKinley, Trevelyan J.; Mastroeni, Pietro; Restif, Olivier
2013-01-01
Understanding the mechanisms underlying the observed dynamics of complex biological systems requires the statistical assessment and comparison of multiple alternative models. Although this has traditionally been done using maximum likelihood-based methods such as Akaike's Information Criterion (AIC), Bayesian methods have gained in popularity because they provide more informative output in the form of posterior probability distributions. However, comparison between multiple models in a Bayesian framework is made difficult by the computational cost of numerical integration over large parameter spaces. A new, efficient method for the computation of posterior probabilities has recently been proposed and applied to complex problems from the physical sciences. Here we demonstrate how nested sampling can be used for inference and model comparison in biological sciences. We present a reanalysis of data from experimental infection of mice with Salmonella enterica showing the distribution of bacteria in liver cells. In addition to confirming the main finding of the original analysis, which relied on AIC, our approach provides: (a) integration across the parameter space, (b) estimation of the posterior parameter distributions (with visualisations of parameter correlations), and (c) estimation of the posterior predictive distributions for goodness-of-fit assessments of the models. The goodness-of-fit results suggest that alternative mechanistic models and a relaxation of the quasi-stationary assumption should be considered. PMID:24376528
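A minimal nested-sampling sketch conveys the core of the evidence computation described above: live points drawn from the prior are iteratively replaced by points of higher likelihood while the enclosed prior volume shrinks geometrically. The toy model (a one-dimensional Gaussian likelihood with a uniform prior, with replacement points found by naive rejection sampling) is an assumption made for brevity, not the Salmonella model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_like(theta):
    """Toy likelihood: Gaussian centred at 0.5 with sd 0.05 over a Uniform(0, 1) prior."""
    return -0.5 * ((theta - 0.5) / 0.05) ** 2 - np.log(0.05 * np.sqrt(2.0 * np.pi))

def nested_sampling(n_live=200, n_iter=1200):
    """Estimate log Z = log of the integral of L(theta) over the prior."""
    live = rng.uniform(0.0, 1.0, n_live)          # live points drawn from the prior
    live_logl = log_like(live)
    log_z, x_prev = -np.inf, 1.0                  # accumulated evidence, enclosed prior volume
    for i in range(1, n_iter + 1):
        worst = np.argmin(live_logl)              # lowest-likelihood live point
        x_i = np.exp(-i / n_live)                 # expected remaining prior volume
        log_z = np.logaddexp(log_z, live_logl[worst] + np.log(x_prev - x_i))
        x_prev = x_i
        # Replace the discarded point by a prior draw with higher likelihood (naive rejection).
        while True:
            theta = rng.uniform(0.0, 1.0)
            if log_like(theta) > live_logl[worst]:
                live[worst], live_logl[worst] = theta, log_like(theta)
                break
    # Contribution of the remaining live points spread over the final prior volume.
    log_z = np.logaddexp(log_z, np.max(live_logl)
                         + np.log(np.mean(np.exp(live_logl - np.max(live_logl))) * x_prev))
    return log_z

print("log Z (analytically ~0 for this toy problem):", nested_sampling())
```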
Efficient pairwise RNA structure prediction using probabilistic alignment constraints in Dynalign
2007-01-01
Background Joint alignment and secondary structure prediction of two RNA sequences can significantly improve the accuracy of the structural predictions. Methods addressing this problem, however, are forced to employ constraints that reduce computation by restricting the alignments and/or structures (i.e. folds) that are permissible. In this paper, a new methodology is presented for the purpose of establishing alignment constraints based on nucleotide alignment and insertion posterior probabilities. Using a hidden Markov model, posterior probabilities of alignment and insertion are computed for all possible pairings of nucleotide positions from the two sequences. These alignment and insertion posterior probabilities are additively combined to obtain probabilities of co-incidence for nucleotide position pairs. A suitable alignment constraint is obtained by thresholding the co-incidence probabilities. The constraint is integrated with Dynalign, a free energy minimization algorithm for joint alignment and secondary structure prediction. The resulting method is benchmarked against the previous version of Dynalign and against other programs for pairwise RNA structure prediction. Results The proposed technique eliminates manual parameter selection in Dynalign and provides significant computational time savings in comparison to prior constraints in Dynalign while simultaneously providing a small improvement in the structural prediction accuracy. Savings are also realized in memory. In experiments over a 5S RNA dataset with average sequence length of approximately 120 nucleotides, the method reduces computation by a factor of 2. The method performs favorably in comparison to other programs for pairwise RNA structure prediction: yielding better accuracy, on average, and requiring significantly lesser computational resources. Conclusion Probabilistic analysis can be utilized in order to automate the determination of alignment constraints for pairwise RNA structure prediction methods in a principled fashion. These constraints can reduce the computational and memory requirements of these methods while maintaining or improving their accuracy of structural prediction. This extends the practical reach of these methods to longer length sequences. The revised Dynalign code is freely available for download. PMID:17445273
Failure probability analysis of optical grid
NASA Astrophysics Data System (ADS)
Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng
2008-11-01
Optical grid, an integrated computing environment based on optical networks, is expected to be an efficient infrastructure for advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. As optical-network-based distributed computing systems are applied extensively to data processing, the application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based method for analyzing the application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the reduction in application failure probability achieved by different backup strategies can be compared, so that the different requirements of different clients can be satisfied. When a DAG-based (directed acyclic graph) application is executed under different backup strategies, the application failure probability and the application completion time differ. This paper therefore proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the required failure probability while improving network resource utilization, realizing a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in an optical grid.
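Under the common simplifying assumption that the tasks of a DAG-structured application fail independently (an assumption of this sketch, not necessarily of the MDSA algorithm), the application-level failure probability follows directly from the per-task failure probabilities, with a backup replica reducing a task's failure probability multiplicatively. The task names and probabilities below are invented.

```python
# Hypothetical per-task failure probabilities for a 5-task DAG application
# (all tasks must succeed for the application to succeed).
primary_fail = {"t1": 0.01, "t2": 0.03, "t3": 0.02, "t4": 0.05, "t5": 0.01}

# Tasks also scheduled on a backup resource under some backup strategy;
# such a task fails only if both the primary and the backup fail (independence assumed).
backup_fail = {"t2": 0.04, "t4": 0.05}

def application_failure_probability(primary, backup):
    p_success = 1.0
    for task, p_fail in primary.items():
        p_task_fail = p_fail * backup.get(task, 1.0)   # 1.0 means "no backup"
        p_success *= 1.0 - p_task_fail
    return 1.0 - p_success

print("no backups  :", application_failure_probability(primary_fail, {}))
print("with backups:", application_failure_probability(primary_fail, backup_fail))
```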
Estimation of submarine mass failure probability from a sequence of deposits with age dates
Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.
2013-01-01
The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.
2015-01-01
In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
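The behavioural prediction that likelihood information receives more weight as sample size grows falls out of standard Bayesian updating. A Beta-binomial sketch (the prior parameters and sample proportion below are invented, not the task's actual stimuli) shows the posterior mean moving from the prior mean toward the sample proportion as the number of observed draws increases.

```python
# Beta prior over reward probability combined with a binomial likelihood from a sample of draws.
def posterior_mean(prior_a, prior_b, wins, n_draws):
    """Posterior mean of a Beta(a, b) prior updated with `wins` successes in `n_draws` trials."""
    return (prior_a + wins) / (prior_a + prior_b + n_draws)

prior_a, prior_b = 6.0, 4.0          # hypothetical prior: mean reward probability 0.6
sample_rate = 0.2                    # the current sample suggests a much lower probability

for n in (0, 5, 20, 100):
    wins = sample_rate * n
    print(f"n={n:3d}  posterior mean = {posterior_mean(prior_a, prior_b, wins, n):.3f}")
# As n grows, the posterior mean moves from the prior mean (0.6) toward the sample rate (0.2),
# i.e. the likelihood is weighted more heavily relative to the prior.
```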
He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian
2013-09-01
The paper aims to elucidate and establish a new mathematical model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment model, and to illustrate the application of the model to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, and was then validated and illustrated with the pharmacokinetics of three ingredients in Buyanghuanwu decoction and three data-analytical methods for them, and with the analysis of chromatographic fingerprints for various extracts obtained by dissolving the Buyanghuanwu-decoction extract in solvents of different solubility parameters. The established model consists of the following parameters: (1) the total quantum statistical moment similarity ST, the overlapped area of the two normal distribution probability density curves obtained by converting the two sets of TQSM parameters; (2) the total variability DT, a confidence limit of the standard normal cumulative probability equal to the absolute difference between the two normal cumulative probabilities evaluated at the intersection of their curves; (3) the total variable probability 1-ST, the standard normal distribution probability within the interval DT; (4) the total variable probability (1-beta)alpha; and (5) the stable confident probability beta(1-alpha), the correct probability for drawing positive and negative conclusions under the confidence coefficient alpha. With the model, the TQSMS similarities of the pharmacokinetics of the three ingredients in Buyanghuanwu decoction and of the three data-analytical methods for them were in the range 0.3852-0.9875, indicating different pharmacokinetic behaviors; the TQSMS similarities (ST) of the chromatographic fingerprints of the various solvent extracts of the Buyanghuanwu-decoction extract were in the range 0.6842-0.9992, indicating different constituents in the various solvent extracts. The TQSMSS can characterize sample similarity, with which the correct probability of drawing positive and negative conclusions can be quantified by a power test, whether or not the samples come from the same population under the confidence coefficient alpha; this enables analysis at both the macroscopic and microscopic levels and provides an important similarity-analysis method for medical theoretical research.
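The similarity ST defined above, the overlapping area of two normal probability density curves, can be computed numerically. The sketch below assumes only that each sample is summarised by a mean and standard deviation derived from its statistical moments; the numbers are placeholders.

```python
import numpy as np
from scipy.stats import norm

def overlap_area(mu1, sd1, mu2, sd2, n_grid=20001):
    """Area under min(pdf1, pdf2): 1.0 for identical distributions, toward 0 for disjoint ones."""
    lo = min(mu1 - 6 * sd1, mu2 - 6 * sd2)
    hi = max(mu1 + 6 * sd1, mu2 + 6 * sd2)
    x = np.linspace(lo, hi, n_grid)
    return np.trapz(np.minimum(norm.pdf(x, mu1, sd1), norm.pdf(x, mu2, sd2)), x)

# Hypothetical statistical-moment summaries of two samples.
print(overlap_area(10.0, 2.0, 11.5, 2.5))   # similar samples  -> ST close to 1
print(overlap_area(10.0, 2.0, 25.0, 2.5))   # dissimilar samples -> ST close to 0
```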
NASA Astrophysics Data System (ADS)
Wei, J. Q.; Cong, Y. C.; Xiao, M. Q.
2018-05-01
As renewable energies are increasingly integrated into power systems, there is increasing interest in the stochastic analysis of power systems. Better techniques should be developed to account for the uncertainty caused by the penetration of renewables and, consequently, to analyse its impact on the stochastic stability of power systems. In this paper, Stochastic Differential Equations (SDEs) are used to represent the evolutionary behaviour of power systems. The stationary Probability Density Function (PDF) solution to SDEs modelling power systems excited by Gaussian white noise is analysed. Subjected to such random excitation, the Joint Probability Density Function (JPDF) of the phase angle and angular velocity is governed by the generalized Fokker-Planck-Kolmogorov (FPK) equation. To solve this equation, a numerical method is adopted. A special measure is taken such that the generalized FPK equation is satisfied in the average sense of integration with the assumed PDF. Both weak and strong intensities of the stochastic excitations are considered in a single-machine infinite-bus power system. The numerical analysis yields the same result as the Monte Carlo simulation. Potential studies on the stochastic behaviour of multi-machine power systems with random excitations are discussed at the end.
Thieke, Christian; Nill, Simeon; Oelfke, Uwe; Bortfeld, Thomas
2002-05-01
In inverse planning for intensity-modulated radiotherapy, the dose calculation is a crucial element limiting both the maximum achievable plan quality and the speed of the optimization process. One way to integrate accurate dose calculation algorithms into inverse planning is to precalculate the dose contribution of each beam element to each voxel for unit fluence. These precalculated values are stored in a large dose calculation matrix. The dose calculation during the iterative optimization process then consists merely of matrix look-up and multiplication with the actual fluence values. However, because the dose calculation matrix can become very large, this approach requires a large amount of computer memory and is still very time consuming, making it impractical for clinical routine without further modifications. In this work we present a new method to significantly reduce the number of entries in the dose calculation matrix. The method utilizes the fact that a photon pencil beam has a rapid radial dose falloff, and has very small dose values for the most part. In this low-dose part of the pencil beam, the dose contribution to a voxel is only integrated into the dose calculation matrix with a certain probability. Normalization with the reciprocal of this probability preserves the total energy, even though many matrix elements are omitted. Three probability distributions were tested to find the most accurate one for a given memory size. The sampling method is compared with the use of a fully filled matrix and with the well-known method of just cutting off the pencil beam at a certain lateral distance. A clinical example of a head and neck case is presented. It turns out that a sampled dose calculation matrix with only 1/3 of the entries of the fully filled matrix does not sacrifice the quality of the resulting plans, whereas the cutoff method results in a suboptimal treatment plan.
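The sampling idea described above, storing low-dose pencil-beam contributions only with some probability and dividing by that probability so that the expected deposited energy is preserved, can be sketched in a few lines. The acceptance rule used here (keep probability proportional to dose, capped at one) is an assumption of this sketch, not necessarily the distribution the authors found to be optimal.

```python
import numpy as np

rng = np.random.default_rng(42)

def sparsify(dose_column, threshold):
    """Randomly thin sub-threshold entries of one beamlet's dose column, preserving expectation."""
    out = dose_column.copy()
    low = dose_column < threshold
    p_keep = np.clip(dose_column[low] / threshold, 1e-6, 1.0)   # keep probability ~ dose
    kept = rng.random(p_keep.size) < p_keep
    out[low] = np.where(kept, dose_column[low] / p_keep, 0.0)   # divide by p to stay unbiased
    return out

# Hypothetical pencil-beam column: a rapid radial fall-off means most entries are tiny.
dose = 1.0 / (1.0 + np.arange(100_000) ** 2 / 50.0)
sparse = sparsify(dose, threshold=0.01)
print("non-zeros :", np.count_nonzero(dose), "->", np.count_nonzero(sparse))
print("total dose:", dose.sum(), "vs", sparse.sum())
```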
Sajn, Luka; Kukar, Matjaž
2011-12-01
The paper presents results of our long-term study on using image processing and data mining methods in medical imaging. Since evaluation of modern medical images is becoming increasingly complex, advanced analytical and decision support tools are involved in the integration of partial diagnostic results. Such partial results, frequently obtained from tests with substantial imperfections, are integrated into an ultimate diagnostic conclusion about the probability of disease for a given patient. We study various topics such as improving the predictive power of clinical tests by utilizing pre-test and post-test probabilities, texture representation, multi-resolution feature extraction, feature construction and data mining algorithms that significantly outperform medical practice. Our long-term study reveals three significant milestones. The first improvement was achieved by significantly increasing post-test diagnostic probabilities with respect to expert physicians. The second, even more significant improvement utilizes multi-resolution image parametrization. Machine learning methods in conjunction with feature subset selection on these parameters significantly improve diagnostic performance. However, further feature construction with principal component analysis on these features elevates results to an even higher accuracy level that represents the third milestone. With the proposed approach, clinical results are significantly improved throughout the study. The most significant result of our study is improvement in the diagnostic power of the whole diagnostic process. Our compound approach aids, but does not replace, the physician's judgment and may assist in decisions on cost effectiveness of tests. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
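One building block mentioned above, combining a pre-test probability with a test result to obtain a post-test probability, is Bayes' rule in odds form. The sensitivity, specificity and pre-test probability below are invented for illustration, not values from the study.

```python
def post_test_probability(pre_test_prob, sensitivity, specificity, test_positive=True):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = pre_test_prob / (1.0 - pre_test_prob)
    lr = sensitivity / (1.0 - specificity) if test_positive else (1.0 - sensitivity) / specificity
    post_odds = prior_odds * lr
    return post_odds / (1.0 + post_odds)

# Hypothetical diagnostic test: sensitivity 0.85, specificity 0.80, pre-test probability 0.30.
print(post_test_probability(0.30, 0.85, 0.80, test_positive=True))    # ~0.65
print(post_test_probability(0.30, 0.85, 0.80, test_positive=False))   # ~0.07
```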
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lockett, P.B.
1989-01-01
The escape probability formalism is used in this dissertation to treat two problems in astrophysical radiative transfer. The first problem concerns line overlap, which occurs when two or more spectral lines lie close enough together that there is a significant probability that a photon emitted in one of the lines can be absorbed in another. The second problem involves creating a detailed model of the masers around the supergiant star VX Sgr. The author has developed an escape probability procedure that accounts for the effects of line overlap by integrating the amount of absorption in each of the overlapping lines. This method was used to test the accuracy of a simpler escape probability formalism developed by Elitzur and Netzer that utilized rectangular line profiles. Good agreement between the two methods was found for a wide range of physical conditions. The more accurate method was also used to examine the effects of line overlap on the far infrared lines of the OH molecule. This overlap did have important effects on the level populations and could cause maser emission. He has also developed a detailed model of the OH 1612 and water masers around VX Sgr. He found that the masers can be adequately explained using reasonable estimates for the physical parameters. He was also able to provide a tighter constraint on the highly uncertain mass loss rate from the star. He had less success modeling the SiO masers. Their explanation will require a more exact method of treating the many levels involved and also a more accurate knowledge of the relevant physical input parameters.
Probabilistic Simulation of Progressive Fracture in Bolted-Joint Composite Laminates
NASA Technical Reports Server (NTRS)
Minnetyan, L.; Singhal, S. N.; Chamis, C. C.
1996-01-01
This report describes computational methods to probabilistically simulate fracture in bolted composite structures. An innovative approach that is independent of stress intensity factors and fracture toughness was used to simulate progressive fracture. The effect of design variable uncertainties on structural damage was also quantified. A fast probability integrator assessed the scatter in the composite structure response before and after damage. Then the sensitivity of the response to design variables was computed. General-purpose methods, which are applicable to bolted joints in all types of structures and in all fracture processes-from damage initiation to unstable propagation and global structure collapse-were used. These methods were demonstrated for a bolted joint of a polymer matrix composite panel under edge loads. The effects of the fabrication process were included in the simulation of damage in the bolted panel. Results showed that the most effective way to reduce end displacement at fracture is to control both the load and the ply thickness. The cumulative probability for longitudinal stress in all plies was most sensitive to the load; in the 0 deg. plies it was very sensitive to ply thickness. The cumulative probability for transverse stress was most sensitive to the matrix coefficient of thermal expansion. In addition, fiber volume ratio and fiber transverse modulus both contributed significantly to the cumulative probability for the transverse stresses in all the plies.
NASA Astrophysics Data System (ADS)
Gao, Shengguo; Zhu, Zhongli; Liu, Shaomin; Jin, Rui; Yang, Guangchao; Tan, Lei
2014-10-01
Soil moisture (SM) plays a fundamental role in the land-atmosphere exchange process. Spatial estimation based on multiple in situ (network) measurements is a critical way to understand the spatial structure and variation of land surface soil moisture. Theoretically, integrating densely sampled auxiliary data that are spatially correlated with soil moisture into the spatial estimation procedure can improve its accuracy. In this study, we present a novel approach to estimating the spatial pattern of soil moisture by using the Bayesian Maximum Entropy (BME) method based on wireless sensor network data and auxiliary information from ASTER (Terra) land surface temperature measurements. For comparison, three traditional geostatistical methods were also applied: ordinary kriging (OK), which used the wireless sensor network data only, and regression kriging (RK) and ordinary co-kriging (Co-OK), which both integrated the ASTER land surface temperature as a covariate. In Co-OK, LST was contained linearly in the estimator; in RK, the estimator is expressed as the sum of the regression estimate and the kriged estimate of the spatially correlated residual; in BME, the ASTER land surface temperature was first converted to soil moisture by linear regression, and the t-distributed prediction interval (PI) of soil moisture was then estimated and used as soft data in probability form. The results indicate that all three methods provide reasonable estimates. Compared to OK, Co-OK, RK and BME provide more accurate spatial estimates by integrating the auxiliary information. RK and BME show a more obvious improvement than Co-OK, and BME performs slightly better than RK. The inherent issue of spatial estimation (overestimation in the range of low values and underestimation in the range of high values) is also further mitigated in both RK and BME. We conclude that integrating auxiliary data into spatial estimation can indeed improve accuracy, that BME and RK take better advantage of the auxiliary information than Co-OK, and that BME outperforms RK by integrating the auxiliary data in probability form.
NASA Technical Reports Server (NTRS)
Townsend, J.; Meyers, C.; Ortega, R.; Peck, J.; Rheinfurth, M.; Weinstock, B.
1993-01-01
Probabilistic structural analyses and design methods are steadily gaining acceptance within the aerospace industry. The safety factor approach to design has long been the industry standard, and it is believed by many to be overly conservative and thus, costly. A probabilistic approach to design may offer substantial cost savings. This report summarizes several probabilistic approaches: the probabilistic failure analysis (PFA) methodology developed by Jet Propulsion Laboratory, fast probability integration (FPI) methods, the NESSUS finite element code, and response surface methods. Example problems are provided to help identify the advantages and disadvantages of each method.
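For a linear limit state with normally distributed load and resistance, the first-order, second-moment estimate and a plain Monte Carlo estimate of the failure probability agree closely, which makes a compact sanity check of the kind of comparison summarised in the report. The distribution parameters below are arbitrary illustration values.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

# Hypothetical normal resistance R and load S; failure when g = R - S < 0.
mu_R, sd_R = 100.0, 10.0
mu_S, sd_S = 70.0, 12.0

# First-order, second-moment estimate: reliability index beta and Pf = Phi(-beta).
beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)
pf_fosm = norm.cdf(-beta)

# Simple Monte Carlo estimate of the same probability.
n = 2_000_000
pf_mc = np.mean(rng.normal(mu_R, sd_R, n) - rng.normal(mu_S, sd_S, n) < 0.0)

print(f"beta = {beta:.3f},  Pf (FOSM) = {pf_fosm:.5f},  Pf (Monte Carlo) = {pf_mc:.5f}")
```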
Theoretical cratering rates on Ida, Mathilde, Eros and Gaspra
NASA Astrophysics Data System (ADS)
Jeffers, S. V.; Asher, D. J.; Bailey, M. E.
2002-11-01
We investigate the main influences on crater size distributions by deriving results for four example target objects: (951) Gaspra, (243) Ida, (253) Mathilde and (433) Eros. The dynamical history of each of these asteroids is modelled using the MERCURY (Chambers 1999) numerical integrator. The use of an efficient, Öpik-type collision code enables the calculation of a velocity histogram and the probability of impact. When combined with a crater scaling law and an impactor size distribution through a Monte Carlo method, this yields a crater size distribution. The resulting crater probability distributions are in good agreement with the observed crater distributions on these asteroids.
Three paths toward the quantum angle operator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gazeau, Jean Pierre, E-mail: gazeau@apc.univ-paris7.fr; Szafraniec, Franciszek Hugon, E-mail: franciszek.szafraniec@uj.edu.pl
2016-12-15
We examine mathematical questions around angle (or phase) operator associated with a number operator through a short list of basic requirements. We implement three methods of construction of quantum angle. The first one is based on operator theory and parallels the definition of angle for the upper half-circle through its cosine and completed by a sign inversion. The two other methods are integral quantization generalizing in a certain sense the Berezin–Klauder approaches. One method pertains to Weyl–Heisenberg integral quantization of the plane viewed as the phase space of the motion on the line. It depends on a family of “weight” functions on the plane. The third method rests upon coherent state quantization of the cylinder viewed as the phase space of the motion on the circle. The construction of these coherent states depends on a family of probability distributions on the line.
NASA Astrophysics Data System (ADS)
Liu, Xiao-Ming; Jiang, Jun; Hong, Ling; Tang, Dafeng
In this paper, a new method of Generalized Cell Mapping with Sampling-Adaptive Interpolation (GCMSAI) is presented in order to enhance the efficiency of the computation of the one-step probability transition matrix of the Generalized Cell Mapping method (GCM). Integrations over one mapping step are replaced by sampling-adaptive interpolations of third order. An explicit formula for the interpolation error is derived, and a sampling-adaptive control switches on integrations when needed to maintain the accuracy of computations with GCMSAI. By applying the proposed method to a two-dimensional forced damped pendulum system, global bifurcations are investigated with observations of boundary metamorphoses, including full-to-partial and partial-to-partial transitions as well as the birth of a fully Wada boundary. Moreover, GCMSAI requires only one-thirtieth to one-fiftieth of the computational time of the previous GCM.
The Extraction of Post-Earthquake Building Damage Information Based on Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Chen, M.; Wang, X.; Dou, A.; Wu, X.
2018-04-01
The seismic damage information of buildings extracted from remote sensing (RS) imagery is meaningful for supporting relief efforts and effectively reducing the losses caused by earthquakes. Both traditional pixel-based and object-oriented methods have shortcomings in extracting object information. Pixel-based methods cannot make full use of the contextual information of objects. Object-oriented methods face the problems that image segmentation is often not ideal and that the choice of feature space is difficult. In this paper, a new strategy is proposed that combines a Convolutional Neural Network (CNN) with image segmentation to extract building damage information from remote sensing imagery. The key idea of this method consists of two steps: first, the CNN is used to predict the class probability of each pixel; then, the probabilities are integrated within each segmentation spot. The method is tested by extracting collapsed and uncollapsed buildings from aerial imagery acquired over Longtoushan Town after the Ms 6.5 Ludian County, Yunnan Province earthquake. The results demonstrate the effectiveness of the proposed method in extracting building damage information after an earthquake.
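The second step of the strategy, integrating the per-pixel CNN probabilities within each segmentation spot, amounts to averaging the probability map over segment labels and thresholding. The sketch below uses random arrays in place of real CNN outputs and a real segmentation, so the inputs are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins for real inputs: a per-pixel collapse probability map from a CNN
# and an integer label image from a segmentation algorithm (labels 0..n_segments-1).
prob_map = rng.random((128, 128))
segments = rng.integers(0, 50, size=(128, 128))

def segment_probabilities(prob_map, segments):
    """Mean CNN probability inside each segmentation spot."""
    labels = segments.ravel()
    sums = np.bincount(labels, weights=prob_map.ravel())
    counts = np.bincount(labels)
    return sums / np.maximum(counts, 1)            # guard against empty labels

seg_prob = segment_probabilities(prob_map, segments)
collapsed_mask = np.isin(segments, np.where(seg_prob > 0.5)[0])   # per-pixel collapsed mask
print("segments classified as collapsed:", int((seg_prob > 0.5).sum()), "of", seg_prob.size)
```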
Towards a Probabilistic Preliminary Design Criterion for Buckling Critical Composite Shells
NASA Technical Reports Server (NTRS)
Arbocz, Johann; Hilburger, Mark W.
2003-01-01
A probability-based analysis method for predicting buckling loads of compression-loaded laminated-composite shells is presented, and its potential as a basis for a new shell-stability design criterion is demonstrated and discussed. In particular, a database containing information about specimen geometry, material properties, and measured initial geometric imperfections for a selected group of laminated-composite cylindrical shells is used to calculate new buckling-load "knockdown factors". These knockdown factors are shown to be substantially improved, and hence much less conservative than the corresponding deterministic knockdown factors that are presently used by industry. The probability integral associated with the analysis is evaluated by using two methods; that is, by using the exact Monte Carlo method and by using an approximate First-Order Second-Moment method. A comparison of the results from these two methods indicates that the First-Order Second-Moment method yields results that are conservative for the shells considered. Furthermore, the results show that the improved, reliability-based knockdown factor presented always yields a safe estimate of the buckling load for the shells examined.
Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources.
Gao, Xiang; Acar, Levent
2016-07-04
This paper addresses the problem of mapping the odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To address both of the above challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors and fuses sensor data collected at different positions. Initially, a multi-sensor integration method, together with the path of airflow, is used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of an odor source localization simulation and a real experiment are presented.
Numerical computation of solar neutrino flux attenuated by the MSW mechanism
NASA Astrophysics Data System (ADS)
Kim, Jai Sam; Chae, Yoon Sang; Kim, Jung Dae
1999-07-01
We compute the survival probability of an electron neutrino in its flight through the solar core experiencing the Mikheyev-Smirnov-Wolfenstein effect, with all three neutrino species considered. We adopt a hybrid method that uses an accurate approximation formula in the non-resonance region and numerical integration in the non-adiabatic resonance region. The key to our algorithm is to use the importance sampling method for sampling the neutrino creation energy and position, and to find the optimum radii at which to start and stop the numerical integration. We further developed a parallel algorithm for a message-passing parallel computer. By using the idea of a job token, we have developed a dynamic load-balancing mechanism that is effective under any irregular load distribution.
Quantization of Non-Lagrangian Systems
NASA Astrophysics Data System (ADS)
Kochan, Denis
A novel method for quantization of non-Lagrangian (open) systems is proposed. It is argued that the essential object, which provides both classical and quantum evolution, is a certain canonical two-form defined in extended velocity space. In this setting classical dynamics is recovered from the stringy-type variational principle, which employs umbilical surfaces instead of histories of the system. Quantization is then accomplished in accordance with the introduced variational principle. The path integral for the transition probability amplitude (propagator) is rearranged to a surface functional integral. In the standard case of closed (Lagrangian) systems the presented method reduces to the standard Feynman's approach. The inverse problem of the calculus of variation, the problem of quantization ambiguity and the quantum mechanics in the presence of friction are analyzed in detail.
A performability solution method for degradable nonrepairable systems
NASA Technical Reports Server (NTRS)
Furchtgott, D. G.; Meyer, J. F.
1984-01-01
The present performability model-solving algorithm identifies performance with 'reward', representing the state behavior of a system S by a finite-state stochastic process and determining reward by means of reward rates that are associated with the states of the base model. A general method is obtained for determining the probability distribution function of the performance (reward) variable, and therefore the performability, of the corresponding system. This is done for bounded utilization periods, and the result is an integral expression which is either analytically or numerically solvable.
Integrated-Circuit Pseudorandom-Number Generator
NASA Technical Reports Server (NTRS)
Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur
1992-01-01
Integrated circuit produces 8-bit pseudorandom numbers from specified probability distribution, at rate of 10 MHz. Use of Boolean logic, circuit implements pseudorandom-number-generating algorithm. Circuit includes eight 12-bit pseudorandom-number generators, outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying specified nonuniform probability distribution are generated by processing uniformly distributed outputs of eight 12-bit pseudorandom-number generators through "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
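The pipeline idea, emitting an 8-bit value bit by bit with each bit drawn according to its conditional probability given the bits already fixed, can be mimicked in software. The target distribution below is arbitrary, and the hardware elements (D flip-flops, comparators, memories) are not modelled.

```python
import numpy as np

rng = np.random.default_rng(11)

# Arbitrary target probability distribution over 8-bit values.
target = np.exp(-((np.arange(256) - 100.0) / 30.0) ** 2)
target /= target.sum()

def sample_bitwise(pmf, rng):
    """Draw one 8-bit value by fixing bits MSB-first from conditional probabilities."""
    lo, hi = 0, len(pmf)               # current interval of values consistent with fixed bits
    for _ in range(8):
        mid = (lo + hi) // 2
        mass = pmf[lo:hi].sum()
        p_one = pmf[mid:hi].sum() / mass if mass > 0 else 0.5   # P(bit = 1 | previous bits)
        if rng.random() < p_one:       # compare a uniform random number with the conditional
            lo = mid
        else:
            hi = mid
    return lo

samples = np.array([sample_bitwise(target, rng) for _ in range(20_000)])
print("target mean:", (np.arange(256) * target).sum(), " sample mean:", samples.mean())
```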
Limitations of backward integration method for asteroid family age estimation
NASA Astrophysics Data System (ADS)
Radović, Viktor
2017-10-01
Determining the age of an asteroid family is important, as it gives us a better understanding of the dynamics, formation and collisional evolution of the family. So far, a few methods for determining the age of a family have been developed. The most accurate one is probably the backward integration method (BIM), which works very well for young families. In this paper, we study its characteristics and limitations in more detail using a fictitious asteroid family. The analysis is performed with two numerical packages: orbfit and mercury. We studied the clustering of the secular angles Ω and ϖ and obtained a linear relationship between the depth of the clustering and the age of the family. Our results suggest that the BIM can be successfully applied only to families not older than 18 Myr.
Dose-volume histogram prediction using density estimation.
Skarpman Munter, Johanna; Sjölund, Jens
2015-09-07
Knowledge of what dose-volume histograms can be expected for a previously unseen patient could increase consistency and quality in radiotherapy treatment planning. We propose a machine learning method that uses previous treatment plans to predict such dose-volume histograms. The key to the approach is the framing of dose-volume histograms in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of some predictive features and the dose. The joint distribution immediately provides an estimate of the conditional probability of the dose given the values of the predictive features. The prediction consists of estimating, from the new patient, the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimate of the dose-volume histogram. To illustrate how the proposed method relates to previously proposed methods, we use the signed distance to the target boundary as a single predictive feature. As a proof-of-concept, we predicted dose-volume histograms for the brainstems of 22 acoustic schwannoma patients treated with stereotactic radiosurgery, and for the lungs of 9 lung cancer patients treated with stereotactic body radiation therapy. Comparing with two previous attempts at dose-volume histogram prediction, we find that, given the same input data, the predictions are similar. In summary, we propose a method for dose-volume histogram prediction that exploits the intrinsic probabilistic properties of dose-volume histograms. We argue that the proposed method makes up for some deficiencies in previously proposed methods, thereby potentially increasing ease of use, flexibility and ability to perform well with small amounts of training data.
Some New Twists to Problems Involving the Gaussian Probability Integral
NASA Technical Reports Server (NTRS)
Simon, Marvin K.; Divsalar, Dariush
1997-01-01
Using an alternate form of the Gaussian probability integral discovered a number of years ago, it is shown that the solution to a number of previously considered communication problems can be simplified and in some cases made more accurate (i.e., exact rather than bounded).
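The alternate, finite-limit form of the Gaussian probability integral referred to here is usually written Q(x) = (1/pi) * integral from 0 to pi/2 of exp(-x^2 / (2 sin^2 theta)) d theta, valid for x >= 0; a quick numerical check against the standard complementary-error-function expression is sketched below.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def q_classic(x):
    """Q(x) = 0.5 * erfc(x / sqrt(2)), the usual Gaussian tail probability."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def q_alternate(x):
    """Finite-limit form: (1/pi) * integral_0^{pi/2} exp(-x^2 / (2 sin^2 theta)) d theta."""
    val, _ = quad(lambda th: np.exp(-x ** 2 / (2.0 * np.sin(th) ** 2)), 0.0, np.pi / 2.0)
    return val / np.pi

for x in (0.5, 1.0, 2.0, 3.0):
    print(x, q_classic(x), q_alternate(x))
```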
Neural Encoding and Integration of Learned Probabilistic Sequences in Avian Sensory-Motor Circuitry
Brainard, Michael S.
2013-01-01
Many complex behaviors, such as human speech and birdsong, reflect a set of categorical actions that can be flexibly organized into variable sequences. However, little is known about how the brain encodes the probabilities of such sequences. Behavioral sequences are typically characterized by the probability of transitioning from a given action to any subsequent action (which we term “divergence probability”). In contrast, we hypothesized that neural circuits might encode the probability of transitioning to a given action from any preceding action (which we term “convergence probability”). The convergence probability of repeatedly experienced sequences could naturally become encoded by Hebbian plasticity operating on the patterns of neural activity associated with those sequences. To determine whether convergence probability is encoded in the nervous system, we investigated how auditory-motor neurons in vocal premotor nucleus HVC of songbirds encode different probabilistic characterizations of produced syllable sequences. We recorded responses to auditory playback of pseudorandomly sequenced syllables from the bird's repertoire, and found that variations in responses to a given syllable could be explained by a positive linear dependence on the convergence probability of preceding sequences. Furthermore, convergence probability accounted for more response variation than other probabilistic characterizations, including divergence probability. Finally, we found that responses integrated over >7–10 syllables (∼700–1000 ms) with the sign, gain, and temporal extent of integration depending on convergence probability. Our results demonstrate that convergence probability is encoded in sensory-motor circuitry of the song-system, and suggest that encoding of convergence probability is a general feature of sensory-motor circuits. PMID:24198363
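The two probabilistic characterisations contrasted above can be made concrete with a few lines of counting code: divergence probability conditions on the preceding syllable, convergence probability on the following one. The toy syllable sequence is invented for illustration.

```python
from collections import Counter

# Toy syllable sequence (each letter stands for a syllable from the bird's repertoire).
sequence = list("abcabcabdabcabdabe")

pairs = Counter(zip(sequence[:-1], sequence[1:]))
from_counts = Counter(sequence[:-1])   # occurrences as the preceding syllable
to_counts = Counter(sequence[1:])      # occurrences as the following syllable

# Divergence probability: P(next = y | current = x).
divergence = {(x, y): n / from_counts[x] for (x, y), n in pairs.items()}
# Convergence probability: P(previous = x | current = y).
convergence = {(x, y): n / to_counts[y] for (x, y), n in pairs.items()}

print("P(next='c' | current='b')    =", divergence[("b", "c")])    # 0.5
print("P(previous='b' | current='c') =", convergence[("b", "c")])  # 1.0
```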
NASA Astrophysics Data System (ADS)
Barengoltz, Jack
2016-07-01
Monte Carlo (MC) is a common method to estimate probability, effectively by simulation. For planetary protection, it may be used to estimate the probability of impact, P_I, of a protected planet by a launch vehicle (upper stage). The object of the analysis is to provide a value for P_I with a given level of confidence (LOC) that the true value does not exceed the maximum allowed value of P_I. In order to determine the number of MC histories required, one must also guess the maximum number of hits that will occur in the analysis. This extra parameter is needed because a LOC is desired. If more hits occur, the MC analysis would indicate that the true value may exceed the specification value with a higher probability than the LOC. (In the worst case, even the mean value of the estimated P_I might exceed the specification value.) After the analysis is conducted, the actual number of hits is, of course, the mean. The number of hits arises from a small probability per history and a large number of histories; these are the classic requirements for a Poisson distribution. For a known Poisson distribution (the mean is the only parameter), the probability for some interval in the number of hits is calculable. Before the analysis, this is not possible. Fortunately, there are methods that can bound the unknown mean of a Poisson distribution. F. Garwood (1936, "Fiducial limits for the Poisson distribution," Biometrika 28, 437-442) published an appropriate method that uses the integral chi-squared function, actually its inverse (the integral chi-squared function would yield the probability α as a function of the mean μ and an actual value n). The resulting upper and lower limits of the mean μ, with two-tailed probability 1-α, depend on the LOC α and an estimated value of the number of "successes" n. In an MC analysis for planetary protection, only the upper limit is of interest, i.e., the single-tailed distribution (a smaller actual P_I is no problem). One advantage of this method is that the required function is available in EXCEL. Note that care must be taken with the definition of the CHIINV function (the inverse of the right-tailed integral chi-squared distribution). The equivalent inequality in EXCEL is μ < CHIINV[1-α, 2(n+1)]/2. In practice, one calculates this upper limit for a specified LOC α and a guess of how many hits n will be found after the MC analysis. The estimate of the number of histories required is then this upper limit divided by the specification for the allowed P_I (rounded up). However, if the number of hits actually exceeds the guess, the P_I requirement will be met only with a smaller LOC. A disadvantage is that the intervals about the mean are "in general too wide, yielding coverage probabilities much greater than 1-α" (G. Casella and C. Robert, 1988, Purdue University Technical Report #88-7 / Cornell University Technical Report BU-903-M). For planetary protection, this technical issue means that the upper limit of the interval and the probability associated with the interval (i.e., the LOC) are conservative.
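For readers working outside EXCEL, the same one-sided Garwood bound can be evaluated with the chi-squared quantile function; the specification value and guessed hit count below are placeholders. With zero hits the bound reproduces the familiar rule of three.

```python
from math import ceil
from scipy.stats import chi2

def poisson_upper_limit(n_hits, loc):
    """One-sided Garwood upper confidence limit on a Poisson mean, given n observed hits."""
    return 0.5 * chi2.ppf(loc, 2 * (n_hits + 1))

def histories_required(p_spec, n_hits_guess, loc):
    """MC histories needed so that, with n_hits_guess hits, the LOC requirement on P_I is met."""
    return ceil(poisson_upper_limit(n_hits_guess, loc) / p_spec)

print(poisson_upper_limit(0, 0.95))            # ~3.0, the "rule of three" for zero hits
print(histories_required(1e-4, 3, 0.99))       # hypothetical spec P_I = 1e-4, guess of 3 hits
```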
Integrating count and detection–nondetection data to model population dynamics
Zipkin, Elise F.; Rossman, Sam; Yackulic, Charles B.; Wiens, David; Thorson, James T.; Davis, Raymond J.; Grant, Evan H. Campbell
2017-01-01
There is increasing need for methods that integrate multiple data types into a single analytical framework as the spatial and temporal scale of ecological research expands. Current work on this topic primarily focuses on combining capture–recapture data from marked individuals with other data types into integrated population models. Yet, studies of species distributions and trends often rely on data from unmarked individuals across broad scales where local abundance and environmental variables may vary. We present a modeling framework for integrating detection–nondetection and count data into a single analysis to estimate population dynamics, abundance, and individual detection probabilities during sampling. Our dynamic population model assumes that site-specific abundance can change over time according to survival of individuals and gains through reproduction and immigration. The observation process for each data type is modeled by assuming that every individual present at a site has an equal probability of being detected during sampling processes. We examine our modeling approach through a series of simulations illustrating the relative value of count vs. detection–nondetection data under a variety of parameter values and survey configurations. We also provide an empirical example of the model by combining long-term detection–nondetection data (1995–2014) with newly collected count data (2015–2016) from a growing population of Barred Owl (Strix varia) in the Pacific Northwest to examine the factors influencing population abundance over time. Our model provides a foundation for incorporating unmarked data within a single framework, even in cases where sampling processes yield different detection probabilities. This approach will be useful for survey design and to researchers interested in incorporating historical or citizen science data into analyses focused on understanding how demographic rates drive population abundance.
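The stated observation models (counts binomial in the latent abundance, and detection-nondetection Bernoulli with probability 1 - (1 - p)^N) lead to a joint likelihood that is easy to write down for a single season. The sketch below evaluates that static joint likelihood on simulated data with invented parameter values; it omits the survival, recruitment and immigration dynamics of the full model.

```python
import numpy as np
from scipy.stats import poisson, binom

rng = np.random.default_rng(5)

# Simulate site-specific abundances and the two data types (hypothetical parameters).
true_lambda, true_p, n_sites = 4.0, 0.3, 200
N = rng.poisson(true_lambda, n_sites)
counts = rng.binomial(N, true_p, n_sites)[: n_sites // 2]          # count data at half the sites
detected = (rng.binomial(N, true_p, n_sites) > 0)[n_sites // 2:]   # detection records at the rest

def neg_log_likelihood(lam, p, counts, detected, n_max=60):
    """Marginalize the latent abundance N ~ Poisson(lam) out of both observation models."""
    n = np.arange(n_max + 1)
    prior_n = poisson.pmf(n, lam)
    # Count sites: sum over N of P(count | N, p) P(N | lam).
    lik_counts = (binom.pmf(counts[:, None], n[None, :], p) * prior_n).sum(axis=1)
    # Detection/nondetection sites: P(detected) = sum over N of (1 - (1-p)^N) P(N | lam).
    p_det = ((1.0 - (1.0 - p) ** n) * prior_n).sum()
    lik_det = np.where(detected, p_det, 1.0 - p_det)
    return -(np.log(lik_counts).sum() + np.log(lik_det).sum())

# Coarse grid evaluation around the truth (a real analysis would use an optimizer or MCMC).
for lam in (3.0, 4.0, 5.0):
    print(lam, neg_log_likelihood(lam, 0.3, counts, detected))
```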
Development of a Nonlinear Probability of Collision Tool for the Earth Observing System
NASA Technical Reports Server (NTRS)
McKinley, David P.
2006-01-01
The Earth Observing System (EOS) spacecraft Terra, Aqua, and Aura fly in constellation with several other spacecraft in 705-kilometer mean altitude sun-synchronous orbits. All three spacecraft are operated by the Earth Science Mission Operations (ESMO) Project at Goddard Space Flight Center (GSFC). In 2004, the ESMO project began assessing the probability of collision of the EOS spacecraft with other space objects. In addition to conjunctions with high relative velocities, the collision assessment method for the EOS spacecraft must address conjunctions with low relative velocities during potential collisions between constellation members. Probability of Collision algorithms that are based on assumptions of high relative velocities and linear relative trajectories are not suitable for these situations; therefore an algorithm for handling the nonlinear relative trajectories was developed. This paper describes this algorithm and presents results from its validation for operational use. The probability of collision is typically calculated by integrating a Gaussian probability distribution over the volume swept out by a sphere representing the size of the space objects involved in the conjunction. This sphere is defined as the Hard Body Radius. With the assumption of linear relative trajectories, this volume is a cylinder, which translates into simple limits of integration for the probability calculation. For the case of nonlinear relative trajectories, the volume becomes a complex geometry. However, with an appropriate choice of coordinate systems, the new algorithm breaks down the complex geometry into a series of simple cylinders that have simple limits of integration. This nonlinear algorithm will be discussed in detail in the paper. The nonlinear Probability of Collision algorithm was first verified by showing that, when used in high relative velocity cases, it yields similar answers to existing high relative velocity linear relative trajectory algorithms. The comparison with the existing high velocity/linear theory will also be used to determine at what relative velocity the analysis should use the new nonlinear theory in place of the existing linear theory. The nonlinear algorithm was also compared to a known exact solution for the probability of collision between two objects when the relative motion is strictly circular and the error covariance is spherically symmetric. Figure I shows preliminary results from this comparison by plotting the probabilities calculated from the new algorithm and those from the exact solution versus the Hard Body Radius to Covariance ratio. These results show about 5% error when the Hard Body Radius is equal to one half the spherical covariance magnitude. The algorithm was then combined with a high fidelity orbit state and error covariance propagator into a useful tool for analyzing low relative velocity nonlinear relative trajectories. The high fidelity propagator is capable of using atmospheric drag, central body gravitational, solar radiation, and third body forces to provide accurate prediction of the relative trajectories and covariance evolution. The covariance propagator also includes a process noise model to ensure realistic evolutions of the error covariance. This paper will describe the integration of the nonlinear probability algorithm and the propagators into a useful collision assessment tool. 
Finally, a hypothetical case study involving a low relative velocity conjunction between members of the Earth Observation System constellation will be presented.
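The core computation, integrating a Gaussian relative-position distribution over the region occupied by the hard-body sphere, can be approximated for a single closest approach by Monte Carlo sampling on the encounter plane. The miss vector, covariance and hard-body radius below are invented, and this sketch uses the linear (instantaneous) geometry rather than the nonlinear swept volume developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical 2-D encounter-plane quantities (kilometres).
miss_vector = np.array([0.4, 0.1])                 # nominal miss distance at closest approach
covariance = np.array([[0.09, 0.02],
                       [0.02, 0.04]])              # combined position error covariance
hard_body_radius = 0.05                            # combined object radius

def collision_probability_mc(miss, cov, hbr, n=2_000_000):
    """Probability that the true relative position falls inside the hard-body circle."""
    samples = rng.multivariate_normal(miss, cov, size=n)
    return np.mean(np.einsum("ij,ij->i", samples, samples) < hbr * hbr)

print("Pc ~", collision_probability_mc(miss_vector, covariance, hard_body_radius))
```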
NASA Astrophysics Data System (ADS)
Nelson, Adam
Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they are responsible for describing the change in direction and energy of neutrons. These matrices, however, are difficult to correctly calculate from the measured nuclear data with both deterministic and stochastic methods. Calculating these parameters when using deterministic methods requires a set of assumptions which do not hold true in all conditions. These quantities can be calculated accurately with stochastic methods, however doing so is computationally expensive due to the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. This improved method of tallying the scattering moment matrices is based on recognizing that all of the outgoing particle information is known a priori and can be taken advantage of to increase the tallying efficiency (therefore reducing the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying every one of the scattering moment matrices elements with its share of data. In addition to reducing the uncertainty, this method allows for the use of a track-length estimation process potentially offering even further improvement to the tallying efficiency. Unfortunately, to produce the needed distributions, the probability functions themselves must undergo an integration over the outgoing energy and scattering angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and therefore must be performed in advance by way of a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than the currently used techniques. The improved method has been implemented in a code system containing a new pre-processor code, NDPP, and a Monte Carlo neutron transport code, OpenMC. This method is then tested in a pin cell problem and a larger problem designed to accentuate the importance of scattering moment matrices. These tests show that accuracy was retained while the figure-of-merit for generating scattering moment matrices and fission energy spectra was significantly improved.
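The quantity being tallied, the l-th scattering moment, is the weighted expectation of the Legendre polynomial P_l of the scattering cosine; an analog, collision-by-collision estimator is sketched below. The outgoing-cosine distribution is an arbitrary stand-in for real nuclear data, and the pre-integrated distributions and track-length estimation discussed in the work are not modelled.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(2)

def sample_mu(n, rng):
    """Stand-in outgoing scattering cosine distribution: mildly forward-peaked, p(mu) = (1 + mu)/2."""
    # Inverse-CDF sampling of p(mu) on [-1, 1].
    u = rng.random(n)
    return 2.0 * np.sqrt(u) - 1.0

def tally_scattering_moments(mu_samples, weights, n_moments=4):
    """Analog tally of the first few Legendre scattering moments, sum_i w_i * P_l(mu_i)."""
    moments = np.empty(n_moments)
    for l in range(n_moments):
        coeffs = np.zeros(l + 1)
        coeffs[l] = 1.0                       # select P_l
        moments[l] = np.sum(weights * legendre.legval(mu_samples, coeffs))
    return moments / weights.sum()            # normalized per scattering event

mu = sample_mu(1_000_000, rng)
w = np.ones_like(mu)                          # unit particle weights in this analog sketch
print(tally_scattering_moments(mu, w))        # expected ~[1, 1/3, 0, 0] for p(mu) = (1 + mu)/2
```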
Development and application of methods used to source prehistoric Southwestern maize: a review
Benson, Larry V.
2012-01-01
Archaeological cobs free of mineral contaminants should be used to source the soils in which they were grown. Mineral contaminants often contain much higher concentrations of metals than vegetal materials and can alter a cob’s apparent metal and heavy-isotope content. Cleaning a cob via immersion in an acid solution for more than a few minutes will result in the incongruent and sometimes complete leaching of metals, including strontium (Sr), from the cob. When using 87Sr/86Sr to determine the location of potential agricultural fields, it is best either to integrate several depth-integrated soil samples or to integrate several vegetation samples from individual fields. Biologically labile Sr in semi-arid Southwestern soils largely originates from an eolian source or sources and usually is not derived from underlying bedrock. Existing Sr-isotope data indicate that archaeological cobs from Aztec Ruins came from either the Mesa Verde-McElmo Dome or Totah areas; that Pueblo Bonito and Chetro Ketl cobs from Chaco Canyon that predate A.D. 1130 probably came from the Rio Chaco corridor; and that cobs from Chaco Canyon that postdate A.D. 1130 probably came from either the Totah or Zuni areas.
Murphy, S.; Scala, A.; Herrero, A.; Lorito, S.; Festa, G.; Trasatti, E.; Tonini, R.; Romano, F.; Molinari, I.; Nielsen, S.
2016-01-01
The 2011 Tohoku earthquake produced an unexpectedly large amount of shallow slip, greatly contributing to the ensuing tsunami. How frequent are such events? How can they be efficiently modelled for tsunami hazard? Stochastic slip models, which can be computed rapidly, are used to explore the natural slip variability; however, they generally do not deal specifically with shallow slip features. We study the systematic depth-dependence of slip along a thrust fault with a number of 2D dynamic simulations using stochastic shear stress distributions and a geometry based on the cross section of the Tohoku fault. We obtain a probability density for the slip distribution, which varies with depth, earthquake size, and whether the rupture breaks the surface. We propose a method to modify stochastic slip distributions according to this dynamically-derived probability distribution. This method may be efficiently applied to produce large numbers of heterogeneous slip distributions for probabilistic tsunami hazard analysis. Using numerous M9 earthquake scenarios, we demonstrate that incorporating the dynamically-derived probability distribution does enhance the conditional probability of exceedance of maximum estimated tsunami wave heights along the Japanese coast. This technique for integrating dynamic features in stochastic models can be extended to any subduction zone and faulting style. PMID:27725733
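A minimal sketch of the reweighting idea, under invented assumptions: a one-dimensional stochastic slip profile is generated with random phases and then modulated by a hypothetical depth-dependent envelope standing in for the dynamically derived probability distribution; the envelope shape and fault discretization are illustrative only.

```python
# Generate a 1-D stochastic slip profile along dip and reweight it with a
# hypothetical depth-dependent envelope (stand-in for the dynamic result).
import numpy as np

rng = np.random.default_rng(0)
n = 256
depth = np.linspace(0.0, 50.0, n)                  # km down-dip, 0 = trench

# stochastic slip: random phases under a k^-2 amplitude spectrum
k = np.fft.rfftfreq(n, d=depth[1] - depth[0])
amp = np.zeros_like(k)
amp[1:] = k[1:] ** -2.0
phase = rng.uniform(0.0, 2.0 * np.pi, k.size)
slip = np.fft.irfft(amp * np.exp(1j * phase), n)
slip = np.maximum(slip - slip.min(), 0.0)          # keep slip non-negative

# hypothetical envelope favouring shallow slip (surface-breaking case)
envelope = 1.0 + 1.5 * np.exp(-depth / 8.0)
modified = slip * envelope
modified *= slip.sum() / modified.sum()            # preserve total moment
print(float(modified.max()), int(modified.argmax()))
```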
Two-machine flow shop scheduling integrated with preventive maintenance planning
NASA Astrophysics Data System (ADS)
Wang, Shijin; Liu, Ming
2016-02-01
This paper investigates an integrated optimisation problem of production scheduling and preventive maintenance (PM) in a two-machine flow shop with time to failure of each machine subject to a Weibull probability distribution. The objective is to find the optimal job sequence and the optimal PM decisions before each job such that the expected makespan is minimised. To investigate the value of integrated scheduling solution, computational experiments on small-scale problems with different configurations are conducted with total enumeration method, and the results are compared with those of scheduling without maintenance but with machine degradation, and individual job scheduling combined with independent PM planning. Then, for large-scale problems, four genetic algorithm (GA) based heuristics are proposed. The numerical results with several large problem sizes and different configurations indicate the potential benefits of integrated scheduling solution and the results also show that proposed GA-based heuristics are efficient for the integrated problem.
Reducing Stiffness and Electrical Losses of High Channel Hybrid Nerve Cuff Electrodes
2001-10-25
Electrodes were developed. These electrodes consisted of a micromachined polyimide-based thin-film structure with integrated electrode contacts and...electrodes, mechanical properties were enhanced by changing the method of joining silicone and polyimide from using one part silicone adhesive to...gold, platinum, platinum black, polyimide, silicone, polymer bonding I. INTRODUCTION Cuff-type electrodes are probably the most commonly used neural
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, J; Fan, J; Hu, W
Purpose: To develop a fast automatic algorithm based on two-dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), which can be employed for the investigation of radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimation of the conditional probability of the dose given the values of the predictive features. For the new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training set over this distribution. Integrating the resulting probability distribution for the dose yields an estimation of the DVH. The 2D KDE is implemented to predict the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organ at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinate system, are considered as the predictive features to represent the OAR-target spatial relationship. The feasibility of our method has been demonstrated with rectum, breast, and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: Consistent results were found between the two DVHs for each cancer site, and the average relative point-wise difference is about 5%, within a clinically acceptable extent. Conclusion: According to the results of this study, our method can be used to predict a clinically acceptable DVH and has the ability to evaluate the quality and consistency of treatment planning.
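A simplified sketch of this prediction scheme, using synthetic training voxels and a single predictive feature (signed distance only) instead of the two features described above: a 2D KDE of (feature, dose) is conditioned on the new patient's feature values, and the averaged conditional density is integrated into a cumulative DVH.

```python
# KDE-based DVH prediction sketch with fabricated training data.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# synthetic "training voxels": signed distance to target (mm) and planned dose (Gy)
dist_train = rng.uniform(0.0, 40.0, 4000)
dose_train = 50.0 * np.exp(-dist_train / 12.0) + rng.normal(0.0, 2.0, 4000)
joint_kde = gaussian_kde(np.vstack([dist_train, dose_train]))   # 2-D KDE

dose_grid = np.linspace(0.0, 60.0, 121)
d_dose = dose_grid[1] - dose_grid[0]

def predict_dvh(dist_new):
    """Average p(dose | feature) over the new patient's voxel features,
    then integrate the result into a cumulative DVH."""
    pdf = np.zeros_like(dose_grid)
    for x in dist_new:
        cond = joint_kde(np.vstack([np.full_like(dose_grid, x), dose_grid]))
        pdf += cond / (cond.sum() * d_dose + 1e-12)   # conditional density
    pdf /= len(dist_new)
    cdf = np.cumsum(pdf) * d_dose
    return 100.0 * (1.0 - cdf)        # percent volume receiving at least each dose

dvh = predict_dvh(rng.uniform(0.0, 40.0, 150))
print(dvh[::30])
```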
Reinforcement Probability Modulates Temporal Memory Selection and Integration Processes
Matell, Matthew S.; Kurti, Allison N.
2013-01-01
We have previously shown that rats trained in a mixed-interval peak procedure (tone = 4s, light = 12s) respond in a scalar manner at a time in between the trained peak times when presented with the stimulus compound (Swanton & Matell, 2011). In our previous work, the two component cues were reinforced with different probabilities (short = 20%, long = 80%) to equate response rates, and we found that the compound peak time was biased toward the cue with the higher reinforcement probability. Here, we examined the influence that different reinforcement probabilities have on the temporal location and shape of the compound response function. We found that the time of peak responding shifted as a function of the relative reinforcement probability of the component cues, becoming earlier as the relative likelihood of reinforcement associated with the short cue increased. However, as the relative probabilities of the component cues grew dissimilar, the compound peak became non-scalar, suggesting that the temporal control of behavior shifted from a process of integration to one of selection. As our previous work has utilized durations and reinforcement probabilities more discrepant than those used here, these data suggest that the processes underlying the integration/selection decision for time are based on cue value. PMID:23896560
DuRoss, Christopher B.; Personius, Stephen F.; Crone, Anthony J.; Olig, Susan S.; Lund, William R.
2011-01-01
We present a method to evaluate and integrate paleoseismic data from multiple sites into a single, objective measure of earthquake timing and recurrence on discrete segments of active faults. We apply this method to the Weber segment (WS) of the Wasatch fault zone using data from four fault-trench studies completed between 1981 and 2009. After systematically reevaluating the stratigraphic and chronologic data from each trench site, we constructed time-stratigraphic OxCal models that yield site probability density functions (PDFs) of the times of individual earthquakes. We next qualitatively correlated the site PDFs into a segment-wide earthquake chronology, which is supported by overlapping site PDFs, large per-event displacements, and prominent segment boundaries. For each segment-wide earthquake, we computed the product of the site PDF probabilities in common time bins, which emphasizes the overlap in the site earthquake times, and gives more weight to the narrowest, best-defined PDFs. The product method yields smaller earthquake-timing uncertainties compared to taking the mean of the site PDFs, but is best suited to earthquakes constrained by broad, overlapping site PDFs. We calculated segment-wide earthquake recurrence intervals and uncertainties using a Monte Carlo model. Five surface-faulting earthquakes occurred on the WS at about 5.9, 4.5, 3.1, 1.1, and 0.6 ka. With the exception of the 1.1-ka event, we used the product method to define the earthquake times. The revised WS chronology yields a mean recurrence interval of 1.3 kyr (0.7–1.9-kyr estimated two-sigma [2σ] range based on interevent recurrence). These data help clarify the paleoearthquake history of the WS, including the important question of the timing and rupture extent of the most recent earthquake, and are essential to the improvement of earthquake-probability assessments for the Wasatch Front region.
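A small sketch of the product method on synthetic site PDFs (Gaussians standing in for OxCal output): multiplying the site probabilities in common time bins and renormalizing narrows the combined earthquake-time PDF relative to a simple mean of the sites.

```python
# Combine site earthquake-time PDFs by the product method vs. the mean.
import numpy as np

t = np.arange(0.0, 8000.0, 10.0)                    # years BP, common time bins

def site_pdf(mean, sigma):
    p = np.exp(-0.5 * ((t - mean) / sigma) ** 2)
    return p / p.sum()

sites = np.vstack([site_pdf(5900.0, 250.0),
                   site_pdf(5800.0, 400.0),
                   site_pdf(6050.0, 300.0)])

product = np.prod(sites, axis=0)
product /= product.sum()                            # segment-wide earthquake-time PDF
mean_pdf = np.mean(sites, axis=0)

def two_sigma_width(p):
    c = np.cumsum(p)
    return t[np.searchsorted(c, 0.977)] - t[np.searchsorted(c, 0.023)]

print("product-method 2-sigma width:", two_sigma_width(product))
print("mean-of-PDFs  2-sigma width:", two_sigma_width(mean_pdf))
```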
Quantum quenches in two spatial dimensions using chain array matrix product states
A. J. A. James; Konik, R.
2015-10-15
We describe a method for simulating the real time evolution of extended quantum systems in two dimensions (2D). The method combines the benefits of integrability and matrix product states in one dimension to avoid several issues that hinder other applications of tensor based methods in 2D. In particular, it can be extended to infinitely long cylinders. As an example application we present results for quantum quenches in the 2D quantum [(2+1)-dimensional] Ising model. As a result, in quenches that cross a phase boundary we find that the return probability shows nonanalyticities in time.
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A probabilistic structural analysis method (PSAM) is described which makes an approximate calculation of the structural response of a system, including the associated probabilistic distributions, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The method employs the fast probability integration (FPI) algorithm of Wu and Wirsching. Typical solution strategies are illustrated by formulations for a representative critical component chosen from the Space Shuttle Main Engine (SSME) as part of a major NASA-sponsored program on PSAM. Typical results are presented to demonstrate the role of the methodology in engineering design and analysis.
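For orientation, a minimal mean-value first-order, second-moment calculation of the kind that FPI algorithms refine is sketched below; the limit-state function and distributions are illustrative, not an SSME component model.

```python
# Mean-value FOSM reliability: linearize g(X) at the mean and map the
# reliability index to a failure probability.
import numpy as np
from scipy.stats import norm

def fosm_reliability(g, mean, cov, eps=1e-6):
    mean = np.asarray(mean, dtype=float)
    cov = np.asarray(cov, dtype=float)
    # central-difference gradient of the limit-state function at the mean point
    grad = np.array([(g(mean + eps * e) - g(mean - eps * e)) / (2.0 * eps)
                     for e in np.eye(len(mean))])
    beta = g(mean) / np.sqrt(grad @ cov @ grad)     # reliability index
    return beta, norm.cdf(-beta)                    # probability of failure

# illustrative limit state: strength R minus load effect S
g = lambda x: x[0] - x[1]
beta, pf = fosm_reliability(g, mean=[500.0, 350.0],
                            cov=np.diag([40.0**2, 60.0**2]))
print(beta, pf)
```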
Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics
NASA Astrophysics Data System (ADS)
Abe, Sumiyoshi
2014-11-01
The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.
Statistical computation of tolerance limits
NASA Technical Reports Server (NTRS)
Wheeler, J. T.
1993-01-01
Based on a new theory, two computer codes were developed specifically to calculate the exact statistical tolerance limits for normal distributions with unknown means and variances, for the one-sided and two-sided cases of the tolerance factor, k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations are written to augment the program simulation. The program codes generate tables of k values associated with varying values of the proportion and sample size for each given probability, to show the accuracy obtained for small sample sizes.
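As a cross-check on such tables, the one-sided tolerance factor k can be written directly in terms of the noncentral t-distribution; the sketch below uses SciPy's nct rather than the report's own numerical-integration and root-solving codes.

```python
# One-sided normal tolerance factor from the noncentral t-distribution.
import numpy as np
from scipy.stats import norm, nct

def k_one_sided(n, proportion, confidence):
    """Factor k so that xbar + k*s covers `proportion` of a normal population
    with the stated confidence, for a sample of size n (mean and variance unknown)."""
    delta = norm.ppf(proportion) * np.sqrt(n)       # noncentrality parameter
    return nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)

for n in (5, 10, 30, 100):
    print(n, round(k_one_sided(n, proportion=0.90, confidence=0.95), 4))
```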
NASA Astrophysics Data System (ADS)
D'Isanto, A.; Polsterer, K. L.
2018-01-01
Context. The need to analyze the available large synoptic multi-band surveys drives the development of new data-analysis methods. Photometric redshift estimation is one field of application where such new methods have improved the results substantially. Up to now, the vast majority of applied redshift estimation methods have utilized photometric features. Aims: We aim to develop a method to derive probabilistic photometric redshifts directly from multi-band imaging data, rendering pre-classification of objects and feature extraction obsolete. Methods: A modified version of a deep convolutional network was combined with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) were applied as performance criteria. We have adopted a feature-based random forest and a plain mixture density network to compare performances on experiments with data from SDSS (DR9). Results: We show that the proposed method is able to predict redshift PDFs independently of the type of source, for example galaxies, quasars or stars. Thereby the prediction performance is better than both presented reference methods and is comparable to results from the literature. Conclusions: The presented method is extremely general and allows us to solve any kind of probabilistic regression problem based on imaging data, for example estimating the metallicity or star formation rate of galaxies. This kind of methodology is tremendously important for the next generation of surveys.
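The two scores named above are straightforward to compute for a Gaussian-mixture PDF; the sketch below evaluates the PIT as the predictive CDF at the true redshift and the CRPS by numerical integration over a redshift grid, using a made-up two-component mixture.

```python
# CRPS and PIT for a Gaussian-mixture redshift PDF (illustrative mixture).
import numpy as np
from scipy.stats import norm

weights = np.array([0.7, 0.3])
means   = np.array([0.45, 0.60])
sigmas  = np.array([0.03, 0.08])
z_true  = 0.47

z_grid = np.linspace(0.0, 2.0, 4001)
cdf = np.sum(weights[:, None] * norm.cdf(z_grid[None, :], means[:, None],
                                         sigmas[:, None]), axis=0)

pit = np.interp(z_true, z_grid, cdf)                        # ~Uniform(0,1) if calibrated
crps = np.sum((cdf - (z_grid >= z_true)) ** 2) * (z_grid[1] - z_grid[0])
print("PIT:", round(pit, 3), "CRPS:", round(crps, 4))
```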
A Probabilistic, Facility-Centric Approach to Lightning Strike Location
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.
2012-01-01
A new probabilistic facility-centric approach to lightning strike location has been developed. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
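A hedged sketch of the facility-centric calculation: here the bivariate Gaussian defined by the error ellipse is sampled by Monte Carlo and the fraction of samples within a chosen radius of the facility is counted, a stand-in for the analytic integration used operationally; the ellipse orientation, semi-axes, and offsets are invented.

```python
# Probability that a reported stroke lies within a radius of a facility,
# estimated by sampling the bivariate Gaussian location error.
import numpy as np

def p_within_radius(stroke_xy, cov, facility_xy, radius, n=200000, seed=0):
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(stroke_xy, cov, size=n)
    d2 = np.sum((samples - np.asarray(facility_xy)) ** 2, axis=1)
    return float(np.mean(d2 <= radius**2))

# example: 45-degree error ellipse with 300 m / 150 m 1-sigma semi-axes
theta = np.radians(45.0)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
cov = R @ np.diag([300.0**2, 150.0**2]) @ R.T
print(p_within_radius(stroke_xy=[0.0, 0.0], cov=cov,
                      facility_xy=[250.0, 100.0], radius=500.0))
```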
Minimal entropy probability paths between genome families.
Ahlbrandt, Calvin; Benson, Gary; Casey, William
2004-05-01
We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths, where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in the transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors; in the case of DNA, N is 4 and the components of the probability vector are the frequencies of occurrence of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 ≤ t ≤ 1, with p(t) a probability vector such that p(0)=a and p(1)=b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method: Newton's method is iterated on solutions of a two-point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem, together with linear regression to improve the arc-length estimate L. Matlab code for these numerical methods is provided, which works only for "rich" optimal probability vectors. These methods motivate the definition of an elementary distance function which is easier and faster to calculate, works on non-rich vectors, does not involve variational theory or differential equations, and is a better approximation of the minimal entropy path distance than the distance ||b-a||_2. We compute minimal entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendrograms for our minimal entropy metric are compared with dendrograms based on BLAST and BLAST identity scores.
Tackling higher derivative ghosts with the Euclidean path integral
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fontanini, Michele; Department of Physics, Syracuse University, Syracuse, New York 13244; Trodden, Mark
2011-05-15
An alternative to the effective field theory approach to treat ghosts in higher derivative theories is to attempt to integrate them out via the Euclidean path integral formalism. It has been suggested that this method could provide a consistent framework within which we might tolerate the ghost degrees of freedom that plague, among other theories, the higher derivative gravity models that have been proposed to explain cosmic acceleration. We consider the extension of this idea to treating a class of terms with order six derivatives, and find that for a general term the Euclidean path integral approach works in the most trivial background, Minkowski. Moreover we see that even in de Sitter background, despite some difficulties, it is possible to define a probability distribution for tensorial perturbations of the metric.
Total systems design analysis of high performance structures
NASA Technical Reports Server (NTRS)
Verderaime, V.
1993-01-01
Designer-control parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integrating tasks are reliability and recurring structural costs. Significant interface designer control parameters were noted as shapes, dimensions, probability range factors, and cost. Structural failure concept is presented, and first-order reliability and deterministic methods, benefits, and limitations are discussed. A deterministic reliability technique combining benefits of both is proposed for static structures which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique filed environments.
NASA Astrophysics Data System (ADS)
Tiwari, Vaibhav
2018-07-01
The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search to a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. Subsequently, the injections, which are simulated gravitational waveforms, are searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. The sensitive volume is estimated, using Monte-Carlo (MC) integration, from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR, and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analysis broadens in terms of the methodologies and source properties considered, due to an increase in the number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the output probabilities, determined by the population model and standard cosmology, and the injection probability, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate sensitive volume using single detector sensitivity, the method is accurate within statistical errors, comes at no added cost and requires minimal computational resources.
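A toy sketch of the reweighting: one generic injection set is reused for several population models by weighting each found injection with the ratio of the model density to the injection density. The mass range, detectability proxy, and population models below are illustrative, not proposed astrophysical models.

```python
# Weighted Monte Carlo estimate of sensitive volume from one generic injection set.
import numpy as np

rng = np.random.default_rng(42)
n_inj = 100000
v_astro = 50.0                                     # Gpc^3 covered by the injections

# generic injections: primary mass drawn uniformly on [5, 80] Msun
m1 = rng.uniform(5.0, 80.0, n_inj)
p_inj = 1.0 / 75.0                                 # injection density in m1

# crude stand-in for the search: heavier systems are more likely to be "found"
found = rng.uniform(0.0, 1.0, n_inj) < (m1 / 80.0)

def sensitive_volume(p_model):
    w = p_model(m1) / p_inj                        # importance weights
    return v_astro * np.sum(w[found]) / n_inj

m_grid = np.linspace(5.0, 80.0, 2001)
norm_pl = np.sum(m_grid ** -2.3) * (75.0 / 2000.0)   # normalize the power law on [5, 80]
power_law = lambda m: m ** -2.3 / norm_pl
uniform = lambda m: np.full_like(m, 1.0 / 75.0)

print(sensitive_volume(uniform), sensitive_volume(power_law))
```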
A Bayesian model averaging method for improving SMT phrase table
NASA Astrophysics Data System (ADS)
Duan, Nan
2013-03-01
Previous methods for improving translation quality by employing multiple SMT models are usually carried out as a second-pass decision procedure on hypotheses from multiple systems, using extra features instead of exploiting the features in existing models in more depth. In this paper, we propose translation model generalization (TMG), an approach that updates probability feature values for the translation model being used based on the model itself and a set of auxiliary models, aiming to alleviate the over-estimation problem and enhance translation quality in the first-pass decoding phase. We validate our approach for translation models based on auxiliary models built in two different ways. We also introduce novel probability variance features into the log-linear models for further improvements. We conclude that our approach can be developed independently and integrated into the current SMT pipeline directly. We demonstrate BLEU improvements on the NIST Chinese-to-English MT tasks for single-system decodings.
NASA Astrophysics Data System (ADS)
Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong
2014-06-01
Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools for evaluating the reliability of systems. Although single failure modes can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis. In fact, due to the lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method called the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainty of risk variables and parameters is characterized by fuzzy numbers to obtain a fuzzy weighted geometric mean (FWGM) RPN for each single failure mode. Multiple failure modes are connected using minimum cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority numbers (FRPN) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probabilities of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates this error in the reliability analysis. Furthermore, for the purpose of quantitative analysis, probability importance weights derived from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority, which generalizes the definitions of probability weight and FRPN and results in a more accurate estimation than that of the traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in an aeroengine is used to demonstrate the effectiveness and robustness of the presented method. The result provides some important insights on fatigue reliability analysis and risk priority assessment of structural systems under failure correlations.
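An illustrative sketch of a fuzzy weighted geometric mean RPN using triangular fuzzy ratings and vertex-level fuzzy arithmetic as an approximation; the ratings, weights, and centroid defuzzification below are made-up choices, not values from the cited study.

```python
# Fuzzy weighted geometric mean RPN with triangular (low, mid, high) ratings.
import numpy as np

def fwgm_rpn(ratings, weights):
    """ratings: (low, mid, high) fuzzy S/O/D ratings; weights sum to 1."""
    r = np.array(ratings, dtype=float)            # shape: (3 factors, 3 vertices)
    w = np.array(weights, dtype=float)[:, None]
    fuzzy = np.prod(r ** w, axis=0)               # vertex-wise weighted geometric mean
    return fuzzy, fuzzy.mean()                    # centroid defuzzification

fuzzy, crisp = fwgm_rpn(ratings=[(6, 7, 8),       # severity
                                 (4, 5, 6),       # occurrence
                                 (3, 4, 5)],      # detection
                        weights=[0.5, 0.3, 0.2])
print(fuzzy, round(crisp, 2))
```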
Improving inferences from fisheries capture-recapture studies through remote detection of PIT tags
Hewitt, David A.; Janney, Eric C.; Hayes, Brian S.; Shively, Rip S.
2010-01-01
Models for capture-recapture data are commonly used in analyses of the dynamics of fish and wildlife populations, especially for estimating vital parameters such as survival. Capture-recapture methods provide more reliable inferences than other methods commonly used in fisheries studies. However, for rare or elusive fish species, parameter estimation is often hampered by small probabilities of re-encountering tagged fish when encounters are obtained through traditional sampling methods. We present a case study that demonstrates how remote antennas for passive integrated transponder (PIT) tags can increase encounter probabilities and the precision of survival estimates from capture-recapture models. Between 1999 and 2007, trammel nets were used to capture and tag over 8,400 endangered adult Lost River suckers (Deltistes luxatus) during the spawning season in Upper Klamath Lake, Oregon. Despite intensive sampling at relatively discrete spawning areas, encounter probabilities from Cormack-Jolly-Seber models were consistently low (< 0.2) and the precision of apparent annual survival estimates was poor. Beginning in 2005, remote PIT tag antennas were deployed at known spawning locations to increase the probability of re-encountering tagged fish. We compare results based only on physical recaptures with results based on both physical recaptures and remote detections to demonstrate the substantial improvement in estimates of encounter probabilities (approaching 100%) and apparent annual survival provided by the remote detections. The richer encounter histories provided robust inferences about the dynamics of annual survival and have made it possible to explore more realistic models and hypotheses about factors affecting the conservation and recovery of this endangered species. Recent advances in technology related to PIT tags have paved the way for creative implementation of large-scale tagging studies in systems where they were previously considered impracticable.
[Protection regionalization of Houshi Forest Park based on landscape sensitivity].
Zhou, Rui; Li, Yue-hui; Hu, Yuan-man; Zhang, Jia-hui; Liu, Miao
2009-03-01
Using GIS technology, and selecting slope, relative distance to viewpoints, relative distance to tourism roads, visual probability of viewpoints, and visual probability of tourism roads as the indices, the landscape sensitivity of Houshi Forest Park was assessed and an integrated assessment model was established. The AHP method was utilized to determine the weights of the indices and, further, to identify the integrated sensitivity class of the areas in the Park. Four classes of integrated sensitivity area were delineated. Class I had an area of 297.24 hm2, occupying 22.9% of the total area of the Park; it should be strictly protected to maintain the natural landscape, with any exploitation or construction prohibited. Class II had an area of 359.72 hm2, accounting for 27.8% of the total; the hills in this area should be protected from destruction to preserve vegetation and water, although simple byways and stone paths could be built. Class III had an area of 495.80 hm2, occupying 38.3% of the total; it could be moderately exploited, with artificial landscape advocated to beautify and set off the natural landscape. Class IV had the smallest area (142.80 hm2), accounting for 11% of the total, and had the greatest potential for exploitation, where large-scale integrated tourism facilities and tourism roads could be built.
New KF-PP-SVM classification method for EEG in brain-computer interfaces.
Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian
2014-01-01
Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel Fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, based on which the within-class scatter is calculated. Then the scatter is added into the kernel function of a radial basis function to construct a new kernel function. This new kernel is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory are processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme as opposed to the KF-SVM, PP-SVM and SVM schemes are 2.49%, 5.83%, and 6.49%, respectively.
Statistics of some atmospheric turbulence records relevant to aircraft response calculations
NASA Technical Reports Server (NTRS)
Mark, W. D.; Fischer, R. W.
1981-01-01
Methods for characterizing atmospheric turbulence are described. The methods illustrated include maximum likelihood estimation of the integral scale and intensity of records obeying the von Karman transverse power spectral form, constrained least-squares estimation of the parameters of a parametric representation of autocorrelation functions, estimation of the power spectra density of the instantaneous variance of a record with temporally fluctuating variance, and estimation of the probability density functions of various turbulence components. Descriptions of the computer programs used in the computations are given, and a full listing of these programs is included.
NASA gear research and its probable effect on rotorcraft transmission design
NASA Technical Reports Server (NTRS)
Zaretsky, E. V.; Townsend, D. P.; Coy, J. J.
1979-01-01
The NASA Lewis Research Center devised a comprehensive gear technology research program beginning in 1969, the results of which are being integrated into the NASA civilian Helicopter Transmission System Technology Program. Attention is given to the results of this gear research and those programs which are presently being undertaken. In addition, research programs studying pitting fatigue, gear steels and processing, life prediction methods, gear design and dynamics, elastohydrodynamic lubrication, lubrication methods and gear noise are presented. Finally, the impact of advanced gear research technology on rotorcraft transmission design is discussed.
Messier, Kyle P.; Akita, Yasuyuki; Serre, Marc L.
2012-01-01
Geographic Information Systems (GIS)-based techniques are cost-effective and efficient methods used by state agencies and epidemiology researchers for estimating concentration and exposure. However, budget limitations have made statewide assessments of contamination difficult, especially in groundwater media. Many studies have implemented address geocoding, land use regression, and geostatistics independently, but this is the first to examine the benefits of integrating these GIS techniques to address the need for statewide exposure assessments. A novel framework for concentration exposure is introduced that integrates address geocoding, land use regression (LUR), below-detect data modeling, and Bayesian Maximum Entropy (BME). A LUR model was developed for tetrachloroethylene that accounts for point sources and flow direction. We then integrate the LUR model into the BME method as a mean trend while also modeling below-detect data as a truncated Gaussian probability distribution function. We increase available PCE data 4.7 times from previously available databases through multistage geocoding. The LUR model shows significant influence of dry cleaners at short ranges. The integration of the LUR model as mean trend in BME results in a 7.5% decrease in cross-validation mean square error compared to BME with a constant mean trend. PMID:22264162
2D dark-count-rate modeling of PureB single-photon avalanche diodes in a TCAD environment
NASA Astrophysics Data System (ADS)
Knežević, Tihomir; Nanver, Lis K.; Suligoj, Tomislav
2018-02-01
PureB silicon photodiodes have nm-shallow p+n junctions with which photons/electrons with penetration depths of a few nanometers can be detected. PureB Single-Photon Avalanche Diodes (SPADs) were fabricated and analysed by 2D numerical modeling as an extension to TCAD software. The very shallow p+-anode has high perimeter curvature that enhances the electric field. In SPADs, noise is quantified by the dark count rate (DCR), which is a measure of the number of false counts triggered by unwanted processes in the non-illuminated device. Just as for desired events, the probability of a dark count increases with increasing electric field, and the perimeter conditions are critical. In this work, the DCR was studied by two 2D methods of analysis: the "quasi-2D" (Q-2D) method, where vertical 1D cross-sections were assumed for calculating the electron/hole avalanche probabilities, and the "ionization-integral 2D" (II-2D) method, where cross-sections were placed where the maximum ionization integrals were calculated. The Q-2D method gave satisfactory results in structures where the peripheral regions had a small contribution to the DCR, such as in devices with conventional deep-junction guard rings (GRs). Otherwise, the II-2D method proved to be much more precise. The results show that the DCR simulation methods are useful for optimizing the compromise between fill-factor and p-/n-doping profile design in SPAD devices. For the experimentally investigated PureB SPADs, excellent agreement of the measured and simulated DCR was achieved. This shows that although an implicit GR is attractively compact, the very shallow pn-junction gives a risk of having such a low breakdown voltage at the perimeter that the DCR of the device may be negatively impacted.
Invariance of separability probability over reduced states in 4 × 4 bipartite systems
NASA Astrophysics Data System (ADS)
Lovas, Attila; Andai, Attila
2017-07-01
The geometric separability probability of composite quantum systems has been extensively studied in recent decades. One of the simplest but strikingly difficult problems is to compute the separability probability of qubit-qubit and rebit-rebit quantum states with respect to the Hilbert-Schmidt measure. Many numerical simulations confirm the conjectured probabilities P_{rebit-rebit}=\frac{29}{64} and P_{qubit-qubit}=\frac{8}{33}. We provide a rigorous proof for the separability probability in the real case and give explicit integral formulas for the complex and quaternionic cases. Milz and Strunz studied the separability probability with respect to given subsystems. They conjectured that the separability probability of qubit-qubit (and qubit-qutrit) states of the form \left(\begin{smallmatrix} D_1 & C \\ C^* & D_2 \end{smallmatrix}\right) depends on D=D_1+D_2 (on the single-qubit subsystems); moreover, it depends only on the Bloch radii (r) of D and is constant in r. Using the Peres-Horodecki criterion for separability, we give a mathematical proof of the \frac{29}{64} probability and present an integral formula for the complex case which hopefully will help to prove the \frac{8}{33} probability, too. We prove Milz and Strunz's conjecture for rebit-rebit and qubit-qubit states. The case when the state space is endowed with the volume form generated by the operator monotone function f(x)=\sqrt{x} is also studied in detail. We show that even in this setting Milz and Strunz's conjecture holds true, and we give an integral formula for the separability probability according to this measure.
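The quoted probabilities are easy to check numerically: the sketch below draws 4x4 density matrices from the Hilbert-Schmidt measure via Ginibre matrices and applies the Peres-Horodecki (positive partial transpose) criterion, which is necessary and sufficient in the two-qubit and two-rebit cases.

```python
# Monte Carlo estimate of the Hilbert-Schmidt separability probability.
import numpy as np

def separability_probability(n=50000, real=True, seed=0):
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n):
        g = rng.standard_normal((4, 4))
        if not real:
            g = g + 1j * rng.standard_normal((4, 4))
        rho = g @ g.conj().T
        rho /= np.trace(rho)                        # HS-distributed density matrix
        # partial transpose over the second qubit
        pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
        if np.linalg.eigvalsh(pt).min() >= 0:       # PPT <=> separable for 2x2
            count += 1
    return count / n

print(separability_probability(real=True), 29 / 64)          # rebit-rebit
print(separability_probability(20000, real=False), 8 / 33)   # qubit-qubit
```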
Sharp, Madeleine E.; Viswanathan, Jayalakshmi; Lanyon, Linda J.; Barton, Jason J. S.
2012-01-01
Background There are few clinical tools that assess decision-making under risk. Tests that characterize sensitivity and bias in decisions between prospects varying in magnitude and probability of gain may provide insights in conditions with anomalous reward-related behaviour. Objective We designed a simple test of how subjects integrate information about the magnitude and the probability of reward, which can determine discriminative thresholds and choice bias in decisions under risk. Design/Methods Twenty subjects were required to choose between two explicitly described prospects, one with higher probability but lower magnitude of reward than the other, with the difference in expected value between the two prospects varying from 3 to 23%. Results Subjects showed a mean threshold sensitivity of 43% difference in expected value. Regarding choice bias, there was a ‘risk premium’ of 38%, indicating a tendency to choose higher probability over higher reward. An analysis using prospect theory showed that this risk premium is the predicted outcome of hypothesized non-linearities in the subjective perception of reward value and probability. Conclusions This simple test provides a robust measure of discriminative value thresholds and biases in decisions under risk. Prospect theory can also make predictions about decisions when subjective perception of reward or probability is anomalous, as may occur in populations with dopaminergic or striatal dysfunction, such as Parkinson's disease and schizophrenia. PMID:22493669
A Guided Tour of Mathematical Methods for the Physical Sciences
NASA Astrophysics Data System (ADS)
Snieder, Roel; van Wijk, Kasper
2015-05-01
1. Introduction; 2. Dimensional analysis; 3. Power series; 4. Spherical and cylindrical coordinates; 5. Gradient; 6. Divergence of a vector field; 7. Curl of a vector field; 8. Theorem of Gauss; 9. Theorem of Stokes; 10. The Laplacian; 11. Scale analysis; 12. Linear algebra; 13. Dirac delta function; 14. Fourier analysis; 15. Analytic functions; 16. Complex integration; 17. Green's functions: principles; 18. Green's functions: examples; 19. Normal modes; 20. Potential-field theory; 21. Probability and statistics; 22. Inverse problems; 23. Perturbation theory; 24. Asymptotic evaluation of integrals; 25. Conservation laws; 26. Cartesian tensors; 27. Variational calculus; 28. Epilogue on power and knowledge.
Orbital evolution of some Centaurs
NASA Astrophysics Data System (ADS)
Kovalenko, Nataliya; Babenko, Yuri; Churyumov, Klim
2002-11-01
In this work we investigated the dynamical evolution of the Centaur objects 2060 (Chiron), 5145 (Pholus), 7066 (Nessus), 8405 (Asbolus), 10199 (Chariklo), 10370 (Hylonome), and the Scattered-Disk object 15874. We carried out orbital integrations of test particles with initial orbits similar to those of these objects. Calculations were performed for +/-600 kyr to 10 Myr from the starting epoch using the implicit single-sequence Everhart method. Twelve variational orbits for each of the selected Centaurs were also numerically integrated for +/-200 kyr toward the past and the future. The most probable paths were traced up to +/-1 Myr. The character of the orbital-element changes and the peculiarities of close approaches to the giant planets are discussed.
Real-time realizations of the Bayesian Infrasonic Source Localization Method
NASA Astrophysics Data System (ADS)
Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.
2015-12-01
The Bayesian Infrasonic Source Localization method (BISL), introduced by Modrak et al. (2010) and upgraded by Marcillo et al. (2014), is designed for the accurate estimation of the atmospheric event origin at local, regional and global scales by seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is generally a time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, executed as PYTHON-FORTRAN code, demonstrates high performance on a set of model and real data.
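A toy sketch of the grid-search form of such a Bayesian locator, under strong simplifying assumptions (known origin time, a flat celerity prior marginalized numerically, Gaussian picking error, invented station geometry): station likelihoods are multiplied at every candidate source position.

```python
# Grid-based source localization with celerity marginalized over a flat prior.
import numpy as np

rng = np.random.default_rng(3)
stations = np.array([[0.0, 0.0], [60.0, 5.0], [20.0, 70.0], [-40.0, 30.0]])  # km
true_src, true_cel = np.array([15.0, 25.0]), 0.30                            # km/s
t_obs = np.linalg.norm(stations - true_src, axis=1) / true_cel \
        + rng.normal(0.0, 2.0, len(stations))                                # s

cel_prior = np.linspace(0.24, 0.36, 25)            # flat prior over celerity
sigma_t = 3.0                                      # picking + model error (s)
x = np.linspace(-100.0, 100.0, 201)
y = np.linspace(-100.0, 100.0, 201)
X, Y = np.meshgrid(x, y)

post = np.zeros_like(X)
for c in cel_prior:                                # marginalize celerity numerically
    logl = np.zeros_like(X)
    for s, t in zip(stations, t_obs):
        pred = np.hypot(X - s[0], Y - s[1]) / c    # predicted travel time
        logl += -0.5 * ((t - pred) / sigma_t) ** 2
    post += np.exp(logl)
post /= post.sum()

iy, ix = np.unravel_index(np.argmax(post), post.shape)
print("MAP source location (km):", x[ix], y[iy])
```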
NASA Astrophysics Data System (ADS)
Narukawa, Takafumi; Yamaguchi, Akira; Jang, Sunghyon; Amaya, Masaki
2018-02-01
To estimate the fracture probability of fuel cladding tubes under loss-of-coolant accident conditions in light-water reactors, laboratory-scale integral thermal shock tests were conducted on non-irradiated Zircaloy-4 cladding tube specimens. The obtained binary data on fracture or non-fracture of the cladding tube specimens were then analyzed statistically. A method to obtain the fracture probability curve as a function of equivalent cladding reacted (ECR) was proposed using Bayesian inference for generalized linear models: probit, logit, and log-probit models. Model selection was then performed in terms of physical characteristics and information criteria, namely the widely applicable information criterion and the widely applicable Bayesian information criterion. As a result, it was clarified that the log-probit model was the best of the three models for estimating the fracture probability, in terms of prediction accuracy both for the next data to be obtained and for the true model. Using the log-probit model, it was shown that 20% ECR corresponded to a 5% fracture probability level with 95% confidence for the cladding tube specimens.
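A hedged sketch of the log-probit dose-response fit, using maximum likelihood rather than the Bayesian inference applied in the study; the ECR values and fracture outcomes below are fabricated for illustration only.

```python
# Log-probit fracture probability curve fitted by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
ecr = rng.uniform(5.0, 40.0, 60)                   # % equivalent cladding reacted
fractured = (rng.uniform(0, 1, 60)
             < norm.cdf(3.0 * (np.log(ecr) - np.log(22.0)))).astype(float)

def neg_log_like(theta):
    a, b = theta
    p = np.clip(norm.cdf(a + b * np.log(ecr)), 1e-12, 1 - 1e-12)
    return -np.sum(fractured * np.log(p) + (1.0 - fractured) * np.log(1.0 - p))

fit = minimize(neg_log_like, x0=[-6.0, 2.0], method="Nelder-Mead")
a_hat, b_hat = fit.x
print("estimated fracture probability at 20% ECR:",
      round(float(norm.cdf(a_hat + b_hat * np.log(20.0))), 3))
```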
The iso-response method: measuring neuronal stimulus integration with closed-loop experiments
Gollisch, Tim; Herz, Andreas V. M.
2012-01-01
Throughout the nervous system, neurons integrate high-dimensional input streams and transform them into an output of their own. This integration of incoming signals involves filtering processes and complex non-linear operations. The shapes of these filters and non-linearities determine the computational features of single neurons and their functional roles within larger networks. A detailed characterization of signal integration is thus a central ingredient to understanding information processing in neural circuits. Conventional methods for measuring single-neuron response properties, such as reverse correlation, however, are often limited by the implicit assumption that stimulus integration occurs in a linear fashion. Here, we review a conceptual and experimental alternative that is based on exploring the space of those sensory stimuli that result in the same neural output. As demonstrated by recent results in the auditory and visual system, such iso-response stimuli can be used to identify the non-linearities relevant for stimulus integration, disentangle consecutive neural processing steps, and determine their characteristics with unprecedented precision. Automated closed-loop experiments are crucial for this advance, allowing rapid search strategies for identifying iso-response stimuli during experiments. Prime targets for the method are feed-forward neural signaling chains in sensory systems, but the method has also been successfully applied to feedback systems. Depending on the specific question, “iso-response” may refer to a predefined firing rate, single-spike probability, first-spike latency, or other output measures. Examples from different studies show that substantial progress in understanding neural dynamics and coding can be achieved once rapid online data analysis and stimulus generation, adaptive sampling, and computational modeling are tightly integrated into experiments. PMID:23267315
NASA Astrophysics Data System (ADS)
Kalantari, Z.
2015-12-01
In Sweden, spatially explicit approaches have been applied in various disciplines, such as landslide modelling based on soil-type data and flood risk modelling for large rivers. Regarding flood mapping, most previous studies have focused on complex hydrological modelling on a small scale, whereas just a few studies have used a robust GIS-based approach integrating most physical catchment descriptor (PCD) aspects on a larger scale. This study was built on a conceptual framework for looking at the SedInConnect model, topography, land use, soil data, other PCDs and climate change in an integrated way to pave the way for more integrated policy making. The aim of the present study was to develop a methodology for predicting the spatial probability of flooding on a general large scale. This framework can provide a region with an effective tool to inform a broad range of watershed planning activities. Regional planners, decision-makers and others can utilize this tool to identify the most vulnerable points in a watershed and along roads, and to plan interventions and actions to alter the impacts of high flows and other extreme weather events on road structures. The application of the model over a large scale can give a realistic spatial characterization of sediment connectivity for the optimal management of debris flow to road structures. The ability of the model to capture flooding probability was determined for different watersheds in central Sweden. Using data from this initial investigation, a method to extract spatial data for multiple catchments and to produce soft data for statistical analysis was developed. It allowed flood probability to be predicted from spatially sparse data without compromising the significant hydrological features of the landscape. This in turn allowed objective quantification of the probability of floods at the field scale for future model development and watershed management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chou, Chia-Chun, E-mail: ccchou@mx.nthu.edu.tw
2014-03-14
The complex quantum Hamilton-Jacobi equation-Bohmian trajectories (CQHJE-BT) method is introduced as a synthetic trajectory method for integrating the complex quantum Hamilton-Jacobi equation for the complex action function by propagating an ensemble of real-valued correlated Bohmian trajectories. Substituting the wave function expressed in exponential form in terms of the complex action into the time-dependent Schrödinger equation yields the complex quantum Hamilton-Jacobi equation. We transform this equation into the arbitrary Lagrangian-Eulerian version with the grid velocity matching the flow velocity of the probability fluid. The resulting equation describing the rate of change in the complex action transported along Bohmian trajectories is simultaneously integrated with the guidance equation for Bohmian trajectories, and the time-dependent wave function is readily synthesized. The spatial derivatives of the complex action required for the integration scheme are obtained by solving one moving least squares matrix equation. In addition, the method is applied to the photodissociation of NOCl. The photodissociation dynamics of NOCl can be accurately described by propagating a small ensemble of trajectories. This study demonstrates that the CQHJE-BT method combines the considerable advantages of both the real and the complex quantum trajectory methods previously developed for wave packet dynamics.
Stan : A Probabilistic Programming Language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.
Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.
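A minimal sketch of driving Stan from Python in the pystan 2-style interface mentioned above; the small normal-mean model is illustrative and not taken from the paper.

```python
# Compile and sample a tiny Stan model through pystan (2.x-style API).
import pystan

model_code = """
data {
  int<lower=0> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  mu ~ normal(0, 10);        // prior contributes to the log probability
  sigma ~ cauchy(0, 5);
  y ~ normal(mu, sigma);     // likelihood
}
"""

sm = pystan.StanModel(model_code=model_code)          # compile the model
fit = sm.sampling(data={"N": 5, "y": [1.2, 0.7, 2.1, 1.5, 0.9]},
                  iter=2000, chains=4)                # NUTS sampling
print(fit)                                            # posterior summary
```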
The present state and future directions of PDF methods
NASA Technical Reports Server (NTRS)
Pope, S. B.
1992-01-01
The objectives of the workshop are presented in viewgraph format, as is this entire article. The objectives are to discuss the present status and the future direction of various levels of engineering turbulence modeling related to Computational Fluid Dynamics (CFD) computations for propulsion; to assure that combustion is an essential part of propulsion; and to discuss Probability Density Function (PDF) methods for turbulent combustion. Essential to the integration of turbulent combustion models is the development of turbulent model, chemical kinetics, and numerical method. Some turbulent combustion models typically used in industry are the k-epsilon turbulent model, the equilibrium/mixing limited combustion, and the finite volume codes.
Zhao, Ruiying; Chen, Songchao; Zhou, Yue; Jin, Bin; Li, Yan
2018-01-01
Assessing heavy metal pollution and delineating pollution are the bases for evaluating pollution and determining a cost-effective remediation plan. Most existing studies are based on the spatial distribution of pollutants but ignore related uncertainty. In this study, eight heavy-metal concentrations (Cr, Pb, Cd, Hg, Zn, Cu, Ni, and Zn) were collected at 1040 sampling sites in a coastal industrial city in the Yangtze River Delta, China. The single pollution index (PI) and Nemerow integrated pollution index (NIPI) were calculated for every surface sample (0–20 cm) to assess the degree of heavy metal pollution. Ordinary kriging (OK) was used to map the spatial distribution of heavy metals content and NIPI. Then, we delineated composite heavy metal contamination based on the uncertainty produced by indicator kriging (IK). The results showed that mean values of all PIs and NIPIs were at safe levels. Heavy metals were most accumulated in the central portion of the study area. Based on IK, the spatial probability of composite heavy metal pollution was computed. The probability of composite contamination in the central core urban area was highest. A probability of 0.6 was found as the optimum probability threshold to delineate polluted areas from unpolluted areas for integrative heavy metal contamination. Results of pollution delineation based on uncertainty showed the proportion of false negative error areas was 6.34%, while the proportion of false positive error areas was 0.86%. The accuracy of the classification was 92.80%. This indicated the method we developed is a valuable tool for delineating heavy metal pollution. PMID:29642623
Hu, Bifeng; Zhao, Ruiying; Chen, Songchao; Zhou, Yue; Jin, Bin; Li, Yan; Shi, Zhou
2018-04-10
Assessing heavy metal pollution and delineating pollution are the bases for evaluating pollution and determining a cost-effective remediation plan. Most existing studies are based on the spatial distribution of pollutants but ignore related uncertainty. In this study, eight heavy-metal concentrations (Cr, Pb, Cd, Hg, Zn, Cu, Ni, and Zn) were collected at 1040 sampling sites in a coastal industrial city in the Yangtze River Delta, China. The single pollution index (PI) and Nemerow integrated pollution index (NIPI) were calculated for every surface sample (0-20 cm) to assess the degree of heavy metal pollution. Ordinary kriging (OK) was used to map the spatial distribution of heavy metals content and NIPI. Then, we delineated composite heavy metal contamination based on the uncertainty produced by indicator kriging (IK). The results showed that mean values of all PIs and NIPIs were at safe levels. Heavy metals were most accumulated in the central portion of the study area. Based on IK, the spatial probability of composite heavy metal pollution was computed. The probability of composite contamination in the central core urban area was highest. A probability of 0.6 was found as the optimum probability threshold to delineate polluted areas from unpolluted areas for integrative heavy metal contamination. Results of pollution delineation based on uncertainty showed the proportion of false negative error areas was 6.34%, while the proportion of false positive error areas was 0.86%. The accuracy of the classification was 92.80%. This indicated the method we developed is a valuable tool for delineating heavy metal pollution.
Li, Yan
2017-05-25
Efficiency evaluation of an integrated energy system involves many influencing factors whose attribute values are heterogeneous and non-deterministic; specific numerical values or accurate probability distribution characteristics usually cannot be given, which biases the final evaluation result. Based on the characteristics of the integrated energy system, a hybrid multi-attribute decision-making model is constructed that takes the decision maker's risk preference into account. In evaluating the efficiency of an integrated energy system, some evaluation indexes take linguistic values, or the experts' assessments are inconsistent; this produces ambiguity in the decision information, usually in the form of uncertain linguistic values and numerical interval values. An interval-valued multiple-attribute decision-making method and a fuzzy linguistic multiple-attribute decision-making model are therefore proposed, with the decision maker's risk preference considered in constructing the evaluation model. Finally, a mathematical model for efficiency evaluation of the integrated energy system is constructed.
Huygens-Fresnel principle: Analyzing consistency at the photon level
NASA Astrophysics Data System (ADS)
Santos, Elkin A.; Castro, Ferney; Torres, Rafael
2018-04-01
The use of the Rayleigh-Sommerfeld diffraction formula as a photon propagator is widely accepted because of the abundant experimental evidence suggesting that it works. However, a direct link between the propagation of the electromagnetic field in classical optics and the propagation of photons, where the square of the probability amplitude describes the transverse probability of photon detection, is still an issue to be clarified. We develop a mathematical formulation for the photon propagation using the formalism of electromagnetic field quantization and the path-integral method, whose main feature is its similarity with a fractional Fourier transform (FRFT). Here we show that because of the close relation existing between the FRFT and the Fresnel diffraction integral, this propagator can be written as a Fresnel diffraction, which prompts a discussion of its fundamental character at the photon level compared to the Huygens-Fresnel principle. Finally, we carry out a photon-counting experiment with a rectangular slit, supporting the result that the diffraction phenomenon in the Fresnel approximation behaves as the actual classical limit.
Convective Weather Forecast Quality Metrics for Air Traffic Management Decision-Making
NASA Technical Reports Server (NTRS)
Chatterji, Gano B.; Gyarfas, Brett; Chan, William N.; Meyn, Larry A.
2006-01-01
Since numerical weather prediction models are unable to accurately forecast the severity and the location of the storm cells several hours into the future when compared with observation data, there has been a growing interest in probabilistic description of convective weather. The classical approach for generating uncertainty bounds consists of integrating the state equations and covariance propagation equations forward in time. This step is readily recognized as the process update step of the Kalman Filter algorithm. The second well known method, known as the Monte Carlo method, consists of generating output samples by driving the forecast algorithm with input samples selected from distributions. The statistical properties of the distributions of the output samples are then used for defining the uncertainty bounds of the output variables. This method is computationally expensive for a complex model compared to the covariance propagation method. The main advantage of the Monte Carlo method is that a complex non-linear model can be easily handled. Recently, a few different methods for probabilistic forecasting have appeared in the literature. A method for computing probability of convection in a region using forecast data is described in Ref. 5. Probability at a grid location is computed as the fraction of grid points, within a box of specified dimensions around the grid location, with forecast convection precipitation exceeding a specified threshold. The main limitation of this method is that the results are dependent on the chosen dimensions of the box. The examples presented in Ref. 5 show that this process is equivalent to low-pass filtering of the forecast data with a finite support spatial filter. References 6 and 7 describe the technique for computing percentage coverage within a 92 x 92 square-kilometer box and assigning the value to the center 4 x 4 square-kilometer box. This technique is the same as that described in Ref. 5. Characterizing the forecast, following the process described in Refs. 5 through 7, in terms of percentage coverage or confidence level is notionally sound compared to characterizing in terms of probabilities because the probability of the forecast being correct can only be determined using actual observations. References 5 through 7 only use the forecast data and not the observations. The method for computing the probability of detection, false alarm ratio and several forecast quality metrics (Skill Scores) using both the forecast and observation data is given in Ref. 2. This paper extends the statistical verification method in Ref. 2 to determine co-occurrence probabilities. The method consists of computing the probability that a severe weather cell (grid location) is detected in the observation data in the neighborhood of the severe weather cell in the forecast data. Probabilities of occurrence at the grid location and in its neighborhood with higher severity, and with lower severity in the observation data compared to that in the forecast data are examined. The method proposed in Refs. 5 through 7 is used for computing the probability that a certain number of cells in the neighborhood of severe weather cells in the forecast data are seen as severe weather cells in the observation data. Finally, the probability of existence of gaps in the observation data in the neighborhood of severe weather cells in forecast data is computed. Gaps are defined as openings between severe weather cells through which an aircraft can safely fly to its intended destination.
The rest of the paper is organized as follows. Section II summarizes the statistical verification method described in Ref. 2. The extension of this method for computing the co-occurrence probabilities is discussed in Section III. Numerical examples using NCWF forecast data and NCWD observation data are presented in Section III to elucidate the characteristics of the co-occurrence probabilities. This section also discusses the procedure for computing the probabilities that the severity of convection in the observation data will be higher or lower in the neighborhood of grid locations compared to that indicated at the grid locations in the forecast data. The probability of coverage of neighborhood grid cells is also described via examples in this section. Section IV discusses the gap detection algorithm and presents a numerical example to illustrate the method. The locations of the detected gaps in the observation data are used along with the locations of convective weather cells in the forecast data to determine the probability of existence of gaps in the neighborhood of these cells. Finally, the paper is concluded in Section V.
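As a rough illustration of the grid-based verification quantities discussed above (probability of detection, false alarm ratio, and a neighborhood co-occurrence probability), here is a hedged numpy sketch; the severity threshold, neighborhood radius, and synthetic grids are assumptions and do not reproduce the exact procedures of Ref. 2 or Refs. 5 through 7.

```python
# Hedged sketch of grid-based forecast verification: probability of detection,
# false alarm ratio, and a neighborhood co-occurrence probability.
import numpy as np

def verification_stats(forecast, observed, threshold=3, radius=1):
    """forecast, observed: 2-D arrays of convective severity levels."""
    f = forecast >= threshold            # forecast "severe" cells
    o = observed >= threshold            # observed "severe" cells

    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)

    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio

    # Co-occurrence: probability that a forecast severe cell has at least one
    # observed severe cell within a (2*radius+1)^2 neighborhood.
    co_hits = 0
    fy, fx = np.where(f)
    for y, x in zip(fy, fx):
        y0, y1 = max(0, y - radius), y + radius + 1
        x0, x1 = max(0, x - radius), x + radius + 1
        if o[y0:y1, x0:x1].any():
            co_hits += 1
    p_co = co_hits / max(len(fy), 1)
    return pod, far, p_co

rng = np.random.default_rng(0)
fc = rng.integers(0, 6, size=(50, 50))                       # synthetic forecast grid
ob = np.clip(fc + rng.integers(-1, 2, size=(50, 50)), 0, 6)  # synthetic observation grid
print(verification_stats(fc, ob))
```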
Tanadini, Lorenzo G; Schmidt, Benedikt R
2011-01-01
Monitoring is an integral part of species conservation. Monitoring programs must take imperfect detection of species into account in order to be reliable. Theory suggests that detection probability may be determined by population size but this relationship has not yet been assessed empirically. Population size is particularly important because it may induce heterogeneity in detection probability and thereby cause bias in estimates of biodiversity. We used a site occupancy model to analyse data from a volunteer-based amphibian monitoring program to assess how well different variables explain variation in detection probability. An index to population size best explained detection probabilities for four out of six species (to avoid circular reasoning, we used the count of individuals at a previous site visit as an index to current population size). The relationship between the population index and detection probability was positive. Commonly used weather variables best explained detection probabilities for two out of six species. Estimates of site occupancy probabilities differed depending on whether the population index was or was not used to model detection probability. The relationship between the population index and detectability has implications for the design of monitoring and species conservation. Most importantly, because many small populations are likely to be overlooked, monitoring programs should be designed in such a way that small populations are not overlooked. The results also imply that methods cannot be standardized in such a way that detection probabilities are constant. As we have shown here, one can easily account for variation in population size in the analysis of data from long-term monitoring programs by using counts of individuals from surveys at the same site in previous years. Accounting for variation in population size is important because it can affect the results of long-term monitoring programs and ultimately the conservation of imperiled species.
Selection for territory acquisition is modulated by social network structure in a wild songbird
Farine, D R; Sheldon, B C
2015-01-01
The social environment may be a key mediator of selection that operates on animals. In many cases, individuals may experience selection not only as a function of their phenotype, but also as a function of the interaction between their phenotype and the phenotypes of the conspecifics they associate with. For example, when animals settle after dispersal, individuals may benefit from arriving early, but, in many cases, these benefits will be affected by the arrival times of other individuals in their local environment. We integrated a recently described method for calculating assortativity on weighted networks, which is the correlation between an individual's phenotype and that of its associates, into an existing framework for measuring the magnitude of social selection operating on phenotypes. We applied this approach to large-scale data on social network structure and the timing of arrival into the breeding area over three years. We found that late-arriving individuals had a reduced probability of breeding. However, the probability of breeding was also influenced by individuals’ social networks. Associating with late-arriving conspecifics increased the probability of successfully acquiring a breeding territory. Hence, social selection could offset the effects of nonsocial selection. Given parallel theoretical developments of the importance of local network structure on population processes, and increasing data being collected on social networks in free-living populations, the integration of these concepts could yield significant insights into social evolution. PMID:25611344
Bayesian soft X-ray tomography using non-stationary Gaussian Processes
NASA Astrophysics Data System (ADS)
Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.
2013-08-01
In this study, a Bayesian based non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium condition and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in a probability form which enhances the capability of uncertainty analysis, in consequence, scientists who concern the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreements with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
Bayesian soft X-ray tomography using non-stationary Gaussian Processes.
Li, Dong; Svensson, J; Thomsen, H; Medina, F; Werner, A; Wolf, R
2013-08-01
In this study, a Bayesian based non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium condition and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in a probability form which enhances the capability of uncertainty analysis, in consequence, scientists who concern the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreements with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
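The analytically available posterior described in these two records is standard linear-Gaussian inference. The sketch below illustrates it for a toy 1-D emissivity profile observed through noisy line integrals, using a stationary squared-exponential prior as a simplifying assumption in place of the papers' non-stationary Gaussian process.

```python
# Hedged sketch: Gaussian-process tomography for a 1-D emissivity profile
# observed through noisy line integrals.  A stationary squared-exponential
# prior is used here for simplicity; the papers use a non-stationary GP.
import numpy as np

n = 60                                   # grid points of the emissivity profile
x = np.linspace(0.0, 1.0, n)
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.1**2)   # prior covariance

m = 15                                   # number of chords (line integrals)
A = np.zeros((m, n))
for i in range(m):                       # each chord averages a band of cells
    lo = i * n // m
    A[i, lo:lo + n // m] = 1.0 / (n // m)

true_profile = np.exp(-0.5 * (x - 0.5)**2 / 0.15**2)       # synthetic truth
sigma = 0.01
y = A @ true_profile + sigma * np.random.default_rng(1).standard_normal(m)

# Posterior mean and covariance for a zero-mean GP prior and Gaussian noise:
S = A @ K @ A.T + sigma**2 * np.eye(m)
gain = K @ A.T @ np.linalg.solve(S, np.eye(m))
post_mean = gain @ y                     # reconstructed emissivity profile
post_cov = K - gain @ A @ K
post_std = np.sqrt(np.diag(post_cov))    # pointwise uncertainty of the profile
print(post_mean[:5], post_std[:5])
```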
Integrated pest management and allocation of control efforts for vector-borne diseases
Ginsberg, H.S.
2001-01-01
Applications of various control methods were evaluated to determine how to integrate methods so as to minimize the number of human cases of vector-borne diseases. These diseases can be controlled by lowering the number of vector-human contacts (e.g., by pesticide applications or use of repellents), or by lowering the proportion of vectors infected with pathogens (e.g., by lowering or vaccinating reservoir host populations). Control methods should be combined in such a way as to most efficiently lower the probability of human encounter with an infected vector. Simulations using a simple probabilistic model of pathogen transmission suggest that the most efficient way to integrate different control methods is to combine methods that have the same effect (e.g., combine treatments that lower the vector population; or combine treatments that lower pathogen prevalence in vectors). Combining techniques that have different effects (e.g., a technique that lowers vector populations with a technique that lowers pathogen prevalence in vectors) will be less efficient than combining two techniques that both lower vector populations or combining two techniques that both lower pathogen prevalence, costs being the same. Costs of alternative control methods generally differ, so the efficiency of various combinations at lowering human contact with infected vectors should be estimated at available funding levels. Data should be collected from initial trials to improve the effects of subsequent interventions on the number of human cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sezen, Halil; Aldemir, Tunc; Denning, R.
Probabilistic risk assessment of nuclear power plants initially focused on events initiated by internal faults at the plant, rather than external hazards including earthquakes and flooding. Although the importance of external hazards risk analysis is now well recognized, the methods for analyzing low probability external hazards rely heavily on subjective judgment of specialists, often resulting in substantial conservatism. This research developed a framework to integrate the risk of seismic and flooding events using realistic structural models and simulation of response of nuclear structures. The results of four application case studies are presented.
Operational Implementation of a Pc Uncertainty Construct for Conjunction Assessment Risk Analysis
NASA Technical Reports Server (NTRS)
Newman, Lauri K.; Hejduk, Matthew D.; Johnson, Lauren C.
2016-01-01
Earlier this year the NASA Conjunction Assessment and Risk Analysis (CARA) project presented the theoretical and algorithmic aspects of a method to include the uncertainties in the calculation inputs when computing the probability of collision (Pc) between two space objects, principally uncertainties in the covariances and the hard-body radius. The output of this calculation approach is to produce rather than a single Pc value an entire probability density function that will represent the range of possible Pc values given the uncertainties in the inputs and bring CA risk analysis methodologies more in line with modern risk management theory. The present study provides results from the exercise of this method against an extended dataset of satellite conjunctions in order to determine the effect of its use on the evaluation of conjunction assessment (CA) event risk posture. The effects are found to be considerable: a good number of events are downgraded from or upgraded to a serious risk designation on the basis of consideration of the Pc uncertainty. The findings counsel the integration of the developed methods into NASA CA operations.
Design, Development and Testing of Web Services for Multi-Sensor Snow Cover Mapping
NASA Astrophysics Data System (ADS)
Kadlec, Jiri
This dissertation presents the design, development and validation of new data integration methods for mapping the extent of snow cover based on open access ground station measurements, remote sensing images, volunteer observer snow reports, and cross country ski track recordings from location-enabled mobile devices. The first step of the data integration procedure includes data discovery, data retrieval, and data quality control of snow observations at ground stations. The WaterML R package developed in this work enables hydrologists to retrieve and analyze data from multiple organizations that are listed in the Consortium of Universities for the Advancement of Hydrologic Sciences Inc (CUAHSI) Water Data Center catalog directly within the R statistical software environment. Use of the WaterML R package is demonstrated by running an energy balance snowpack model in R with data inputs from CUAHSI, and by automating uploads of real time sensor observations to CUAHSI HydroServer. The second step of the procedure requires efficient access to multi-temporal remote sensing snow images. The Snow Inspector web application developed in this research enables the users to retrieve a time series of fractional snow cover from the Moderate Resolution Imaging Spectroradiometer (MODIS) for any point on Earth. The time series retrieval method is based on automated data extraction from tile images provided by a Web Map Tile Service (WMTS). The average required time for retrieving 100 days of data using this technique is 5.4 seconds, which is significantly faster than other methods that require the download of large satellite image files. The presented data extraction technique and space-time visualization user interface can be used as a model for working with other multi-temporal hydrologic or climate data WMTS services. The third, final step of the data integration procedure is generating continuous daily snow cover maps. A custom inverse distance weighting method has been developed to combine volunteer snow reports, cross-country ski track reports and station measurements to fill cloud gaps in the MODIS snow cover product. The method is demonstrated by producing a continuous daily time step snow presence probability map dataset for the Czech Republic region. The ability of the presented methodology to reconstruct MODIS snow cover under cloud is validated by simulating cloud cover datasets and comparing estimated snow cover to actual MODIS snow cover. The percent correctly classified indicator showed accuracy between 80 and 90% using this method. Using crowdsourcing data (volunteer snow reports and ski tracks) improves the map accuracy by 0.7-1.2%. The output snow probability map data sets are published online using web applications and web services. Keywords: crowdsourcing, image analysis, interpolation, MODIS, R statistical software, snow cover, snowpack probability, Tethys platform, time series, WaterML, web services, winter sports.
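As an illustration of the inverse-distance-weighting step in the final part of the workflow, here is a minimal sketch that estimates a snow-presence probability at a cloud-obscured location from nearby point reports; the coordinates, report values, and power parameter are assumptions, not the dissertation's custom scheme.

```python
# Hedged sketch of inverse distance weighting (IDW) for filling a cloud gap
# with a snow-presence probability estimated from nearby point reports.
import numpy as np

def idw_probability(target_xy, report_xy, report_snow, power=2.0):
    """report_snow: 1 = snow present, 0 = snow absent at each report point."""
    d = np.hypot(report_xy[:, 0] - target_xy[0],
                 report_xy[:, 1] - target_xy[1])
    if np.any(d == 0):                       # exact hit: use that report
        return float(report_snow[d == 0].mean())
    w = 1.0 / d**power                       # inverse-distance weights
    return float(np.sum(w * report_snow) / np.sum(w))

reports = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 2.0], [0.5, 1.5]])
snow = np.array([1, 1, 0, 1])                # station + crowdsourced reports
print(idw_probability((1.0, 1.0), reports, snow))   # probability in [0, 1]
```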
Cil, Aylin Pelin; Bang, Heejung; Oktay, Kutluk
2013-01-01
Objective: To estimate age-specific probabilities of live-birth with oocyte cryopreservation in non-donor (ND) egg cycles. Design: Individual patient data (IPD) meta-analysis. Setting: Assisted reproduction centers. Patients: Infertile patients undergoing ND mature oocyte cryopreservation. Interventions: PubMed was searched for the clinical studies on oocyte cryopreservation from January 1996 through July 2011. Randomized and non-randomized studies that used ND frozen-thawed mature oocytes with pregnancy outcomes were included. Authors of eligible studies were contacted to obtain IPD. Main outcome measures: Live-birth probabilities based on age, cryopreservation method, and the number of oocytes thawed, injected, or embryos transferred. Results: Original data from 10 studies including 2265 cycles from 1805 patients were obtained. Live-birth success rates declined with age regardless of the freezing technique. Despite this age-induced compromise, live-births continued to occur as late as the ages of 42 and 44 with slowly-frozen (SF) and vitrified (VF) oocytes, respectively. Estimated probabilities of live-birth for VF oocytes were higher than those for SF. Conclusions: The live-birth probabilities we calculated would enable more accurate counseling and informed decisions by infertile women who consider oocyte cryopreservation. Given the success probabilities, we suggest that policy-makers should consider oocyte freezing as an integral part of prevention and treatment of infertility. PMID:23706339
Sleep Disruption Medical Intervention Forecasting (SDMIF) Module for the Integrated Medical Model
NASA Technical Reports Server (NTRS)
Lewandowski, Beth; Brooker, John; Mallis, Melissa; Hursh, Steve; Caldwell, Lynn; Myers, Jerry
2011-01-01
The NASA Integrated Medical Model (IMM) assesses the risk, including likelihood and impact of occurrence, of all credible in-flight medical conditions. Fatigue due to sleep disruption is a condition that could lead to operational errors, potentially resulting in loss of mission or crew. Pharmacological consumables are mitigation strategies used to manage the risks associated with sleep deficits. The likelihood of medical intervention due to sleep disruption was estimated with a well validated sleep model and a Monte Carlo computer simulation in an effort to optimize the quantity of consumables. METHODS: The key components of the model are the mission parameter program, the calculation of sleep intensity and the diagnosis and decision module. The mission parameter program was used to create simulated daily sleep/wake schedules for an ISS increment. The hypothetical schedules included critical events such as dockings and extravehicular activities and included actual sleep time and sleep quality. The schedules were used as inputs to the Sleep, Activity, Fatigue and Task Effectiveness (SAFTE) Model (IBR Inc., Baltimore MD), which calculated sleep intensity. Sleep data from an ISS study was used to relate calculated sleep intensity to the probability of sleep medication use, using a generalized linear model for binomial regression. A human yes/no decision process using a binomial random number was also factored into sleep medication use probability. RESULTS: These probability calculations were repeated 5000 times resulting in an estimate of the most likely amount of sleep aids used during an ISS mission and a 95% confidence interval. CONCLUSIONS: These results were transferred to the parent IMM for further weighting and integration with other medical conditions, to help inform operational decisions. This model is a potential planning tool for ensuring adequate sleep during sleep disrupted periods of a mission.
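A hedged sketch of the Monte Carlo step described above follows: a logistic (binomial regression) link converts a nightly sleep-intensity series into sleep-aid use probabilities, a yes/no decision is drawn each night, and the totals over 5000 repetitions give a most likely consumable quantity and a 95% interval. The coefficients and the intensity series are illustrative assumptions, not the IMM's calibrated values.

```python
# Hedged sketch of the Monte Carlo step: convert nightly sleep-intensity values
# into sleep-aid use probabilities via a logistic (binomial regression) link,
# draw yes/no decisions, and repeat to bound the consumable quantity.
import numpy as np

rng = np.random.default_rng(42)
nights = 180                                     # one ISS increment (assumed)
sleep_intensity = rng.normal(0.8, 0.15, nights)  # placeholder model output

b0, b1 = 2.0, -5.0                               # assumed regression coefficients
p_use = 1.0 / (1.0 + np.exp(-(b0 + b1 * sleep_intensity)))

n_trials = 5000
totals = np.empty(n_trials)
for k in range(n_trials):
    # Bernoulli yes/no decision each night, then total doses for the increment
    totals[k] = rng.binomial(1, p_use).sum()

print("most likely doses:", np.median(totals))
print("95% interval:", np.percentile(totals, [2.5, 97.5]))
```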
Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression
NASA Astrophysics Data System (ADS)
Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli
2018-06-01
Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability score skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and the uncertainty spread was appropriate. Different HMM states were assumed to be different climate conditions, which would lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.
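The probability integral transform (PIT) check mentioned at the end of the abstract can be sketched as follows: each observation is passed through its forecast CDF, and a roughly uniform histogram indicates well-calibrated predictive distributions. Gaussian predictive distributions are assumed here purely for illustration in place of the HMM-GMR mixture.

```python
# Hedged sketch of a PIT-based calibration check: pass each observation through
# its forecast CDF and inspect how uniform the resulting values are.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 240                                          # months of forecasts (assumed)
mu = rng.normal(100.0, 20.0, n)                  # forecast means
sd = np.full(n, 15.0)                            # forecast spreads
obs = rng.normal(mu, sd)                         # synthetic "observed" flows

pit = stats.norm.cdf(obs, loc=mu, scale=sd)      # PIT values, ideally ~U(0,1)
hist, _ = np.histogram(pit, bins=10, range=(0.0, 1.0))
print("PIT histogram counts:", hist)             # flat counts = good calibration
print("KS test vs uniform p-value:", stats.kstest(pit, "uniform").pvalue)
```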
Mesh-Based Entry Vehicle and Explosive Debris Re-Contact Probability Modeling
NASA Technical Reports Server (NTRS)
McPherson, Mark A.; Mendeck, Gavin F.
2011-01-01
The risk to a crewed vehicle arising from potential re-contact with fragments from an explosive breakup of any jettisoned spacecraft segments during entry has long sought to be quantified. However, great difficulty lies in efficiently capturing the potential locations of each fragment and their collective threat to the vehicle. The method presented in this paper addresses this problem by using a stochastic approach that discretizes simulated debris pieces into volumetric cells, and then assesses strike probabilities accordingly. Combining spatial debris density and relative velocity between the debris and the entry vehicle, the strike probability can be calculated from the integral of the debris flux inside each cell over time. Using this technique it is possible to assess the risk to an entry vehicle along an entire trajectory as it separates from the jettisoned segment. By decoupling the fragment trajectories from that of the entry vehicle, multiple potential separation maneuvers can then be evaluated rapidly to provide an assessment of the best strategy to mitigate the re-contact risk.
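A hedged sketch of the cell-wise calculation is given below under a Poisson-strike assumption: debris number density times relative speed times presented area is accumulated over the time spent in each cell, and the expected number of strikes is converted into a strike probability. All quantities are illustrative, and the Poisson closure is an assumption rather than the paper's stated formula.

```python
# Hedged sketch: strike probability from a time integral of debris flux through
# the cells an entry vehicle traverses, assuming independent (Poisson) strikes.
import numpy as np

# Per-timestep records for the cells occupied by the vehicle (all assumed):
debris_density = np.array([1e-6, 4e-6, 2e-6, 5e-7])   # fragments per m^3
rel_speed = np.array([300.0, 280.0, 260.0, 240.0])    # m/s
area = 12.0                                            # presented area, m^2
dt = 0.5                                               # seconds per step

expected_strikes = np.sum(debris_density * rel_speed * area * dt)
p_strike = 1.0 - np.exp(-expected_strikes)             # Poisson strike probability
print(f"expected strikes = {expected_strikes:.3e}, P(strike) = {p_strike:.3e}")
```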
A "Virtual Spin" on the Teaching of Probability
ERIC Educational Resources Information Center
Beck, Shari A.; Huse, Vanessa E.
2007-01-01
This article, which describes integrating virtual manipulatives with the teaching of probability at the elementary level, puts a "virtual spin" on the teaching of probability to provide more opportunities for students to experience successful learning. The traditional use of concrete manipulatives is enhanced with virtual coins and spinners from…
Probability Theory, Not the Very Guide of Life
ERIC Educational Resources Information Center
Juslin, Peter; Nilsson, Hakan; Winman, Anders
2009-01-01
Probability theory has long been taken as the self-evident norm against which to evaluate inductive reasoning, and classical demonstrations of violations of this norm include the conjunction error and base-rate neglect. Many of these phenomena require multiplicative probability integration, whereas people seem more inclined to linear additive…
Laplace Transforms without Integration
ERIC Educational Resources Information Center
Robertson, Robert L.
2017-01-01
Calculating Laplace transforms from the definition often requires tedious integrations. This paper provides an integration-free technique for calculating Laplace transforms of many familiar functions. It also shows how the technique can be applied to probability theory.
Statistical Evaluation and Improvement of Methods for Combining Random and Harmonic Loads
NASA Technical Reports Server (NTRS)
Brown, A. M.; McGhee, D. S.
2003-01-01
Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This Technical Publication examines the cumulative distribution percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica to calculate the combined value corresponding to any desired percentile is then presented along with a curve fit to this value. Another Excel macro that calculates the combination using Monte Carlo simulation is shown. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Additionally, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can substantially lower the design loading without losing any of the identified structural reliability.
Statistical Comparison and Improvement of Methods for Combining Random and Harmonic Loads
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; McGhee, David S.
2004-01-01
Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This paper examines the cumulative distribution function (CDF) percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica is then used to calculate the combined value corresponding to any desired percentile along with a curve fit to this value. Another Excel macro is used to calculate the combination using a Monte Carlo simulation. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Also, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can lower the design loading substantially without losing any of the identified structural reliability.
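Both reports describe the same combination problem, which can be sketched with a short Monte Carlo: a sinusoid with uniformly random phase plus a zero-mean Gaussian random load, with the design load read off at a chosen CDF percentile. The amplitude, RMS level, and the 99.87th percentile are illustrative assumptions.

```python
# Hedged sketch: Monte Carlo combination of a harmonic load (random phase) with
# a Gaussian random load, extracting the value at a chosen CDF percentile.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
sine_amp = 2.0                                   # harmonic amplitude (assumed)
random_rms = 1.0                                 # 1-sigma of random load (assumed)

phase = rng.uniform(0.0, 2.0 * np.pi, n)
combined = sine_amp * np.sin(phase) + rng.normal(0.0, random_rms, n)

percentile = 99.87                               # roughly 3-sigma level (assumed)
design_load = np.percentile(combined, percentile)
print(f"combined load at the {percentile}th percentile: {design_load:.3f}")
```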
Persistence Probabilities of Two-Sided (Integrated) Sums of Correlated Stationary Gaussian Sequences
NASA Astrophysics Data System (ADS)
Aurzada, Frank; Buck, Micha
2018-02-01
We study the persistence probability for some two-sided, discrete-time Gaussian sequences that are discrete-time analogues of fractional Brownian motion and integrated fractional Brownian motion, respectively. Our results extend the corresponding ones in continuous time in Molchan (Commun Math Phys 205(1):97-111, 1999) and Molchan (J Stat Phys 167(6):1546-1554, 2017) to a wide class of discrete-time processes.
NASA Astrophysics Data System (ADS)
Strauch, R. L.; Istanbulluoglu, E.
2017-12-01
We develop a landslide hazard modeling approach that integrates a data-driven statistical model and a probabilistic process-based shallow landslide model for mapping probability of landslide initiation, transport, and deposition at regional scales. The empirical model integrates the influence of seven site attribute (SA) classes: elevation, slope, curvature, aspect, land use-land cover, lithology, and topographic wetness index, on over 1,600 observed landslides using a frequency ratio (FR) approach. A susceptibility index is calculated by adding FRs for each SA on a grid-cell basis. Using landslide observations we relate susceptibility index to an empirically-derived probability of landslide impact. This probability is combined with results from a physically-based model to produce an integrated probabilistic map. Slope was key in landslide initiation while deposition was linked to lithology and elevation. Vegetation transition from forest to alpine vegetation and barren land cover with lower root cohesion leads to higher frequency of initiation. Aspect effects are likely linked to differences in root cohesion and moisture controlled by solar insolation and snow. We demonstrate the model in the North Cascades of Washington, USA and identify locations of high and low probability of landslide impacts that can be used by land managers in their design, planning, and maintenance.
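A compact sketch of the frequency-ratio step follows: for each class of a site attribute, FR is the fraction of landslide cells falling in that class divided by the fraction of all cells in that class, and the susceptibility index is the sum of FRs across attributes at each grid cell. The two synthetic attribute rasters and the landslide inventory below are assumptions.

```python
# Hedged sketch of the frequency-ratio (FR) susceptibility index: per-class
# landslide density relative to overall class density, summed over attributes.
import numpy as np

def frequency_ratios(attr_class, landslide_mask):
    """Return {class: FR} for one site attribute given a landslide mask."""
    fr = {}
    total_cells = attr_class.size
    total_slides = landslide_mask.sum()
    for c in np.unique(attr_class):
        in_class = attr_class == c
        frac_slides = landslide_mask[in_class].sum() / total_slides
        frac_cells = in_class.sum() / total_cells
        fr[c] = frac_slides / frac_cells
    return fr

rng = np.random.default_rng(11)
slope_class = rng.integers(0, 4, size=(200, 200))            # binned slope (assumed)
litho_class = rng.integers(0, 3, size=(200, 200))            # lithology classes
slides = rng.random((200, 200)) < 0.01 * (slope_class + 1)   # synthetic inventory

susceptibility = np.zeros_like(slope_class, dtype=float)
for attr in (slope_class, litho_class):
    fr = frequency_ratios(attr, slides)
    susceptibility += np.vectorize(fr.get)(attr)   # add FR of each cell's class
print(susceptibility.min(), susceptibility.max())
```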
1993-03-10
template which runs a Romberg algorithm in the background to numerically integrate the BVN [12:257]. Appendix A also lists the results from two other...for computing these values: a Taylor series expansion, the Romberg algorithm, and the CBN technique. Appendix A lists CEPpop. values for eleven...determining factor in this selection process. Of the 175 populations examined in the experiment, the MathCAD version of the Romberg algorithm failed
Probabilistic modeling of the evolution of gene synteny within reconciled phylogenies
2015-01-01
Background Most models of genome evolution concern either genetic sequences, gene content or gene order. They sometimes integrate two of the three levels, but rarely the three of them. Probabilistic models of gene order evolution usually have to assume constant gene content or adopt a presence/absence coding of gene neighborhoods which is blind to complex events modifying gene content. Results We propose a probabilistic evolutionary model for gene neighborhoods, allowing genes to be inserted, duplicated or lost. It uses reconciled phylogenies, which integrate sequence and gene content evolution. We are then able to optimize parameters such as phylogeny branch lengths, or probabilistic laws depicting the diversity of susceptibility of syntenic regions to rearrangements. We reconstruct a structure for ancestral genomes by optimizing a likelihood, keeping track of all evolutionary events at the level of gene content and gene synteny. Ancestral syntenies are associated with a probability of presence. We implemented the model with the restriction that at most one gene duplication separates two gene speciations in reconciled gene trees. We reconstruct ancestral syntenies on a set of 12 drosophila genomes, and compare the evolutionary rates along the branches and along the sites. We compare with a parsimony method and find a significant number of results not supported by the posterior probability. The model is implemented in the Bio++ library. It thus benefits from and enriches the classical models and methods for molecular evolution. PMID:26452018
An integrated logit model for contamination event detection in water distribution systems.
Housh, Mashor; Ostfeld, Avi
2015-05-15
The problem of contamination event detection in water distribution systems has become one of the most challenging research topics in water distribution systems analysis. Current attempts for event detection utilize a variety of approaches including statistical, heuristics, machine learning, and optimization methods. Several existing event detection systems share a common feature in which alarms are obtained separately for each of the water quality indicators. Unifying those single alarms from different indicators is usually performed by means of simple heuristics. A salient feature of the current developed approach is using a statistically oriented model for discrete choice prediction which is estimated using the maximum likelihood method for integrating the single alarms. The discrete choice model is jointly calibrated with other components of the event detection system framework in a training data set using genetic algorithms. The fusing of the individual indicator probabilities, which is left out of focus in many existing event detection system models, is confirmed to be a crucial part of the system which could be modelled by exploiting a discrete choice model for improving its performance. The developed methodology is tested on real water quality data, showing improved performance in decreasing the number of false positive alarms and in its ability to detect events with higher probabilities, compared to previous studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
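The fusion idea can be sketched with an off-the-shelf logistic model fitted by maximum likelihood, standing in for the paper's jointly calibrated discrete choice model; the indicator probabilities, event labels, and use of scikit-learn are assumptions for illustration.

```python
# Hedged sketch: fuse per-indicator alarm probabilities into a single event
# probability with a logistic model estimated by maximum likelihood.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 2000
event = rng.random(n) < 0.05                     # true contamination events (assumed)

def indicator(signal_strength):
    """Synthetic single-indicator alarm probability with background noise."""
    base = rng.beta(1.0, 8.0, n)
    return np.clip(base + signal_strength * event, 0.0, 1.0)

# Three hypothetical water quality indicators (e.g. chlorine, turbidity, pH):
X = np.column_stack([indicator(0.6), indicator(0.4), indicator(0.5)])

model = LogisticRegression().fit(X, event)       # maximum-likelihood fit
fused = model.predict_proba(X)[:, 1]             # unified event probability
print("fused probability for first 5 time steps:", np.round(fused[:5], 3))
```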
Combining conversation analysis and event sequencing to study health communication.
Pecanac, Kristen E
2018-06-01
Good communication is essential in patient-centered care. The purpose of this paper is to describe conversation analysis and event sequencing and explain how integrating these methods strengthened the analysis in a study of communication between clinicians and surrogate decision makers in an intensive care unit. Conversation analysis was first used to determine how clinicians introduced the need for decision-making regarding life-sustaining treatment and how surrogate decision makers responded. Event sequence analysis then was used to determine the transitional probability (probability of one event leading to another in the interaction) that a given type of clinician introduction would lead to surrogate resistance or alignment. Conversation analysis provides a detailed analysis of the interaction between participants in a conversation. When combined with a quantitative analysis of the patterns of communication in an interaction, these data add information on the communication strategies that produce positive outcomes. Researchers can apply this mixed-methods approach to identify beneficial conversational practices and design interventions to improve health communication. © 2018 Wiley Periodicals, Inc.
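A minimal sketch of the transitional-probability computation follows: adjacent pairs of coded events are counted and normalized by the occurrences of the first event. The event codes and sequences are hypothetical.

```python
# Hedged sketch: transitional probability P(next event | current event) from
# coded conversation sequences, by counting adjacent pairs.
from collections import Counter, defaultdict

# Hypothetical coded interactions: I = clinician introduction,
# R = surrogate resistance, A = surrogate alignment.
sequences = [
    ["I", "A", "I", "R", "I", "A"],
    ["I", "R", "R", "I", "A"],
    ["I", "A", "A"],
]

pair_counts = defaultdict(Counter)
for seq in sequences:
    for current, nxt in zip(seq, seq[1:]):
        pair_counts[current][nxt] += 1

for current, nexts in pair_counts.items():
    total = sum(nexts.values())
    for nxt, count in nexts.items():
        print(f"P({nxt} | {current}) = {count / total:.2f}")
```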
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar; Goebel, Kai
2013-01-01
This paper investigates the use of the inverse first-order reliability method (inverse-FORM) to quantify the uncertainty in the remaining useful life (RUL) of aerospace components. The prediction of remaining useful life is an integral part of system health prognosis, and directly helps in online health monitoring and decision-making. However, the prediction of remaining useful life is affected by several sources of uncertainty, and therefore it is necessary to quantify the uncertainty in the remaining useful life prediction. While system parameter uncertainty and physical variability can be easily included in inverse-FORM, this paper extends the methodology to include: (1) future loading uncertainty; (2) process noise; and (3) uncertainty in the state estimate. The inverse-FORM method has been used in this paper to (1) quickly obtain probability bounds on the remaining useful life prediction; and (2) calculate the entire probability distribution of remaining useful life prediction, and the results are verified against Monte Carlo sampling. The proposed methodology is illustrated using a numerical example.
Simultaneous Multi-Scale Diffusion Estimation and Tractography Guided by Entropy Spectrum Pathways
Galinsky, Vitaly L.; Frank, Lawrence R.
2015-01-01
We have developed a method for the simultaneous estimation of local diffusion and the global fiber tracts based upon the information entropy flow that computes the maximum entropy trajectories between locations and depends upon the global structure of the multi-dimensional and multi-modal diffusion field. Computation of the entropy spectrum pathways requires only solving a simple eigenvector problem for the probability distribution for which efficient numerical routines exist, and a straightforward integration of the probability conservation through ray tracing of the convective modes guided by a global structure of the entropy spectrum coupled with a small scale local diffusion. The intervoxel diffusion is sampled by multi b-shell multi q-angle DWI data expanded in spherical waves. This novel approach to fiber tracking incorporates global information about multiple fiber crossings in every individual voxel and ranks it in the most scientifically rigorous way. This method has potential significance for a wide range of applications, including studies of brain connectivity. PMID:25532167
Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan
2013-01-01
This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
Analysis of noise-induced temporal correlations in neuronal spike sequences
NASA Astrophysics Data System (ADS)
Reinoso, José A.; Torrent, M. C.; Masoller, Cristina
2016-11-01
We investigate temporal correlations in sequences of noise-induced neuronal spikes, using a symbolic method of time-series analysis. We focus on the sequence of time-intervals between consecutive spikes (inter-spike-intervals, ISIs). The analysis method, known as ordinal analysis, transforms the ISI sequence into a sequence of ordinal patterns (OPs), which are defined in terms of the relative ordering of consecutive ISIs. The ISI sequences are obtained from extensive simulations of two neuron models (FitzHugh-Nagumo, FHN, and integrate-and-fire, IF), with correlated noise. We find that, as the noise strength increases, temporal order gradually emerges, revealed by the existence of more frequent ordinal patterns in the ISI sequence. While in the FHN model the most frequent OP depends on the noise strength, in the IF model it is independent of the noise strength. In both models, the correlation time of the noise affects the OP probabilities but does not modify the most probable pattern.
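Here is a compact sketch of the ordinal analysis described above: consecutive inter-spike intervals are mapped to order-3 ordinal patterns and their relative frequencies are tabulated. The gamma-distributed ISI sequence is a stand-in assumption for the FHN or IF model output.

```python
# Hedged sketch of ordinal analysis: convert an ISI sequence into order-3
# ordinal patterns (relative ordering of 3 consecutive ISIs) and count them.
import numpy as np
from collections import Counter

def ordinal_patterns(isis, order=3):
    patterns = []
    for i in range(len(isis) - order + 1):
        window = isis[i:i + order]
        patterns.append(tuple(np.argsort(window)))   # rank ordering of the window
    return Counter(patterns)

rng = np.random.default_rng(9)
isi = rng.gamma(shape=2.0, scale=5.0, size=5000)     # synthetic inter-spike intervals

counts = ordinal_patterns(isi)
total = sum(counts.values())
for pattern, count in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(pattern, f"{count / total:.3f}")           # ordinal pattern probabilities
```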
A Bayesian approach to microwave precipitation profile retrieval
NASA Technical Reports Server (NTRS)
Evans, K. Franklin; Turk, Joseph; Wong, Takmeng; Stephens, Graeme L.
1995-01-01
A multichannel passive microwave precipitation retrieval algorithm is developed. Bayes theorem is used to combine statistical information from numerical cloud models with forward radiative transfer modeling. A multivariate lognormal prior probability distribution contains the covariance information about hydrometeor distribution that resolves the nonuniqueness inherent in the inversion process. Hydrometeor profiles are retrieved by maximizing the posterior probability density for each vector of observations. The hydrometeor profile retrieval method is tested with data from the Advanced Microwave Precipitation Radiometer (10, 19, 37, and 85 GHz) of convection over ocean and land in Florida. The CP-2 multiparameter radar data are used to verify the retrieved profiles. The results show that the method can retrieve approximate hydrometeor profiles, with larger errors over land than water. There is considerably greater accuracy in the retrieval of integrated hydrometeor contents than of profiles. Many of the retrieval errors are traced to problems with the cloud model microphysical information, and future improvements to the algorithm are suggested.
Figures of Merit for Control Verification
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2008-01-01
This paper proposes a methodology for evaluating a controller's ability to satisfy a set of closed-loop specifications when the plant has an arbitrary functional dependency on uncertain parameters. Control verification metrics applicable to deterministic and probabilistic uncertainty models are proposed. These metrics, which result from sizing the largest uncertainty set of a given class for which the specifications are satisfied, enable systematic assessment of competing control alternatives regardless of the methods used to derive them. A particularly attractive feature of the tools derived is that their efficiency and accuracy do not depend on the robustness of the controller. This is in sharp contrast to Monte Carlo based methods where the number of simulations required to accurately approximate the failure probability grows exponentially with its closeness to zero. This framework allows for the integration of complex, high-fidelity simulations of the integrated system and only requires standard optimization algorithms for its implementation.
Bayesian tomography and integrated data analysis in fusion diagnostics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Dong, E-mail: lid@swip.ac.cn; Dong, Y. B.; Deng, Wei
2016-11-15
In this article, a Bayesian tomography method using non-stationary Gaussian process for a prior has been introduced. The Bayesian formalism allows quantities which bear uncertainty to be expressed in the probabilistic form so that the uncertainty of a final solution can be fully resolved from the confidence interval of a posterior probability. Moreover, a consistency check of that solution can be performed by checking whether the misfits between predicted and measured data are reasonably within an assumed data error. In particular, the accuracy of reconstructions is significantly improved by using the non-stationary Gaussian process that can adapt to the varying smoothness of emission distribution. The implementation of this method to a soft X-ray diagnostics on HL-2A has been used to explore relevant physics in equilibrium and MHD instability modes. This project is carried out within a large size inference framework, aiming at an integrated analysis of heterogeneous diagnostics.
Probability distribution for the Gaussian curvature of the zero level surface of a random function
NASA Astrophysics Data System (ADS)
Hannay, J. H.
2018-04-01
A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z) = 0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f = 0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.
Probabilistic Aeroelastic Analysis Developed for Turbomachinery Components
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Mital, Subodh K.; Stefko, George L.; Pai, Shantaram S.
2003-01-01
Aeroelastic analyses for advanced turbomachines are being developed for use at the NASA Glenn Research Center and industry. However, these analyses at present are used for turbomachinery design with uncertainties accounted for by using safety factors. This approach may lead to overly conservative designs, thereby reducing the potential of designing higher efficiency engines. An integration of the deterministic aeroelastic analysis methods with probabilistic analysis methods offers the potential to design efficient engines with fewer aeroelastic problems and to make a quantum leap toward designing safe reliable engines. In this research, probabilistic analysis is integrated with aeroelastic analysis: (1) to determine the parameters that most affect the aeroelastic characteristics (forced response and stability) of a turbomachine component such as a fan, compressor, or turbine and (2) to give the acceptable standard deviation on the design parameters for an aeroelastically stable system. The approach taken is to combine the aeroelastic analysis of the MISER (MIStuned Engine Response) code with the FPI (fast probability integration) code. The role of MISER is to provide the functional relationships that tie the structural and aerodynamic parameters (the primitive variables) to the forced response amplitudes and stability eigenvalues (the response properties). The role of FPI is to perform probabilistic analyses by utilizing the response properties generated by MISER. The results are a probability density function for the response properties. The probabilistic sensitivities of the response variables to uncertainty in primitive variables are obtained as a byproduct of the FPI technique. The combined analysis of aeroelastic and probabilistic analysis is applied to a 12-bladed cascade vibrating in bending and torsion. Out of the total 11 design parameters, 6 are considered as having probabilistic variation. The six parameters are space-to-chord ratio (SBYC), stagger angle (GAMA), elastic axis (ELAXS), Mach number (MACH), mass ratio (MASSR), and frequency ratio (WHWB). The cascade is considered to be in subsonic flow with Mach 0.7. The results of the probabilistic aeroelastic analysis are the probability density function of predicted aerodynamic damping and frequency for flutter and the response amplitudes for forced response.
An Improved Hierarchical Genetic Algorithm for Sheet Cutting Scheduling with Process Constraints
Rao, Yunqing; Qi, Dezhong; Li, Jinling
2013-01-01
For the first time, an improved hierarchical genetic algorithm for sheet cutting problem which involves n cutting patterns for m non-identical parallel machines with process constraints has been proposed in the integrated cutting stock model. The objective of the cutting scheduling problem is minimizing the weighted completed time. A mathematical model for this problem is presented, an improved hierarchical genetic algorithm (ant colony—hierarchical genetic algorithm) is developed for better solution, and a hierarchical coding method is used based on the characteristics of the problem. Furthermore, to speed up convergence rates and resolve local convergence issues, a kind of adaptive crossover probability and mutation probability is used in this algorithm. The computational result and comparison prove that the presented approach is quite effective for the considered problem. PMID:24489491
An improved hierarchical genetic algorithm for sheet cutting scheduling with process constraints.
Rao, Yunqing; Qi, Dezhong; Li, Jinling
2013-01-01
For the first time, an improved hierarchical genetic algorithm for sheet cutting problem which involves n cutting patterns for m non-identical parallel machines with process constraints has been proposed in the integrated cutting stock model. The objective of the cutting scheduling problem is minimizing the weighted completed time. A mathematical model for this problem is presented, an improved hierarchical genetic algorithm (ant colony--hierarchical genetic algorithm) is developed for better solution, and a hierarchical coding method is used based on the characteristics of the problem. Furthermore, to speed up convergence rates and resolve local convergence issues, a kind of adaptive crossover probability and mutation probability is used in this algorithm. The computational result and comparison prove that the presented approach is quite effective for the considered problem.
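The adaptive-probability idea mentioned in both records can be sketched as crossover and mutation rates that depend on how an individual's fitness compares with the population average; the specific linear form and constants below are common adaptive-GA choices assumed for illustration, not the authors' exact scheme.

```python
# Hedged sketch of adaptive crossover/mutation probabilities: better-than-average
# individuals get lower rates (exploitation), worse ones get higher rates.
import numpy as np

def adaptive_rates(fitness, f_avg, f_max,
                   pc_hi=0.9, pc_lo=0.6, pm_hi=0.1, pm_lo=0.01):
    """Adaptive rates for a maximization GA (all constants assumed)."""
    if fitness < f_avg or f_max == f_avg:      # below-average individual
        return pc_hi, pm_hi
    scale = (f_max - fitness) / (f_max - f_avg)
    pc = pc_lo + (pc_hi - pc_lo) * scale       # crossover probability
    pm = pm_lo + (pm_hi - pm_lo) * scale       # mutation probability
    return pc, pm

fitness_values = np.array([3.0, 5.5, 7.2, 9.8, 6.1])
f_avg, f_max = fitness_values.mean(), fitness_values.max()
for f in fitness_values:
    print(f, adaptive_rates(f, f_avg, f_max))
```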
Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1981-01-01
A non-Gaussian, three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.
Sundh, Joakim; Juslin, Peter
2018-02-01
In this study, we explore how people integrate risks of assets in a simulated financial market into a judgment of the conjunctive risk that all assets decrease in value, both when assets are independent and when there is a systematic risk present affecting all assets. Simulations indicate that while mental calculation according to naïve application of probability theory is best when the assets are independent, additive or exemplar-based algorithms perform better when systematic risk is high. Considering that people tend to intuitively approach compound probability tasks using additive heuristics, we expected the participants to find it easiest to master tasks with high systematic risk - the most complex tasks from the standpoint of probability theory - while they should shift to probability theory or exemplar memory with independence between the assets. The results from 3 experiments confirm that participants shift between strategies depending on the task, starting off with the default of additive integration. In contrast to results in similar multiple cue judgment tasks, there is little evidence for use of exemplar memory. The additive heuristics also appear to be surprisingly context-sensitive, with limited generalization across formally very similar tasks. Copyright © 2017 Elsevier B.V. All rights reserved.
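The contrast between normative multiplicative integration and an additive heuristic can be made concrete with a toy calculation; the risk values below are invented for illustration:

```python
# Probability that each of three independent assets decreases in value.
p = [0.2, 0.3, 0.4]

# Normative conjunction under independence: multiply the component risks.
p_conjunction = 1.0
for pi in p:
    p_conjunction *= pi

# A simple additive heuristic: average the component risks.
p_additive = sum(p) / len(p)

print(p_conjunction)  # 0.024 -> correct when the assets are independent
print(p_additive)     # 0.30  -> overestimates here, but degrades less
                      #          when a common systematic risk couples the assets
```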
NASA Astrophysics Data System (ADS)
Krumholz, Mark R.; Adamo, Angela; Fumagalli, Michele; Wofford, Aida; Calzetti, Daniela; Lee, Janice C.; Whitmore, Bradley C.; Bright, Stacey N.; Grasha, Kathryn; Gouliermis, Dimitrios A.; Kim, Hwihyun; Nair, Preethi; Ryon, Jenna E.; Smith, Linda J.; Thilker, David; Ubeda, Leonardo; Zackrisson, Erik
2015-10-01
We investigate a novel Bayesian analysis method, based on the Stochastically Lighting Up Galaxies (slug) code, to derive the masses, ages, and extinctions of star clusters from integrated light photometry. Unlike many analysis methods, slug correctly accounts for incomplete initial mass function (IMF) sampling, and returns full posterior probability distributions rather than simply probability maxima. We apply our technique to 621 visually confirmed clusters in two nearby galaxies, NGC 628 and NGC 7793, that are part of the Legacy Extragalactic UV Survey (LEGUS). LEGUS provides Hubble Space Telescope photometry in the NUV, U, B, V, and I bands. We analyze the sensitivity of the derived cluster properties to choices of prior probability distribution, evolutionary tracks, IMF, metallicity, treatment of nebular emission, and extinction curve. We find that slug's results for individual clusters are insensitive to most of these choices, but that the posterior probability distributions we derive are often quite broad, and sometimes multi-peaked and quite sensitive to the choice of priors. In contrast, the properties of the cluster population as a whole are relatively robust against all of these choices. We also compare our results from slug to those derived with a conventional non-stochastic fitting code, Yggdrasil. We show that slug's stochastic models are generally a better fit to the observations than the deterministic ones used by Yggdrasil. However, the overall properties of the cluster populations recovered by both codes are qualitatively similar.
Cue-based assertion classification for Swedish clinical text – developing a lexicon for pyConTextSwe
Velupillai, Sumithra; Skeppstedt, Maria; Kvist, Maria; Mowery, Danielle; Chapman, Brian E.; Dalianis, Hercules; Chapman, Wendy W.
2014-01-01
Objective The ability of a cue-based system to accurately assert whether a disorder is affirmed, negated, or uncertain is dependent, in part, on its cue lexicon. In this paper, we continue our study of porting an assertion system (pyConTextNLP) from English to Swedish (pyConTextSwe) by creating an optimized assertion lexicon for clinical Swedish. Methods and material We integrated cues from four external lexicons, along with generated inflections and combinations. We used subsets of a clinical corpus in Swedish. We applied four assertion classes (definite existence, probable existence, probable negated existence and definite negated existence) and two binary classes (existence yes/no and uncertainty yes/no) to pyConTextSwe. We compared pyConTextSwe’s performance with and without the added cues on a development set, and improved the lexicon further after an error analysis. On a separate evaluation set, we calculated the system’s final performance. Results Following integration steps, we added 454 cues to pyConTextSwe. The optimized lexicon developed after an error analysis resulted in statistically significant improvements on the development set (83% F-score, overall). The system’s final F-scores on an evaluation set were 81% (overall). For the individual assertion classes, F-score results were 88% (definite existence), 81% (probable existence), 55% (probable negated existence), and 63% (definite negated existence). For the binary classifications existence yes/no and uncertainty yes/no, final system performance was 97%/87% and 78%/86% F-score, respectively. Conclusions We have successfully ported pyConTextNLP to Swedish (pyConTextSwe). We have created an extensive and useful assertion lexicon for Swedish clinical text, which could form a valuable resource for similar studies, and which is publicly available. PMID:24556644
Assessing the Chances of Success: Naive Statistics versus Kind Experience
ERIC Educational Resources Information Center
Hogarth, Robin M.; Mukherjee, Kanchan; Soyer, Emre
2013-01-01
Additive integration of information is ubiquitous in judgment and has been shown to be effective even when multiplicative rules of probability theory are prescribed. We explore the generality of these findings in the context of estimating probabilities of success in contests. We first define a normative model of these probabilities that takes…
Clinical evaluation incorporating a personal genome
Ashley, Euan A.; Butte, Atul J.; Wheeler, Matthew T.; Chen, Rong; Klein, Teri E.; Dewey, Frederick E.; Dudley, Joel T.; Ormond, Kelly E.; Pavlovic, Aleksandra; Hudgins, Louanne; Gong, Li; Hodges, Laura M.; Berlin, Dorit S.; Thorn, Caroline F.; Sangkuhl, Katrin; Hebert, Joan M.; Woon, Mark; Sagreiya, Hersh; Whaley, Ryan; Morgan, Alexander A.; Pushkarev, Dmitry; Neff, Norma F; Knowles, Joshua W.; Chou, Mike; Thakuria, Joseph; Rosenbaum, Abraham; Zaranek, Alexander Wait; Church, George; Greely, Henry T.; Quake, Stephen R.; Altman, Russ B.
2010-01-01
Background The cost of genomic information has fallen steeply but the path to clinical translation of risk estimates for common variants found in genome wide association studies remains unclear. Since the speed and cost of sequencing complete genomes is rapidly declining, more comprehensive means of analyzing these data in concert with rare variants for genetic risk assessment and individualisation of therapy are required. Here, we present the first integrated analysis of a complete human genome in a clinical context. Methods An individual with a family history of vascular disease and early sudden death was evaluated. Clinical assessment included risk prediction for coronary artery disease, screening for causes of sudden cardiac death, and genetic counselling. Genetic analysis included the development of novel methods for the integration of whole genome sequence data including 2.6 million single nucleotide polymorphisms and 752 copy number variations. The algorithm focused on predicting genetic risk of genes associated with known Mendelian disease, recognised drug responses, and pathogenicity for novel variants. In addition, since integration of risk ratios derived from case control studies is challenging, we estimated posterior probabilities from age and sex appropriate prior probability and likelihood ratios derived for each genotype. In addition, we developed a visualisation approach to account for gene-environment interactions and conditionally dependent risks. Findings We found increased genetic risk for myocardial infarction, type II diabetes and certain cancers. Rare variants in LPA are consistent with the family history of coronary artery disease. Pharmacogenomic analysis suggested a positive response to lipid lowering therapy, likely clopidogrel resistance, and a low initial dosing requirement for warfarin. Many variants of uncertain significance were reported. Interpretation Although challenges remain, our results suggest that whole genome sequencing can yield useful and clinically relevant information for individual patients, especially for those with a strong family history of significant disease. PMID:20435227
Chen, Jian; Yuan, Shenfang; Qiu, Lei; Wang, Hui; Yang, Weibo
2018-01-01
Accurate on-line prognosis of fatigue crack propagation is of great importance for prognostics and health management (PHM) technologies to ensure structural integrity, and it is a challenging task because of uncertainties arising from sources such as intrinsic material properties, loading, and environmental factors. The particle filter algorithm has proved to be a powerful tool for prognostic problems affected by uncertainties. However, most studies adopt the basic particle filter algorithm, which uses the transition probability density function as the importance density and may suffer from a serious particle degeneracy problem. This paper proposes an on-line fatigue crack propagation prognosis method based on a novel Gaussian weight-mixture proposal particle filter and active guided-wave-based on-line crack monitoring. Based on the on-line crack measurement, the mixture of the measurement probability density function and the transition probability density function is proposed as the importance density. In addition, an on-line dynamic update procedure is proposed to adjust the parameter of the state equation. The proposed method is verified on a fatigue test of attachment lugs, which are important joint components in aircraft structures. Copyright © 2017 Elsevier B.V. All rights reserved.
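The core idea, drawing particles from a mixture of the transition density and a measurement-centred density rather than from the transition density alone, can be sketched for a generic one-dimensional crack-length state. The model, noise levels and mixing weight below are assumptions; this shows only the mixture-proposal update, not the authors' full algorithm with its dynamic parameter update:

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_mixture_step(particles, weights, z, propagate, sigma_proc, sigma_meas, alpha=0.5):
    """One particle-filter update with a Gaussian-mixture importance density.

    With probability alpha a particle is drawn from the transition density
    (propagate + process noise); otherwise it is drawn around the new
    measurement z.  Weights are corrected by the usual importance ratio
    p(z|x) p(x|x_prev) / q(x), followed by systematic resampling.
    """
    n = len(particles)
    pred = propagate(particles)                       # deterministic part of the model
    from_meas = rng.random(n) > alpha                 # particles drawn around the measurement
    prop = np.where(from_meas,
                    rng.normal(z, sigma_meas, n),     # measurement-centred component
                    rng.normal(pred, sigma_proc, n))  # transition component

    def gauss(x, m, s):
        return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

    lik = gauss(z, prop, sigma_meas)                  # p(z | x)
    trans = gauss(prop, pred, sigma_proc)             # p(x | x_prev)
    q = alpha * trans + (1 - alpha) * gauss(prop, z, sigma_meas)
    w = weights * lik * trans / q
    w /= w.sum()
    idx = np.searchsorted(np.cumsum(w), (rng.random() + np.arange(n)) / n)
    idx = np.minimum(idx, n - 1)                      # guard against round-off at the top
    return prop[idx], np.full(n, 1.0 / n)

# Toy usage: crack length grows ~2% per cycle block, measured with noise.
particles = rng.normal(10.0, 0.5, 500)
weights = np.full(500, 1.0 / 500)
particles, weights = pf_mixture_step(particles, weights, z=10.6,
                                     propagate=lambda a: 1.02 * a,
                                     sigma_proc=0.2, sigma_meas=0.3)
print(particles.mean())
```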
Method for detecting and avoiding flight hazards
NASA Astrophysics Data System (ADS)
von Viebahn, Harro; Schiefele, Jens
1997-06-01
Today's aircraft equipment comprises several independent warning and hazard avoidance systems such as GPWS, TCAS, and weather radar. It is the pilot's task to monitor all these systems and take the appropriate action in case of an emerging hazardous situation. The developed method for detecting and avoiding flight hazards combines all potential external threats to an aircraft into a single system. It is based on a model of the airspace surrounding the aircraft consisting of discrete volume elements. For each volume element the threat probability is derived or computed from sensor output, databases, or information provided via datalink. The position of the own aircraft is predicted by utilizing a probability distribution. This approach ensures that all potential positions of the aircraft within the near future are considered while weighting the most likely flight path. A conflict detection algorithm initiates an alarm in case the threat probability exceeds a threshold. An escape manoeuvre is generated taking into account all potential hazards in the vicinity, not only the one which caused the alarm. The pilot receives visual information about the type, location, and severity of the threat. The algorithm was implemented and tested in a flight simulator environment. The current version comprises traffic, terrain and obstacle hazard avoidance functions. Its general formulation allows an easy integration of e.g. weather information or airspace restrictions.
Stochastic seismic inversion based on an improved local gradual deformation method
NASA Astrophysics Data System (ADS)
Yang, Xiuwei; Zhu, Peimin
2017-12-01
A new stochastic seismic inversion method based on the local gradual deformation method is proposed, which can incorporate seismic data, well data, geology and their spatial correlations into the inversion process. Geological information, such as sedimentary facies and structures, could provide significant a priori information to constrain an inversion and arrive at reasonable solutions. The local a priori conditional cumulative distributions at each node of the model to be inverted are first established by indicator cokriging, which integrates well data as hard data and geological information as soft data. Probability field simulation is used to simulate different realizations consistent with the spatial correlations and local conditional cumulative distributions. The corresponding probability field is generated by the fast Fourier transform moving average method. Then, optimization is performed to match the seismic data via an improved local gradual deformation method. Two improved strategies are proposed to make the method suitable for seismic inversion. The first strategy is to select and update local areas where the fit between synthetic and real seismic data is poor. The second is to divide each seismic trace into several parts and obtain the optimal parameters for each part individually. The applications to a synthetic example and a real case study demonstrate that our approach can effectively find fine-scale acoustic impedance models and provide uncertainty estimations.
Ensemble-Biased Metadynamics: A Molecular Simulation Method to Sample Experimental Distributions
Marinelli, Fabrizio; Faraldo-Gómez, José D.
2015-01-01
We introduce an enhanced-sampling method for molecular dynamics (MD) simulations referred to as ensemble-biased metadynamics (EBMetaD). The method biases a conventional MD simulation to sample a molecular ensemble that is consistent with one or more probability distributions known a priori, e.g., experimental intramolecular distance distributions obtained by double electron-electron resonance or other spectroscopic techniques. To this end, EBMetaD adds an adaptive biasing potential throughout the simulation that discourages sampling of configurations inconsistent with the target probability distributions. The bias introduced is the minimum necessary to fulfill the target distributions, i.e., EBMetaD satisfies the maximum-entropy principle. Unlike other methods, EBMetaD does not require multiple simulation replicas or the introduction of Lagrange multipliers, and is therefore computationally efficient and straightforward in practice. We demonstrate the performance and accuracy of the method for a model system as well as for spin-labeled T4 lysozyme in explicit water, and show how EBMetaD reproduces three double electron-electron resonance distance distributions concurrently within a few tens of nanoseconds of simulation time. EBMetaD is integrated in the open-source PLUMED plug-in (www.plumed-code.org), and can be therefore readily used with multiple MD engines. PMID:26083917
de Savigny, Don; Riley, Ian; Chandramohan, Daniel; Odhiambo, Frank; Nichols, Erin; Notzon, Sam; AbouZahr, Carla; Mitra, Raj; Cobos Muñoz, Daniel; Firth, Sonja; Maire, Nicolas; Sankoh, Osman; Bronson, Gay; Setel, Philip; Byass, Peter; Jakob, Robert; Boerma, Ties; Lopez, Alan D.
2017-01-01
ABSTRACT Background: Reliable and representative cause of death (COD) statistics are essential to inform public health policy, respond to emerging health needs, and document progress towards Sustainable Development Goals. However, less than one-third of deaths worldwide are assigned a cause. Civil registration and vital statistics (CRVS) systems in low- and lower-middle-income countries are failing to provide timely, complete and accurate vital statistics, and it will still be some time before they can provide physician-certified COD for every death. Proposals: Verbal autopsy (VA) is a method to ascertain the probable COD and, although imperfect, it is the best alternative in the absence of medical certification. There is extensive experience with VA in research settings but only a few examples of its use on a large scale. Data collection using electronic questionnaires on mobile devices and computer algorithms to analyse responses and estimate probable COD have increased the potential for VA to be routinely applied in CRVS systems. However, a number of CRVS and health system integration issues should be considered in planning, piloting and implementing a system-wide intervention such as VA. These include addressing the multiplicity of stakeholders and sub-systems involved, integration with existing CRVS work processes and information flows, linking VA results to civil registration records, information technology requirements and data quality assurance. Conclusions: Integrating VA within CRVS systems is not simply a technical undertaking. It will have profound system-wide effects that should be carefully considered when planning for an effective implementation. This paper identifies and discusses the major system-level issues and emerging practices, provides a planning checklist of system-level considerations and proposes an overview for how VA can be integrated into routine CRVS systems. PMID:28137194
Beeman, John W.; Hayes, Brian; Wright, Katrina
2012-01-01
A series of in-stream passive integrated transponder (PIT) detection antennas installed across the Klamath River in August 2010 were tested using tagged fish in the summer of 2011. Six pass-by antennas were constructed and anchored to the bottom of the Klamath River at a site between the Shasta and Scott Rivers. Two of the six antennas malfunctioned during the spring of 2011 and two pass-through antennas were installed near the opposite shoreline prior to system testing. The detection probability of the PIT tag detection system was evaluated using yearling coho salmon implanted with a PIT tag and a radio transmitter and then released into the Klamath River slightly downstream of Iron Gate Dam. Cormack-Jolly-Seber capture-recapture methods were used to estimate the detection probability of the PIT tag detection system based on detections of PIT tags there and detections of radio transmitters at radio-telemetry detection systems downstream. One of the 43 PIT- and radio-tagged fish released was detected by the PIT tag detection system and 23 were detected by the radio-telemetry detection systems. The estimated detection probability of the PIT tag detection system was 0.043 (standard error 0.042). Eight PIT-tagged fish from other studies also were detected. Detections at the PIT tag detection system were at the two pass-through antennas and the pass-by antenna adjacent to them. Above average river discharge likely was a factor in the low detection probability of the PIT tag detection system. High discharges dislodged two power cables leaving 12 meters of the river width unsampled for PIT detections and resulted in water depths greater than the read distance of the antennas, which allowed fish to pass over much of the system with little chance of being detected. Improvements in detection probability may be expected under river discharge conditions where water depth over the antennas is within maximum read distance of the antennas. Improvements also may be expected if additional arrays of antennas are used.
Liu, Xuewu; Huang, Yuxiao; Liang, Jiao; Zhang, Shuai; Li, Yinghui; Wang, Jun; Shen, Yan; Xu, Zhikai; Zhao, Ya
2014-11-30
The invasion of red blood cells (RBCs) by malarial parasites is an essential step in the life cycle of Plasmodium falciparum. Human-parasite surface protein interactions play a critical role in this process. Although several interactions between human and parasite proteins have been discovered, the mechanism related to invasion remains poorly understood because numerous human-parasite protein interactions have not yet been identified. High-throughput screening experiments are not feasible for malarial parasites due to difficulty in expressing the parasite proteins. Here, we performed computational prediction of the protein-protein interactions (PPIs) involved in malaria parasite invasion to elucidate the mechanism by which invasion occurs. In this study, an expectation maximization algorithm was used to estimate the probabilities of domain-domain interactions (DDIs). Estimates of DDI probabilities were then used to infer PPI probabilities. We found that prediction performance was better when information from all six species was used than when it was based on information from D. melanogaster alone. Prediction performance was assessed using protein interaction data from S. cerevisiae, indicating that the predicted results were reliable. We then used the estimates of DDI probabilities to infer interactions between 490 parasite and 3,787 human membrane proteins. A small-scale dataset was used to illustrate the usability of our method in predicting interactions between human and parasite proteins. The positive predictive value (PPV) was lower than that observed in S. cerevisiae. We integrated gene expression data to improve prediction accuracy and to reduce false positives. We identified 80 membrane proteins highly expressed in the schizont stage by the fast Fourier transform method. Approximately 221 erythrocyte membrane proteins were identified using published mass spectral datasets. A network consisting of 205 interactions was predicted. Results of network analysis suggest that SNARE proteins of parasites and APP of humans may function in the invasion of RBCs by parasites. We predicted a small-scale PPI network that may be involved in parasite invasion of RBCs by integrating DDI information and expression profiles. Experimental studies should be conducted to validate the predicted interactions. The predicted PPIs help elucidate the mechanism of parasite invasion and provide directions for future experimental investigations.
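The step from estimated domain-domain interaction (DDI) probabilities to a protein-protein interaction (PPI) probability is usually done by assuming the domain pairs act independently; a minimal sketch of that combination rule follows (the EM estimation of the DDI probabilities themselves is not reproduced, and the domain identifiers are hypothetical):

```python
from itertools import product

def ppi_probability(domains_a, domains_b, ddi_prob):
    """Probability that two proteins interact, given interaction probabilities
    of their domain pairs (assumed independent):
    P(PPI) = 1 - prod over domain pairs of (1 - P(DDI))."""
    p_no_interaction = 1.0
    for da, db in product(domains_a, domains_b):
        p_no_interaction *= 1.0 - ddi_prob.get((da, db), ddi_prob.get((db, da), 0.0))
    return 1.0 - p_no_interaction

# Hypothetical domain annotations and DDI probability estimates.
ddi = {("D1", "D3"): 0.3, ("D2", "D3"): 0.1}
print(ppi_probability(["D1", "D2"], ["D3"], ddi))  # 1 - 0.7*0.9 = 0.37
```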
Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu
2018-05-01
In this work, two algorithms for design space calculation (the overlapping method and the probability-based method) were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 lead to a satisfactory design space. In general, the overlapping method is easy to understand and can be realized by several kinds of commercial software without coding, but it does not indicate the reliability of the process evaluation indexes when operating in the design space. The probability-based method is computationally more complex, but it can ensure that the process indexes reach the standard with at least the acceptable probability. In addition, there is no abrupt change in probability at the edge of the design space obtained with the probability-based method. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
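A minimal sketch of the probability-based method as described: simulate experimental error around each candidate operating point, count how often the quality index reaches the standard, and keep the grid points whose probability exceeds the acceptable threshold. The process model, error level and grid below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def quality_index(time_h, solvent_ratio):
    # Hypothetical process model: yield as a function of two extraction parameters.
    return 60 + 15 * np.log(time_h) + 8 * solvent_ratio

def pass_probability(time_h, solvent_ratio, standard=80.0, n_sim=10_000, err_sd=3.0):
    # Simulate experimental error around the model prediction and
    # estimate the probability that the standard is reached.
    sims = quality_index(time_h, solvent_ratio) + rng.normal(0, err_sd, n_sim)
    return (sims >= standard).mean()

# Scan a grid of candidate operating points and keep those whose
# pass probability exceeds the acceptable threshold.
threshold = 0.9
design_space = [(t, r)
                for t in np.arange(0.5, 3.01, 0.1)
                for r in np.arange(0.5, 1.01, 0.02)
                if pass_probability(t, r) >= threshold]
print(len(design_space), "grid points inside the probability-based design space")
```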
Simulation modeling for the health care manager.
Kennedy, Michael H
2009-01-01
This article addresses the use of simulation software to solve administrative problems faced by health care managers. Spreadsheet add-ins, process simulation software, and discrete event simulation software are available at a range of costs and complexity. All use the Monte Carlo method to realistically integrate probability distributions into models of the health care environment. Problems typically addressed by health care simulation modeling are facility planning, resource allocation, staffing, patient flow and wait time, routing and transportation, supply chain management, and process improvement.
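The Monte Carlo idea behind such tools can be illustrated with a small wait-time model; the single-server clinic and the exponential distributions below are assumptions for illustration, not taken from any particular package:

```python
import random

random.seed(3)

def simulate_clinic(n_patients=1000, mean_interarrival=10.0, mean_service=8.0):
    """Single-server clinic with exponential interarrival and service times.
    Returns the average patient wait in minutes."""
    clock = 0.0
    server_free_at = 0.0
    total_wait = 0.0
    for _ in range(n_patients):
        clock += random.expovariate(1.0 / mean_interarrival)   # next arrival
        start = max(clock, server_free_at)                     # wait if server busy
        total_wait += start - clock
        server_free_at = start + random.expovariate(1.0 / mean_service)
    return total_wait / n_patients

# Average over replications to smooth the Monte Carlo noise.
print(sum(simulate_clinic() for _ in range(20)) / 20)
```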
Seal Integrity of Selected Fuzes as Measured by Three Leak Test Methods
1976-09-01
the worst fuze from the seal standpoint. The M503A-2 fuze body is made from a cast aluminum alloy. The casting process leaves voids which, after...leak resistance of the joint. WDU4A/A The design of this fuze depends upon ultrasonic welding to seal the lid to the case. The specified leak test merely...test is probably one of the better leakage tests from an effectiveness standpoint. However, from lot quantities of 690 and 480, reject rates of 20% were
Orange, V J; Robinson, D E
1999-02-01
The use of buyer/planners may be becoming more popular. If so, the reason is probably that many companies are integrating materiel management skill sets as a way of increasing the effectiveness of their supply chains. Harley-Davidson recently created a supply management function composed of buyer/planners. This article describes the method it used to achieve the transition, the training plan it implemented to support the process, and the role management played in achieving success.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morise, A.P.; Duval, R.D.
To determine whether recent refinements in Bayesian methods have led to improved diagnostic ability, 3 methods using Bayes' theorem and the independence assumption for estimating posttest probability after exercise stress testing were compared. Each method differed in the number of variables considered in the posttest probability estimate (method A = 5, method B = 6 and method C = 15). Method C is better known as CADENZA. There were 436 patients (250 men and 186 women) who underwent stress testing (135 had concurrent thallium scintigraphy) followed within 2 months by coronary arteriography. Coronary artery disease (CAD, at least 1 vessel with ≥50% diameter narrowing) was seen in 169 (38%). Mean pretest probabilities using each method were not different. However, the mean posttest probabilities for CADENZA were significantly greater than those for method A or B (p < 0.0001). Each decile of posttest probability was compared to the actual prevalence of CAD in that decile. At posttest probabilities ≤20%, there was underestimation of CAD. However, at posttest probabilities ≥60%, there was overestimation of CAD by all methods, especially CADENZA. Comparison of sensitivity and specificity at every fifth percentile of posttest probability revealed that CADENZA was significantly more sensitive and less specific than methods A and B. Therefore, at lower probability thresholds, CADENZA was a better screening method. However, methods A or B still had merit as a means to confirm higher probabilities generated by CADENZA (especially ≥60%).
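Methods of this kind apply Bayes' theorem sequentially under the independence assumption, converting the pretest probability to odds and multiplying by a likelihood ratio for each test variable; the sketch below uses placeholder likelihood ratios, not the values used by methods A, B or CADENZA:

```python
def posttest_probability(pretest, likelihood_ratios):
    """Bayes' theorem in odds form with conditionally independent test results."""
    odds = pretest / (1.0 - pretest)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Hypothetical example: 40% pretest probability of CAD, a positive exercise ECG
# (LR 3.5) and a positive thallium scan (LR 4.0).
print(posttest_probability(0.40, [3.5, 4.0]))   # ~0.90
```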
How Can Histograms Be Useful for Introducing Continuous Probability Distributions?
ERIC Educational Resources Information Center
Derouet, Charlotte; Parzysz, Bernard
2016-01-01
The teaching of probability has changed a great deal since the end of the last century. The development of technologies is indeed part of this evolution. In France, continuous probability distributions began to be studied in 2002 by scientific 12th graders, but this subject was marginal and appeared only as an application of integral calculus.…
Calculation of precise firing statistics in a neural network model
NASA Astrophysics Data System (ADS)
Cho, Myoung Won
2017-08-01
A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network, which works depending on exact spike timings. Basically, the prediction of firing statistics is a delicate many-body problem because the firing probability of a neuron at a given time is determined by the summation over all effects of past firing states. A neural network model based on the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bandaru, Varaprasad; Izaurralde, Roberto C.; Manowitz, David H.
2013-12-01
The use of marginal lands (MLs) for biofuel production has been contemplated as a promising solution for meeting biofuel demands. However, there have been concerns with the spatial location of MLs, their inherent biofuel potential, and the possible environmental consequences of cultivating energy crops. Here, we developed a new quantitative approach that integrates high-resolution land cover and land productivity maps and uses conditional probability density functions for analyzing land use patterns as a function of land productivity to classify the agricultural lands. We subsequently applied this method to determine available productive croplands (P-CLs) and non-crop marginal lands (NC-MLs) in a nine-county region of Southern Michigan. Furthermore, a Spatially Explicit Integrated Modeling Framework (SEIMF) using EPIC (Environmental Policy Integrated Climate) was used to understand the net energy (NE) and soil organic carbon (SOC) implications of cultivating different annual and perennial production systems.
NASA Astrophysics Data System (ADS)
Sabzikar, Farzad; Meerschaert, Mark M.; Chen, Jinghua
2015-07-01
Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered fractional difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.
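For concreteness, the "convolution with a power law multiplied by an exponential factor" can be written out. The expression below is the standard tempered fractional integral of order α with tempering parameter λ, stated here as context with notation chosen for this note rather than copied from the paper:

```latex
% Tempered fractional integral of order \alpha > 0 with tempering parameter \lambda \ge 0:
% the power-law kernel of the fractional integral is multiplied by e^{-\lambda(t-s)}.
\mathbb{I}^{\alpha,\lambda} f(t)
  = \frac{1}{\Gamma(\alpha)} \int_{-\infty}^{t} (t-s)^{\alpha-1}\, e^{-\lambda (t-s)}\, f(s)\, ds
```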
Energy efficient engine: Propulsion system-aircraft integration evaluation
NASA Technical Reports Server (NTRS)
Owens, R. E.
1979-01-01
Flight performance and operating economics of future commercial transports utilizing the energy efficient engine were assessed as well as the probability of meeting NASA's goals for TSFC, DOC, noise, and emissions. Results of the initial propulsion systems aircraft integration evaluation presented include estimates of engine performance, predictions of fuel burns, operating costs of the flight propulsion system installed in seven selected advanced study commercial transports, estimates of noise and emissions, considerations of thrust growth, and the achievement-probability analysis.
NASA Astrophysics Data System (ADS)
Giraud, O.; Thain, A.; Hannay, J. H.
2004-02-01
The shrunk loop theorem proved here is an integral identity which facilitates the calculation of the relative probability (or probability amplitude) of any given topology that a free, closed Brownian (or Feynman) path of a given 'duration' might have on the twice punctured plane (plane with two marked points). The result is expressed as a 'scattering' series of integrals of increasing dimensionality based on the maximally shrunk version of the path. Physically, this applies in different contexts: (i) the topology probability of a closed ideal polymer chain on a plane with two impassable points, (ii) the trace of the Schrödinger Green function, and thence spectral information, in the presence of two Aharonov-Bohm fluxes and (iii) the same with two branch points of a Riemann surface instead of fluxes. Our theorem starts from the Stovicek scattering expansion for the Green function in the presence of two Aharonov-Bohm flux lines, which itself is based on the famous Sommerfeld one puncture point solution of 1896 (the one puncture case has much easier topology, just one winding number). Stovicek's expansion itself can supply the results at the expense of choosing a base point on the loop and then integrating it away. The shrunk loop theorem eliminates this extra two-dimensional integration, distilling the topology from the geometry.
NASA Astrophysics Data System (ADS)
Jang, Cheng-Shin; Chen, Shih-Kai
2015-04-01
Groundwater nitrate-N contamination occurs frequently in agricultural regions, primarily resulting from surface agricultural activities. The focus of this study is to establish groundwater protection zones based on indicator-based geostatistical estimation and aquifer vulnerability of nitrate-N in the Choushui River alluvial fan in Taiwan. The groundwater protection zones are determined by univariate indicator kriging (IK) estimation, aquifer vulnerability assessment using logistic regression (LR), and integration of the IK estimation and aquifer vulnerability using simple IK with local prior means (sIKlpm). First, according to the statistical significance of source, transport, and attenuation factors dominating the occurrence of nitrate-N pollution, a LR model was adopted to evaluate aquifer vulnerability and to characterize occurrence probability of nitrate-N exceeding 0.5 mg/L. Moreover, the probabilities estimated using LR were regarded as local prior means. IK was then used to estimate the actual extent of nitrate-N pollution. The integration of the IK estimation and aquifer vulnerability was obtained using sIKlpm. Finally, groundwater protection zones were probabilistically determined using the three aforementioned methods, and the estimated accuracy of the delineated groundwater protection zones was gauged using a cross-validation procedure based on observed nitrate-N data. The results reveal that the integration of the IK estimation and aquifer vulnerability using sIKlpm is more robust than univariate IK estimation and aquifer vulnerability assessment using LR for establishing groundwater protection zones. Rigorous management practices for fertilizer use should be implemented in orchards situated in the determined groundwater protection zones.
NASA Technical Reports Server (NTRS)
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid, a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce the positive function to be maximized as the probability density to which the solution is attracted. Then larger values of this function have a higher probability of appearing. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and the TSP (Traveling Salesman Problem).
MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control
NASA Astrophysics Data System (ADS)
Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming
2017-09-01
The increasing complexity of industrial products and manufacturing processes has challenged conventional statistics-based quality management approaches under dynamic production conditions. A Bayesian network and big data analytics integrated approach for manufacturing process quality analysis and control is proposed. Based on the Hadoop distributed architecture and the MapReduce parallel computing model, the large volume and variety of quality-related data generated during the manufacturing process can be handled. Artificial intelligence algorithms, including Bayesian network learning, classification and reasoning, are embedded into the Reduce process. Relying on the ability of the Bayesian network to deal with dynamic and uncertain problems and on the parallel computing power of MapReduce, a Bayesian network of factors affecting quality is built based on the prior probability distribution and modified with the posterior probability distribution. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed accelerates almost in direct proportion to the increase in computing nodes. It is also shown that the proposed model is feasible for locating and reasoning about root causes, forecasting manufacturing outcomes, and intelligent decision-making for precision problem solving. The integration of big data analytics and the BN method offers a whole new perspective on manufacturing quality control.
Hidden Markov models for fault detection in dynamic systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J. (Inventor)
1995-01-01
The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(w_i | x), 1 ≤ i ≤ m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.
Hidden Markov models for fault detection in dynamic systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J. (Inventor)
1993-01-01
The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(w_i | x), 1 ≤ i ≤ m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.
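The temporal-integration step in these records, conditioning class probabilities on recent history with a hidden Markov model, corresponds to the standard HMM forward recursion. The sketch below feeds the instantaneous classifier posteriors in as observation scores, which is one common way to combine a discriminative classifier with an HMM; it is not necessarily the exact formulation of the patent, and the transition matrix and inputs are made up:

```python
import numpy as np

def forward_filter(transition, prior, scores_over_time):
    """HMM forward filtering using classifier outputs as observation scores.

    transition[i, j]      = P(class j at t | class i at t-1)
    prior[i]              = P(class i at t=0)
    scores_over_time[t, i] = instantaneous classifier estimate p(w_i | x_t),
    used here (up to a constant) as the observation weight for class i.
    Returns the filtered class probabilities at each time step.
    """
    T, m = scores_over_time.shape
    belief = prior.copy()
    filtered = np.zeros((T, m))
    for t in range(T):
        belief = transition.T @ belief       # predict one step ahead
        belief *= scores_over_time[t]        # weight by the current evidence
        belief /= belief.sum()               # renormalize
        filtered[t] = belief
    return filtered

# Two classes (nominal, fault): faults are persistent once they appear.
A = np.array([[0.99, 0.01],
              [0.02, 0.98]])
prior = np.array([0.95, 0.05])
obs = np.array([[0.9, 0.1], [0.8, 0.2], [0.4, 0.6], [0.3, 0.7], [0.2, 0.8]])
print(forward_filter(A, prior, obs)[-1])   # fault probability after 5 windows
```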
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korhonen, Marko; Lee, Eunghyun
2014-01-15
We treat the N-particle zero range process whose jumping rates satisfy a certain condition. This condition is required to use the Bethe ansatz, and the resulting model is the q-boson model by Sasamoto and Wadati [“Exact results for one-dimensional totally asymmetric diffusion models,” J. Phys. A 31, 6057–6071 (1998)] or the q-totally asymmetric zero range process (TAZRP) by Borodin and Corwin [“Macdonald processes,” Probab. Theory Relat. Fields (to be published)]. We find the explicit formula of the transition probability of the q-TAZRP via the Bethe ansatz. By using the transition probability we find the probability distribution of the left-most particle's position at time t. To find the probability for the left-most particle's position we find a new identity corresponding to the identity for the asymmetric simple exclusion process by Tracy and Widom [“Integral formulas for the asymmetric simple exclusion process,” Commun. Math. Phys. 279, 815–844 (2008)]. For the initial state in which all particles occupy a single site, the probability distribution of the left-most particle's position at time t is represented by the contour integral of a determinant.
Using phrases and document metadata to improve topic modeling of clinical reports.
Speier, William; Ong, Michael K; Arnold, Corey W
2016-06-01
Probabilistic topic models provide an unsupervised method for analyzing unstructured text and have the potential to be integrated into clinical automatic summarization systems. Clinical documents are accompanied by metadata from a patient's medical history and frequently contain multiword concepts that can be valuable for accurately interpreting the included text. While existing methods have attempted to address these problems individually, we present a unified model for free-text clinical documents that integrates contextual patient- and document-level data and discovers multi-word concepts. In the proposed model, phrases are represented by chained n-grams and a Dirichlet hyper-parameter is weighted by both document-level and patient-level context. This method and three other Latent Dirichlet allocation models were fit to a large collection of clinical reports. Examples of resulting topics demonstrate the results of the new model, and the quality of the representations is evaluated using empirical log likelihood. The proposed model was able to create informative prior probabilities based on patient and document information, and captured phrases that represented various clinical concepts. The representation using the proposed model had a significantly higher empirical log likelihood than the compared methods. Integrating document metadata and capturing phrases in clinical text greatly improves the topic representation of clinical documents. The resulting clinically informative topics may effectively serve as the basis for an automatic summarization system for clinical reports. Copyright © 2016 Elsevier Inc. All rights reserved.
Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Tilton, James C.
2012-01-01
A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machines classification is applied. Then, at each iteration two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. The important contribution of this work consists in estimating a DC between regions as a function of statistical, classification and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results when compared to previously proposed methods.
Two-Photon Transitions in Hydrogen-Like Atoms
NASA Astrophysics Data System (ADS)
Martinis, Mladen; Stojić, Marko
Different methods for evaluating two-photon transition amplitudes in hydrogen-like atoms are compared with the improved method of direct summation. Three separate contributions to the two-photon transition probabilities in hydrogen-like atoms are calculated. The first one coming from the summation over discrete intermediate states is performed up to nc(max) = 35. The second contribution from the integration over the continuum states is performed numerically. The third contribution coming from the summation from nc(max) to infinity is calculated in an approximate way using the mean level energy for this region. It is found that the choice of nc(max) controls the numerical error in the calculations and can be used to increase the accuracy of the results much more efficiently than in other methods.
Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
The fourth year of technical developments on the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) system for Probabilistic Structural Analysis Methods is summarized. The effort focused on the continued expansion of the Probabilistic Finite Element Method (PFEM) code, the implementation of the Probabilistic Boundary Element Method (PBEM), and the implementation of the Probabilistic Approximate Methods (PAppM) code. The principal focus for the PFEM code is the addition of a multilevel structural dynamics capability. The strategy includes probabilistic loads, treatment of material, geometry uncertainty, and full probabilistic variables. Enhancements are included for the Fast Probability Integration (FPI) algorithms and the addition of Monte Carlo simulation as an alternate. Work on the expert system and boundary element developments continues. The enhanced capability in the computer codes is validated by applications to a turbine blade and to an oxidizer duct.
Data Analysis Recipes: Using Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Hogg, David W.; Foreman-Mackey, Daniel
2018-05-01
Markov Chain Monte Carlo (MCMC) methods for sampling probability density functions (combined with abundant computational resources) have transformed the sciences, especially in performing probabilistic inferences, or fitting models to data. In this primarily pedagogical contribution, we give a brief overview of the most basic MCMC method and some practical advice for the use of MCMC in real inference problems. We give advice on method choice, tuning for performance, methods for initialization, tests of convergence, troubleshooting, and use of the chain output to produce or report parameter estimates with associated uncertainties. We argue that autocorrelation time is the most important test for convergence, as it directly connects to the uncertainty on the sampling estimate of any quantity of interest. We emphasize that sampling is a method for doing integrals; this guides our thinking about how MCMC output is best used.
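A minimal random-walk Metropolis sampler together with a crude integrated autocorrelation time estimate illustrates the advice above; this is a generic sketch, not code from the paper, and the Gaussian target stands in for a real posterior:

```python
import numpy as np

rng = np.random.default_rng(4)

def log_prob(theta):
    # Example target: standard 2-d Gaussian (stand-in for a real posterior).
    return -0.5 * np.sum(theta ** 2)

def metropolis(log_prob, theta0, n_steps=10_000, step=0.8):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, p(new)/p(old))."""
    theta = np.array(theta0, dtype=float)
    lp = log_prob(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        prop = theta + step * rng.normal(size=theta.size)
        lp_prop = log_prob(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

def autocorr_time(x, max_lag=1000):
    """Crude integrated autocorrelation time of a 1-d chain."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf /= acf[0]
    return 1.0 + 2.0 * acf[1:max_lag].sum()

chain = metropolis(log_prob, [3.0, -3.0])
tau = autocorr_time(chain[:, 0])
print("mean:", chain[:, 0].mean(),
      " tau:", tau,
      " effective samples:", chain.shape[0] / tau)
```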
NASA Astrophysics Data System (ADS)
Esposito, Carlo; Barra, Anna; Evans, Stephen G.; Scarascia Mugnozza, Gabriele; Delaney, Keith
2014-05-01
The study of landslide susceptibility by multivariate statistical methods is based on finding a quantitative relationship between controlling factors and landslide occurrence. Such studies have become popular in the last few decades thanks to the development of geographic information systems (GIS) software and the related improved data management. In this work we applied a statistical approach to an area of high landslide susceptibility, mainly due to its tropical climate and geological-geomorphological setting. The study area is located in the south-east region of Brazil, which has frequently been affected by flood and landslide hazards, especially because of heavy rainfall events during the summer season. We studied a disastrous event that occurred on January 11th and 12th of 2011, which involved Região Serrana (the mountainous region of Rio de Janeiro State) and caused more than 5000 landslides and at least 904 deaths. In order to produce susceptibility maps, we focused our attention on an area of 93.6 km2 that includes Nova Friburgo city. We utilized two different multivariate statistical methods: Logistic Regression (LR), already widely used in applied geosciences, and Random Forest (RF), which has only recently been applied to landslide susceptibility analysis. With reference to each mapping unit, the first method (LR) yields a probability of landslide occurrence, while the second one (RF) gives a prediction in terms of the percentage of area susceptible to slope failure. With this aim in mind, a landslide inventory map (related to the studied event) was drawn up through analysis of high-resolution GeoEye satellite images in a GIS environment. Data layers of 11 causative factors were created and processed in order to be used as continuous numerical or discrete categorical variables in the statistical analysis. In particular, the logistic regression method often has difficulty handling numerical continuous and discrete categorical variables together; therefore we tried different methods of processing the categorical variables until we obtained a statistically significant model. The outcomes of the two statistical methods (RF and LR) were tested with a spatial validation and gave us two susceptibility maps. The significance of the models is quantified in terms of the area under the ROC curve (AUC of 0.81 for the RF model and 0.72 for the LR model). A graphical comparison of the two methods shows a good correspondence between them. Further, we integrated the results into a single susceptibility map that retains both the probability of occurrence (from LR) and the percentage of area of landslide detachment (from RF). In view of a landslide susceptibility classification of the study area, the former is less accurate but gives easily classifiable results, while the latter is more accurate but its results can only be subjectively classified. The obtained "integrated" susceptibility map preserves, for each mapping unit, information about the probability of occurrence and the percentage of area that could fail.
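The two statistical models can be prototyped with standard library calls; the sketch below assumes a table of mapping units with causative-factor columns and a binary landslide label, uses scikit-learn as the toolkit (the authors do not state their software), and generates synthetic stand-in data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Hypothetical stand-in data: 5000 mapping units, 11 causative factors,
# binary label = landslide detached during the event (1) or not (0).
X = rng.normal(size=(5000, 11))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=1.0, size=5000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)               # probability of occurrence
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

p_lr = lr.predict_proba(X_te)[:, 1]
p_rf = rf.predict_proba(X_te)[:, 1]

print("AUC LR:", roc_auc_score(y_te, p_lr))
print("AUC RF:", roc_auc_score(y_te, p_rf))

# One simple way to combine the two outputs per mapping unit is their mean;
# the paper instead builds an integrated map that keeps both pieces of information.
p_combined = 0.5 * (p_lr + p_rf)
print("AUC combined:", roc_auc_score(y_te, p_combined))
```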
NASA Astrophysics Data System (ADS)
Schaefer, Andreas; Daniell, James; Khazai, Bijan; Wenzel, Friedemann
2016-04-01
The occurrence of a strong earthquake often triggers a long-lasting seismic sequence. Strong earthquakes are generally followed by many aftershocks or even strong subsequently triggered ruptures. The Nepal 2015 earthquake sequence is one of the most recent examples where aftershocks significantly contributed to human and economic losses. In addition, rumours about upcoming mega-earthquakes, false predictions and on-going cycles of aftershocks placed a psychological burden on society, which caused panic and additional casualties and prevented people from returning to normal life. This study presents the current phase of development of an operationalised aftershock intensity index, which will contribute to the mitigation of aftershock hazard. Various methods of earthquake forecasting and seismic risk assessment are utilised and integrated into an inherent aftershock intensity. A spatio-temporal analysis of past earthquake clustering provides first-hand data about the nature of aftershock occurrence. Epidemic methods can additionally provide time-dependent variation indices of the cascading effects of aftershock generation. The aftershock hazard is combined with the potential for significant losses through the vulnerability of structural systems and of the population. A historical database of aftershock socioeconomic effects from CATDAT has been used to calibrate the index against observed impacts of historical events and their aftershocks. In addition, analytical analyses of the cyclic behaviour and fragility functions of various building typologies are explored. The integration of many different probabilistic computation methods provides a combined index parameter which can then be transformed into an easy-to-read spatio-temporal intensity index. The index provides daily updated information about the inherent seismic risk of aftershocks through a scalable scheme for different aftershock intensities. These intensities define the spatial locations and the temporal period in which aftershocks are either probable or damaging. Instead of a highly scientific probability mesh-up, the aftershock intensity index is an easy-to-communicate system of intensity levels for rescue and relief organizations as well as governments and the general public. For this study, the metric is tested retrospectively on the earthquake sequences of Nepal 2015 and Darfield-Christchurch 2010/2011.
Xie, Yuanlong; Tang, Xiaoqi; Song, Bao; Zhou, Xiangdong; Guo, Yixuan
2018-04-01
In this paper, data-driven adaptive fractional order proportional integral (AFOPI) control is presented for a permanent magnet synchronous motor (PMSM) servo system perturbed by measurement noise and data dropouts. The proposed method directly exploits the closed-loop process data for the AFOPI controller design under unknown noise distribution and data-missing probability. Firstly, the proposed method formulates the AFOPI controller tuning problem as a parameter identification problem using the modified lp-norm virtual reference feedback tuning (VRFT). Then, iteratively reweighted least squares is integrated into the lp-norm VRFT to give a consistent compensation solution for the AFOPI controller. The measurement noise and data dropouts are estimated and eliminated by feedback compensation periodically, so that the AFOPI controller is updated online to accommodate the time-varying operating conditions. Moreover, the convergence and stability are guaranteed by mathematical analysis. Finally, the effectiveness of the proposed method is demonstrated in both simulations and experiments on a practical PMSM servo system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
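The iteratively reweighted least squares step mentioned above is a standard way to turn an lp-norm identification problem into a sequence of weighted least-squares problems. The sketch below is a minimal, generic IRLS loop for fitting parameters θ in y ≈ Xθ under an lp loss; the regressor matrix, targets, and p value are illustrative placeholders, not the paper's actual VRFT quantities.

```python
import numpy as np

def irls_lp(X, y, p=1.5, n_iter=50, eps=1e-8):
    """Minimize ||y - X @ theta||_p via iteratively reweighted least squares.

    Each iteration solves a weighted least-squares problem whose weights
    w_i = |r_i|**(p - 2) reshape the quadratic loss toward the l_p loss.
    """
    theta = np.linalg.lstsq(X, y, rcond=None)[0]       # ordinary LS start
    for _ in range(n_iter):
        r = y - X @ theta                              # residuals
        w = np.maximum(np.abs(r), eps) ** (p - 2.0)    # IRLS weights
        W = np.sqrt(w)[:, None]
        theta = np.linalg.lstsq(W * X, np.sqrt(w) * y, rcond=None)[0]
    return theta

# toy usage: identify two parameters from data with heavy-tailed noise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
theta_true = np.array([1.2, -0.7])
y = X @ theta_true + 0.05 * rng.standard_t(df=3, size=200)
print(irls_lp(X, y, p=1.5))
```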
Are there common mathematical structures in economics and physics?
NASA Astrophysics Data System (ADS)
Mimkes, Jürgen
2016-12-01
Economics is a field that looks into the future. We may know a few things ahead (ex ante), but most things we only know afterwards (ex post). How can we work in a field where much of the important information is missing? Mathematics gives two answers: 1. Probability theory leads to microeconomics: the Lagrange function optimizes utility under constraints of economic terms (like costs). The utility function is the entropy, the logarithm of probability. The optimal result is given by a probability distribution and an integrating factor. 2. Calculus leads to macroeconomics: in economics we have two production factors, capital and labour. This requires two-dimensional calculus with exact and inexact differentials, which represent the "ex ante" and "ex post" terms of economics. An integrating factor turns an inexact term (like income) into an exact term (entropy, the natural production function). The integrating factor is the same as in microeconomics and turns the inexact field of economics into an exact physical science.
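As a concrete illustration of the first point, maximizing entropy under a normalization constraint and a cost constraint by Lagrange multipliers yields an exponential probability distribution whose multiplier acts as the integrating factor; this is a textbook derivation given here for orientation, not a reproduction of the paper's own formulation.

```latex
\begin{align*}
&\text{maximize } S=-\sum_i p_i\ln p_i
\quad\text{subject to}\quad \sum_i p_i=1,\qquad \sum_i p_i c_i=C,\\
&L=-\sum_i p_i\ln p_i-\alpha\Big(\sum_i p_i-1\Big)-\lambda\Big(\sum_i p_i c_i-C\Big),\\
&\frac{\partial L}{\partial p_i}=-\ln p_i-1-\alpha-\lambda c_i=0
\;\Longrightarrow\;
p_i=\frac{e^{-\lambda c_i}}{Z},\qquad Z=\sum_i e^{-\lambda c_i},
\end{align*}
```

so the optimum is an exponential distribution over the cost values, and the multiplier λ plays the role of the integrating factor that converts the cost constraint into the optimal distribution.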
Switching probability of all-perpendicular spin valve nanopillars
NASA Astrophysics Data System (ADS)
Tzoufras, M.
2018-05-01
In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle, and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density in Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle, and it can be readily applied to fitting experimental data.
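A small numerical sketch of the expansion idea: an axially symmetric free-layer density ρ(x), with x = cos θ, is projected onto Legendre polynomials, and the switching probability is read off as the weight of the truncated series on the reversed hemisphere (x < 0). The density used here is an arbitrary illustration, not the solution of the paper's evolution equation.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Gauss-Legendre nodes/weights on [-1, 1] for x = cos(theta)
x, w = L.leggauss(64)

# illustrative axially symmetric density, peaked near x = +1, normalized to 1
rho = np.exp(4.0 * x)
rho /= np.sum(w * rho)

# Legendre coefficients  c_l = (2l + 1)/2 * integral of rho(x) P_l(x) dx
lmax = 12
c = np.array([(2 * l + 1) / 2.0 * np.sum(w * rho * L.legval(x, np.eye(lmax + 1)[l]))
              for l in range(lmax + 1)])

# switching probability = weight of the truncated series on x < 0,
# obtained by integrating the Legendre series exactly
C = L.legint(c)
p_switch = L.legval(0.0, C) - L.legval(-1.0, C)
print(f"P_switch ≈ {p_switch:.4f}")
```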
NASA Technical Reports Server (NTRS)
Kerstman, Eric; Saile, Lynn; Freire de Carvalho, Mary; Myers, Jerry; Walton, Marlei; Butler, Douglas; Lopez, Vilma
2011-01-01
Introduction: The Integrated Medical Model (IMM) is a decision support tool that is useful to space flight mission managers and medical system designers in assessing risks and optimizing medical systems. The IMM employs an evidence-based, probabilistic risk assessment (PRA) approach within the operational constraints of space flight. Methods: Stochastic computational methods are used to forecast probability distributions of medical events, crew health metrics, medical resource utilization, and probability estimates of medical evacuation and loss of crew life. The IMM can also optimize medical kits within the constraints of mass and volume for specified missions. The IMM was used to forecast medical evacuation and loss of crew life probabilities, as well as crew health metrics, for a near-Earth asteroid (NEA) mission. An optimized medical kit for this mission was proposed based on the IMM simulation. Discussion: The IMM can provide information to the space program regarding medical risks, including crew medical impairment, medical evacuation and loss of crew life. This information is valuable to mission managers and the space medicine community in assessing risk and developing mitigation strategies. Exploration missions such as NEA missions will have significant mass and volume constraints applied to the medical system. Appropriate allocation of medical resources will be critical to mission success. The IMM capability of optimizing medical systems based on specific crew and mission profiles will be advantageous to medical system designers. Conclusion: The IMM is a decision support tool that can provide estimates of the impact of medical events on human space flight missions, such as crew impairment, evacuation, and loss of crew life. It can be used to support the development of mitigation strategies and to propose optimized medical systems for specified space flight missions. Learning Objectives: The audience will learn how an evidence-based decision support tool can be used to help assess risk, develop mitigation strategies, and optimize medical systems for exploration space flight missions.
Spiesberger, John L
2013-02-01
The hypothesis tested is that internal gravity waves limit the coherent integration time of sound at 1346 km in the Pacific Ocean at 133 Hz and a pulse resolution of 0.06 s. Six months of continuous transmissions at about 18 min intervals are examined. The source and receiver are mounted on the bottom of the ocean with timing governed by atomic clocks, so the measured variability is due only to fluctuations in the ocean. A model for the propagation of sound through fluctuating internal waves is run without any tuning to the data. Excellent agreement is found between the modelled and measured probability distributions of coherent integration time up to five hours.
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
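A minimal sketch of the canonical form described here: given error-function values E_j for a sparse set of candidate model solutions, the maximum-entropy conditional distribution is p_j ∝ exp(−β E_j), with the sensitivity factor β fixed so that the expected error matches a specified value. The error values and target below are invented placeholders; a marginal for a single parameter would follow by summing p_j over the remaining parameters.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
# error-function values E_j for candidate parameter vectors (placeholder data)
E = rng.uniform(0.5, 3.0, size=1000)
E_target = 1.0          # specified expectation value of the error function

def expected_error(beta):
    w = np.exp(-beta * (E - E.min()))          # shift for numerical stability
    p = w / w.sum()
    return np.sum(p * E)

# solve <E>_beta = E_target for the sensitivity factor beta
beta = brentq(lambda b: expected_error(b) - E_target, 1e-6, 100.0)
p = np.exp(-beta * (E - E.min()))
p /= p.sum()
# marginals for one parameter: sum p over all samples sharing that value
print(beta, np.sum(p * E))
```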
LANDMARK-BASED SPEECH RECOGNITION: REPORT OF THE 2004 JOHNS HOPKINS SUMMER WORKSHOP.
Hasegawa-Johnson, Mark; Baker, James; Borys, Sarah; Chen, Ken; Coogan, Emily; Greenberg, Steven; Juneja, Amit; Kirchhoff, Katrin; Livescu, Karen; Mohan, Srividya; Muller, Jennifer; Sonmez, Kemal; Wang, Tianyu
2005-01-01
Three research prototype speech recognition systems are described, all of which use recently developed methods from artificial intelligence (specifically support vector machines, dynamic Bayesian networks, and maximum entropy classification) in order to implement, in the form of an automatic speech recognizer, current theories of human speech perception and phonology (specifically landmark-based speech perception, nonlinear phonology, and articulatory phonology). All three systems begin with a high-dimensional multiframe acoustic-to-distinctive feature transformation, implemented using support vector machines trained to detect and classify acoustic phonetic landmarks. Distinctive feature probabilities estimated by the support vector machines are then integrated using one of three pronunciation models: a dynamic programming algorithm that assumes canonical pronunciation of each word, a dynamic Bayesian network implementation of articulatory phonology, or a discriminative pronunciation model trained using the methods of maximum entropy classification. Log probability scores computed by these models are then combined, using log-linear combination, with other word scores available in the lattice output of a first-pass recognizer, and the resulting combination score is used to compute a second-pass speech recognition output.
Functional modules by relating protein interaction networks and gene expression.
Tornow, Sabine; Mewes, H W
2003-11-01
Genes and proteins are organized on the basis of their particular mutual relations or according to their interactions in cellular and genetic networks. These include metabolic or signaling pathways and protein interaction, regulatory or co-expression networks. Integrating the information from the different types of networks may lead to the notion of a functional network and functional modules. To find these modules, we propose a new technique which is based on collective, multi-body correlations in a genetic network. We calculated the correlation strength of a group of genes (e.g. in the co-expression network) which were identified as members of a module in a different network (e.g. in the protein interaction network) and estimated the probability that this correlation strength was found by chance. Groups of genes with a significant correlation strength in different networks have a high probability that they perform the same function. Here, we propose evaluating the multi-body correlations by applying the superparamagnetic approach. We compare our method to the presently applied mean Pearson correlations and show that our method is more sensitive in revealing functional relationships.
Computation of rare transitions in the barotropic quasi-geostrophic equations
NASA Astrophysics Data System (ADS)
Laurie, Jason; Bouchet, Freddy
2015-01-01
We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exist. By means of large deviations and instanton theory with the use of an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.
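The minimum action idea can be illustrated on a far simpler system than the quasi-geostrophic equations: for an overdamped double-well Langevin dynamics, a discretized Freidlin-Wentzell/Onsager-Machlup action is minimized over paths with fixed endpoints in the two attractors. This toy sketch only conveys the numerical-optimization structure; the potential, discretization, and optimizer are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# dx = -V'(x) dt + sqrt(2 eps) dW,  V(x) = (x^2 - 1)^2 / 4, attractors at x = ±1
def Vprime(x):
    return x * (x**2 - 1.0)

N, T = 100, 20.0
dt = T / N
x_a, x_b = -1.0, 1.0                              # the two coexisting attractors

def action(interior):
    # discretized Freidlin-Wentzell action  S = (1/4) * integral (xdot + V'(x))^2 dt
    x = np.concatenate(([x_a], interior, [x_b]))
    xdot = np.diff(x) / dt
    drift = Vprime(0.5 * (x[:-1] + x[1:]))        # midpoint evaluation
    return 0.25 * np.sum((xdot + drift) ** 2) * dt

x0 = np.linspace(x_a, x_b, N + 1)[1:-1]           # straight-line initial path
res = minimize(action, x0, method="L-BFGS-B")
# for large T the minimum approaches the barrier height V(0) - V(-1) = 0.25
print("minimum action ≈", res.fun)
```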
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A major research and technology program in Probabilistic Structural Analysis Methods (PSAM) is currently being sponsored by the NASA Lewis Research Center with Southwest Research Institute as the prime contractor. This program is motivated by the need to accurately predict structural response in an environment where the loadings, the material properties, and even the structure may be considered random. The heart of PSAM is a software package which combines advanced structural analysis codes with a fast probability integration (FPI) algorithm for the efficient calculation of stochastic structural response. The basic idea of PSAM is simple: make an approximate calculation of system response, including calculation of the associated probabilities, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The resulting deterministic solution should give a reasonable and realistic description of performance-limiting system responses, although some error will be inevitable. If the simple model has correctly captured the basic mechanics of the system, however, including the proper functional dependence of stress, frequency, etc. on design parameters, then the response sensitivities calculated may be of significantly higher accuracy.
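For orientation, one of the simplest approximations in the fast-probability-integration family is the mean-value first-order second-moment estimate sketched below: the limit-state (response) function is linearized at the means of the random inputs, giving approximate response moments, a reliability index, and a failure probability. The limit-state function and input statistics are illustrative placeholders, not PSAM models.

```python
import numpy as np
from scipy.stats import norm

def g(x):
    # illustrative limit state: strength minus stress, failure when g < 0
    strength, load, area = x
    return strength - load / area

mu = np.array([400.0, 1.0e4, 30.0])        # means of the random inputs
sigma = np.array([25.0, 2.0e3, 1.5])       # standard deviations (independent)

# first-order (mean-value) approximation: linearize g at the mean point
eps = 1e-6
grad = np.array([(g(mu + eps * np.eye(3)[i]) - g(mu)) / eps for i in range(3)])
mu_g = g(mu)
sigma_g = np.sqrt(np.sum((grad * sigma) ** 2))

beta = mu_g / sigma_g                      # reliability (safety) index
pf = norm.cdf(-beta)                       # first-order failure probability
print(f"beta = {beta:.2f}, Pf ≈ {pf:.2e}")
```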
Maximizing Information Diffusion in the Cyber-physical Integrated Network †
Lu, Hongliang; Lv, Shaohe; Jiao, Xianlong; Wang, Xiaodong; Liu, Juan
2015-01-01
Nowadays, our living environment is embedded with smart objects, such as smart sensors, smart watches and smart phones. Their abundant abilities of sensing, communication and computation integrate cyberspace and physical space, forming a cyber-physical integrated network. In order to maximize information diffusion in such a network, a group of objects is selected as forwarding points. To optimize the selection, a minimum connected dominating set (CDS) strategy is adopted. However, existing approaches focus on minimizing the size of the CDS, neglecting an important factor: the weight of links. In this paper, we propose a distributed algorithm for maximizing the probability of information diffusion (DMPID) in the cyber-physical integrated network. Unlike previous approaches that only consider the size of the CDS selection, DMPID also considers the information spread probability that depends on the weight of links. To weaken the effects of excessively weighted links, we also present an optimization strategy that can properly balance the two factors. The results of extensive simulations show that DMPID can nearly double the information diffusion probability, while keeping a reasonable selection size with low overhead in different distributed networks. PMID:26569254
On-the-fly Doppler broadening of unresolved resonance region cross sections
Walsh, Jonathan A.; Forget, Benoit; Smith, Kord S.; ...
2017-07-29
In this paper, two methods for computing temperature-dependent unresolved resonance region cross sections on-the-fly within continuous-energy Monte Carlo neutron transport simulations are presented. The first method calculates Doppler broadened cross sections directly from zero-temperature average resonance parameters. In a simulation, at each event that requires cross section values, a realization of unresolved resonance parameters is generated about the desired energy and temperature-dependent single-level Breit-Wigner resonance cross sections are computed directly via the analytical Ψ-χ Doppler integrals. The second method relies on the generation of equiprobable cross section magnitude bands on an energy-temperature mesh. Within a simulation, the bands are sampled and interpolated in energy and temperature to obtain cross section values on-the-fly. Both of the methods, as well as their underlying calculation procedures, are verified numerically in extensive code-to-code comparisons. Energy-dependent pointwise cross sections calculated with the newly-implemented procedures are shown to be in excellent agreement with those calculated by a widely-used nuclear data processing code. Relative differences at or below 0.1% are observed. Integral criticality benchmark results computed with the proposed methods are shown to reproduce those computed with a state-of-the-art processed nuclear data library very well. In simulations of fast spectrum systems which are highly sensitive to the representation of cross section data in the unresolved region, k-eigenvalue and neutron flux spectra differences of <10 pcm and <1.0% are observed, respectively. The direct method is demonstrated to be well-suited to the calculation of reference solutions — against which results obtained with a discretized representation may be assessed — as a result of its treatment of the energy, temperature, and cross section magnitude variables as continuous. Also, because there is no pre-processed data to store (only temperature-independent average resonance parameters), the direct method is very memory-efficient. Typically, only a few kB of memory are needed to store all required unresolved region data for a single nuclide. However, depending on the details of a particular simulation, performing URR cross section calculations on-the-fly can significantly increase simulation times. Alternatively, the method of interpolating equiprobable probability bands is demonstrated to produce results that are as accurate as the direct reference solutions, to within arbitrary precision, with high computational efficiency in terms of memory requirements and simulation time. Analyses of a fast spectrum system show that interpolation on a coarse energy-temperature mesh can be used to reproduce reference k-eigenvalue results obtained with cross sections calculated continuously in energy and directly at an exact temperature to within <10 pcm. Probability band data on a mesh encompassing the range of temperatures relevant to reactor analysis usually require around 100 kB of memory per nuclide. Finally, relative to the case in which probability table data generated at a single, desired temperature are used, minor increases in simulation times are observed when probability band interpolation is employed.
1981-06-01
[OCR fragments; recoverable content:] A reference model relates the detection probability PD and the associated false alarm probability PFA (in dB). The false alarm probability is obtained by integrating the probability density of the decision statistic under the noise-only hypothesis over the decision region and is expressed through the Gaussian Q-function (Eq. 26); similarly, the miss probability, equal to one minus the detection probability, is obtained by an analogous integration. An input signal-to-noise ratio is defined (Eq. 32), and the probability of false alarm is again written in terms of the Q-function (Eq. 33).
Parameter redundancy in discrete state-space and integrated models.
Cole, Diana J; McCrea, Rachel S
2016-09-01
Discrete state-space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy, and such a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state-space models. An exhaustive summary is a combination of parameters that fully specifies a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state-space models using discrete analogues of methods for continuous state-space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
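The general machinery alluded to here, differentiating an exhaustive summary symbolically with respect to the parameters and checking the rank of the resulting matrix, can be sketched as follows; the three-occasion capture-recapture summary used is a standard textbook example of redundancy, not one of the paper's state-space models.

```python
import sympy as sp

phi1, phi2, p2, p3 = sp.symbols('phi1 phi2 p2 p3', positive=True)
theta = [phi1, phi2, p2, p3]

# exhaustive summary: cell probabilities of a 3-occasion capture-recapture model
kappa = sp.Matrix([
    phi1 * p2,                       # seen again at occasion 2
    phi1 * (1 - p2) * phi2 * p3,     # missed at 2, seen at 3
    phi1 * p2 * phi2 * p3,           # seen at 2 and at 3
])

D = kappa.jacobian(theta)            # derivative (exhaustive-summary) matrix
rank = D.rank()
# rank 3 with 4 parameters: deficiency 1, so the model is parameter redundant
print(rank, len(theta) - rank)
```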
Evaluation of Mean and Variance Integrals without Integration
ERIC Educational Resources Information Center
Joarder, A. H.; Omar, M. H.
2007-01-01
The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since these usually involve integration by parts, many students do not feel comfortable with them. In this note, a technique is demonstrated for deriving the mean and variance through differential…
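For readers who want to see the flavour of such a technique, one standard way to avoid integration by parts is to differentiate a known normalization integral with respect to its parameter; the exponential distribution with rate λ below is a classic illustration and may differ from the exact examples used in the article.

```latex
\begin{align*}
\int_0^\infty e^{-\lambda x}\,dx &= \frac{1}{\lambda}
\;\;\Longrightarrow\;\;
\int_0^\infty x\,e^{-\lambda x}\,dx = \frac{1}{\lambda^{2}},\qquad
\int_0^\infty x^{2}e^{-\lambda x}\,dx = \frac{2}{\lambda^{3}},\\
E[X] &= \lambda\cdot\frac{1}{\lambda^{2}} = \frac{1}{\lambda},\qquad
\operatorname{Var}(X) = \lambda\cdot\frac{2}{\lambda^{3}} - \frac{1}{\lambda^{2}} = \frac{1}{\lambda^{2}},
\end{align*}
```

where the moment integrals follow from successive differentiation of the first identity with respect to λ, with no integration by parts.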
Melissa's Year in Sixth Grade: A Technology Integration Vignette.
ERIC Educational Resources Information Center
Hemmer, Jeanie
1998-01-01
In 1995, rather than require seventh-grade computer literacy classes, Texas allowed school districts to integrate technology skills into curricula. This article, the first of three, describes technology integration for sixth grade. Includes unit ideas on nations; the Holocaust; Olympic diving; Christmas; probability; organisms; Antarctica;…
Migration of comets to the terrestrial planets
NASA Astrophysics Data System (ADS)
Ipatov, Sergei I.; Mather, John C.
2007-05-01
The orbital evolution of 30,000 objects with initial orbits close to those of Jupiter-family comets (JFCs) and also of 15,000 dust particles was integrated [1-3]. For initial orbital elements close to those of Comets 2P, 10P, 44P, and 113P, a few objects got Earth-crossing orbits with semi-major axes a<2 AU and aphelion distances Q<4.2 AU, or even got inner-Earth (Q<0.983 AU), Aten, or typical asteroidal orbits, and moved in such orbits for more than 1 Myr (up to tens or even hundreds of Myrs). Most of the former trans-Neptunian objects that have typical near-Earth object (NEO) orbits moved in such orbits for Myrs, so during most of this time they were extinct comets. From a dynamical point of view, the fraction of extinct comets among NEOs can exceed several tens of percent, but, probably, many extinct comets disintegrated into mini-comets and dust during a smaller part of their dynamical lifetimes if these lifetimes were large. The probability of the collision of Comet 10P with the Earth during a dynamical lifetime of the comet was P[E]≈1.4×10⁻⁴, but 80% of this mean probability was due only to one object among 2600 considered objects with orbits close to that of Comet 10P. For runs for Comet 2P, P[E]≈(1-5)×10⁻⁴. For most other considered JFCs, 10⁻⁶ < P[E] < 10⁻⁵. For Comets 22P and 39P, P[E]≈(1-2)×10⁻⁶; and for Comets 9P, 28P and 44P, P[E]≈(2-5)×10⁻⁶. For all considered JFCs, P[E]>4×10⁻⁶. The Bulirsch-Stoer method of integration and a symplectic method gave similar results. In our runs the probability of a collision of one object with the Earth could be greater than the sum of probabilities for thousands of other objects. The ratios of probabilities of collisions of JFCs with Venus and Mars to the mass of a planet usually were not smaller than that for Earth. For dust particles started from comets and asteroids, P[E] was maximum for diameters d~100 μm. These maximum values of P[E] were usually (except for 2P) greater by at least an order of magnitude than the values for the parent comets. [1] Ipatov S.I. and Mather J.C. (2004) Annals of the New York Acad. of Sci., v. 1017, 46-65. [2] Ipatov S.I. et al. (2004) Annals of the New York Acad. of Sci., v. 1017, 66-80. [3] Ipatov S.I. and Mather J.C. (2006) Adv. in Space Res., v. 37, N 1, 126-137.
Anderson, James S M; Ayers, Paul W
2018-06-30
Generalizing our recent work on relativistic generalizations of the quantum theory of atoms in molecules, we present the general setting under which the principle of stationary action for a region leads to open quantum subsystems. The approach presented here is general and works for any Hamiltonian, and when a reasonable Lagrangian is selected, it often leads to the integral of the Laplacian of the electron density on the region vanishing as a necessary condition for the zero-flux surface. Alternatively, with this method, one can design a Lagrangian that leads to a surface of interest (though this Lagrangian may not be, and indeed probably will not be, "reasonable"). For any reasonable Lagrangian for the electronic wave function and any two-component method (related by integration by parts to the Hamiltonian) considered, the Bader definition of an atom is recaptured. © 2018 Wiley Periodicals, Inc.
Fast numerics for the spin orbit equation with realistic tidal dissipation and constant eccentricity
NASA Astrophysics Data System (ADS)
Bartuccelli, Michele; Deane, Jonathan; Gentile, Guido
2017-08-01
We present an algorithm for the rapid numerical integration of a time-periodic ODE with a small dissipation term that is C^1 in the velocity. Such an ODE arises as a model of spin-orbit coupling in a star/planet system, and the motivation for devising a fast algorithm for its solution comes from the desire to estimate the probability of capture into various solutions via Monte Carlo simulation: the integration times are very long, since we are interested in phenomena occurring on timescales of the order of 10^6-10^7 years. The proposed algorithm is based on the high-order Euler method which was described in Bartuccelli et al. (Celest Mech Dyn Astron 121(3):233-260, 2015), and it requires computer algebra to set up the code for its implementation. The payoff is an overall increase in speed by a factor of about 7.5 compared to standard numerical methods.
Yu, Han; Hageman Blair, Rachael
2016-01-01
Understanding community structure in networks has received considerable attention in recent years. Detecting and leveraging community structure holds promise for understanding and potentially intervening with the spread of influence. Network features of this type have important implications in a number of research areas, including marketing, social networks, and biology. However, an overwhelming majority of traditional approaches to community detection cannot readily incorporate information on node attributes. Integrating structural and attribute information is a major challenge. We propose a flexible iterative method, inverse regularized Markov Clustering (irMCL), for network clustering via the manipulation of the transition probability matrix (aka stochastic flow) corresponding to a graph. Similar to traditional Markov Clustering, irMCL iterates between "expand" and "inflate" operations, which aim to strengthen the intra-cluster flow while weakening the inter-cluster flow. Attribute information is directly incorporated into the iterative method through a sigmoid (logistic function) that naturally dampens attribute influence that is contradictory to the stochastic flow through the network. We demonstrate the advantages and flexibility of our approach using simulations and real data. We highlight an application that integrates a breast cancer gene expression data set with a functional network defined via KEGG pathways to reveal significant modules for survival.
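For orientation, the expand/inflate core that irMCL builds on is the classical Markov Clustering loop sketched below; the attribute-dependent sigmoid damping of the stochastic flow described in the abstract is the authors' extension and is not reproduced here.

```python
import numpy as np

def markov_clustering(A, expansion=2, inflation=2.0, n_iter=50, tol=1e-6):
    """Plain Markov Clustering on an adjacency matrix A (self-loops added)."""
    M = A.astype(float) + np.eye(A.shape[0])
    M /= M.sum(axis=0, keepdims=True)             # column-stochastic flow matrix
    for _ in range(n_iter):
        M_old = M.copy()
        M = np.linalg.matrix_power(M, expansion)  # expand: spread flow over paths
        M = M ** inflation                        # inflate: favour strong edges
        M /= M.sum(axis=0, keepdims=True)         # renormalize columns
        if np.abs(M - M_old).max() < tol:
            break
    # nodes attracted to the same row end up in the same cluster
    clusters = {}
    for node, attractor in enumerate(M.argmax(axis=0)):
        clusters.setdefault(attractor, []).append(node)
    return list(clusters.values())

# two 3-node cliques joined by a single weak edge
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(markov_clustering(A))   # typically two clusters: [0, 1, 2] and [3, 4, 5]
```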
NASA Astrophysics Data System (ADS)
Chen, Chaochao; Vachtsevanos, George; Orchard, Marcos E.
2012-04-01
Machine prognosis can be considered as the generation of long-term predictions that describe the evolution in time of a fault indicator, with the purpose of estimating the remaining useful life (RUL) of a failing component/subsystem so that timely maintenance can be performed to avoid catastrophic failures. This paper proposes an integrated RUL prediction method using adaptive neuro-fuzzy inference systems (ANFIS) and high-order particle filtering, which forecasts the time evolution of the fault indicator and estimates the probability density function (pdf) of RUL. The ANFIS is trained and integrated in a high-order particle filter as a model describing the fault progression. The high-order particle filter is used to estimate the current state and carry out p-step-ahead predictions via a set of particles. These predictions are used to estimate the RUL pdf. The performance of the proposed method is evaluated via the real-world data from a seeded fault test for a UH-60 helicopter planetary gear plate. The results demonstrate that it outperforms both the conventional ANFIS predictor and the particle-filter-based predictor where the fault growth model is a first-order model that is trained via the ANFIS.
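A stripped-down sketch of the prediction step only: particles representing the current fault-indicator state are propagated ahead with an assumed degradation model, and the RUL pdf is estimated from the times at which each particle first crosses a failure threshold. The linear-growth-plus-noise model and the threshold below are placeholders standing in for the trained ANFIS fault-progression model.

```python
import numpy as np

rng = np.random.default_rng(42)

n_particles, dt = 2000, 1.0
threshold = 10.0                                   # assumed failure threshold

# particles approximating the current fault-indicator state and growth rate
x = rng.normal(4.0, 0.3, n_particles)
rate = rng.normal(0.05, 0.01, n_particles)

# propagate each particle ahead until it crosses the threshold
rul = np.full(n_particles, np.nan)
for k in range(1, 500):                            # p-step-ahead horizon
    x = x + rate * dt + rng.normal(0.0, 0.05, n_particles)   # placeholder model
    newly_failed = np.isnan(rul) & (x >= threshold)
    rul[newly_failed] = k * dt

rul = rul[~np.isnan(rul)]                          # drop particles not yet failed
hist, edges = np.histogram(rul, bins=30, density=True)   # empirical RUL pdf
print(f"mean RUL ≈ {rul.mean():.1f}, 5th-95th pct: {np.percentile(rul, [5, 95])}")
```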
Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach
NASA Astrophysics Data System (ADS)
Alba, Vincenzo
By combining classical Monte Carlo and Bethe ansatz techniques we devise a numerical method to construct the Truncated Generalized Gibbs Ensemble (TGGE) for the spin-1/2 isotropic Heisenberg (XXX) chain. The key idea is to sample the Hilbert space of the model with the appropriate GGE probability measure. The method can be extended to other integrable systems, such as the Lieb-Liniger model. We benchmark the approach focusing on GGE expectation values of several local observables. As finite-size effects decay exponentially with system size, moderately large chains are sufficient to extract thermodynamic quantities. The Monte Carlo results are in agreement with both the Thermodynamic Bethe Ansatz (TBA) and the Quantum Transfer Matrix approach (QTM). Remarkably, it is possible to extract in a simple way the steady-state Bethe-Gaudin-Takahashi (BGT) roots distributions, which encode complete information about the GGE expectation values in the thermodynamic limit. Finally, it is straightforward to simulate extensions of the GGE, in which, besides the local integral of motion (local charges), one includes arbitrary functions of the BGT roots. As an example, we include in the GGE the first non-trivial quasi-local integral of motion.
Traits Without Borders: Integrating Functional Diversity Across Scales.
Carmona, Carlos P; de Bello, Francesco; Mason, Norman W H; Lepš, Jan
2016-05-01
Owing to the conceptual complexity of functional diversity (FD), a multitude of different methods are available for measuring it, with most being operational at only a small range of spatial scales. This causes uncertainty in ecological interpretations and limits the potential to generalize findings across studies or compare patterns across scales. We solve this problem by providing a unified framework expanding on and integrating existing approaches. The framework, based on trait probability density (TPD), is the first to fully implement the Hutchinsonian concept of the niche as a probabilistic hypervolume in estimating FD. This novel approach could revolutionize FD-based research by allowing quantification of the various FD components from organismal to macroecological scales, and allowing seamless transitions between scales. Copyright © 2016 Elsevier Ltd. All rights reserved.
Origin of the enhancement of tunneling probability in the nearly integrable system
NASA Astrophysics Data System (ADS)
Hanada, Yasutaka; Shudo, Akira; Ikeda, Kensuke S.
2015-04-01
The enhancement of tunneling probability in the nearly integrable system is closely examined, focusing on tunneling splittings plotted as a function of the inverse of Planck's constant. On the basis of the analysis using an absorber which efficiently suppresses the coupling creating spikes in the plot, we found that the splitting curve should be viewed as a staircase-shaped skeleton accompanied by spikes. We further introduce renormalized integrable Hamiltonians and explore the origin of such a staircase structure by closely investigating the nature of the eigenfunctions. It is found that the origin of the staircase structure can be traced back to the anomalous structure in the tunneling tail which manifests itself in the representation using renormalized action bases. This also explains why the staircase does not appear in the completely integrable system.
Davis, Teri D; Campbell, Duncan G; Bonner, Laura M; Bolkan, Cory R; Lanto, Andrew; Chaney, Edmund F; Waltz, Thomas; Zivin, Kara; Yano, Elizabeth M; Rubenstein, Lisa V
Depression is the most prevalent mental health condition in primary care (PC). Yet as the Veterans Health Administration increases resources for PC/mental health integration, including integrated care for women, there is little detailed information about depression care needs, preferences, comorbidity, and access patterns among women veterans with depression followed in PC. We sampled patients regularly engaged with Veterans Health Administration PC. We screened 10,929 (10,580 men, 349 women) with the two-item Patient Health Questionnaire. Of the 2,186 patients who screened positive (2,092 men, 94 women), 2,017 men and 93 women completed the full Patient Health Questionnaire-9 depression screening tool. Ultimately, 46 women and 715 men with probable major depression were enrolled and completed a baseline telephone survey. We computed descriptive statistics to provide information about the depression care experiences of women veterans and to examine potential gender differences at baseline and at the 7-month follow-up across study variables. Among those patients who agreed to screening, 20% of women (70 of 348) had probable major depression, versus only 12% of men (1,243 of 10,505). Of the women, 48% had concurrent probable posttraumatic stress disorder and 65% reported general anxiety. Women were more likely to receive adequate depression care than men (57% vs. 39%, respectively; p < .05); 46% of women and 39% of men reported depression symptom improvement at the 7-month follow-up. Women veterans were less likely than men to prefer care from a PC physician (p < .01) at baseline and were more likely than men to report mental health specialist care (p < .01) in the 6 months before baseline. PC/mental health integration planners should consider methods for accommodating women veterans' unique care needs and preferences for mental health care delivered by health care professionals other than physicians. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
Madaras, Eric I.; Bridal, S. L.; Holland, Mark R.; Handley, Scott M.; Miller, James G.
1993-01-01
The anisotropy of polar backscatter from graphite/epoxy composites is a potentially useful parameter for the characterization of porosity levels. However, the effects of release-cloth impressions on measured integrated polar backscatter levels are sufficient to inhibit the detection of porosity with this method. Recently, we developed a theoretical model to predict the frequency distribution of the backscattered power along the high-symmetry directions of release-cloth impressions. This study investigates experimentally the usefulness of limiting the bandwidth to regions not dominated by the scattering from the surface impressions, hence increasing the probability of detecting flaws such as porosity.
Ghosh, Sujit K
2010-01-01
Bayesian methods are rapidly becoming popular tools for making statistical inference in various fields of science including biology, engineering, finance, and genetics. One of the key aspects of Bayesian inferential method is its logical foundation that provides a coherent framework to utilize not only empirical but also scientific information available to a researcher. Prior knowledge arising from scientific background, expert judgment, or previously collected data is used to build a prior distribution which is then combined with current data via the likelihood function to characterize the current state of knowledge using the so-called posterior distribution. Bayesian methods allow the use of models of complex physical phenomena that were previously too difficult to estimate (e.g., using asymptotic approximations). Bayesian methods offer a means of more fully understanding issues that are central to many practical problems by allowing researchers to build integrated models based on hierarchical conditional distributions that can be estimated even with limited amounts of data. Furthermore, advances in numerical integration methods, particularly those based on Monte Carlo methods, have made it possible to compute the optimal Bayes estimators. However, there is a reasonably wide gap between the background of the empirically trained scientists and the full weight of Bayesian statistical inference. Hence, one of the goals of this chapter is to bridge the gap by offering elementary to advanced concepts that emphasize linkages between standard approaches and full probability modeling via Bayesian methods.
Dynamic Blowout Risk Analysis Using Loss Functions.
Abimbola, Majeed; Khan, Faisal
2018-02-01
Most risk analysis approaches are static, failing to capture evolving conditions. Blowout, the most feared accident during a drilling operation, is a complex and dynamic event. Traditional risk analysis methods are useful in the early design stage of a drilling operation but fall short during evolving operational decision making. A new dynamic risk analysis approach is presented to capture evolving situations through dynamic probability and consequence models. The dynamic consequence models, the focus of this study, are developed in terms of loss functions. These models are subsequently integrated with the probability to estimate operational risk, providing a real-time risk analysis. The real-time evolving situation is considered dependent on the changing bottom-hole pressure as drilling progresses. The application of the methodology and models is demonstrated with a case study of an offshore drilling operation evolving to a blowout. © 2017 Society for Risk Analysis.
Stationary swarming motion of active Brownian particles in parabolic external potential
NASA Astrophysics Data System (ADS)
Zhu, Wei Qiu; Deng, Mao Lin
2005-08-01
We investigate the stationary swarming motion of active Brownian particles in a parabolic external potential, coupled to their mass center. Using Monte Carlo simulation we first show that the mass center comes to rest after a sufficiently long period of time. Thus, all the particles of a swarm have identical stationary motion relative to the mass center. Then the stationary probability density, obtained in our previous paper by using the stochastic averaging method for quasi-integrable Hamiltonian systems for the motion in the 4-dimensional phase space of a single active Brownian particle with the Rayleigh friction model in a parabolic potential, is used to describe the relative stationary motion of each particle of the swarm and to obtain further probability densities, including that for the total energy of the swarm. The analytical results are confirmed by comparison with those from simulation and are also shown to be consistent with the existing deterministic exact steady-state solution.
Finding Kuiper Belt Objects Below the Detection Limit
NASA Astrophysics Data System (ADS)
Whidden, Peter; Kalmbach, Bryce; Bektesevic, Dino; Connolly, Andrew; Jones, Lynne; Smotherman, Hayden; Becker, Andrew
2018-01-01
We demonstrate a novel approach for uncovering the signatures of moving objects (e.g. Kuiper Belt Objects) below the detection thresholds of single astronomical images. To do so, we employ a matched filter moving at the specific rates of proposed orbits through a time-domain dataset. This is analogous to the better-known "shift-and-stack" method; however, it uses neither direct shifting nor stacking of the image pixels. Instead of resampling the raw pixels to create an image stack, we integrate the object detection probabilities across multiple single-epoch images to accrue support for a proposed orbit. The filtering kernel provides a measure of the probability that an object is present along a given orbit, and enables the user to make principled decisions about when the search has been successful and when it may be terminated. The results we present here utilize GPUs to speed up the search by two orders of magnitude over CPU implementations.
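The core of the approach can be sketched as accumulating per-epoch detection evidence along the pixel trajectory implied by a proposed orbit (here simplified to constant rates on the pixel grid); the likelihood images, trajectory parameters, and scoring below are illustrative stand-ins for the actual search pipeline.

```python
import numpy as np

def trajectory_score(likelihood_stack, times, x0, y0, vx, vy):
    """Sum per-epoch detection evidence (e.g. log-likelihood) along a
    constant-rate trajectory; higher totals support the proposed orbit."""
    score = 0.0
    n_epochs, ny, nx = likelihood_stack.shape
    for img, t in zip(likelihood_stack, times):
        xi = int(round(x0 + vx * t))
        yi = int(round(y0 + vy * t))
        if 0 <= xi < nx and 0 <= yi < ny:
            score += img[yi, xi]
        else:
            return -np.inf                      # trajectory leaves the field
    return score

# toy data: a sub-threshold source moving through noisy per-epoch maps
rng = np.random.default_rng(3)
times = np.arange(10, dtype=float)
stack = rng.normal(0.0, 1.0, size=(10, 64, 64))
for t in times.astype(int):
    stack[t, 30 + t, 20 + 2 * t] += 2.0         # faint in any single epoch

print(trajectory_score(stack, times, x0=20, y0=30, vx=2.0, vy=1.0))
```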
Oscillator strengths and branching fractions of 4d⁷5p-4d⁷5s Rh II transitions
NASA Astrophysics Data System (ADS)
Bouazza, Safa
2017-01-01
This work reports a semi-empirical determination of oscillator strengths, transition probabilities and branching fractions for Rh II 4d⁷5p-4d⁷5s transitions in a wide wavelength range. The angular coefficients of the transition matrix, beforehand obtained in pure SL coupling with the help of Racah algebra, are transformed into intermediate coupling using eigenvector amplitudes of the levels of these two configurations determined for this purpose. The transition integral was treated as a free parameter in the least-squares fit to experimental oscillator strength (gf) values found in the literature. The extracted value ⟨4d⁷5s|r¹|4d⁷5p⟩ = 2.7426 ± 0.0007 is slightly smaller than that computed by means of an ab-initio method. Following the oscillator strength evaluations, transition probabilities and branching fractions were deduced and compared to those obtained experimentally or through another approach, such as the pseudo-relativistic Hartree-Fock model including core-polarization effects.
Wei, Wei; Larrey-Lassalle, Pyrène; Faure, Thierry; Dumoulin, Nicolas; Roux, Philippe; Mathias, Jean-Denis
2016-03-01
Comparative decision-making processes are widely used to identify which option (system, product, service, etc.) has the smaller environmental footprint and to provide recommendations that help stakeholders take future decisions. However, uncertainty complicates the comparison and the decision making. Probability-based decision support in LCA is a way to help stakeholders in their decision-making process. It calculates the decision confidence probability, which expresses the probability that one option has a smaller environmental impact than another. Here we apply reliability theory to approximate the decision confidence probability. We compare the traditional Monte Carlo method with a reliability method called the FORM method. The Monte Carlo method needs a high computational time to calculate the decision confidence probability. The FORM method enables us to approximate the decision confidence probability with fewer simulations than the Monte Carlo method by approximating the response surface. Moreover, the FORM method calculates the associated importance factors, which correspond to a sensitivity analysis in relation to the probability. The importance factors allow stakeholders to determine which factors influence their decision. Our results clearly show that the reliability method provides additional useful information to stakeholders and reduces the computational time.
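For reference, the decision confidence probability itself can be estimated by plain Monte Carlo, which is the baseline the FORM approximation is compared against; the impact distributions below are invented placeholders, not LCA results.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# hypothetical uncertain environmental impacts of two options (e.g. kg CO2-eq)
impact_A = rng.lognormal(mean=np.log(100.0), sigma=0.20, size=n)
impact_B = rng.lognormal(mean=np.log(115.0), sigma=0.25, size=n)

# decision confidence probability: P(option A has the smaller impact)
p_conf = np.mean(impact_A < impact_B)
se = np.sqrt(p_conf * (1 - p_conf) / n)          # Monte Carlo standard error
print(f"P(A < B) ≈ {p_conf:.3f} ± {se:.3f}")
```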
Response statistics of rotating shaft with non-linear elastic restoring forces by path integration
NASA Astrophysics Data System (ADS)
Gaidai, Oleg; Naess, Arvid; Dimentberg, Michael
2017-07-01
Extreme statistics of random vibrations is studied for a Jeffcott rotor under uniaxial white noise excitation. The restoring force is modelled as elastic and non-linear; a comparison is made with a linearized restoring force to see the effect of the force non-linearity on the response statistics. While analytical solutions and stability conditions are available for the linear model, this is not generally the case for the non-linear system except in some special cases. The statistics of the non-linear case are studied by applying the path integration (PI) method, which is based on the Markov property of the coupled dynamic system. The Jeffcott rotor response statistics can be obtained by solving the Fokker-Planck (FP) equation of the 4D dynamic system. An efficient implementation of the PI algorithm is applied; namely, the fast Fourier transform (FFT) is used to simulate the dynamic system's additive noise. The latter significantly reduces the computational time compared to the classical PI. The excitation is modelled as Gaussian white noise; however, white noise with any kind of distribution can be implemented with the same PI technique. Multidirectional Markov noise can also be modelled with PI in the same way as unidirectional noise. PI is accelerated by using a Monte Carlo (MC) estimated joint probability density function (PDF) as the initial input. The symmetry of the dynamic system was utilized to afford higher mesh resolution. Both internal (rotating) and external damping are included in the mechanical model of the rotor. The main advantage of using PI rather than MC is that PI offers high accuracy in the probability distribution tail. The latter is of critical importance for, e.g., extreme value statistics, system reliability, and first passage probability.
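In one dimension, the PI step referred to here amounts to propagating the probability density through the short-time (Euler-Maruyama) Gaussian transition kernel on a grid. The sketch below does this for a bistable scalar SDE rather than the 4D Jeffcott rotor, and omits the FFT acceleration; the potential and noise level are arbitrary choices.

```python
import numpy as np

# dx = -V'(x) dt + sigma dW  with V(x) = (x^2 - 1)^2 / 4 (bistable toy system)
def drift(x):
    return -x * (x**2 - 1.0)

sigma, dt = 0.5, 0.01
x = np.linspace(-3.0, 3.0, 601)
dx = x[1] - x[0]

# short-time Gaussian transition kernel K[i, j] ≈ p(x_i, t + dt | x_j, t)
mean = x + drift(x) * dt                          # conditional mean given x_j
K = np.exp(-(x[:, None] - mean[None, :])**2 / (2.0 * sigma**2 * dt))
K /= K.sum(axis=0, keepdims=True) * dx            # normalize each column

# start from a narrow density in the left well and iterate the PI step
p = np.exp(-(x + 1.0)**2 / 0.01)
p /= p.sum() * dx
for _ in range(5000):
    p = (K @ p) * dx                              # p_{n+1}(x_i) = sum_j K_ij p_n(x_j) dx
print("probability mass in the right well:", p[x > 0].sum() * dx)
```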
Gao, Xiaolei; Wei, Jianjian; Lei, Hao; Xu, Pengcheng; Cowling, Benjamin J; Li, Yuguo
2016-01-01
Emerging diseases may spread rapidly through dense and large urban contact networks, especially if they are transmitted by the airborne route, before new vaccines can be made available. Airborne diseases may spread rapidly as people visit different indoor environments and are in frequent contact with others. We constructed a simple indoor contact model for an ideal city with 7 million people and 3 million indoor spaces, and estimated the probability and duration of contact between any two individuals during one day. To do this, we used data from actual censuses, social behavior surveys, building surveys, and ventilation measurements in Hong Kong to define eight population groups and seven indoor location groups. Our indoor contact model was integrated with an existing epidemiological Susceptible, Exposed, Infectious, and Recovered (SEIR) model to estimate disease spread, and with the Wells-Riley equation to calculate local infection risks, resulting in an integrated indoor transmission network model. This model was used to estimate the probability of an infected individual infecting others in the city and to study the disease transmission dynamics. We predicted the infection probability of each sub-population under different ventilation systems in each location type in the case of a hypothetical airborne disease outbreak, which is assumed to have the same natural history and infectiousness as smallpox. We compared the effectiveness of controlling ventilation in each location type with other intervention strategies. We conclude that increasing building ventilation rates using methods such as natural ventilation in classrooms, offices, and homes is a relatively effective strategy for airborne diseases in a large city.
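The local infection-risk building block mentioned here, the Wells-Riley relation, expresses the infection probability in a ventilated space in terms of the number of infectors I, their quantum generation rate q, the occupants' breathing rate p, the exposure time t, and the room ventilation rate Q. The parameter values below are illustrative only, not the study's inputs.

```python
import math

def wells_riley(I, q, p, t, Q):
    """Wells-Riley infection probability P = 1 - exp(-I*q*p*t/Q)."""
    return 1.0 - math.exp(-I * q * p * t / Q)

# illustrative values: 1 infector, 10 quanta/h, breathing 0.5 m^3/h,
# 2 h exposure, ventilation 200 m^3/h vs. 1000 m^3/h
print(wells_riley(1, 10.0, 0.5, 2.0, 200.0))    # ≈ 0.049
print(wells_riley(1, 10.0, 0.5, 2.0, 1000.0))   # ≈ 0.010
```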
NASA Astrophysics Data System (ADS)
Keiler, M.
2003-04-01
Reports on catastrophes with high damage caused by natural hazards seem to have increased in number recently. A new trend in dealing with these natural processes leads to the integration of risk into natural hazard evaluations and approaches of integral risk management. The risk resulting from natural hazards can be derived from the combination of parameters of the physical processes (intensity and recurrence probability) and of the damage potential (probability of presence and expected damage value). Natural hazard research focuses mainly on the examination, modelling and estimation of individual geomorphological processes as well as on future developments caused by climate change. Even though damage potential has been taken into account more frequently, quantitative statements are still missing. Due to the changes of the socio-economic structures in mountain regions (urban sprawl, population growth, increased mobility and tourism) such studies are mandatory. This study presents a conceptual method that records the damage potential (probability of physical presence, evaluation of buildings) and shows the development of the damage potential resulting from avalanches since 1950. The study area is the community of Galtür, Austria. 36 percent of the existing buildings are located in officially declared avalanche hazard zones. The majority of these buildings are either agricultural or accommodation facilities. Additionally, the effects of physical planning and/or technical measures on the spatial development of the potential damage are illustrated. The results serve to improve risk determination and point out an unnoticed increase of damage potential and risk in apparently safe settlement areas.
A comparison of the weights-of-evidence method and probabilistic neural networks
Singer, Donald A.; Kouda, Ryoichi
1999-01-01
The need to integrate large quantities of digital geoscience information to classify locations as mineral deposits or nondeposits has been met by the weights-of-evidence method in many situations. Widespread selection of this method may be more the result of its ease of use and interpretation than of comparisons with alternative methods. A comparison of the weights-of-evidence method to probabilistic neural networks is performed here with data from Chisel Lake–Anderson Lake, Manitoba, Canada. Each method is designed to estimate the probability of belonging to learned classes, where the estimated probabilities are used to classify the unknowns. Using these data, significantly lower classification error rates were observed for the neural network, not only when test and training data were the same (0.02 versus 23%), but also when validation data, not used in any training, were used to test the efficiency of classification (0.7 versus 17%). Despite these data containing too few deposits, these tests demonstrate the neural network's ability to make unbiased probability estimates and its lower error rates, whether measured by the number of polygons or by the area of land misclassified. For both methods, independent validation tests are required to ensure that estimates are representative of real-world results. Results from the weights-of-evidence method demonstrate a strong bias where most errors are barren areas misclassified as deposits. The weights-of-evidence method is based on Bayes' rule, which requires independent variables in order to make unbiased estimates. The chi-square test for independence indicates no significant correlations among the variables in the Chisel Lake–Anderson Lake data. However, the expected-number-of-deposits test clearly demonstrates that these data violate the independence assumption. Other, independent simulations with three variables show that using variables with correlations of 1.0 can double the expected number of deposits, as can correlations of -1.0. Studies done in the 1970s on methods that use Bayes' rule show that moderate correlations among attributes seriously affect estimates, and even small correlations lead to increases in misclassifications. Adverse effects have been observed with small to moderate correlations when only six to eight variables were used. Consistent evidence of upward-biased probability estimates from multivariate methods founded on Bayes' rule must be of considerable concern to institutions and governmental agencies where unbiased estimates are required. In addition to increasing the misclassification rate, biased probability estimates make classification into deposit and nondeposit classes an arbitrary subjective decision. The probabilistic neural network has no problem dealing with correlated variables—its performance depends strongly on having a thoroughly representative training set. Probabilistic neural networks or logistic regression should receive serious consideration where unbiased estimates are required. The weights-of-evidence method would serve to estimate thresholds between anomalies and background and for exploratory data analysis.
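For readers unfamiliar with the method under comparison, the weights-of-evidence update for a single binary evidence layer works as sketched below; the counts are invented, and the additivity of weights over layers is exactly the conditional-independence assumption whose violation is criticized above.

```python
import math

# invented training counts for one binary evidence layer B and deposits D
n_area     = 10_000   # total unit cells
n_deposit  = 40       # cells with known deposits
n_B        = 1_500    # cells where the evidence pattern is present
n_B_and_D  = 25       # deposit cells where the pattern is present

# weights of evidence for pattern present (W+) and absent (W-)
p_B_given_D    = n_B_and_D / n_deposit
p_B_given_notD = (n_B - n_B_and_D) / (n_area - n_deposit)
W_plus  = math.log(p_B_given_D / p_B_given_notD)
W_minus = math.log((1 - p_B_given_D) / (1 - p_B_given_notD))

# prior and posterior (pattern present), combined on the log-odds scale;
# with several layers, the W+ / W- of each layer would simply be added
prior_odds = n_deposit / (n_area - n_deposit)
post_odds  = math.exp(math.log(prior_odds) + W_plus)
print(f"W+ = {W_plus:.2f}, W- = {W_minus:.2f}, "
      f"posterior P(D|B) = {post_odds / (1 + post_odds):.4f}")
```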
How Historical Information Can Improve Extreme Value Analysis of Coastal Water Levels
NASA Astrophysics Data System (ADS)
Le Cozannet, G.; Bulteau, T.; Idier, D.; Lambert, J.; Garcin, M.
2016-12-01
The knowledge of extreme coastal water levels is useful for coastal flooding studies and the design of coastal defences. When deriving such extremes with standard analyses using tide gauge measurements, one often has to deal with a limited effective duration of observation, which can result in large statistical uncertainties. This is even truer when one faces outliers, those particularly extreme values distant from the others. In a recent work (Bulteau et al., 2015), we investigated how historical information on past events reported in archives can reduce statistical uncertainties and put such outlying observations into perspective. We adapted a Bayesian Markov Chain Monte Carlo method, initially developed in the hydrology field (Reis and Stedinger, 2005), to the specific case of coastal water levels. We applied this method to the site of La Rochelle (France), where the storm Xynthia in 2010 generated a water level considered so far as an outlier. Based on 30 years of tide gauge measurements and 8 historical events since 1890, the results showed a significant decrease in the statistical uncertainties on return levels when historical information is used. Also, Xynthia's water level no longer appeared as an outlier, and the annual exceedance probability of that level could reasonably have been predicted beforehand (the predictive probability for 2010 based on data until the end of 2009 is of the same order of magnitude as the standard estimate using data until the end of 2010). Such results illustrate the usefulness of historical information in extreme value analyses of coastal water levels, as well as the relevance of the proposed method for integrating heterogeneous data in such analyses.
Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ehlert, Kurt; Loewe, Laurence, E-mail: loewe@wisc.edu; Wisconsin Institute for Discovery, University of Wisconsin-Madison, Madison, Wisconsin 53715
2014-11-28
To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected "hubs" such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present "Lazy Updating," an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub, until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use-cases—with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
NASA Astrophysics Data System (ADS)
Gilani, Seyed-Omid; Sattarvand, Javad
2016-02-01
Meeting production targets in terms of ore quantity and quality is critical for a successful mining operation. In-situ grade uncertainty causes both deviations from production targets and general financial deficits. A new stochastic optimization algorithm based on the ant colony optimization (ACO) approach is developed herein to integrate geological uncertainty described through a series of simulated ore bodies. Two different strategies were developed, based on a single predefined probability value (Prob) and on multiple probability values (Prob_nt), respectively, in order to improve the initial solutions created by the deterministic ACO procedure. Application at the Sungun copper mine in northwestern Iran demonstrates the ability of the stochastic approach to create a single schedule, control the risk of deviating from production targets over time, and increase the project value. A comparison between the two strategies and the traditional approach illustrates that the multiple probability strategy is able to produce better schedules; however, the single predefined probability strategy is more practical in projects requiring a high degree of flexibility.
Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study
NASA Astrophysics Data System (ADS)
Troudi, Molka; Alimi, Adel M.; Saoudi, Samir
2008-12-01
The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon a functional that is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of this functional, the pdf is estimated only once, at the end of the iterations. These two kinds of algorithms are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterisation of the mean neutrality of Tunisian Berber populations.
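For readers who want to experiment with plug-in bandwidth selection, the sketch below is a rough numerical illustration, not the analytical approximation proposed in the paper: it iterates the standard AMISE relation h = [R(K)/(n mu2(K)^2 R(f''))]^(1/5), estimating the roughness R(f'') = integral of f''(x)^2 from a pilot kernel density estimate. The pilot rule, grid and sample are arbitrary choices.

```python
import numpy as np

# Rough illustration of plug-in bandwidth selection for a Gaussian KDE (not
# the paper's analytical approximation): the AMISE-optimal bandwidth needs
# R(f'') = integral of f''(x)^2, estimated here from a pilot KDE on a grid.

def gauss_kde(x_grid, data, h):
    u = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def plugin_bandwidth(data, n_iter=5):
    n = len(data)
    h = 1.06 * data.std(ddof=1) * n ** (-1 / 5)        # Silverman pilot bandwidth
    grid = np.linspace(data.min() - 3 * h, data.max() + 3 * h, 2000)
    for _ in range(n_iter):
        f = gauss_kde(grid, data, h)
        f2 = np.gradient(np.gradient(f, grid), grid)   # numerical f''
        R_f2 = np.trapz(f2**2, grid)                   # roughness of f''
        R_K, mu2 = 1 / (2 * np.sqrt(np.pi)), 1.0       # Gaussian-kernel constants
        h = (R_K / (n * mu2**2 * R_f2)) ** (1 / 5)
    return h

rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(-2, 0.5, 400), rng.normal(1, 1.0, 600)])
print("plug-in bandwidth:", plugin_bandwidth(sample))
```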
Religious Attendance and Loneliness in Later Life
Rote, Sunshine; Hill, Terrence D.; Ellison, Christopher G.
2013-01-01
Purpose of the Study: Studies show that loneliness is a major risk factor for health issues in later life. Although research suggests that religious involvement can protect against loneliness, explanations for this general pattern are underdeveloped and undertested. In this paper, we propose and test a theoretical model, which suggests that social integration and social support are key mechanisms that link religious attendance and loneliness. Design and Methods: To formally test our theoretical model, we use data from the National Social Life Health and Aging Project (2005/2006), a large national probability sample of older adults aged 57–85 years. Results: We find that religious attendance is associated with higher levels of social integration and social support and that social integration and social support are associated with lower levels of loneliness. A series of mediation tests confirm our theoretical model. Implications: Taken together, our results suggest that involvement in religious institutions may protect against loneliness in later life by integrating older adults into larger and more supportive social networks. Future research should test whether these processes are valid across theoretically relevant subgroups. PMID:22555887
Strawberries from integrated and organic production: mineral contents and antioxidant activity.
Kristl, Janja; Krajnc, Andreja Urbanek; Kramberger, Branko; Mlakar, Silva Grobelnik
2013-01-01
As the nutritional quality of food is becoming increasingly more important for consumers, significant attention needs to be devoted to agricultural practices and their influences on the nutrient contents in food. The presented investigation studied the mineral contents and antioxidant activities in the fruits of four organically-grown strawberry cultivars 'St. Pierre', 'Elsanta', 'Sugar Lia' and 'Thuchampion' when compared to those of integrated-grown plants. The strawberries were digested and analyzed for K, Mg, Fe, Zn, Cu, and Mn using an atomic absorption spectrometer, whilst P was analyzed using a vanadate-molybdate method. In addition, antioxidant activity was estimated by using the ABTS assay. The results showed that the mineral contents and antioxidant activities in strawberries depend on the cultivar and the production system. Organically-grown fruits showed higher antioxidant activities and Cu content than the integrated fruits, whilst the integrated fruits were superior in their contents of P, K, Mg, Fe and Mn. All the cultivars showed similar Zn content, probably reflecting the fact that the Zn content in strawberries does not depend on the cultivar.
2014-01-01
Background: Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented as a probability model can better reflect the authenticity and biological significance; therefore, it is more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible world model and has a relatively high computational complexity. Methods: In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, the partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm of probability isomorphism based on circuit simulation is proposed. The probability isomorphism combines the analysis of circuit topology structure with related physical properties of voltage in order to evaluate the probability isomorphism between probability subgraphs. The circuit-simulation-based probability isomorphism can avoid using the traditional possible world model. Finally, based on the algorithm of probability subgraph isomorphism, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Results: The experiment results on data sets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover the frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in the experiments published in other related papers. Conclusions: The algorithm of probability graph isomorphism evaluation based on the circuit simulation method excludes most subgraphs that are not probability isomorphic and reduces the search space of the probability isomorphic subgraphs using the mismatch values in the node voltage set. It is an innovative way to find frequent probability patterns, which can be efficiently applied to probability motif discovery problems in further studies. PMID:25350277
Exact transition probabilities in a 6-state Landau–Zener system with path interference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sinitsyn, Nikolai A.
2015-04-23
In this paper, we identify a nontrivial multistate Landau–Zener (LZ) model for which transition probabilities between any pair of diabatic states can be determined analytically and exactly. In the semiclassical picture, this model features the possibility of interference of different trajectories that connect the same initial and final states. Hence, transition probabilities are generally not described by the incoherent successive application of the LZ formula. Finally, we discuss reasons for integrability of this system and provide numerical tests of the suggested expression for the transition probability matrix.
Probability genotype imputation method and integrated weighted lasso for QTL identification.
Demetrashvili, Nino; Van den Heuvel, Edwin R; Wit, Ernst C
2013-12-30
Many QTL studies have two common features: (1) often there is missing marker information, (2) among the many markers involved in the biological process only a few are causal. In statistics, the second issue falls under the headings "sparsity" and "causal inference". The goal of this work is to develop a two-step statistical methodology for QTL mapping for markers with binary genotypes. The first step introduces a novel imputation method for missing genotypes. Outcomes of the proposed imputation method are probabilities which serve as weights in the second step, namely a weighted lasso. The sparse phenotype inference is employed to select a set of predictive markers for the trait of interest. Simulation studies validate the proposed methodology under a wide range of realistic settings. Furthermore, the methodology outperforms alternative imputation and variable selection methods in such studies. The methodology was applied to an Arabidopsis experiment, containing 69 markers for 165 recombinant inbred lines of an F8 generation. The results confirm previously identified regions; however, several new markers are also found. On the basis of the inferred ROC behavior these markers show good potential for being real, especially for the germination trait Gmax. Our imputation method shows higher accuracy in terms of sensitivity and specificity compared to the alternative imputation method. Also, the proposed weighted lasso outperforms commonly practiced multiple regression as well as the traditional lasso and adaptive lasso with three weighting schemes. This means that under realistic missing data settings this methodology can be used for QTL identification.
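A minimal sketch of the second step only is given below, assuming the first step has already turned the imputation probabilities into per-marker penalty weights. It uses the common rescaling trick (divide each column by its weight, fit an ordinary lasso, rescale the coefficients back), which is one standard way to obtain a weighted lasso; the simulated data, weights and tuning parameter are not from the Arabidopsis study.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Weighted lasso via column rescaling: the penalty becomes lambda * sum_j w_j |beta_j|.
# Data and weights are simulated stand-ins, not the study's imputation output.

def weighted_lasso(X, y, weights, alpha=0.1):
    X_scaled = X / weights[None, :]              # column-wise rescaling
    fit = Lasso(alpha=alpha, max_iter=50000).fit(X_scaled, y)
    return fit.coef_ / weights                   # back-transform coefficients

rng = np.random.default_rng(1)
n, p = 165, 69                                   # lines x markers, as in the study
X = rng.binomial(1, 0.5, size=(n, p)).astype(float)
beta_true = np.zeros(p); beta_true[[3, 17, 42]] = [1.5, -1.0, 2.0]
y = X @ beta_true + rng.normal(0, 0.5, n)
w = rng.uniform(0.5, 1.0, p)                     # stand-in for imputation-based weights
print("selected markers:", np.nonzero(weighted_lasso(X, y, w))[0])
```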
NASA Astrophysics Data System (ADS)
Edwards, James P.; Gerber, Urs; Schubert, Christian; Trejo, Maria Anabel; Weber, Axel
2018-04-01
We introduce two integral transforms of the quantum mechanical transition kernel that represent physical information about the path integral. These transforms can be interpreted as probability distributions on particle trajectories measuring respectively the relative contribution to the path integral from paths crossing a given spatial point (the hit function) and the likelihood of values of the line integral of the potential along a path in the ensemble (the path-averaged potential).
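As a toy illustration of the second transform, the sketch below samples Brownian-bridge paths between fixed endpoints and histograms the line integral of a potential along each path, giving an empirical stand-in for the distribution of the path-averaged potential. The harmonic potential, endpoints and units are arbitrary, and this is not the authors' construction of the transform.

```python
import numpy as np

# Toy Monte Carlo illustration (not the authors' construction): sample
# Brownian-bridge paths between fixed endpoints and histogram the time-averaged
# line integral of a potential V along each path.

def brownian_bridge(x0, x1, n_steps, T, rng):
    dt = T / n_steps
    W = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(dt), n_steps))])
    t = np.linspace(0, T, n_steps + 1)
    return x0 + W - (t / T) * (W[-1] - (x1 - x0))   # pin the endpoint at x1

V = lambda x: 0.5 * x**2                            # placeholder harmonic potential

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 200, 5000
averages = np.empty(n_paths)
for i in range(n_paths):
    path = brownian_bridge(0.0, 1.0, n_steps, T, rng)
    averages[i] = np.trapz(V(path), dx=T / n_steps) / T   # path-averaged potential
hist, edges = np.histogram(averages, bins=40, density=True)
print("mean path-averaged potential:", averages.mean())
```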
NASA Technical Reports Server (NTRS)
Sellers, Piers
2012-01-01
Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method - use of a grid-averaged soil wetness value - can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially-integrated wetness stress term for the whole grid area, which then permits calculation of grid area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
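The binning idea is easy to prototype. The sketch below uses a made-up piecewise-linear wetness-stress function (not the paper's formulation) to compare the binned-pdf integral with the conventional practice of evaluating the function at the grid-mean wetness.

```python
import numpy as np

# Numerical sketch of the binning idea: summarize sub-grid soil wetness by a
# binned pdf, integrate a non-linear stress function over the bins, and compare
# with plugging the grid-mean wetness into the same function.  The stress
# function and wetness distribution are placeholders.

def stress(w, w_wilt=0.1, w_crit=0.6):
    """Piecewise-linear stress factor in [0, 1]; strongly non-linear in w."""
    return np.clip((w - w_wilt) / (w_crit - w_wilt), 0.0, 1.0)

rng = np.random.default_rng(2)
wetness = rng.beta(2, 5, size=10000)            # local-scale (sub-grid) wetness

n_bins = 10
counts, edges = np.histogram(wetness, bins=n_bins, range=(0, 1))
p = counts / counts.sum()                       # binned pdf
centers = 0.5 * (edges[:-1] + edges[1:])

area_integrated = np.sum(p * stress(centers))   # integrate stress over the binned pdf
grid_mean_value = stress(wetness.mean())        # conventional single-value method

print(f"binned-pdf integral: {area_integrated:.3f}")
print(f"stress(grid mean):   {grid_mean_value:.3f}")
```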
A large class of solvable multistate Landau–Zener models and quantum integrability
NASA Astrophysics Data System (ADS)
Chernyak, Vladimir Y.; Sinitsyn, Nikolai A.; Sun, Chen
2018-06-01
The concept of quantum integrability has been introduced recently for quantum systems with explicitly time-dependent Hamiltonians (Sinitsyn et al 2018 Phys. Rev. Lett. 120 190402). Within the multistate Landau–Zener (MLZ) theory, however, there has been a successful alternative approach to identify and solve complex time-dependent models (Sinitsyn and Chernyak 2017 J. Phys. A: Math. Theor. 50 255203). Here we compare both methods by applying them to a new class of exactly solvable MLZ models. This class contains systems with an arbitrary number N of interacting states and shows quick growth with N of the number of exact adiabatic energy crossing points, which appear at different moments of time. At each N, transition probabilities in these systems can be found analytically and exactly, but the complexity and variety of solutions in this class also grow quickly with N. We illustrate how common features of solvable MLZ systems appear from quantum integrability and develop an approach to further classification of solvable MLZ problems.
NASA Astrophysics Data System (ADS)
Bevilacqua, Andrea; Neri, Augusto; Bisson, Marina; Esposti Ongaro, Tomaso; Flandoli, Franco; Isaia, Roberto; Rosi, Mauro; Vitale, Stefano
2017-09-01
This study presents a new method for producing long-term hazard maps for pyroclastic density currents (PDC) originating at Campi Flegrei caldera. The method is based on a doubly stochastic approach and is able to combine the uncertainty assessments on the spatial location of the volcanic vent, the size of the flow and the expected time of such an event. The results are obtained by using a Monte Carlo approach and adopting a simplified invasion model based on the box model integral approximation. Temporal assessments are modelled through a Cox-type process including self-excitement effects, based on the eruptive record of the last 15 kyr. Mean and percentile maps of PDC invasion probability are produced, exploring their sensitivity to some sources of uncertainty and to the effects of the dependence between PDC scales and the caldera sector where they originated. Conditional maps representative of PDC originating inside limited zones of the caldera, or of PDC with a limited range of scales are also produced. Finally, the effect of assuming different time windows for the hazard estimates is explored, also including the potential occurrence of a sequence of multiple events. Assuming that the last eruption of Monte Nuovo (A.D. 1538) marked the beginning of a new epoch of activity similar to the previous ones, results of the statistical analysis indicate a mean probability of PDC invasion above 5% in the next 50 years on almost the entire caldera (with a probability peak of 25% in the central part of the caldera). In contrast, probability values reduce by a factor of about 3 if the entire eruptive record is considered over the last 15 kyr, i.e. including both eruptive epochs and quiescent periods.
Popescu, Viorel D; Valpine, Perry; Sweitzer, Rick A
2014-04-01
Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case for integrating other sources of space-use data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Boyun; Duguid, Andrew; Nygaard, Ronar
The objective of this project is to develop a computerized statistical model with the Integrated Neural-Genetic Algorithm (INGA) for predicting the probability of long-term leak of wells in CO2 sequestration operations. This objective has been accomplished by conducting research in three phases: 1) data mining of CO2-exposed wells, 2) INGA computer model development, and 3) evaluation of the predictive performance of the computer model with data from field tests. Data mining was conducted for 510 wells in two CO2 sequestration projects in the Texas Gulf Coast region: the Hastings West field and the Oyster Bayou field in southern Texas. Missing wellbore integrity data were estimated using an analytical and Finite Element Method (FEM) model. The INGA was first tested for convergence and computing efficiency with the obtained high-dimensional data set. It was concluded that the INGA can handle the gathered data set with good accuracy and reasonable computing time after a reduction of dimension with a grouping mechanism. A computerized statistical model with the INGA was then developed based on data pre-processing and grouping. Comprehensive training and testing of the model were carried out to ensure that the model is accurate and efficient enough for predicting the probability of long-term leak of wells in CO2 sequestration operations. The Cranfield site in southern Mississippi was selected as the test site. Observation wells CFU31F2 and CFU31F3 were used for pressure testing, formation logging, and cement sampling. Tools run in the wells include the Isolation Scanner, Slim Cement Mapping Tool (SCMT), Cased Hole Formation Dynamics Tester (CHDT), and Mechanical Sidewall Coring Tool (MSCT). Analyses of the obtained data indicate no leak of CO2 across the cap zone, while it is evident that the well cement sheath was invaded by CO2 from the storage zone. This observation is consistent with the result predicted by the INGA model, which indicates the well has a CO2 leak-safe probability of 72%. This comparison implies that the developed INGA model is valid for future use in predicting well leak probability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donangelo, R.J.
An integral representation for the classical limit of the quantum mechanical S-matrix is developed and applied to heavy-ion Coulomb excitation and Coulomb-nuclear interference. The method combines the quantum principle of superposition with exact classical dynamics to describe the projectile-target system. A detailed consideration of the classical trajectories and of the dimensionless parameters that characterize the system is carried out. The results are compared, where possible, to exact quantum mechanical calculations and to conventional semiclassical calculations. It is found that in the case of backscattering the classical limit S-matrix method is able to almost exactly reproduce the quantum-mechanical S-matrix elements, and therefore the transition probabilities, even for projectiles as light as protons. The results also suggest that this approach should be a better approximation for heavy-ion multiple Coulomb excitation than earlier semiclassical methods, due to a more accurate description of the classical orbits in the electromagnetic field of the target nucleus. Calculations using this method indicate that the rotational excitation probabilities in the Coulomb-nuclear interference region should be very sensitive to the details of the potential at the surface of the nucleus, suggesting that heavy-ion rotational excitation could constitute a sensitive probe of the nuclear potential in this region. The application to other problems as well as the present limits of applicability of the formalism are also discussed.
A Bayesian pick-the-winner design in a randomized phase II clinical trial.
Chen, Dung-Tsa; Huang, Po-Yu; Lin, Hui-Yi; Chiappori, Alberto A; Gabrilovich, Dmitry I; Haura, Eric B; Antonia, Scott J; Gray, Jhanelle E
2017-10-24
Many phase II clinical trials evaluate unique experimental drugs/combinations through multi-arm design to expedite the screening process (early termination of ineffective drugs) and to identify the most effective drug (pick the winner) to warrant a phase III trial. Various statistical approaches have been developed for the pick-the-winner design but have been criticized for lack of objective comparison among the drug agents. We developed a Bayesian pick-the-winner design by integrating a Bayesian posterior probability with Simon two-stage design in a randomized two-arm clinical trial. The Bayesian posterior probability, as the rule to pick the winner, is defined as the probability that the response rate in one arm is higher than that in the other arm. The posterior probability aims to determine the winner when both arms pass the second stage of the Simon two-stage design. When both arms are competitive (i.e., both passing the second stage), the Bayesian posterior probability performs better to correctly identify the winner compared with the Fisher exact test in the simulation study. In comparison to a standard two-arm randomized design, the Bayesian pick-the-winner design has a higher power to determine a clear winner. In application to two studies, the approach is able to perform statistical comparison of two treatment arms and provides a winner probability (Bayesian posterior probability) to statistically justify the winning arm. We developed an integrated design that utilizes Bayesian posterior probability, Simon two-stage design, and randomization into a unique setting. It gives objective comparisons between the arms to determine the winner.
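The winner probability itself is straightforward to compute. The sketch below shows only that ingredient, assuming conjugate Beta priors and made-up response counts, and estimates P(p_A > p_B | data) by Monte Carlo; the full design additionally embeds the Simon two-stage stopping rules, which are not shown.

```python
import numpy as np

# Posterior "winner probability" with conjugate Beta priors, estimated by
# Monte Carlo.  Response counts and the flat prior are illustrative only.

def prob_winner(resp_a, n_a, resp_b, n_b, a0=1.0, b0=1.0, n_draws=200000, seed=0):
    rng = np.random.default_rng(seed)
    p_a = rng.beta(a0 + resp_a, b0 + n_a - resp_a, n_draws)   # posterior of arm A
    p_b = rng.beta(a0 + resp_b, b0 + n_b - resp_b, n_draws)   # posterior of arm B
    return (p_a > p_b).mean()                                  # P(p_A > p_B | data)

# Example: 14/40 responses in arm A versus 9/41 in arm B.
print(f"P(arm A beats arm B) = {prob_winner(14, 40, 9, 41):.3f}")
```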
Measures of School Integration: Comparing Coleman's Index to Measures of Species Diversity.
ERIC Educational Resources Information Center
Mercil, Steven Bray; Williams, John Delane
This study used species diversity indices developed in ecology as a measure of socioethnic diversity, and compared them to Coleman's Index of Segregation. The twelve indices were Simpson's Concentration Index ("ell"), Simpson's Index of Diversity, Hurlbert's Probability of Interspecific Encounter (PIE), Simpson's Probability of…
REPRESENTATIONS OF WEAK AND STRONG INTEGRALS IN BANACH SPACES
Brooks, James K.
1969-01-01
We establish a representation of the Gelfand-Pettis (weak) integral in terms of unconditionally convergent series. Moreover, absolute convergence of the series is a necessary and sufficient condition in order that the weak integral coincide with the Bochner integral. Two applications of the representation are given. The first is a simplified proof of the countable additivity and absolute continuity of the indefinite weak integral. The second application is to probability theory; we characterize the conditional expectation of a weakly integrable function. PMID:16591755
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chou, Chia-Chun, E-mail: ccchou@mx.nthu.edu.tw
The Schrödinger–Langevin equation with linear dissipation is integrated by propagating an ensemble of Bohmian trajectories for the ground state of quantum systems. Substituting the wave function expressed in terms of the complex action into the Schrödinger–Langevin equation yields the complex quantum Hamilton–Jacobi equation with linear dissipation. We transform this equation into the arbitrary Lagrangian–Eulerian version with the grid velocity matching the flow velocity of the probability fluid. The resulting equation is simultaneously integrated with the trajectory guidance equation. Then, the computational method is applied to the harmonic oscillator, the double well potential, and the ground vibrational state of methyl iodide. The excellent agreement between the computational and the exact results for the ground state energies and wave functions shows that this study provides a synthetic trajectory approach to the ground state of quantum systems.
Boundary layer integral matrix procedure: Verification of models
NASA Technical Reports Server (NTRS)
Bonnett, W. S.; Evans, R. M.
1977-01-01
The three turbulence models currently available in the JANNAF version of the Aerotherm Boundary Layer Integral Matrix Procedure (BLIMP-J) code were studied. The BLIMP-J program is the standard prediction method for boundary layer effects in liquid rocket engine thrust chambers. Experimental data from flow fields with large edge-to-wall temperature ratios are compared to the predictions of the three turbulence models contained in BLIMP-J. In addition, test conditions necessary to generate additional data on a flat plate or in a nozzle are given. It is concluded that the Cebeci-Smith turbulence model should be the recommended model for the prediction of boundary layer effects in liquid rocket engines. In addition, the effects of homogeneous chemical reaction kinetics were examined for a hydrogen/oxygen system. Results show that for most flows, kinetics are probably only significant for stoichiometric mixture ratios.
Dangerous "spin": the probability myth of evidence-based prescribing - a Merleau-Pontyian approach.
Morstyn, Ron
2011-08-01
The aim of this study was to examine logical positivist statistical probability statements used to support and justify "evidence-based" prescribing rules in psychiatry when viewed from the major philosophical theories of probability, and to propose "phenomenological probability" based on Maurice Merleau-Ponty's philosophy of "phenomenological positivism" as a better clinical and ethical basis for psychiatric prescribing. The logical positivist statistical probability statements which are currently used to support "evidence-based" prescribing rules in psychiatry have little clinical or ethical justification when subjected to critical analysis from any of the major theories of probability and represent dangerous "spin" because they necessarily exclude the individual, intersubjective and ambiguous meaning of mental illness. A concept of "phenomenological probability" founded on Merleau-Ponty's philosophy of "phenomenological positivism" overcomes the clinically destructive "objectivist" and "subjectivist" consequences of logical positivist statistical probability and allows psychopharmacological treatments to be appropriately integrated into psychiatric treatment.
Probabilistic Integrated Assessment of ``Dangerous'' Climate Change
NASA Astrophysics Data System (ADS)
Mastrandrea, Michael D.; Schneider, Stephen H.
2004-04-01
Climate policy decisions are being made despite layers of uncertainty. Such decisions directly influence the potential for ``dangerous anthropogenic interference with the climate system.'' We mapped a metric for this concept, based on Intergovernmental Panel on Climate Change assessment of climate impacts, onto probability distributions of future climate change produced from uncertainty in key parameters of the coupled social-natural system: climate sensitivity, climate damages, and discount rate. Analyses with a simple integrated assessment model found that, under midrange assumptions, endogenously calculated, optimal climate policy controls can reduce the probability of dangerous anthropogenic interference from ~45% under minimal controls to near zero.
Simulation of inverse Compton scattering and its implications on the scattered linewidth
NASA Astrophysics Data System (ADS)
Ranjan, N.; Terzić, B.; Krafft, G. A.; Petrillo, V.; Drebot, I.; Serafini, L.
2018-03-01
Rising interest in inverse Compton sources has increased the need for efficient models that properly quantify the behavior of scattered radiation given a set of interaction parameters. The current state-of-the-art simulations rely on Monte Carlo-based methods, which, while properly expressing scattering behavior in high-probability regions of the produced spectra, may not correctly simulate such behavior in low-probability regions (e.g. tails of spectra). Moreover, sampling may take an inordinate amount of time for the desired accuracy to be achieved. In this paper, we present an analytic derivation of the expression describing the scattered radiation linewidth and propose a model to describe the effects of horizontal and vertical emittance on the properties of the scattered radiation. We also present an improved version of the code initially reported in Krafft et al. [Phys. Rev. Accel. Beams 19, 121302 (2016), 10.1103/PhysRevAccelBeams.19.121302], that can perform the same simulations as those present in cain and give accurate results in low-probability regions by integrating over the emissions of the electrons. Finally, we use these codes to carry out simulations that closely verify the behavior predicted by the analytically derived scaling law.
Simulation of inverse Compton scattering and its implications on the scattered linewidth
Ranjan, N.; Terzić, B.; Krafft, G. A.; ...
2018-03-06
Rising interest in inverse Compton sources has increased the need for efficient models that properly quantify the behavior of scattered radiation given a set of interaction parameters. The current state-of-the-art simulations rely on Monte Carlo-based methods, which, while properly expressing scattering behavior in high-probability regions of the produced spectra, may not correctly simulate such behavior in low-probability regions (e.g. tails of spectra). Moreover, sampling may take an inordinate amount of time for the desired accuracy to be achieved. Here in this article, we present an analytic derivation of the expression describing the scattered radiation linewidth and propose a model to describe the effects of horizontal and vertical emittance on the properties of the scattered radiation. We also present an improved version of the code initially reported in Krafft et al. [Phys. Rev. Accel. Beams 19, 121302 (2016)], that can perform the same simulations as those present in cain and give accurate results in low-probability regions by integrating over the emissions of the electrons. Finally, we use these codes to carry out simulations that closely verify the behavior predicted by the analytically derived scaling law.
VizieR Online Data Catalog: Proper motions in M 11 (Su+ 1998)
NASA Astrophysics Data System (ADS)
Su, C.-G.; Zhao, J.-L.; Tian, K.-P.
1997-07-01
Relative proper motions of 872 stars in the open cluster M 11 region are reduced using 10 plate pairs taken over time baselines of 16~70 years with the double astrograph telescope of Shanghai Observatory. The scale is 30"/mm. The plates were measured with the PDS machines in the Purple Mountain Observatory in Nanjing and the Institute of Technology and Communication in Luoyang, China. The average proper motion accuracy is about 1.1 mas/yr, with 85% of the data better than 1 mas/yr. Membership probabilities of 785 stars within 25' centred on M 11 are determined based on their proper motions. The method used is suggested by Su et al. (1995AcApS..15..217S) with some improvements of Zhao & He (1990A&A...237...54Z), in which the space distribution and magnitude dependencies for cluster stars are taken into account. The results are quite good. The sum of the integrated membership probabilities for all these stars is 547, and the number of stars with probabilities higher than 0.7 is 541. It can be found after the membership determination that there exists mass segregation in M 11. Some comparisons and discussion are also given. (1 data file).
van Wieringen, Wessel N; van de Wiel, Mark A
2011-05-01
Realizing that genes often operate together, studies into the molecular biology of cancer shift focus from individual genes to pathways. In order to understand the regulatory mechanisms of a pathway, one must study its genes at all molecular levels. To facilitate such study at the genomic level, we developed exploratory factor analysis for the characterization of the variability of a pathway's copy number data. A latent variable model that describes the call probability data of a pathway is introduced and fitted with an EM algorithm. In two breast cancer data sets, it is shown that the first two latent variables of GO nodes, which inherit a clear interpretation from the call probabilities, are often related to the proportion of aberrations and a contrast of the probabilities of a loss and of a gain. Linking the latent variables to the node's gene expression data suggests that they capture the "global" effect of genomic aberrations on these transcript levels. In all, the proposed method provides a possibly insightful characterization of pathway copy number data, which may be fruitfully exploited to study the interaction between the pathway's DNA copy number aberrations and data from other molecular levels like gene expression.
Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.
2013-01-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
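A sketch of the "ecological distance" ingredient is given below: least-cost distances from a trap cell to every cell of a resistance surface are computed with Dijkstra's algorithm on a 4-connected grid graph and plugged into a half-normal encounter model p(d) = p0 exp(-d^2 / (2 sigma^2)). The raster, resistance values and parameters are invented, and this is not the authors' likelihood machinery.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

# Least-cost ("ecological") distance on a toy resistance raster, then a
# half-normal encounter model.  All inputs are made up for illustration.

def least_cost_distances(resistance, source_cell):
    nr, nc = resistance.shape
    idx = lambda r, c: r * nc + c
    graph = lil_matrix((nr * nc, nr * nc))
    for r in range(nr):
        for c in range(nc):
            for dr, dc in ((0, 1), (1, 0)):             # 4-connectivity
                rr, cc = r + dr, c + dc
                if rr < nr and cc < nc:
                    cost = 0.5 * (resistance[r, c] + resistance[rr, cc])
                    graph[idx(r, c), idx(rr, cc)] = cost
                    graph[idx(rr, cc), idx(r, c)] = cost
    dist = dijkstra(graph.tocsr(), indices=idx(*source_cell))
    return dist.reshape(nr, nc)

rng = np.random.default_rng(3)
resistance = np.exp(rng.normal(0, 1, size=(30, 30)))    # toy resistance surface
d_eco = least_cost_distances(resistance, source_cell=(15, 15))

p0, sigma = 0.3, 5.0
encounter_prob = p0 * np.exp(-d_eco**2 / (2 * sigma**2))
print(encounter_prob.round(3)[13:18, 13:18])            # probabilities near the trap
```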
FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.
Li, Pu; Chen, Bing
2011-04-01
Although many studies on municipal solid waste (MSW) management were conducted under uncertain conditions of fuzzy, stochastic, and interval coexistence, solving the conventional linear programming problems that integrate the fuzzy method with the other two was inefficient. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming for supporting municipal solid waste management. Nguyen's method was used to convert the fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs, by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as superiority and inferiority between triangular fuzzy numbers/triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method can also improve upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, with advantageous capabilities that are easily achieved with fewer constraints and significantly reduced computation time. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions had been generated. The solution can help quantify the relationship between the change of system cost and the uncertainties, which could support further analysis of tradeoffs between the waste management cost and the system failure risk. Copyright © 2010 Elsevier Ltd. All rights reserved.
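The sketch below is a greatly simplified stand-in, not Nguyen's attainment-value conversion: triangular fuzzy cost coefficients are collapsed to their centroids and an interval right-hand side is handled by solving the two bounding crisp linear programs. The toy waste-allocation problem and all numbers are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Simplified illustration only: centroid defuzzification of triangular fuzzy
# costs plus bounding crisp LPs for an interval demand.  Not Nguyen's method.

fuzzy_costs = np.array([[40, 50, 65],     # facility 1 unit cost ($/tonne), triangle (low, mode, high)
                        [55, 60, 70]])    # facility 2 unit cost ($/tonne)
c = fuzzy_costs.mean(axis=1)              # centroid defuzzification

waste_interval = (180.0, 220.0)           # daily waste load known only as an interval (tonnes)
A_ub = [[-1.0, -1.0]]                     # -(x1 + x2) <= -demand  (i.e. cover the load)
bounds = [(0, 150), (0, 150)]             # facility capacities

for demand in waste_interval:
    res = linprog(c, A_ub=A_ub, b_ub=[-demand], bounds=bounds, method="highs")
    print(f"demand {demand:5.1f} t/d -> cost {res.fun:8.1f}, allocation {res.x.round(1)}")
```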
Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C
2018-04-01
A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.
Bayesian demography 250 years after Bayes
Bijak, Jakub; Bryant, John
2016-01-01
Bayesian statistics offers an alternative to classical (frequentist) statistics. It is distinguished by its use of probability distributions to describe uncertain quantities, which leads to elegant solutions to many difficult statistical problems. Although Bayesian demography, like Bayesian statistics more generally, is around 250 years old, only recently has it begun to flourish. The aim of this paper is to review the achievements of Bayesian demography, address some misconceptions, and make the case for wider use of Bayesian methods in population studies. We focus on three applications: demographic forecasts, limited data, and highly structured or complex models. The key advantages of Bayesian methods are the ability to integrate information from multiple sources and to describe uncertainty coherently. Bayesian methods also allow for including additional (prior) information next to the data sample. As such, Bayesian approaches are complementary to many traditional methods, which can be productively re-expressed in Bayesian terms. PMID:26902889
Nilsson, Håkan; Juslin, Peter; Winman, Anders
2016-01-01
Costello and Watts (2014) present a model assuming that people's knowledge of probabilities adheres to probability theory, but that their probability judgments are perturbed by a random noise in the retrieval from memory. Predictions for the relationships between probability judgments for constituent events and their disjunctions and conjunctions, as well as for sums of such judgments were derived from probability theory. Costello and Watts (2014) report behavioral data showing that subjective probability judgments accord with these predictions. Based on the finding that subjective probability judgments follow probability theory, Costello and Watts (2014) conclude that the results imply that people's probability judgments embody the rules of probability theory and thereby refute theories of heuristic processing. Here, we demonstrate the invalidity of this conclusion by showing that all of the tested predictions follow straightforwardly from an account assuming heuristic probability integration (Nilsson, Winman, Juslin, & Hansson, 2009). We end with a discussion of a number of previous findings that harmonize very poorly with the predictions by the model suggested by Costello and Watts (2014). (c) 2015 APA, all rights reserved.
Mikou, M; Ghosne, N; El Baydaoui, R; Zirari, Z; Kuntz, F
2015-05-01
Performance characteristics of the megavoltage photon dose measurements with EPR and table sugar were analyzed. An advantage of sugar as a dosimetric material is its tissue equivalency. The minimal detectable dose was found to be 1.5 Gy for both the 6 and 18 MV photons. The dose response curves are linear up to at least 20 Gy. The energy dependence of the dose response in the megavoltage energy range is very weak and probably statistically insignificant. Reproducibility of measurements of various doses in this range performed with the peak-to-peak and double-integral methods is reported. The method can be used in real-time dosimetry in radiation therapy. Copyright © 2015 Elsevier Ltd. All rights reserved.
IC layout adjustment method and tool for improving dielectric reliability at interconnects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahng, Andrew B.; Chan, Tuck Boon
A method for adjusting a layout used in making an integrated circuit includes selecting one or more interconnects in the layout that are susceptible to dielectric breakdown. One or more selected interconnects are adjusted to increase via-to-wire spacing with respect to at least one via and one wire of the one or more selected interconnects. Preferably, the selecting analyzes signal patterns of interconnects and estimates the stress ratio based on the state probability of routed signal nets in the layout. An annotated layout is provided that describes distances by which one or more via or wire segment edges are to be shifted. Adjustments can include thinning and shifting of wire segments, and rotation of vias.
On-chip immunomagnetic separation of bacteria by in-flow dynamic manipulation of paramagnetic beads
NASA Astrophysics Data System (ADS)
Ahmed, Shakil; Noh, Jong Wook; Hoyland, James; de Oliveira Hansen, Roana; Erdmann, Helmut; Rubahn, Horst-Günter
2016-11-01
Every year, millions of people all over the world fall ill due to the consumption of unsafe food, where consumption of contaminated and spoiled animal origin product is the main cause for diseases due to bacterial growth. This leads to an intense need for efficient methods for detection of food-related bacteria. In this work, we present a method for integration of immunomagnetic separation of bacteria into microfluidic technology by applying an alternating magnetic field, which manipulates the paramagnetic beads into a sinusoidal path across the whole microchannel, increasing the probability for bacteria capture. The optimum channel geometry, flow rate and alternating magnetic field frequency were investigated, resulting in a capture efficiency of 68 %.
Capel, P.D.; Larson, S.J.
1995-01-01
Minimizing the loss of target organic chemicals from environmental water samples between the time of sample collection and isolation is important to the integrity of an investigation. During this sample holding time, there is a potential for analyte loss through volatilization from the water to the headspace, sorption to the walls and cap of the sample bottle, and transformation through biotic and/or abiotic reactions. This paper presents a chemodynamic-based, generalized approach to estimate the most probable loss processes for individual target organic chemicals. The basic premise is that the investigator must know which loss process(es) are important for a particular analyte, based on its chemodynamic properties, when choosing the appropriate method(s) to prevent loss.
Elimination of arbitrary constants in the relativistic quantum theory (Élimination des constantes arbitraires dans la théorie relativiste des quanta)
NASA Astrophysics Data System (ADS)
This article shows how the influence of the undetermined constants in the integral theory of collisions can be avoided. A rule is given by which the probability amplitudes (S-matrix) may be calculated in terms of a given local action. The procedure of the integral method differs essentially from the differential method employed by Tomonaga, Schwinger, Feynman and Dyson in that the two sorts of diverging terms occurring in the formal solution of a Schrödinger equation are avoided. These two divergencies are: 1) the well known «self energy» divergencies, which have since been corrected by methods of regularization (Rivier, Pauli and Villars); 2) the more serious boundary divergencies (Stueckelberg) due to the sharp spatio-temporal limitation of the space-time region of evolution V in which the collisions occur. The convergent parts (anomalous g-factor of the electron and the Lamb-Retherford shift) obtained by Schwinger are, in the present theory, the boundary-independent amplitudes in fourth approximation. Up to this approximation the rule eliminates the arbitrary constants from all conservative processes.
Queries over Unstructured Data: Probabilistic Methods to the Rescue
NASA Astrophysics Data System (ADS)
Sarawagi, Sunita
Unstructured data like emails, addresses, invoices, call transcripts, reviews, and press releases are now an integral part of any large enterprise. A challenge of modern business intelligence applications is analyzing and querying data seamlessly across structured and unstructured sources. This requires the development of automated techniques for extracting structured records from text sources and resolving entity mentions in data from various sources. The success of any automated method for extraction and integration depends on how effectively it unifies diverse clues in the unstructured source and in existing structured databases. We argue that statistical learning techniques like Conditional Random Fields (CRFs) provide an accurate, elegant and principled framework for tackling these tasks. Given the inherent noise in real-world sources, it is important to capture the uncertainty of the above operations via imprecise data models. CRFs provide a sound probability distribution over extractions but are not easy to represent and query in a relational framework. We present methods of approximating this distribution to query-friendly row and column uncertainty models. Finally, we present models for representing the uncertainty of de-duplication and algorithms for various Top-K count queries on imprecise duplicates.
Foundation Mathematics for the Physical Sciences
NASA Astrophysics Data System (ADS)
Riley, K. F.; Hobson, M. P.
2011-03-01
1. Arithmetic and geometry; 2. Preliminary algebra; 3. Differential calculus; 4. Integral calculus; 5. Complex numbers and hyperbolic functions; 6. Series and limits; 7. Partial differentiation; 8. Multiple integrals; 9. Vector algebra; 10. Matrices and vector spaces; 11. Vector calculus; 12. Line, surface and volume integrals; 13. Laplace transforms; 14. Ordinary differential equations; 15. Elementary probability; Appendices; Index.
Student Solution Manual for Foundation Mathematics for the Physical Sciences
NASA Astrophysics Data System (ADS)
Riley, K. F.; Hobson, M. P.
2011-03-01
1. Arithmetic and geometry; 2. Preliminary algebra; 3. Differential calculus; 4. Integral calculus; 5. Complex numbers and hyperbolic functions; 6. Series and limits; 7. Partial differentiation; 8. Multiple integrals; 9. Vector algebra; 10. Matrices and vector spaces; 11. Vector calculus; 12. Line, surface and volume integrals; 13. Laplace transforms; 14. Ordinary differential equations; 15. Elementary probability; Appendix.
NASA Astrophysics Data System (ADS)
Leow, Alex D.; Zhu, Siwei
2008-03-01
Diffusion weighted MR imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitizing gradients along a minimum of 6 directions, second-order tensors (represented by 3-by-3 positive definite matrices) can be computed to model dominant diffusion processes. However, it has been shown that conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g. crossing fiber tracts. More recently, High Angular Resolution Diffusion Imaging (HARDI) seeks to address this issue by employing more than 6 gradient directions. To account for fiber crossing when analyzing HARDI data, several methodologies have been introduced. For example, q-ball imaging was proposed to approximate the Orientation Diffusion Function (ODF). Similarly, the PAS method seeks to resolve the angular structure of displacement probability functions using the maximum entropy principle. Alternatively, deconvolution methods extract multiple fiber tracts by computing fiber orientations using a pre-specified single fiber response function. In this study, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric and positive definite matrices. Using calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the ODF can easily be computed by analytical integration of the resulting displacement probability function. Moreover, principal fiber directions can also be directly derived from the TDF.
He, Jieyue; Wang, Chunyan; Qiu, Kunpu; Zhong, Wei
2014-01-01
Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented as a probability model can better reflect the authenticity and biological significance; therefore, it is more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible world model and has a relatively high computational complexity. In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, the partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm of probability isomorphism based on circuit simulation is proposed. The probability isomorphism combines the analysis of circuit topology structure with related physical properties of voltage in order to evaluate the probability isomorphism between probability subgraphs. The circuit-simulation-based probability isomorphism can avoid using the traditional possible world model. Finally, based on the algorithm of probability subgraph isomorphism, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. The experiment results on data sets of the Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover the frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in the experiments published in other related papers. The algorithm of probability graph isomorphism evaluation based on the circuit simulation method excludes most subgraphs that are not probability isomorphic and reduces the search space of the probability isomorphic subgraphs using the mismatch values in the node voltage set. It is an innovative way to find frequent probability patterns, which can be efficiently applied to probability motif discovery problems in further studies.
Less-Complex Method of Classifying MPSK
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2006-01-01
An alternative to an optimal method of automated classification of signals modulated with M-ary phase-shift-keying (M-ary PSK or MPSK) has been derived. The alternative method is approximate, but it offers nearly optimal performance and entails much less complexity, which translates to much less computation time. Modulation classification is becoming increasingly important in radio-communication systems that utilize multiple data modulation schemes and include software-defined or software-controlled receivers. Such a receiver may "know" little a priori about an incoming signal but may be required to correctly classify its data rate, modulation type, and forward error-correction code before properly configuring itself to acquire and track the symbol timing, carrier frequency, and phase, and ultimately produce decoded bits. Modulation classification has long been an important component of military interception of initially unknown radio signals transmitted by adversaries. Modulation classification may also be useful for enabling cellular telephones to automatically recognize different signal types and configure themselves accordingly. The concept of modulation classification as outlined in the preceding paragraph is quite general. However, at the present early stage of development, and for the purpose of describing the present alternative method, the term "modulation classification" or simply "classification" signifies, more specifically, a distinction between M-ary and M'-ary PSK, where M and M' represent two different integer multiples of 2. Both the prior optimal method and the present alternative method require the acquisition of magnitude and phase values of a number (N) of consecutive baseband samples of the incoming signal + noise. The prior optimal method is based on a maximum-likelihood (ML) classification rule that requires a calculation of likelihood functions for the M and M' hypotheses: Each likelihood function is an integral, over a full cycle of carrier phase, of a complicated sum of functions of the baseband sample values, the carrier phase, the carrier-signal and noise magnitudes, and M or M'. Then the likelihood ratio, defined as the ratio between the likelihood functions, is computed, leading to the choice of whichever hypothesis - M or M' - is more likely. In the alternative method, the integral in each likelihood function is approximated by a sum over values of the integrand sampled at a number, l, of equally spaced values of carrier phase. Used in this way, l is a parameter that can be adjusted to trade computational complexity against the probability of misclassification. In the limit as l approaches infinity, one obtains the integral form of the likelihood function and thus recovers the ML classification. The present approximate method has been tested in comparison with the ML method by means of computational simulations. The results of the simulations have shown that the performance (as quantified by probability of misclassification) of the approximate method is nearly indistinguishable from that of the ML method (see figure).
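The sketch below is a compact numerical illustration of that approximation, not the classifier described in the article: the integral of each likelihood over the unknown carrier phase is replaced by an average over l equally spaced trial phases, and the resulting approximate log-likelihoods for two candidate orders are compared. The signal model, amplitude, noise level and the value of l are arbitrary choices.

```python
import numpy as np
from scipy.special import logsumexp

# Approximate MPSK log-likelihood: the phase integral is replaced by an
# average over l equally spaced trial carrier phases.  Parameters are made up.

def log_likelihood_mpsk(r, M, A, sigma2, l=16):
    """Approximate log-likelihood of M-ary PSK for complex baseband samples r."""
    trial_phases = 2 * np.pi * np.arange(l) / l            # discrete carrier phases
    const = 2 * np.pi * np.arange(M) / M                   # constellation angles
    ll_theta = np.empty(l)
    for i, th in enumerate(trial_phases):
        s = A * np.exp(1j * (const + th))                  # candidate symbols, shape (M,)
        d2 = np.abs(r[:, None] - s[None, :]) ** 2          # (N, M) squared distances
        ll_theta[i] = logsumexp(-d2 / (2 * sigma2), axis=1).sum() - len(r) * np.log(M)
    return logsumexp(ll_theta) - np.log(l)                 # log of phase-averaged likelihood

rng = np.random.default_rng(4)
N, A, sigma2, M_true = 200, 1.0, 0.2, 8                    # sigma2 = per-component noise variance
symbols = 2 * np.pi * rng.integers(0, M_true, N) / M_true
noise = rng.normal(0, np.sqrt(sigma2), N) + 1j * rng.normal(0, np.sqrt(sigma2), N)
r = A * np.exp(1j * (symbols + 0.3)) + noise               # unknown carrier phase 0.3 rad

for M in (4, 8):
    print(f"M = {M}: approximate log-likelihood = {log_likelihood_mpsk(r, M, A, sigma2):.1f}")
```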
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammond, Glenn Edward; Song, Xuehang; Ye, Ming
A new approach is developed to delineate the spatial distribution of discrete facies (geological units that have unique distributions of hydraulic, physical, and/or chemical properties) conditioned not only on direct data (measurements directly related to facies properties, e.g., grain size distribution obtained from borehole samples) but also on indirect data (observations indirectly related to facies distribution, e.g., hydraulic head and tracer concentration). Our method integrates for the first time ensemble data assimilation with traditional transition probability-based geostatistics. The concept of level set is introduced to build shape parameterization that allows transformation between discrete facies indicators and continuous random variables. The spatial structure of different facies is simulated by indicator models using conditioning points selected adaptively during the iterative process of data assimilation. To evaluate the new method, a two-dimensional semi-synthetic example is designed to estimate the spatial distribution and permeability of two distinct facies from transient head data induced by pumping tests. The example demonstrates that our new method adequately captures the spatial pattern of facies distribution by imposing spatial continuity through conditioning points. The new method also reproduces the overall response in hydraulic head field with better accuracy compared to data assimilation with no constraints on spatial continuity on facies.
Multivariate survivorship analysis using two cross-sectional samples.
Hill, M E
1999-11-01
As an alternative to survival analysis with longitudinal data, I introduce a method that can be applied when one observes the same cohort in two cross-sectional samples collected at different points in time. The method allows for the estimation of log-probability survivorship models that estimate the influence of multiple time-invariant factors on survival over a time interval separating two samples. This approach can be used whenever the survival process can be adequately conceptualized as an irreversible single-decrement process (e.g., mortality, the transition to first marriage among a cohort of never-married individuals). Using data from the Integrated Public Use Microdata Series (Ruggles and Sobek 1997), I illustrate the multivariate method through an investigation of the effects of race, parity, and educational attainment on the survival of older women in the United States.
Estimated Probability of a Cervical Spine Injury During an ISS Mission
NASA Technical Reports Server (NTRS)
Brooker, John E.; Weaver, Aaron S.; Myers, Jerry G.
2013-01-01
Introduction: The Integrated Medical Model (IMM) utilizes historical data, cohort data, and external simulations as input factors to provide estimates of crew health, resource utilization and mission outcomes. The Cervical Spine Injury Module (CSIM) is an external simulation designed to provide the IMM with parameter estimates for 1) a probability distribution function (PDF) of the incidence rate, 2) the mean incidence rate, and 3) the standard deviation associated with the mean resulting from injury/trauma of the neck. Methods: An injury mechanism based on an idealized low-velocity blunt impact to the superior posterior thorax of an ISS crewmember was used as the simulated mission environment. As a result of this impact, the cervical spine is inertially loaded from the mass of the head, producing an extension-flexion motion that deforms the soft tissues of the neck. A multibody biomechanical model was developed to estimate the kinematic and dynamic response of the head-neck system from a prescribed acceleration profile. Logistic regression was performed on a dataset containing AIS1 soft tissue neck injuries from rear-end automobile collisions with published Neck Injury Criterion values, producing an injury transfer function (ITF). An injury event scenario (IES) was constructed such that crew 1, moving through a primary or standard translation path while transferring large-volume equipment, impacts stationary crew 2. The incidence rate for this IES was estimated from in-flight data and used to calculate the probability of occurrence. The uncertainties in the model input factors were estimated from representative datasets and expressed in terms of probability distributions. A Monte Carlo method utilizing simple random sampling was employed to propagate both aleatory and epistemic uncertain factors. Scatterplots and partial correlation coefficients (PCC) were generated to determine input factor sensitivity. CSIM was developed in the SimMechanics/Simulink environment with a Monte Carlo wrapper (MATLAB) used to integrate the components of the module. Results: The probability of generating an AIS1 soft tissue neck injury from the extension/flexion motion induced by a low-velocity blunt impact to the superior posterior thorax was fitted with a lognormal PDF with mean 0.26409, standard deviation 0.11353, standard error of mean 0.00114, and 95% confidence interval [0.26186, 0.26631]. The combined probability of an AIS1 injury and IES occurrence was fitted with a Johnson SI PDF with mean 0.02772, standard deviation 0.02012, standard error of mean 0.00020, and 95% confidence interval [0.02733, 0.02812]. The input factor sensitivities, in descending order, were the IES incidence rate, ITF regression coefficient 1, impactor initial velocity, and ITF regression coefficient 2; all others (equipment mass, crew 1 body mass, crew 2 body mass) were insignificant. Verification and Validation (V&V): The IMM V&V, based upon NASA STD 7009, was implemented and included an assessment of the data sets used to build CSIM. The documentation maintained includes source code comments and a technical report. The software code and documentation are under Subversion configuration management. Kinematic validation was performed by comparing the biomechanical model output to established corridors.
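To make the uncertainty-propagation step above concrete, the following toy Monte Carlo sketch uses made-up placeholder distributions, not CSIM's actual inputs: assumed distributions for the impactor velocity, the ITF regression coefficients, and the IES occurrence probability are sampled, pushed through a logistic injury transfer function, and combined; every name and number is hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)
    n_trials = 25_000                                 # order of magnitude quoted for convergence

    # hypothetical input-factor distributions (placeholders, not CSIM's fitted inputs)
    impact_velocity = rng.normal(1.0, 0.2, n_trials)  # aleatory: impactor initial velocity
    beta0 = rng.normal(-3.0, 0.3, n_trials)           # epistemic: ITF regression coefficient 1
    beta1 = rng.normal(2.0, 0.2, n_trials)            # epistemic: ITF regression coefficient 2

    nic = 0.1 * impact_velocity**2                    # toy surrogate for the Neck Injury Criterion
    p_injury = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * nic)))   # logistic injury transfer function

    p_event = rng.beta(2, 50, n_trials)               # uncertain IES occurrence probability
    p_total = p_injury * p_event                      # injury given event, times event occurrence

    print(p_injury.mean(), p_total.mean())            # samples to which a PDF would then be fitted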
NASA Astrophysics Data System (ADS)
Niemi, Sami-Matias; Kitching, Thomas D.; Cropper, Mark
2015-12-01
One of the most powerful techniques to study the dark sector of the Universe is weak gravitational lensing. In practice, to infer the reduced shear, weak lensing measures galaxy shapes, which are the consequence of both the intrinsic ellipticity of the sources and of the integrated gravitational lensing effect along the line of sight. Hence, a very large number of galaxies is required in order to average over their individual properties and to isolate the weak lensing cosmic shear signal. If this `shape noise' can be reduced, significant advances in the power of weak lensing surveys can be expected. This paper describes a general method for extracting the probability distributions of parameters from catalogues of data using Voronoi cells, which has several applications, and has synergies with Bayesian hierarchical modelling approaches. This allows us to construct a probability distribution for the variance of the intrinsic ellipticity as a function of galaxy property using only photometric data, enabling a reduction of shape noise. As a proof of concept the method is applied to the CFHTLenS survey data. We use this approach to investigate trends of galaxy properties in the data and apply this to the case of weak lensing power spectra.
Statistical mechanics of neocortical interactions. Derivation of short-term-memory capacity
NASA Astrophysics Data System (ADS)
Ingber, Lester
1984-06-01
A theory developed by the author to describe macroscopic neocortical interactions demonstrates that empirical values of chemical and electrical parameters of synaptic interactions establish several minima of the path-integral Lagrangian as a function of excitatory and inhibitory columnar firings. The number of possible minima, their time scales of hysteresis and probable reverberations, and their nearest-neighbor columnar interactions are all consistent with well-established empirical rules of human short-term memory. Thus, aspects of conscious experience are derived from neuronal firing patterns, using modern methods of nonlinear nonequilibrium statistical mechanics to develop realistic explicit synaptic interactions.
Field induced transient current in one-dimensional nanostructure
NASA Astrophysics Data System (ADS)
Sako, Tokuei; Ishida, Hiroshi
2018-07-01
Field-induced transient current in one-dimensional nanostructures has been studied using a model of an electron confined in a 1D attractive Gaussian potential subjected both to electrodes at the terminals and to an ultrashort pulsed oscillatory electric field with central frequency ω and FWHM pulse width Γ. The time-propagation of the electron wave packet has been simulated by integrating the time-dependent Schrödinger equation directly, relying on a second-order symplectic integrator method. The transient current has been calculated as the flux of the probability density of the escaping wave packet emitted from the downstream side of the confining potential. When a static bias field E0 is suddenly applied, the resultant transient current shows an oscillatory decay with time, followed by a minimum structure before converging to a nearly constant value. The ω-dependence of the integrated transient current induced by the pulsed electric field shows an asymmetric resonance line shape for large Γ, while it shows a fringe pattern on the spectral line profile for small Γ. These observations have been rationalized on the basis of the energy-level structure and lifetime of the quasibound states in the bias-field-modified confining potential obtained by the complex-scaling Fourier grid Hamiltonian method.
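The wave-packet propagation and escaped-probability bookkeeping described above can be sketched with a standard second-order split-operator (Strang) propagator, which is a common symplectic-style scheme but not necessarily the authors' exact integrator; the well parameters, bias, grid, time step, and detection point below are all assumed for illustration. The transient current is then read off as the rate at which probability accumulates beyond the downstream detection point.

    import numpy as np

    # grid, Gaussian confining well, and suddenly applied static bias (parameters assumed)
    N, L = 2048, 400.0
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

    V0, w, E0 = -0.3, 5.0, 0.002                  # well depth, well width, bias field (a.u.)
    V = V0 * np.exp(-x**2 / (2 * w**2)) - E0 * x  # bias switched on at t = 0
    psi = np.exp(-x**2 / (2 * w**2)).astype(complex)   # Gaussian stand-in for the bound state
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

    dt, steps, x_detect = 0.05, 4000, 100.0
    expV = np.exp(-0.5j * dt * V)                 # half-step potential factor
    expK = np.exp(-0.5j * dt * k**2)              # full kinetic step (hbar = m = 1)

    escaped = []
    for _ in range(steps):
        psi = expV * psi
        psi = np.fft.ifft(expK * np.fft.fft(psi))
        psi = expV * psi
        # probability that has leaked past the downstream detection point
        escaped.append(np.sum(np.abs(psi[x > x_detect])**2) * dx)

    current = np.gradient(escaped, dt)            # transient current ~ rate of escaped probability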
A Comparison of Vibration and Oil Debris Gear Damage Detection Methods Applied to Pitting Damage
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.
2000-01-01
Helicopter Health Usage Monitoring Systems (HUMS) must provide reliable, real-time performance monitoring of helicopter operating parameters to prevent damage of flight critical components. Helicopter transmission diagnostics are an important part of a helicopter HUMS. In order to improve the reliability of transmission diagnostics, many researchers propose combining two technologies, vibration and oil monitoring, using data fusion and intelligent systems. Some benefits of combining multiple sensors to make decisions include improved detection capabilities and an increased probability that the event is detected. However, if the sensors are inaccurate, or the features extracted from the sensors are poor predictors of transmission health, integration of these sensors will decrease the accuracy of damage prediction. For this reason, one must verify the individual integrity of vibration and oil analysis methods prior to integrating the two technologies. This research focuses on comparing the capability of two vibration algorithms, FM4 and NA4, and a commercially available on-line oil debris monitor to detect pitting damage on spur gears in the NASA Glenn Research Center Spur Gear Fatigue Test Rig. Results from this research indicate that the rate of change of debris mass measured by the oil debris monitor is comparable to the vibration algorithms in detecting gear pitting damage.
Stefănescu, Lucrina; Robu, Brînduşa Mihaela; Ozunu, Alexandru
2013-11-01
The environmental impact assessment of mining sites is nowadays a topic of large interest in Romania. Historical pollution in the Rosia Montana mining area of Romania caused extensive damage to environmental media. This paper has two goals: to investigate the environmental pollution induced by mining activities in the Rosia Montana area and to quantify the environmental impacts and associated risks by means of an integrated approach. Thus, a new method for quantifying the impact of mining activities was developed and applied, taking into account the quality of environmental media in the mining area, which is used as the case study in the present paper. The associated risks are a function of the environmental impacts and the probability of their occurrence. The results show that the environmental impacts and quantified risks, based on quality indicators used to characterize the environmental quality, are of a higher order, and thus measures for pollution remediation and control need to be considered in the investigated area. The conclusion drawn is that an integrated approach to the assessment of environmental impact and associated risks is a valuable and more objective method, and is an important tool that can be applied in the decision-making process by national authorities in the prioritization of emergency action.
Quantum return probability of a system of N non-interacting lattice fermions
NASA Astrophysics Data System (ADS)
Krapivsky, P. L.; Luck, J. M.; Mallick, K.
2018-02-01
We consider N non-interacting fermions performing continuous-time quantum walks on a one-dimensional lattice. The system is launched from a most compact configuration where the fermions occupy neighboring sites. We calculate exactly the quantum return probability (sometimes referred to as the Loschmidt echo) of observing the very same compact state at a later time t. Remarkably, this probability depends on the parity of the fermion number: it decays as a power of time for even N, while for odd N it exhibits periodic oscillations modulated by a decaying power law. The exponent also depends slightly on the parity of N, and is roughly half of what it would be in the continuum limit. We also consider the same problem, and obtain similar results, in the presence of an impenetrable wall at the origin constraining the particles to remain on the positive half-line. We derive closed-form expressions for the amplitudes of the power-law decay of the return probability in all cases. The key point in the derivation is the use of Mehta integrals, which are limiting cases of the Selberg integral.
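For non-interacting fermions the many-body return amplitude factorizes into a determinant of single-particle amplitudes, so the return probability can be checked numerically; the sketch below (lattice size, hopping sign, and initial block position are assumptions) builds a tight-binding hopping matrix, exponentiates it, and takes the determinant of the propagator restricted to the initially occupied sites.

    import numpy as np
    from scipy.linalg import expm

    def return_probability(n_fermions, t, L=200):
        """Quantum return probability of n_fermions non-interacting fermions launched
        from neighbouring sites of a 1D tight-binding chain of L sites: the many-body
        return amplitude is the determinant of the single-particle propagator
        restricted to the initially occupied sites (Slater-determinant structure)."""
        H = np.zeros((L, L))
        idx = np.arange(L - 1)
        H[idx, idx + 1] = H[idx + 1, idx] = -1.0      # nearest-neighbour hopping
        U = expm(-1j * t * H)                          # single-particle propagator
        sites = L // 2 + np.arange(n_fermions)         # compact block in the middle
        A = U[np.ix_(sites, sites)]                    # amplitudes between occupied sites
        return np.abs(np.linalg.det(A))**2

    # parity effect: compare even and odd particle numbers at the same time
    print(return_probability(4, 10.0), return_probability(5, 10.0))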
Optimizing Probability of Detection Point Estimate Demonstration
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2017-01-01
Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. Real flaws, such as cracks and crack-like flaws, are to be detected using these NDE methods. A reliably detectable crack size is required for safe-life analysis of fracture critical parts. The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method. The point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size (within some tolerance) is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible.
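The binomial logic behind the 29-of-29 point-estimate demonstration can be written down in a few lines; the sketch below (true-POD values are assumed purely for illustration) computes the probability of passing the demonstration for a given true POD and shows why detecting all 29 flaws supports a 90% POD claim at roughly 95% confidence.

    from scipy.stats import binom

    n_flaws = 29                                  # standard 29-of-29 point-estimate demonstration

    # probability of passing the demonstration (all 29 detected) for an assumed true POD of 0.95
    print(binom.pmf(n_flaws, n_flaws, 0.95))      # ~0.226

    # a procedure whose true POD is only 0.90 passes all 29 with probability ~0.047 (< 5%),
    # the usual basis for reading a passed demonstration as "90% POD at ~95% confidence"
    print(binom.pmf(n_flaws, n_flaws, 0.90))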
Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won
2011-01-01
To begin a zero accident campaign for industry, the first thing is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical change of the business environment after beginning the zero accident campaign through quantitative time series analysis methods. These methods include sum of squared errors (SSE), regression analysis method (RAM), exponential smoothing method (ESM), double exponential smoothing method (DESM), auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). The program is developed to estimate the accident rate, zero accident time and achievement probability of an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software of Visual Studio 2008 was used to develop a zero accident program. The results of this paper will provide major information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
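As a pointer to how the simpler of the time-series methods listed above operate, here is a small sketch (with made-up rate data) that fits a regression line (RAM) and extrapolates it to an estimated zero-accident year, and applies simple exponential smoothing (ESM); it is not the paper's MFC program and omits DESM, ARIMA, and the proposed AFM.

    import numpy as np

    def zero_accident_estimate(years, rates, alpha=0.3):
        """Regression (RAM) extrapolated to the year where the fitted accident rate
        reaches zero, plus simple exponential smoothing (ESM) of the series."""
        slope, intercept = np.polyfit(years, rates, 1)
        zero_year = -intercept / slope if slope < 0 else float("inf")
        smoothed, s = [], rates[0]
        for r in rates:
            s = alpha * r + (1 - alpha) * s
            smoothed.append(s)
        return zero_year, smoothed

    years = np.arange(2000, 2011)
    rates = np.array([1.10, 1.00, 0.95, 0.90, 0.83, 0.80, 0.74, 0.70, 0.66, 0.60, 0.57])
    print(zero_accident_estimate(years, rates)[0])   # estimated zero-accident year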
Integrating modeling and surveys for more effective assessments
A false dichotomy currently exists in monitoring that pits sample surveys based on probability designs against targeted monitoring of hand-picked sites. We maintain that judicious use of both, when designed to be integrated, produces assessments of greater value than either inde...
Derivation of an eigenvalue probability density function relating to the Poincaré disk
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Krishnapur, Manjunath
2009-09-01
A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n >= N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A-1B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.
Instruction in information structuring improves Bayesian judgment in intelligence analysts.
Mandel, David R
2015-01-01
An experiment was conducted to test the effectiveness of brief instruction in information structuring (i.e., representing and integrating information) for improving the coherence of probability judgments and binary choices among intelligence analysts. Forty-three analysts were presented with comparable sets of Bayesian judgment problems before and immediately after instruction. After instruction, analysts' probability judgments were more coherent (i.e., more additive and compliant with Bayes theorem). Instruction also improved the coherence of binary choices regarding category membership: after instruction, subjects were more likely to invariably choose the category to which they assigned the higher probability of a target's membership. The research provides a rare example of evidence-based validation of effectiveness in instruction to improve the statistical assessment skills of intelligence analysts. Such instruction could also be used to improve the assessment quality of other types of experts who are required to integrate statistical information or make probabilistic assessments.
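The kind of coherence the instruction targets is compliance with Bayes' theorem; a minimal worked example of a coherent update (numbers chosen only for illustration) is:

    def bayes_posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
        """Coherent (Bayes-theorem) revision of a probability judgment."""
        numer = prior * p_evidence_given_h
        return numer / (numer + (1 - prior) * p_evidence_given_not_h)

    # base rate 0.10, evidence with hit rate 0.80 and false-alarm rate 0.20:
    print(bayes_posterior(0.10, 0.80, 0.20))   # ~0.31, well below the hit rate of 0.80

Additivity, the other coherence benchmark mentioned above, requires that judged probabilities over mutually exclusive and exhaustive categories sum to one.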
Emotion and decision-making: affect-driven belief systems in anxiety and depression.
Paulus, Martin P; Yu, Angela J
2012-09-01
Emotion processing and decision-making are integral aspects of daily life. However, our understanding of the interaction between these constructs is limited. In this review, we summarize theoretical approaches that link emotion and decision-making, and focus on research with anxious or depressed individuals to show how emotions can interfere with decision-making. We integrate the emotional framework based on valence and arousal with a Bayesian approach to decision-making in terms of probability and value processing. We discuss how studies of individuals with emotional dysfunctions provide evidence that alterations of decision-making can be viewed in terms of altered probability and value computation. We argue that the probabilistic representation of belief states in the context of partially observable Markov decision processes provides a useful approach to examine alterations in probability and value representation in individuals with anxiety and depression, and outline the broader implications of this approach. Copyright © 2012. Published by Elsevier Ltd.
Emotion and decision-making: affect-driven belief systems in anxiety and depression
Paulus, Martin P.; Yu, Angela J.
2012-01-01
Emotion processing and decision-making are integral aspects of daily life. However, our understanding of the interaction between these constructs is limited. In this review, we summarize theoretical approaches to the link between emotion and decision-making, and focus on research with anxious or depressed individuals that reveals how emotions can interfere with decision-making. We integrate the emotional framework based on valence and arousal with a Bayesian approach to decision-making in terms of probability and value processing. We then discuss how studies of individuals with emotional dysfunctions provide evidence that alterations of decision-making can be viewed in terms of altered probability and value computation. We argue that the probabilistic representation of belief states in the context of partially observable Markov decision processes provides a useful approach to examine alterations in probability and value representation in individuals with anxiety and depression and outline the broader implications of this approach. PMID:22898207
NASA Astrophysics Data System (ADS)
Cajiao Vélez, F.; Kamiński, J. Z.; Krajewska, K.
2018-04-01
High-energy photoionization driven by short, circularly polarized laser pulses is studied in the framework of the relativistic strong-field approximation. The saddle-point analysis of the integrals defining the probability amplitude is used to determine the general properties of the probability distributions. Additionally, an approximate solution to the saddle-point equation is derived. This leads to the concept of the three-dimensional spiral of life in momentum space, around which the ionization probability distribution is maximum. We demonstrate that such a spiral is also obtained from a classical treatment.
An Equality in Stochastic Processes
1971-06-21
Neyman have discussed extensively this model in their study of the probabilities of relapse, recovery, and death for cancer patients [6] (see also Du... negative constant and vp a positive constant. Explicit formulas of the probabilities Pa(to, t) have been derived in terms of vP.. and vP. (see [2]... transitions from Sp occurring during (τ, t); the probability of this sequence of events is (52) P'a|a'(to, τ)[vap dτ]P''(τ, t). Integrating (52) from τ
NASA Astrophysics Data System (ADS)
Zhuang, X. W.; Li, Y. P.; Nie, S.; Fan, Y. R.; Huang, G. H.
2018-01-01
An integrated simulation-optimization (ISO) approach is developed for assessing climate change impacts on water resources. In the ISO, uncertainties expressed as both interval numbers and probability distributions can be reflected. Moreover, ISO permits in-depth analyses of various policy scenarios that are associated with different levels of economic consequences when the promised water-allocation targets are violated. A snowmelt-precipitation-driven watershed (Kaidu watershed) in northwest China is selected as the study case for demonstrating the applicability of the proposed method. Results of the meteorological projections disclose increasing trends in temperature (e.g., in minimum and maximum values) and precipitation. Results also reveal that (i) the system uncertainties would significantly affect the water resources allocation pattern (including target and shortage); (ii) water shortages would increase from 2016 to 2070; and (iii) the more the inflow decreases, the higher the estimated water-shortage rates become. The ISO method is useful for evaluating climate change impacts within a watershed system with complicated uncertainties and for helping identify appropriate water resources management strategies hedging against drought.
Colonius, Hans; Diederich, Adele
2011-07-01
The concept of a "time window of integration" holds that information from different sensory modalities must not be perceived too far apart in time in order to be integrated into a multisensory perceptual event. Empirical estimates of window width differ widely, however, ranging from 40 to 600 ms depending on context and experimental paradigm. Searching for theoretical derivation of window width, Colonius and Diederich (Front Integr Neurosci 2010) developed a decision-theoretic framework using a decision rule that is based on the prior probability of a common source, the likelihood of temporal disparities between the unimodal signals, and the payoff for making right or wrong decisions. Here, this framework is extended to the focused attention task where subjects are asked to respond to signals from a target modality only. Evoking the framework of the time-window-of-integration (TWIN) model, an explicit expression for optimal window width is obtained. The approach is probed on two published focused attention studies. The first is a saccadic reaction time study assessing the efficiency with which multisensory integration varies as a function of aging. Although the window widths for young and older adults differ by nearly 200 ms, presumably due to their different peripheral processing speeds, neither of them deviates significantly from the optimal values. In the second study, head saccadic reactions times to a perfectly aligned audiovisual stimulus pair had been shown to depend on the prior probability of spatial alignment. Intriguingly, they reflected the magnitude of the time-window widths predicted by our decision-theoretic framework, i.e., a larger time window is associated with a higher prior probability.
Ghavami, Behnam; Raji, Mohsen; Pedram, Hossein
2011-08-26
Carbon nanotube field-effect transistors (CNFETs) show great promise as building blocks of future integrated circuits. However, synthesizing single-walled carbon nanotubes (CNTs) with accurate chirality and exact positioning control has been widely acknowledged as an exceedingly complex task. Indeed, density and chirality variations in CNT growth can compromise the reliability of CNFET-based circuits. In this paper, we present a novel statistical compact model to estimate the failure probability of CNFETs to provide some material and process guidelines for the design of CNFETs in gigascale integrated circuits. We use measured CNT spacing distributions within the framework of detailed failure analysis to demonstrate that both the CNT density and the ratio of metallic to semiconducting CNTs play dominant roles in defining the failure probability of CNFETs. Besides, it is argued that the large-scale integration of these devices within an integrated circuit will be feasible only if a specific range of CNT density with an acceptable ratio of semiconducting to metallic CNTs can be adjusted in a typical synthesis process.
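The dependence of device failure on CNT density and on the metallic/semiconducting mix can be illustrated with a much simpler model than the paper's statistical compact model; the sketch below is entirely an assumed toy model in which the CNT count under a gate is Poisson distributed and a device is counted as failed if it has no semiconducting CNT (open) or at least one metallic CNT (short, assuming unwanted metallic tubes are not removed).

    import numpy as np
    from scipy.stats import poisson

    def cnfet_failure_probability(density_per_um, gate_width_um, p_semiconducting):
        """Toy failure model: the CNT count under the gate is Poisson with mean
        density*width; the device fails if it has no semiconducting CNT (open)
        or at least one metallic CNT (short, metallic tubes assumed not removed)."""
        mean_tubes = density_per_um * gate_width_um
        n = np.arange(0, 200)
        pn = poisson.pmf(n, mean_tubes)
        p_all_semi = np.sum(pn * p_semiconducting**n)          # includes the empty (n = 0) case
        p_good = p_all_semi - poisson.pmf(0, mean_tubes)       # at least one tube, none metallic
        return 1.0 - p_good

    # e.g. 5 CNTs per micron under a 1 um gate, roughly 2/3 semiconducting (unsorted growth)
    print(cnfet_failure_probability(5.0, 1.0, 2.0 / 3.0))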
Beta-decay rate and beta-delayed neutron emission probability of improved gross theory
NASA Astrophysics Data System (ADS)
Koura, Hiroyuki
2014-09-01
A theoretical study has been carried out on the beta-decay rate and the beta-delayed neutron emission probability. The gross theory of beta decay is based on the idea of a sum rule for the beta-decay strength function, and has succeeded in describing beta-decay half-lives of nuclei over the whole nuclear mass region. The gross theory includes not only the allowed transitions, such as the Fermi and Gamow-Teller transitions, but also the first-forbidden transitions. In this work, some improvements are introduced, namely a nuclear shell correction on nuclear level densities and nuclear deformation in the nuclear strength functions; these effects were not included in the original gross theory. The shell energy and the nuclear deformation for unmeasured nuclei are adopted from the KTUY nuclear mass formula, which is based on the spherical-basis method. Considering the properties of the integrated Fermi function, we can roughly categorize the excitation-energy range of a daughter nucleus into three regions: a highly excited region, which fully affects the delayed-neutron probability; a middle region, which is estimated to contribute to the decay heat; and a region neighboring the ground state, which determines the beta-decay rate. Some results will be given in the presentation. This work is a result of "Comprehensive study of delayed-neutron yields for accurate evaluation of kinetics of high-burn-up reactors" entrusted to Tokyo Institute of Technology by the Ministry of Education, Culture, Sports, Science and Technology of Japan.
Zhang, Xiaoshuai; Xue, Fuzhong; Liu, Hong; Zhu, Dianwen; Peng, Bin; Wiemels, Joseph L; Yang, Xiaowei
2014-12-10
Genome-wide Association Studies (GWAS) are typically designed to identify phenotype-associated single nucleotide polymorphisms (SNPs) individually using univariate analysis methods. Though providing valuable insights into the genetic risks of common diseases, the genetic variants identified by GWAS generally account for only a small proportion of the total heritability of complex diseases. To address this "missing heritability" problem, we implemented a strategy called integrative Bayesian Variable Selection (iBVS), which is based on a hierarchical model that incorporates an informative prior by considering the gene interrelationships as a network. It was applied here to both simulated and real data sets. Simulation studies indicated that the iBVS method performed best, with the highest AUC in both variable selection and outcome prediction, when compared to Stepwise and LASSO based strategies. In an analysis of a leprosy case-control study, iBVS selected 94 SNPs as predictors, while LASSO selected 100 SNPs. Stepwise regression yielded a more parsimonious model with only 3 SNPs. The prediction results demonstrated that the iBVS method had performance comparable to that of LASSO, but better than the Stepwise strategy. The proposed iBVS strategy is a novel and valid method for genome-wide association studies, with the additional advantage that it produces more interpretable posterior probabilities for each variable, unlike LASSO and other penalized regression methods.
Why does Japan use the probability method to set design flood?
NASA Astrophysics Data System (ADS)
Nakamura, S.; Oki, T.
2015-12-01
Design flood is a hypothetical flood used to make flood prevention plans. In Japan, a probability method based on precipitation data is used to define the scale of the design flood: for the Tone River, the biggest river in Japan, it is 1 in 200 years, for the Shinano River 1 in 150 years, and so on. How to set a reasonable and acceptable design flood in a changing world is an important socio-hydrological issue. The method used to set the design flood varies among countries. Although the probability method is also used in the Netherlands, the base data there are water levels or discharges and the probability is 1 in 1250 years (in the freshwater section). On the other hand, the USA and China apply the maximum flood method, which sets the design flood based on the historical or probable maximum flood. These cases lead to the questions "why does the method vary among countries?" and "why does Japan use the probability method?" The purpose of this study is to clarify, based on the literature, the historical process by which the probability method was developed in Japan. In the late 19th century, the concept of "discharge" and modern river engineering were introduced by Dutch engineers, and modern flood prevention plans were developed in Japan. In these plans, the design floods were set based on the historical maximum method. The historical maximum method had been used until World War 2; however, it was replaced by the probability method after the war because of its limitations under the specific socio-economic situation: (1) budget limitations due to the war and the GHQ occupation, and (2) historical floods (the Makurazaki typhoon in 1945, the Kathleen typhoon in 1947, the Ione typhoon in 1948, and so on) that struck Japan, broke the records of historical maximum discharge in the main rivers, and made the flood prevention projects difficult to complete. Japanese hydrologists then imported hydrological probability statistics from the West to take the socio-economic situation into account in setting design floods, and applied them to Japanese rivers in 1958. The probability method was adopted in Japan to adapt to the specific socio-economic and natural situation during the confusion after the war.
NASA Astrophysics Data System (ADS)
Galushina, T. Yu.; Titarenko, E. Yu
2014-12-01
The purpose of this work is to investigate the probabilistic orbital evolution of near-Earth asteroids (NEAs) moving in the vicinity of resonances with Mercury. In order to identify such objects, the equations of motion of all NEAs were integrated over the time interval (1000, 3000 years). The initial data were taken from the E. Bowell catalog as of February 2014. The equations of motion were integrated numerically by the Everhart method. The resonance characteristics are the critical argument, which relates the longitudes of the asteroid and the planet, and its time derivative, called the resonance "band". The study identified 15 asteroids moving in the vicinity of different resonances with Mercury. Six of them (52381 1993 HA, 172034 2001 WR1, 2008 VB1, 2009 KT4, 2013 CQ35, 2013 TH) move in the vicinity of the 1/6 resonance, five (142561 2002 TX68, 159608 2002 AC2, 241370 2008 LW8, 2006 UR216, 2009 XB2) move in the vicinity of the 1/9 resonance, and one asteroid each moves in the vicinity of the resonances 1/3, 1/7, 1/8 and 2/7 (2006 SE6, 2002 CV46, 2013 CN35 and 2006 VY2, respectively). The orbits of all identified asteroids were improved by the least-squares method using the available optical observations, and their probabilistic orbital evolution was investigated. The improvement was carried out at the epoch of best conditionality, taking into account perturbations from the major planets, Pluto, the Moon, Ceres, Pallas and Vesta, relativistic effects from the Sun, and the solar oblateness. The estimation of the nonlinearity factor showed that for all the considered NEAs it does not exceed the critical value of 0.1, which makes it possible to use the linear method for constructing the initial probability domain. The domain was built in the form of an ellipsoid in the six-dimensional phase space of coordinates and velocity components on the basis of the full covariance matrix; the center of the ellipsoid is the nominal orbit obtained from the improvement. Then 10,000 clones distributed according to the normal law were chosen in the initial probability domain. The nonlinear method, i.e., numerical integration of the differential equations of motion of each clone, was used to study the probabilistic orbital evolution. The force model corresponded to the model used in the improvement. The time interval was limited by the ephemeris DE406 and the accuracy of integration, and amounted to between two and six thousand years for different objects. As a result of the orbit improvement from the available optical positional observations, it turned out that the orbits of NEAs 2006 SE6, 2009 KT4, 2013 CQ35, 2013 TH, 2002 CV46, 2013 CN35 and 2006 VY2 are poorly determined, which does not allow a conclusion about their resonance capture. The remaining objects can be divided into two classes. Asteroids 172034 2001 WR1, 2008 VB1, 159608 2002 AC2 and 2006 UR216 move in the vicinity of the resonance over the entire interval of the study. The probability domains of NEAs 52381 1993 HA, 142561 2002 TX68, 241370 2008 LW8 and 2009 XB2 increase significantly under the influence of close encounters, and some of the clones leave the resonance. It should be noted that for all the considered objects the critical argument either librates around a moving libration center or circulates, which suggests instability of the resonance.
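The clone-generation step described above, a linear, covariance-based sampling of the initial probability domain, can be sketched in a few lines; the nominal state and covariance below are placeholders, and in practice each clone would then be propagated with the same Everhart-type integrator and force model as the nominal orbit.

    import numpy as np

    rng = np.random.default_rng(0)

    # nominal 6D state (coordinates + velocity components) and least-squares covariance;
    # the numbers here are placeholders, not a fitted orbit
    x_nominal = np.zeros(6)
    cov = np.diag([1e-8, 1e-8, 1e-8, 1e-10, 1e-10, 1e-10])

    # 10 000 clones drawn from the normal law inside the initial probability ellipsoid
    clones = rng.multivariate_normal(x_nominal, cov, size=10_000)

    # each row of `clones` would then be integrated numerically (e.g. with an
    # Everhart-type integrator) under the same force model as the nominal orbit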
NASA Astrophysics Data System (ADS)
Straus, D. M.
2007-12-01
The probability distribution (pdf) of errors is followed in identical twin studies using the COLA T63 AGCM, integrated with observed SST for 15 recent winters. 30 integrations per winter (for 15 winters) are available with initial errors that are extremely small. The evolution of the pdf is tested for multi-modality, and the results interpreted in terms of clusters / regimes found in: (a) the set of 15x30 integrations mentioned, and (b) a larger ensemble of 55x15 integrations made with the same GCM using the same SSTs. The mapping of pdf evolution and clusters is also carried out for each winter separately, using the clusters found in the 55-member ensemble for the same winter alone. This technique yields information on the change in regimes caused by different boundary forcing (Straus and Molteni, 2004; Straus, Corti and Molteni, 2006). Analysis of the growing errors in terms of baroclinic and barotropic components allows for interpretation of the corresponding instabilities.
Vadillo, Miguel A; Ortega-Castro, Nerea; Barberia, Itxaso; Baker, A G
2014-01-01
Many theories of causal learning and causal induction differ in their assumptions about how people combine the causal impact of several causes presented in compound. Some theories propose that when several causes are present, their joint causal impact is equal to the linear sum of the individual impact of each cause. However, some recent theories propose that the causal impact of several causes needs to be combined by means of a noisy-OR integration rule. In other words, the probability of the effect given several causes would be equal to the sum of the probability of the effect given each cause in isolation minus the overlap between those probabilities. In the present series of experiments, participants were given information about the causal impact of several causes and then they were asked what compounds of those causes they would prefer to use if they wanted to produce the effect. The results of these experiments suggest that participants actually use a variety of strategies, including not only the linear and the noisy-OR integration rules, but also averaging the impact of several causes.
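For two causes the noisy-OR rule described above reduces to p1 + p2 - p1*p2, i.e., the probability that at least one independent cause produces the effect; a tiny sketch contrasting it with the linear-sum rule:

    def noisy_or(probs):
        """Noisy-OR combination: probability that at least one independent cause
        produces the effect, 1 - product of (1 - p_i)."""
        p_none = 1.0
        for p in probs:
            p_none *= (1.0 - p)
        return 1.0 - p_none

    def linear_sum(probs):
        """Linear-sum combination (can exceed 1, unlike noisy-OR)."""
        return sum(probs)

    print(noisy_or([0.6, 0.5]))     # 0.8  = 0.6 + 0.5 - 0.6 * 0.5
    print(linear_sum([0.6, 0.5]))   # 1.1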
NASA Astrophysics Data System (ADS)
Popov, V. D.; Khamidullina, N. M.
2006-10-01
In developing radio-electronic devices (RED) of spacecraft operating in the fields of ionizing radiation in space, one of the most important problems is the correct estimation of their radiation tolerance. The “weakest link” in the element base of onboard microelectronic devices under radiation is the integrated microcircuits (IMC), especially those of large-scale (LSI) and very-large-scale (VLSI) integration. The main characteristic of an IMC that is taken into account when deciding whether to use a particular type of IMC in the onboard RED is the probability of non-failure operation (NFO) at the end of the spacecraft’s lifetime. It should be noted that, until now, the NFO has been calculated only from the reliability characteristics, disregarding the radiation effect. This paper presents the so-called “reliability” approach to determining the radiation tolerance of IMC, which allows one to estimate the probability of non-failure operation of various types of IMC with due account of radiation-stimulated dose failures. The described technique is applied to the RED onboard the Spektr-R spacecraft to be launched in 2007.
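One simple way to read the "reliability" approach is sketched below; this combination rule is an illustrative assumption, not the paper's published formula. It multiplies the conventional exponential-reliability term by the probability that the accumulated mission dose stays below the microcircuit's dose-failure threshold.

    import math
    from scipy.stats import norm

    def nfo_probability(failure_rate_per_hour, mission_hours,
                        dose_mean_krad, dose_sigma_krad, dose_tolerance_krad):
        """Illustrative combination: conventional exponential reliability multiplied by
        the probability that the accumulated dose (treated as normally distributed)
        stays below the IMC's dose-failure threshold."""
        p_reliability = math.exp(-failure_rate_per_hour * mission_hours)
        p_dose_ok = norm.cdf(dose_tolerance_krad, loc=dose_mean_krad, scale=dose_sigma_krad)
        return p_reliability * p_dose_ok

    # e.g. 1e-7 failures/hour over five years, expected dose 20 +/- 5 krad, 30 krad tolerance
    print(nfo_probability(1e-7, 5 * 8760, 20.0, 5.0, 30.0))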
NASA Technical Reports Server (NTRS)
Kerstman, Eric; Walton, Marlei; Minard, Charles; Saile, Lynn; Myers, Jerry; Butler, Doug; Lyengar, Sriram; Fitts, Mary; Johnson-Throop, Kathy
2009-01-01
The Integrated Medical Model (IMM) is a decision support tool used by medical system planners and designers as they prepare for exploration planning activities of the Constellation program (CxP). IMM provides an evidence-based approach to help optimize the allocation of in-flight medical resources for a specified level of risk within spacecraft operational constraints. Eighty medical conditions and associated resources are represented in IMM. Nine conditions are due to Space Adaptation Syndrome. The IMM helps answer fundamental medical mission planning questions such as "What medical conditions can be expected?", "What type and quantity of medical resources are most likely to be used?", and "What is the probability of crew death or evacuation due to medical events?" For a specified mission and crew profile, the IMM effectively characterizes the sequence of events that could potentially occur should a medical condition happen. The mathematical relationships among mission and crew attributes, medical conditions and incidence data, in-flight medical resources, and potential clinical and crew health end states are established to generate end state probabilities. A Monte Carlo computational method is used to determine the probable outcomes and requires up to 25,000 mission trials to reach convergence. For each mission trial, the pharmaceuticals and supplies required to diagnose and treat prevalent medical conditions are tracked and decremented. The uncertainty of patient response to treatment is bounded via a best-case, worst-case, untreated-case algorithm. A Crew Health Index (CHI) metric, developed to account for functional impairment due to a medical condition, provides a quantified measure of risk and enables risk comparisons across mission scenarios. The use of historical in-flight medical data, terrestrial surrogate data as appropriate, and space medicine subject matter expertise has enabled the development of a probabilistic, stochastic decision support tool capable of optimizing in-flight medical systems based on crew and mission parameters. This presentation will illustrate how to apply quantitative risk assessment methods to optimize the mass and volume of space-based medical systems for a space flight mission given the level of crew health and mission risk.
Logic, probability, and human reasoning.
Johnson-Laird, P N; Khemlani, Sangeet S; Goodwin, Geoffrey P
2015-04-01
This review addresses the long-standing puzzle of how logic and probability fit together in human reasoning. Many cognitive scientists argue that conventional logic cannot underlie deductions, because it never requires valid conclusions to be withdrawn - not even if they are false; it treats conditional assertions implausibly; and it yields many vapid, although valid, conclusions. A new paradigm of probability logic allows conclusions to be withdrawn and treats conditionals more plausibly, although it does not address the problem of vapidity. The theory of mental models solves all of these problems. It explains how people reason about probabilities and postulates that the machinery for reasoning is itself probabilistic. Recent investigations accordingly suggest a way to integrate probability and deduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
Changes in Quality of Health Care Delivery after Vertical Integration
Carlin, Caroline S; Dowd, Bryan; Feldman, Roger
2015-01-01
Objectives To fill an empirical gap in the literature by examining changes in quality of care measures occurring when multispecialty clinic systems were acquired by hospital-owned, vertically integrated health care delivery systems in the Twin Cities area. Data Sources/Study Setting Administrative data for health plan enrollees attributed to treatment and control clinic systems, merged with U.S. Census data. Study Design We compared changes in quality measures for health plan enrollees in the acquired clinics to enrollees in nine control groups using a differences-in-differences model. Our dataset spans 2 years prior to and 4 years after the acquisitions. We estimated probit models with errors clustered within enrollees. Data Collection/Extraction Methods Data were assembled by the health plan’s informatics team. Principal Findings Vertical integration is associated with increased rates of colorectal and cervical cancer screening and more appropriate emergency department use. The probability of ambulatory care–sensitive admissions increased when the acquisition caused disruption in admitting patterns. Conclusions Moving a clinic system into a vertically integrated delivery system resulted in limited increases in quality of care indicators. Caution is warranted when the acquisition causes disruption in referral patterns. PMID:25529312
Dynamical Evolution of Meteoroid Streams, Developments Over the Last 30 Years
NASA Technical Reports Server (NTRS)
Williams, I. P.
2011-01-01
As soon as reliable methods for observationally determining the heliocentric orbits of meteoroids, and hence the mean orbit of a meteoroid stream, became available in the 1950s and 60s, astronomers strived to investigate the evolution of the orbit under the effects of gravitational perturbations from the planets. At first, the limitations in the capabilities of computers, both in terms of speed and memory, placed severe restrictions on what was possible. As a consequence, secular perturbation methods, where the perturbations are averaged over one orbit, became the norm. The most popular of these is the Halphen-Goryachev method, which was used extensively until the early 1980s. The main disadvantage of these methods lies in the fact that close encounters can be missed; however, they remain useful for performing very long-term integrations. Direct integration methods determine the effects of the perturbing forces at many points on an orbit. This gives a better picture of the orbital evolution of an individual meteoroid, but many meteoroids have to be integrated in order to obtain a realistic picture of the evolution of a meteoroid stream. The notion of generating a family of hypothetical meteoroids to represent a stream and directly integrating the motion of each was probably first used by Williams, Murray & Hughes (1979) to investigate the Quadrantids. Because of computing limitations, only 10 test meteoroids were used. Only two years later, Hughes et al. (1981) had increased the number of particles 20-fold to 200, while after a further year, Fox, Williams and Hughes used 500,000 test meteoroids to model the Geminid stream. With such a number of meteoroids it was possible for the first time to produce a realistic cross-section of the stream on the ecliptic. From that point on there has been a continued increase in the number of meteoroids, the length of time over which the integration is carried out, and the frequency with which results can be plotted, so that it is now possible to produce moving images of the stream. As a consequence, over recent years, emphasis has moved to considering stream formation and the role fragmentation plays in this.
A discussion on the origin of quantum probabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holik, Federico, E-mail: olentiev2@gmail.com; Departamento de Matemática - Ciclo Básico Común, Universidad de Buenos Aires - Pabellón III, Ciudad Universitaria, Buenos Aires; Sáenz, Manuel
We study the origin of quantum probabilities as arising from non-Boolean propositional-operational structures. We apply the method developed by Cox to non-distributive lattices and develop an alternative formulation of non-Kolmogorovian probability measures for quantum mechanics. By generalizing the method presented in previous works, we outline a general framework for the deduction of probabilities in general propositional structures represented by lattices (including the non-distributive case). -- Highlights: •Several recent works use a derivation similar to that of R.T. Cox to obtain quantum probabilities. •We apply Cox’s method to the lattice of subspaces of the Hilbert space. •We obtain a derivation of quantum probabilities which includes mixed states. •The method presented in this work is susceptible to generalization. •It includes quantum mechanics and classical mechanics as particular cases.
LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images.
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang
2015-03-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit the extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used multi-atlas label fusion strategy, which has the limitation of equally treating the different available image modalities and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate the possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
LINKS: Learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang
2014-01-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit the extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used multi-atlas label fusion strategy, which has the limitation of equally treating the different available image modalities and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate the possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. PMID:25541188
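A highly simplified sketch of the multi-source, iteratively refined random-forest idea described in the two LINKS records above follows; voxel-wise intensities stand in for the paper's richer patch-based features, and the array names, label coding, and number of rounds are assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def iterative_rf_segmentation(t1, t2, fa, labels_train, mask_train, n_rounds=3):
        """Train a random forest on voxel-wise T1/T2/FA intensities, then append the
        estimated tissue probability maps as extra features and retrain for a few
        rounds.  Images are 3D arrays of identical shape; labels_train holds tissue
        labels (0 = CSF, 1 = GM, 2 = WM) on the voxels flagged by boolean mask_train."""
        feats = np.stack([t1, t2, fa], axis=-1).reshape(-1, 3)
        y = labels_train[mask_train]
        prob_maps = None
        for _ in range(n_rounds):
            X = feats if prob_maps is None else np.hstack([feats, prob_maps])
            rf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
            rf.fit(X[mask_train.ravel()], y)
            prob_maps = rf.predict_proba(X)       # refined tissue probability maps
        return prob_maps.argmax(axis=1).reshape(t1.shape)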
Rebollar, Eria A; Antwis, Rachael E; Becker, Matthew H; Belden, Lisa K; Bletz, Molly C; Brucker, Robert M; Harrison, Xavier A; Hughey, Myra C; Kueneman, Jordan G; Loudon, Andrew H; McKenzie, Valerie; Medina, Daniel; Minbiole, Kevin P C; Rollins-Smith, Louise A; Walke, Jenifer B; Weiss, Sophie; Woodhams, Douglas C; Harris, Reid N
2016-01-01
Emerging infectious diseases in wildlife are responsible for massive population declines. In amphibians, chytridiomycosis caused by Batrachochytrium dendrobatidis, Bd, has severely affected many amphibian populations and species around the world. One promising management strategy is probiotic bioaugmentation of antifungal bacteria on amphibian skin. In vivo experimental trials using bioaugmentation strategies have had mixed results, and therefore a more informed strategy is needed to select successful probiotic candidates. Metagenomic, transcriptomic, and metabolomic methods, colloquially called "omics," are approaches that can better inform probiotic selection and optimize selection protocols. The integration of multiple omic data using bioinformatic and statistical tools and in silico models that link bacterial community structure with bacterial defensive function can allow the identification of species involved in pathogen inhibition. We recommend using 16S rRNA gene amplicon sequencing and methods such as indicator species analysis, the Kolmogorov-Smirnov Measure, and co-occurrence networks to identify bacteria that are associated with pathogen resistance in field surveys and experimental trials. In addition to 16S amplicon sequencing, we recommend approaches that give insight into symbiont function such as shotgun metagenomics, metatranscriptomics, or metabolomics to maximize the probability of finding effective probiotic candidates, which can then be isolated in culture and tested in persistence and clinical trials. An effective mitigation strategy to ameliorate chytridiomycosis and other emerging infectious diseases is necessary; the advancement of omic methods and the integration of multiple omic data provide a promising avenue toward conservation of imperiled species.
ERIC Educational Resources Information Center
Gibbons, Robert D.; And Others
The probability integral of the multivariate normal distribution (ND) has received considerable attention since W. F. Sheppard's (1900) and K. Pearson's (1901) seminal work on the bivariate ND. This paper evaluates the formula that represents the "n x n" correlation matrix of the chi_i and the standardized multivariate…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-22
... adversely affect plant safety, and would have no adverse effect on the probability of any accident. For the accidents that involve damage or melting of the fuel in the reactor core, fuel rod integrity has been shown to be unaffected by extended burnup under consideration; therefore, the probability of an accident...
Vygotsky circle as a personal network of scholars: restoring connections between people and ideas.
Yasnitsky, Anton
2011-12-01
The name of Lev Vygotsky (1896-1934) is well-known among contemporary psychologists and educators. The cult of Vygotsky, also known as "Vygotsky boom", is probably conducive to continuous reinterpretation and wide dissemination of his ideas, but hardly beneficial for their understanding as an integrative theory of human cultural and biosocial development. Two problems are particularly notable. These are, first, numerous gaps and age-old biases and misconceptions in the historiography of Soviet psychology, and, second, the tendency to overly focus on the figure of Vygotsky to the neglect of the scientific activities of a number of other protagonists of the history of cultural-historical psychology. This study addresses these two problems and reconstructs the history and group dynamics within the dense network of Vygotsky's collaborators and associates, and overviews their research, which is instrumental in understanding Vygotsky's integrative theory in its entirety as a complex of interdependent ideas, methods, and practices.
Bidargaddi, Niranjan P; Chetty, Madhu; Kamruzzaman, Joarder
2008-06-01
Profile hidden Markov models (HMMs) based on classical HMMs have been widely applied for protein sequence identification. The formulation of the forward and backward variables in profile HMMs is made under the statistical independence assumption of probability theory. We propose a fuzzy profile HMM to overcome the limitations of that assumption and to achieve an improved alignment for protein sequences belonging to a given family. The proposed model fuzzifies the forward and backward variables by incorporating Sugeno fuzzy measures and Choquet integrals, thus further extending the generalized HMM. Based on the fuzzified forward and backward variables, we propose a fuzzy Baum-Welch parameter estimation algorithm for profiles. The strong correlations and the sequence preference involved in protein structures make this fuzzy-architecture-based model a suitable candidate for building profiles of a given family, since fuzzy sets can handle uncertainties better than classical methods.
Heck, Daniel W; Hilbig, Benjamin E; Moshagen, Morten
2017-08-01
Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can directly be compared using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. As in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
Probability-based collaborative filtering model for predicting gene-disease associations.
Zeng, Xiangxiang; Ding, Ningxiang; Rodríguez-Patón, Alfonso; Zou, Quan
2017-12-28
Accurately predicting pathogenic human genes has been challenging in recent research. Considering extensive gene-disease data verified by biological experiments, we can apply computational methods to perform accurate predictions with reduced time and expenses. We propose a probability-based collaborative filtering model (PCFM) to predict pathogenic human genes. Several kinds of data sets, containing data of humans and data of other nonhuman species, are integrated in our model. Firstly, on the basis of a typical latent factorization model, we propose model I with an average heterogeneous regularization. Secondly, we develop modified model II with personal heterogeneous regularization to enhance the accuracy of the aforementioned model. In this model, vector space similarity or Pearson correlation coefficient metrics and data on related species are also used. We compared the results of PCFM with the results of four state-of-the-art approaches. The results show that PCFM performs better than other advanced approaches. The PCFM model can be leveraged for predictions of disease genes, especially for new human genes or diseases with no known relationships.
Mourning dove population trend estimates from Call-Count and North American Breeding Bird Surveys
Sauer, J.R.; Dolton, D.D.; Droege, S.
1994-01-01
The mourning dove (Zenaida macroura) Call-Count Survey and the North American Breeding Bird Survey provide information on population trends of mourning doves throughout the continental United States. Because surveys are an integral part of the development of hunting regulations, a need exists to determine which survey provides precise information. We estimated population trends from 1966 to 1988 by state and dove management unit, and assessed the relative efficiency of each survey. Estimates of population trend differ (P < 0.05) between surveys in 11 of 48 states; 9 of 11 states with divergent results occur in the Eastern Management Unit. Differences were probably a consequence of smaller sample sizes in the Call-Count Survey. The Breeding Bird Survey generally provided trend estimates with smaller variances than did the Call-Count Survey. Although the Call-Count Survey probably provides more within-route accuracy because of survey methods and timing, the Breeding Bird Survey has a larger sample size of survey routes and greater consistency of coverage in the Eastern Unit.
Optimizing Medical Kits for Spaceflight
NASA Technical Reports Server (NTRS)
Keenan, A. B.; Foy, Millennia; Myers, G.
2014-01-01
The Integrated Medical Model (IMM) is a probabilistic model that estimates medical event occurrences and mission outcomes for different mission profiles. IMM simulation outcomes describing the impact of medical events on the mission may be used to optimize the allocation of resources in medical kits. Efficient allocation of medical resources, subject to certain mass and volume constraints, is crucial to ensuring the best outcomes of in-flight medical events. We implement a new approach to this medical kit optimization problem. METHODS: We frame medical kit optimization as a modified knapsack problem and implement an algorithm utilizing a dynamic programming technique. Using this algorithm, optimized medical kits were generated for 3 different mission scenarios with the goal of minimizing the probability of evacuation and maximizing the Crew Health Index (CHI) for each mission subject to mass and volume constraints. Simulation outcomes using these kits were also compared to outcomes using kits optimized with the previous implementation. RESULTS: The optimized medical kits generated by the algorithm described here resulted in predicted mission outcomes that more closely approached the unlimited-resource scenario for CHI than the previous implementation under all optimization priorities. Furthermore, the approach described here improves upon the previous implementation in reducing evacuation when the optimization priority is minimizing the probability of evacuation. CONCLUSIONS: This algorithm provides an efficient, effective means to objectively allocate medical resources for spaceflight missions using the Integrated Medical Model.
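The knapsack framing lends itself to a standard dynamic-programming sketch. The snippet below illustrates only that framing, with hypothetical item names, masses, and benefit scores; the actual IMM optimizer also enforces a volume constraint and uses simulation-derived benefit measures such as CHI.

```python
# Minimal 0/1 knapsack sketch (hypothetical items and scores), illustrating the
# dynamic-programming framing described in the abstract; the real IMM kit
# optimizer also handles a volume constraint and simulation-derived benefits.
def knapsack(items, mass_budget):
    """items: list of (name, mass, benefit); masses in integer units."""
    # best[m] = (total benefit, chosen names) achievable within mass budget m
    best = [(0.0, [])] * (mass_budget + 1)
    for name, mass, benefit in items:
        for m in range(mass_budget, mass - 1, -1):  # backwards iteration keeps 0/1 semantics
            cand = (best[m - mass][0] + benefit, best[m - mass][1] + [name])
            if cand[0] > best[m][0]:
                best[m] = cand
    return best[mass_budget]

# Hypothetical medical items: (name, mass units, benefit score)
items = [("analgesic", 2, 3.0), ("IV kit", 5, 7.5), ("splint", 4, 4.0), ("antibiotic", 3, 5.0)]
print(knapsack(items, mass_budget=8))   # -> (12.5, ['IV kit', 'antibiotic'])
```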
Singular solution of the Feller diffusion equation via a spectral decomposition.
Gan, Xinjun; Waxman, David
2015-01-01
Feller studied a branching process and found that the distribution for this process approximately obeys a diffusion equation [W. Feller, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (University of California Press, Berkeley and Los Angeles, 1951), pp. 227-246]. This diffusion equation and its generalizations play an important role in many scientific problems, including physics, biology, finance, and probability theory. We work under the assumption that the fundamental solution represents a probability density and should account for all of the probability in the problem. Thus, under the circumstances where the random process can be irreversibly absorbed at the boundary, this should lead to the presence of a Dirac delta function in the fundamental solution at the boundary. However, such a feature is not present in the standard approach (Laplace transformation). Here we require that the total integrated probability is conserved. This yields a fundamental solution which, when appropriate, contains a term proportional to a Dirac delta function at the boundary. We determine the fundamental solution directly from the diffusion equation via spectral decomposition. We obtain exact expressions for the eigenfunctions, and when the fundamental solution contains a Dirac delta function at the boundary, every eigenfunction of the forward diffusion operator contains a delta function. We show how these combine to produce a weight of the delta function at the boundary which ensures the total integrated probability is conserved. The solution we present covers cases where parameters are time dependent, thereby greatly extending its applicability.
Calibrating random forests for probability estimation.
Dankowski, Theresa; Ziegler, Andreas
2016-09-30
Probabilities can be consistently estimated using random forests. It is, however, unclear how random forests should be updated to make predictions for other centers or at different time points. In this work, we present two approaches for updating random forests for probability estimation. The first method has been proposed by Elkan and may be used for updating any machine learning approach yielding consistent probabilities, so-called probability machines. The second approach is a new strategy specifically developed for random forests. Using the terminal nodes, which represent conditional probabilities, the random forest is first translated to logistic regression models. These are, in turn, used for re-calibration. The two updating strategies were compared in a simulation study and are illustrated with data from the German Stroke Study Collaboration. In most simulation scenarios, both methods led to similar improvements. In the simulation scenario in which the stricter assumptions of Elkan's method were not met, the logistic regression-based re-calibration approach for random forests outperformed Elkan's method. It also performed better on the stroke data than Elkan's method. The strength of Elkan's method is its general applicability to any probability machine. However, if the strict assumptions underlying this approach are not met, the logistic regression-based approach is preferable for updating random forests for probability estimation. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
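Elkan's updating rule referred to above is, in one widely cited form, a base-rate correction applied to consistent probability estimates. A minimal sketch, assuming the goal is to transport a probability p obtained under training prevalence b to a center with prevalence b_new (not necessarily the paper's exact implementation):

```python
# Base-rate correction in the spirit of Elkan's updating rule (a sketch, not the
# paper's exact procedure): transport a consistent probability estimate p,
# obtained under training prevalence b, to a population with prevalence b_new.
def update_probability(p, b, b_new):
    return b_new * (p - p * b) / (b - p * b + b_new * p - b_new * b)

print(update_probability(p=0.30, b=0.50, b_new=0.10))  # prevalence drops, so the estimate shrinks
```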
Probabilistic structural analysis methods for improving Space Shuttle engine reliability
NASA Technical Reports Server (NTRS)
Boyce, L.
1989-01-01
Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.
Analytical Assessment for Transient Stability Under Stochastic Continuous Disturbances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ju, Ping; Li, Hongyu; Gan, Chun
Here, with the growing integration of renewable power generation, plug-in electric vehicles, and other sources of uncertainty, increasing stochastic continuous disturbances are brought to power systems. The impact of stochastic continuous disturbances on power system transient stability attracts significant attention. To address this problem, this paper proposes an analytical assessment method for transient stability of multi-machine power systems under stochastic continuous disturbances. In the proposed method, a probability measure of transient stability is presented and analytically solved by stochastic averaging. Compared with the conventional method (Monte Carlo simulation), the proposed method is many orders of magnitude faster, which makes it very attractive in practice when many plans for transient stability must be compared or when transient stability must be analyzed quickly. Also, it is found that the evolution of system energy over time is almost a simple diffusion process by the proposed method, which explains the impact mechanism of stochastic continuous disturbances on transient stability in theory.
NASA Astrophysics Data System (ADS)
Frits, Andrew P.
In the current Navy environment of undersea weapons development, the engineering aspect of design is decoupled from the development of the tactics with which the weapon is employed. Tactics are developed by intelligence experts, warfighters, and wargamers, while torpedo design is handled by engineers and contractors. This dissertation examines methods by which the conceptual design process of undersea weapon systems, including both torpedo systems and mine counter-measure systems, can be improved. It is shown that by simultaneously designing the torpedo and the tactics with which undersea weapons are used, a more effective overall weapon system can be created. In addition to integrating torpedo tactics with design, the thesis also looks at design methods to account for uncertainty. The uncertainty is attributable to multiple sources, including: lack of detailed analysis tools early in the design process, incomplete knowledge of the operational environments, and uncertainty in the performance of potential technologies. A robust design process is introduced to account for this uncertainty in the analysis and optimization of torpedo systems through the combination of Monte Carlo simulation with response surface methodology and metamodeling techniques. Additionally, various other methods that are appropriate to uncertainty analysis are discussed and analyzed. The thesis also advances a new approach towards examining robustness and risk: the treatment of probability of success (POS) as an independent variable. Examining the cost and performance tradeoffs between high and low probability of success designs, the decision-maker can make better informed decisions as to what designs are most promising and determine the optimal balance of risk, cost, and performance. Finally, the thesis examines the use of non-dimensionalization of parameters for torpedo design. The thesis shows that the use of non-dimensional torpedo parameters leads to increased knowledge about the scalability of torpedo systems and increased performance of Designs of Experiments.
The Mid-Atlantic Integrated Assessment (MAIA-Estuaries) evaluated ecological conditions in US Mid-Atlantic estuaries during the summers of 1997 and 1998. Over 800 probability-based stations were monitored in four main estuarine systems: Chesapeake Bay, the Delaware Estuary, Maryla...
Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits
NASA Astrophysics Data System (ADS)
Hoogland, Jiri; Kleiss, Ronald
1997-04-01
In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
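A toy comparison of the two error behaviors can be made with a one-dimensional test integrand and a base-2 van der Corput point set; this is only an illustration of the general claim, not the ensemble construction analyzed in the paper.

```python
import random

def van_der_corput(n, base=2):
    """First n points of the base-2 van der Corput low-discrepancy sequence."""
    pts = []
    for i in range(1, n + 1):
        x, denom, k = 0.0, 1.0, i
        while k:
            denom *= base
            k, r = divmod(k, base)
            x += r / denom
        pts.append(x)
    return pts

f = lambda x: x * x            # test integrand; exact integral over [0, 1] is 1/3
n = 4096
mc = sum(f(random.random()) for _ in range(n)) / n       # classical Monte Carlo
qmc = sum(f(x) for x in van_der_corput(n)) / n           # quasi-Monte Carlo
print(abs(mc - 1 / 3), abs(qmc - 1 / 3))   # the quasi-random error is typically much smaller
```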
Dealing with non-unique and non-monotonic response in particle sizing instruments
NASA Astrophysics Data System (ADS)
Rosenberg, Phil
2017-04-01
A number of instruments used as de-facto standards for measuring particle size distributions are actually incapable of uniquely determining the size of an individual particle. This is due to non-unique or non-monotonic response functions. Optical particle counters have non-monotonic response due to oscillations in the Mie response curves, especially for large aerosol and small cloud droplets. Scanning mobility particle sizers respond identically to two particles where the ratio of particle size to particle charge is approximately the same. Images of two differently sized cloud or precipitation particles taken by an optical array probe can have similar dimensions or shadowed area depending upon where they are in the imaging plane. A number of methods exist to deal with these issues, including assuming that positive and negative errors cancel, smoothing response curves, integrating regions in measurement space before conversion to size space and matrix inversion. Matrix inversion (also called kernel inversion) has the advantage that it determines the size distribution which best matches the observations, given specific information about the instrument (a matrix which specifies the probability that a particle of a given size will be measured in a given instrument size bin). In this way it maximises use of the information in the measurements. However this technique can be confused by poor counting statistics which can cause erroneous results and negative concentrations. Also an effective method for propagating uncertainties is yet to be published or routinely implemented. Here we present a new alternative which overcomes these issues. We use Bayesian methods to determine the probability that a given size distribution is correct given a set of instrument data and then we use Markov Chain Monte Carlo methods to sample this many-dimensional probability distribution function to determine the expectation and (co)variances, hence providing a best guess and an uncertainty for the size distribution that includes contributions from the non-unique response curve and counting statistics and can propagate calibration uncertainties.
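A minimal sketch of the Bayesian/MCMC inversion idea described above, using an invented 2x2 instrument kernel and Poisson counting statistics; real instrument kernels, priors, and dimensionalities differ.

```python
import math, random

# Hypothetical instrument kernel K[i][j]: probability that a particle from true
# size class j is counted in instrument bin i (columns sum to 1).
K = [[0.7, 0.4],
     [0.3, 0.6]]
counts = [90, 60]            # observed counts per instrument bin (made up)

def log_post(n):             # Poisson counting likelihood, flat prior on n >= 0
    if min(n) < 0:
        return -math.inf
    lp = 0.0
    for i, c in enumerate(counts):
        lam = sum(K[i][j] * n[j] for j in range(2))
        lp += c * math.log(lam) - lam
    return lp

# Random-walk Metropolis over the two true-size concentrations.
n, samples = [75.0, 75.0], []
lp = log_post(n)
for step in range(20000):
    prop = [x + random.gauss(0, 5.0) for x in n]
    lp_prop = log_post(prop)
    if math.log(1.0 - random.random()) < lp_prop - lp:
        n, lp = prop, lp_prop
    if step > 5000:          # discard burn-in
        samples.append(list(n))

means = [sum(s[j] for s in samples) / len(samples) for j in range(2)]
print(means)                 # posterior-mean size distribution; spread of `samples` gives the uncertainty
```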
NASA Astrophysics Data System (ADS)
Piro, Salvatore; Papale, Enrico; Kucukdemirci, Melda; Zamuner, Daniela
2017-04-01
Non-destructive ground surface geophysical prospecting methods are frequently used for the investigation of archaeological sites, where a detailed physical and geometrical reconstruction of hidden volumes is required prior to any excavation work. Each method measures the variation of a single physical parameter, so methods used singularly cannot provide a complete location and characterization of anomalous bodies. The probability of a successful result rapidly increases if a multimethodological approach is adopted, according to the logic of objective complementarity of information and of global convergence toward a high-quality multiparametric imaging of the buried structures. The representation of the static configuration of the bodies in the subsoil and of the space-time evolution of the interaction processes between targets and hosting materials have to be considered fundamental elements of primary knowledge in archaeological prospecting. The main effort in geophysical prospecting for archaeology is therefore the integration of different, absolutely non-invasive techniques, especially if managed in view of an ultra-high resolution three-dimensional (3D) tomographic representation mode. Following the above outlined approach, we have integrated geophysical methods which measure the variations of a potential field (gradiometric methods) with active methods which measure the variations of physical properties due to the body's geometry and volume (GPR and ERT). In this work, the results obtained during the surveys of three archaeological sites, employing Ground Penetrating Radar (GPR), Electrical Resistivity Tomography (ERT) and Fluxgate Differential Magnetic (FDM) methods to obtain precise and detailed maps of subsurface bodies, are presented and discussed. The first site, situated in a suburban area between Itri and Fondi, in the Aurunci Natural Regional Park (Central Italy), is characterized by the presence of remains of past human activity dating from the third century B.C. The second site, also in a suburban area, is part of the acropolis of the ancient Etruscan town of Cerveteri (central Italy). The third site is part of the Aizanoi archaeological park (Cavdarhisar, Kutahya, Turkey). To gain a better understanding of the subsurface, we applied different integrated approaches to these data, fusing the data from all the employed methods to obtain a complete visualization of the investigated area. For the processing we used the following techniques: graphical integration (overlay and RGB colour composite), discrete data analysis (binary data analysis and cluster analysis) and continuous data analysis (data sum, product, max, min and PCA).
Determining the authenticity of athlete urine in doping control by DNA analysis.
Devesse, Laurence; Syndercombe Court, Denise; Cowan, David
2015-10-01
The integrity of urine samples collected from athletes for doping control is essential. The authenticity of samples may be contested, leading to the need for a robust sample identification method. DNA typing using short tandem repeats (STR) can be used for identification purposes, but its application to cellular DNA in urine has so far been limited. Here, a reliable and accurate method is reported for the successful identification of urine samples, using reduced final extraction volumes and the STR multiplex kit, Promega® PowerPlex ESI 17, with capillary electrophoretic characterisation of the alleles. Full DNA profiles were obtained for all samples (n = 20) stored for less than 2 days at 4 °C. The effect of different storage conditions on yield of cellular DNA and probability of obtaining a full profile were also investigated. Storage for 21 days at 4 °C resulted in allelic drop-out in some samples, but the random match probabilities obtained demonstrate the high power of discrimination achieved through targeting a large number of STRs. The best solution for long-term storage was centrifugation and removal of supernatant prior to freezing at -20 °C. The method is robust enough for incorporation into current anti-doping protocols, and was successfully applied to 44 athlete samples for anti-doping testing with 100% concordant typing. Copyright © 2015 John Wiley & Sons, Ltd.
Faith, Daniel P
2008-12-01
New species conservation strategies, including the EDGE of Existence (EDGE) program, have expanded threatened species assessments by integrating information about species' phylogenetic distinctiveness. Distinctiveness has been measured through simple scores that assign shared credit among species for evolutionary heritage represented by the deeper phylogenetic branches. A species with a high score combined with a high extinction probability receives high priority for conservation efforts. Simple hypothetical scenarios for phylogenetic trees and extinction probabilities demonstrate how such scoring approaches can provide inefficient priorities for conservation. An existing probabilistic framework derived from the phylogenetic diversity measure (PD) properly captures the idea of shared responsibility for the persistence of evolutionary history. It avoids static scores, takes into account the status of close relatives through their extinction probabilities, and allows for the necessary updating of priorities in light of changes in species threat status. A hypothetical phylogenetic tree illustrates how changes in extinction probabilities of one or more species translate into changes in expected PD. The probabilistic PD framework provided a range of strategies that moved beyond expected PD to better consider worst-case PD losses. In another example, risk aversion gave higher priority to a conservation program that provided a smaller, but less risky, gain in expected PD. The EDGE program could continue to promote a list of top species conservation priorities through application of probabilistic PD and simple estimates of current extinction probability. The list might be a dynamic one, with all the priority scores updated as extinction probabilities change. Results of recent studies suggest that estimation of extinction probabilities derived from the red list criteria linked to changes in species range sizes may provide estimated probabilities for many different species. Probabilistic PD provides a framework for single-species assessment that is well-integrated with a broader measurement of impacts on PD owing to climate change and other factors.
The propagation of Lamb waves in multilayered plates: phase-velocity measurement
NASA Astrophysics Data System (ADS)
Grondel, Sébastien; Assaad, Jamal; Delebarre, Christophe; Blanquet, Pierrick; Moulin, Emmanuel
1999-05-01
Owing to the dispersive nature and complexity of the Lamb waves generated in a composite plate, the measurement of the phase velocities by using classical methods is complicated. This paper describes a measurement method based upon the spectrum-analysis technique, which allows one to overcome these problems. The technique consists of using the fast Fourier transform to compute the spatial power-density spectrum. Additionally, weighted functions are used to increase the probability of detecting the various propagation modes. Experimental Lamb-wave dispersion curves of multilayered plates are successfully compared with the analytical ones. This technique is expected to be a useful way to design composite parts integrating ultrasonic transducers in the field of health monitoring. Indeed, Lamb waves and particularly their velocities are very sensitive to defects.
Manktelow, Bradley N.; Seaton, Sarah E.
2012-01-01
Background: Emphasis is increasingly being placed on the monitoring and comparison of clinical outcomes between healthcare providers. Funnel plots have become a standard graphical methodology to identify outliers and comprise plotting an outcome summary statistic from each provider against a specified ‘target’ together with upper and lower control limits. With discrete probability distributions it is not possible to specify the exact probability that an observation from an ‘in-control’ provider will fall outside the control limits. However, general probability characteristics can be set and specified using interpolation methods. Guidelines recommend that providers falling outside such control limits should be investigated, potentially with significant consequences, so it is important that the properties of the limits are understood. Methods: Control limits for funnel plots for the Standardised Mortality Ratio (SMR) based on the Poisson distribution were calculated using three proposed interpolation methods and the probability calculated of an ‘in-control’ provider falling outside of the limits. Examples using published data were shown to demonstrate the potential differences in the identification of outliers. Results: The first interpolation method ensured that the probability of an observation of an ‘in-control’ provider falling outside either limit was always less than a specified nominal probability (p). The second method resulted in such an observation falling outside either limit with a probability that could be either greater or less than p, depending on the expected number of events. The third method led to a probability that was always greater than, or equal to, p. Conclusion: The use of different interpolation methods can lead to differences in the identification of outliers. This is particularly important when the expected number of events is small. We recommend that users of these methods be aware of the differences, and specify which interpolation method is to be used prior to any analysis. PMID:23029202
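One plausible interpolation of the Poisson-based limits can be sketched as follows; this is an illustrative choice and not necessarily identical to any of the three methods compared in the paper.

```python
from scipy.stats import poisson

def smr_limits(expected, p=0.025):
    """Upper and lower funnel-plot limits on the SMR scale when `expected` events
    are expected under the target, using one possible linear interpolation of the
    Poisson CDF between adjacent integer counts (illustrative only)."""
    # Lower limit: interpolated count y_lo whose interpolated CDF equals p.
    k = poisson.ppf(p, expected)                    # smallest integer k with F(k) >= p
    f_km1, f_k = poisson.cdf(k - 1, expected), poisson.cdf(k, expected)
    y_lo = (k - 1) + (p - f_km1) / (f_k - f_km1) if f_k > f_km1 else k
    # Upper limit: interpolated count y_hi whose interpolated CDF equals 1 - p.
    k = poisson.ppf(1 - p, expected)
    f_km1, f_k = poisson.cdf(k - 1, expected), poisson.cdf(k, expected)
    y_hi = (k - 1) + ((1 - p) - f_km1) / (f_k - f_km1) if f_k > f_km1 else k
    return y_lo / expected, y_hi / expected         # express limits on the SMR scale

print(smr_limits(expected=20))   # lower and upper SMR limits for nominal 2.5% tails
```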
A Partially-Stirred Batch Reactor Model for Under-Ventilated Fire Dynamics
NASA Astrophysics Data System (ADS)
McDermott, Randall; Weinschenk, Craig
2013-11-01
A simple discrete quadrature method is developed for closure of the mean chemical source term in large-eddy simulations (LES) and implemented in the publicly available fire model, Fire Dynamics Simulator (FDS). The method is cast as a partially-stirred batch reactor model for each computational cell. The model has three distinct components: (1) a subgrid mixing environment, (2) a mixing model, and (3) a set of chemical rate laws. The subgrid probability density function (PDF) is described by a linear combination of Dirac delta functions with quadrature weights set to satisfy simple integral constraints for the computational cell. It is shown that under certain limiting assumptions, the present method reduces to the eddy dissipation concept (EDC). The model is used to predict carbon monoxide concentrations in direct numerical simulation (DNS) of a methane slot burner and in LES of an under-ventilated compartment fire.
Automated Monitoring with a BSP Fault-Detection Test
NASA Technical Reports Server (NTRS)
Bickford, Randall L.; Herzog, James P.
2003-01-01
The figure schematically illustrates a method and procedure for automated monitoring of an asset, as well as a hardware-and-software system that implements the method and procedure. As used here, asset could signify an industrial process, power plant, medical instrument, aircraft, or any of a variety of other systems that generate electronic signals (e.g., sensor outputs). In automated monitoring, the signals are digitized and then processed in order to detect faults and otherwise monitor operational status and integrity of the monitored asset. The major distinguishing feature of the present method is that the fault-detection function is implemented by use of a Bayesian sequential probability (BSP) technique. This technique is superior to other techniques for automated monitoring because it affords sensitivity, not only to disturbances in the mean values, but also to very subtle changes in the statistical characteristics (variance, skewness, and bias) of the monitored signals.
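The BSP technique itself is specific to the system described, but the underlying sequential-probability idea can be illustrated with a classical Wald sequential probability ratio test for a mean shift in a Gaussian signal; thresholds and parameters below are hypothetical.

```python
import math, random

def sprt_mean_shift(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Classical Wald SPRT for H0: mean = mu0 vs H1: mean = mu1; returns a decision
    ('fault', 'normal' or 'undecided') as a stand-in sketch for the BSP idea."""
    upper, lower = math.log((1 - beta) / alpha), math.log(beta / (1 - alpha))
    llr = 0.0
    for x in samples:
        # log-likelihood ratio increment for one Gaussian observation
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma**2
        if llr >= upper:
            return "fault"
        if llr <= lower:
            return "normal"
    return "undecided"

healthy = [random.gauss(0.0, 1.0) for _ in range(200)]
drifted = [random.gauss(0.8, 1.0) for _ in range(200)]
print(sprt_mean_shift(healthy), sprt_mean_shift(drifted))
```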
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Trofimov, Vladislav V.; Tikhomirov, Vasily V.
2015-08-01
Principal limitations of the standard THz-TDS method for detection and identification are demonstrated under real conditions (at a long distance of about 3.5 m and at a high relative humidity of more than 50%) using neutral substances: a thick paper bag, paper napkins, and chocolate. We also show that the THz-TDS method detects spectral features of dangerous substances even if the THz signals were measured in laboratory conditions (at a distance of 30-40 cm from the receiver and at a low relative humidity of less than 2%); silicon-based semiconductors were used as the samples. However, the integral correlation criteria, based on the SDA method, allow us to detect the absence of dangerous substances in the neutral substances. The discussed algorithm shows a high probability of substance identification and reliable practical realization, especially for security applications and non-destructive testing.
Hammerslough, C R
1992-01-01
An integrated approach to estimate the total number of pregnancies that begin in a population during one calendar year and the probability of spontaneous abortion is described. This includes an indirect estimate of the number of pregnancies that result in spontaneous abortions. The method simultaneously takes into account the proportion of induced abortions that are censored by spontaneous abortions and vice versa in order to estimate the true annual number of spontaneous and induced abortions for a population. It also estimates the proportion of pregnancies that women intended to allow to continue to a live birth. The proposed indirect approach derives adjustment factors to make indirect estimates by combining vital statistics information on gestational age at induced abortion (from the 12 States that report to the National Center for Health Statistics) with a life table of spontaneous abortion probabilities. The adjustment factors are applied to data on induced abortions from the Alan Guttmacher Institute Abortion Provider Survey and data on births from U.S. vital statistics. For the United States in 1980 the probability of a spontaneous abortion is 19 percent, given the presence of induced abortion. Once the effects of spontaneous abortion are discounted, women in 1980 intended to allow 73 percent of their pregnancies to proceed to a live birth. One medical benefit to a population practicing induced abortion is that induced abortions avert some spontaneous abortions, leading to a lower mean gestational duration at the time of spontaneous abortion. PMID:1594736
Allem, Jon-Patrick; Soto, Daniel W.; Baezconde-Garbanati, Lourdes; Unger, Jennifer B.
2015-01-01
Introduction Emerging adults who experienced stressful childhoods may engage in substance use as a maladaptive coping strategy. Given the collectivistic values Hispanics encounter growing up, adverse childhood experiences may play a prominent role in substance use decisions as these events violate the assumptions of group oriented cultural paradigms. Alternatively, adverse childhood events might not increase the risk of substance use because strong family ties could mitigate the potential maladaptive behaviors associated with these adverse experiences. This study examined whether adverse childhood experiences were associated with substance use among Hispanic emerging adults. Method Participants (n=1420, mean age=22, 41% male) completed surveys indicating whether they experienced any of 8 specific adverse experiences within their first 18 years of life, and past-month cigarette use, marijuana use, hard drug use, and binge drinking. Logistic regression models examined the associations between adverse childhood experiences and each category of substance use, controlling for age, gender, and depressive symptoms. Results The number of adverse childhood experiences was significantly associated with each category of substance use. A difference in the number of adverse childhood experiences, from 0 to 8, was associated with a 22% higher probability of cigarette smoking, a 24% higher probability of binge drinking, a 31% higher probability of marijuana use, and a 12% higher probability of hard drug use respectively. Conclusions These findings should be integrated into prevention/intervention programs in hopes of quelling the duration and severity of substance use behaviors among Hispanic emerging adults. PMID:26160522
Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions.
Salis, Howard; Kaznessis, Yiannis
2005-02-01
The dynamical solution of a well-mixed, nonlinear stochastic chemical kinetic system, described by the Master equation, may be exactly computed using the stochastic simulation algorithm. However, because the computational cost scales with the number of reaction occurrences, systems with one or more "fast" reactions become costly to simulate. This paper describes a hybrid stochastic method that partitions the system into subsets of fast and slow reactions, approximates the fast reactions as a continuous Markov process, using a chemical Langevin equation, and accurately describes the slow dynamics using the integral form of the "Next Reaction" variant of the stochastic simulation algorithm. The key innovation of this method is its mechanism of efficiently monitoring the occurrences of slow, discrete events while simultaneously simulating the dynamics of a continuous, stochastic or deterministic process. In addition, by introducing an approximation in which multiple slow reactions may occur within a time step of the numerical integration of the chemical Langevin equation, the hybrid stochastic method performs much faster with only a marginal decrease in accuracy. Multiple examples, including a biological pulse generator and a large-scale system benchmark, are simulated using the exact and proposed hybrid methods as well as, for comparison, a previous hybrid stochastic method. Probability distributions of the solutions are compared and the weak errors of the first two moments are computed. In general, these hybrid methods may be applied to the simulation of the dynamics of a system described by stochastic differential, ordinary differential, and Master equations.
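For context, the exact stochastic simulation algorithm that the hybrid method builds on can be sketched in a few lines; the toy below is Gillespie's direct method for a reversible isomerization A <-> B and does not reproduce the paper's Langevin/Next-Reaction partitioning.

```python
import math, random

# Gillespie's direct SSA for a toy system A <-> B (the exact method that the
# hybrid scheme reserves for the "slow" reaction subset).
def ssa(a0=100, b0=0, k_ab=1.0, k_ba=0.5, t_end=10.0):
    t, a, b, trajectory = 0.0, a0, b0, [(0.0, a0, b0)]
    while t < t_end:
        props = [k_ab * a, k_ba * b]                   # reaction propensities
        total = sum(props)
        if total == 0:
            break
        t += -math.log(1.0 - random.random()) / total  # exponential waiting time
        if random.random() * total < props[0]:         # choose which reaction fires
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
        trajectory.append((t, a, b))
    return trajectory

print(ssa()[-1])   # final (time, A, B); A fluctuates around N*k_ba/(k_ab + k_ba), i.e. about 33 of 100
```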
Local region power spectrum-based unfocused ship detection method in synthetic aperture radar images
NASA Astrophysics Data System (ADS)
Wei, Xiangfei; Wang, Xiaoqing; Chong, Jinsong
2018-01-01
Ships on synthetic aperture radar (SAR) images will be severely defocused and their energy will disperse into numerous resolution cells under long SAR integration time. Therefore, the image intensity of ships is weak and sometimes even overwhelmed by sea clutter on SAR image. Consequently, it is hard to detect the ships from SAR intensity images. A ship detection method based on local region power spectrum of SAR complex image is proposed. Although the energies of the ships are dispersed on SAR intensity images, their spectral energies are rather concentrated or will cause the power spectra of local areas of SAR images to deviate from that of sea surface background. Therefore, the key idea of the proposed method is to detect ships via the power spectra distortion of local areas of SAR images. The local region power spectrum of a moving target on SAR image is analyzed and the way to obtain the detection threshold through the probability density function (pdf) of the power spectrum is illustrated. Numerical P- and L-band airborne SAR ocean data are utilized and the detection results are also illustrated. Results show that the proposed method can well detect the unfocused ships, with a detection rate of 93.6% and a false-alarm rate of 8.6%. Moreover, by comparing with some other algorithms, it indicates that the proposed method performs better under long SAR integration time. Finally, the applicability of the proposed method and the way of parameters selection are also discussed.
Effects of Spatial and Selective Attention on Basic Multisensory Integration
ERIC Educational Resources Information Center
Gondan, Matthias; Blurton, Steven P.; Hughes, Flavia; Greenlee, Mark W.
2011-01-01
When participants respond to auditory and visual stimuli, responses to audiovisual stimuli are substantially faster than to unimodal stimuli (redundant signals effect, RSE). In such tasks, the RSE is usually higher than probability summation predicts, suggestive of specific integration mechanisms underlying the RSE. We investigated the role of…
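The probability-summation benchmark mentioned here has a simple closed form under an independent race: P_AV = P_A + P_V - P_A * P_V. A toy check with made-up detection probabilities (not values from the study):

```python
# Independent-race (probability summation) prediction for redundant audiovisual
# stimuli; the numbers are illustrative, not from the study.
def prob_summation(p_a, p_v):
    return p_a + p_v - p_a * p_v

p_a, p_v, p_av_observed = 0.60, 0.55, 0.90
predicted = prob_summation(p_a, p_v)            # 0.82
print(predicted, p_av_observed > predicted)     # observed > prediction hints at genuine integration
```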
NASA Astrophysics Data System (ADS)
Barón-Aznar, C.; Moreno-Jiménez, S.; Celis, M. A.; Lárraga-Gutiérrez, J. M.; Ballesteros-Zebadúa, P.
2008-08-01
Integrated dose is the total energy delivered in a radiotherapy target. This physical parameter could be a predictor for complications such as brain edema and radionecrosis after stereotactic radiotherapy treatments for brain tumors. Integrated dose depends on the tissue density and volume. Using CT patient images from the National Institute of Neurology and Neurosurgery and BrainScan software, this work presents the mean density of 21 glioblastoma multiforme tumors, comparative results for normal tissue, and the estimated integrated dose for each case. The relationship between integrated dose and the probability of complications is discussed.
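Because integrated dose is mean dose times irradiated mass, it follows directly from density and volume; the numbers below are purely illustrative and not taken from the reported cases.

```python
# Integrated dose = mean dose x mass = mean dose x density x volume.
# Illustrative numbers only (not from the reported glioblastoma cases).
dose_gray = 14.0                 # mean dose to the target, Gy (J/kg)
density_kg_m3 = 1040.0           # assumed soft-tissue density
volume_m3 = 25e-6                # 25 cm^3 target volume
integrated_dose_joules = dose_gray * density_kg_m3 * volume_m3
print(integrated_dose_joules)    # ~0.36 J delivered to the target
```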
2010-09-01
A probability of hit (PHit) methodology has been developed to characterize the overall performance ... CFB (Canadian Forces Base) ... the crew commander and gunner from their respective crew stations inside the vehicle.
Probability Elicitation Under Severe Time Pressure: A Rank-Based Method.
Jaspersen, Johannes G; Montibeller, Gilberto
2015-07-01
Probability elicitation protocols are used to assess and incorporate subjective probabilities in risk and decision analysis. While most of these protocols use methods that have focused on the precision of the elicited probabilities, the speed of the elicitation process has often been neglected. However, speed is also important, particularly when experts need to examine a large number of events on a recurrent basis. Furthermore, most existing elicitation methods are numerical in nature, but there are various reasons why an expert would refuse to give such precise ratio-scale estimates, even if highly numerate. This may occur, for instance, when there is a lack of sufficient hard evidence, when assessing very uncertain events (such as emergent threats), or when dealing with politicized topics (such as terrorism or disease outbreaks). In this article, we adopt an ordinal ranking approach from multicriteria decision analysis to provide a fast and nonnumerical probability elicitation process. Probabilities are subsequently approximated from the ranking by an algorithm based on the principle of maximum entropy, a rule compatible with the ordinal information provided by the expert. The method can elicit probabilities for a wide range of different event types, including new ways of eliciting probabilities for stochastically independent events and low-probability events. We use a Monte Carlo simulation to test the accuracy of the approximated probabilities and try the method in practice, applying it to a real-world risk analysis recently conducted for DEFRA (the U.K. Department for Environment, Food and Rural Affairs): the prioritization of animal health threats. © 2015 Society for Risk Analysis.
NASA Technical Reports Server (NTRS)
Simon, M.; Mileant, A.
1986-01-01
The steady-state behavior of a particular type of digital phase-locked loop (DPLL) with an integrate-and-dump circuit following the phase detector is characterized in terms of the probability density function (pdf) of the phase error in the loop. Although the loop is entirely digital from an implementation standpoint, it operates at two extremely different sampling rates. In particular, the combination of a phase detector and an integrate-and-dump circuit operates at a very high rate whereas the loop update rate is very slow by comparison. Because of this dichotomy, the loop can be analyzed by hybrid analog/digital (s/z domain) techniques. The loop is modeled in such a general fashion that previous analyses of the Real-Time Combiner (RTC), Subcarrier Demodulator Assembly (SDA), and Symbol Synchronization Assembly (SSA) fall out as special cases.
Integrated system for gathering, processing, and reporting data relating to site contamination
Long, Delmar D.; Goldberg, Mitchell S.; Baker, Lorie A.
1997-01-01
An integrated screening system comprises an intrusive sampling subsystem, a field mobile laboratory subsystem, a computer assisted design/geographical information subsystem, and a telecommunication linkup subsystem, all integrated to provide synergistically improved data relating to the extent of site soil/groundwater contamination. According to the present invention, data samples related to the soil, groundwater or other contamination of the subsurface material are gathered and analyzed to measure contaminants. Based on the location of origin of the samples in three-dimensional space, the analyzed data are transmitted to a location display. The data from analyzing samples and the data from the locating the origin are managed to project the next probable sample location. The next probable sample location is then forwarded for use as a guide in the placement of ensuing sample location, whereby the number of samples needed to accurately characterize the site is minimized.
Integrated Data Analysis for Fusion: A Bayesian Tutorial for Fusion Diagnosticians
NASA Astrophysics Data System (ADS)
Dinklage, Andreas; Dreier, Heiko; Fischer, Rainer; Gori, Silvio; Preuss, Roland; Toussaint, Udo von
2008-03-01
Integrated Data Analysis (IDA) offers a unified way of combining information relevant to fusion experiments. Thereby, IDA addresses typical issues arising in fusion data analysis. In IDA, all information is consistently formulated as probability density functions quantifying uncertainties in the analysis within the Bayesian probability theory. For a single diagnostic, IDA allows the identification of faulty measurements and improvements in the setup. For a set of diagnostics, IDA gives joint error distributions allowing the comparison and integration of results from different diagnostics. Validation of physics models can be performed by model comparison techniques. Typical data analysis applications benefit from IDA capabilities of nonlinear error propagation, the inclusion of systematic effects and the comparison of different physics models. Applications range from outlier detection and background discrimination to model assessment and the design of diagnostics. In order to cope with next-step fusion device requirements, appropriate techniques are explored for fast analysis applications.
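As a toy illustration of the joint-analysis idea (not the IDA framework itself), two independent Gaussian measurements of the same quantity combine by inverse-variance weighting:

```python
# Toy Bayesian combination of two diagnostics measuring the same quantity with
# independent Gaussian errors: the joint posterior is again Gaussian, with
# inverse-variance weighting. Values are illustrative only.
def combine(value1, sigma1, value2, sigma2):
    w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2
    mean = (w1 * value1 + w2 * value2) / (w1 + w2)
    sigma = (w1 + w2) ** -0.5
    return mean, sigma

print(combine(3.2, 0.4, 2.9, 0.2))   # the combined estimate is pulled toward the more precise diagnostic
```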
NASA Technical Reports Server (NTRS)
Howard, R. A.; North, D. W.; Pezier, J. P.
1975-01-01
A new methodology is proposed for integrating planetary quarantine objectives into space exploration planning. This methodology is designed to remedy the major weaknesses inherent in the current formulation of planetary quarantine requirements. Application of the methodology is illustrated by a tutorial analysis of a proposed Jupiter Orbiter mission. The proposed methodology reformulates planetary quarantine planning as a sequential decision problem. Rather than concentrating on a nominal plan, all decision alternatives and possible consequences are laid out in a decision tree. Probabilities and values are associated with the outcomes, including the outcome of contamination. The process of allocating probabilities, which could not be made perfectly unambiguous and systematic, is replaced by decomposition and optimization techniques based on principles of dynamic programming. Thus, the new methodology provides logical integration of all available information and allows selection of the best strategy consistent with quarantine and other space exploration goals.
Quantum stochastic walks on networks for decision-making.
Martínez-Martínez, Ismael; Sánchez-Burillo, Eduardo
2016-03-31
Recent experiments report violations of the classical law of total probability and incompatibility of certain mental representations when humans process and react to information. Evidence shows promise of a more general quantum theory providing a better explanation of the dynamics and structure of real decision-making processes than classical probability theory. Inspired by this, we show how the behavioral choice-probabilities can arise as the unique stationary distribution of quantum stochastic walkers on the classical network defined from Luce's response probabilities. This work is relevant because (i) we provide a very general framework integrating the positive characteristics of both quantum and classical approaches previously in confrontation, and (ii) we define a cognitive network which can be used to bring other connectivist approaches to decision-making into the quantum stochastic realm. We model the decision-maker as an open system in contact with her surrounding environment, and the time-length of the decision-making process reveals to be also a measure of the process' degree of interplay between the unitary and irreversible dynamics. Implementing quantum coherence on classical networks may be a door to better integrate human-like reasoning biases in stochastic models for decision-making.
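The classical limit of this construction, a random walk whose transitions are given by Luce's response probabilities, can be sketched directly; the quantum stochastic walk itself requires a Lindblad-type evolution not shown here, and the option weights below are invented.

```python
import numpy as np

# Luce response probabilities from (made-up) option weights, used as the column-
# stochastic transition matrix of a classical walker; its stationary distribution
# is the classical counterpart of the choice probabilities discussed in the paper.
weights = np.array([3.0, 2.0, 1.0])
luce = weights / weights.sum()                 # Luce's choice rule
T = np.tile(luce[:, None], (1, len(luce)))     # from any state, jump to option i with prob luce[i]

eigvals, eigvecs = np.linalg.eig(T)
stat = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
stat = stat / stat.sum()
print(stat)                                    # recovers the Luce probabilities [0.5, 0.333, 0.167]
```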
Xu, Wenwu; Zhang, Peiyu
2013-02-21
A time-dependent quantum wave packet method is used to investigate the dynamics of the He + HeH⁺(X¹Σ⁺) reaction based on a new potential energy surface [Liang et al., J. Chem. Phys. 2012, 136, 094307]. The coupled channel (CC) and centrifugal-sudden (CS) reaction probabilities as well as the total integral cross sections are calculated. A comparison of the results with and without Coriolis coupling revealed that the number of K states N(K) (K is the projection of the total angular momentum J on the body-fixed z axis) significantly influences the reaction threshold. The effective potential energy profiles of each N(K) for the He + HeH⁺ reaction in a collinear geometry indicate that the barrier height gradually decreased with increased N(K). The calculated time evolution of CC and CS probability density distribution over the collision energy of 0.27-0.36 eV at total angular momentum J = 50 clearly suggests a lower reaction threshold of CC probabilities. The CC cross sections are larger than the CS results within the entire energy range, demonstrating that the Coriolis coupling effect can effectively promote the He + HeH⁺ reaction.
Bayesian multiple-source localization in an uncertain ocean environment.
Dosso, Stan E; Wilmut, Michael J
2011-06-01
This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America
Collision probability at low altitudes resulting from elliptical orbits
NASA Technical Reports Server (NTRS)
Kessler, Donald J.
1990-01-01
The probability of collision between a spacecraft and another object is calculated for various altitude and orbit conditions, and factors affecting the probability are discussed. It is shown that a collision can only occur when the spacecraft is located at an altitude which is between the perigee and apogee altitudes of the object and that the probability per unit time is largest when the orbit of the object is nearly circular. However, at low altitudes, the atmospheric drag causes changes with time of the perigee and the apogee, such that circular orbits have a much shorter lifetime than many of the elliptical orbits. Thus, when the collision probability is integrated over the lifetime of the orbiting object, some elliptical orbits are found to have much higher total collision probability than circular orbits. Rocket bodies used to boost payloads from low earth orbit to geosynchronous orbit are an example of objects in these elliptical orbits.
Probability Quantization for Multiplication-Free Binary Arithmetic Coding
NASA Technical Reports Server (NTRS)
Cheung, K. -M.
1995-01-01
A method has been developed to improve on Witten's binary arithmetic coding procedure of tracking a high value and a low value. The new method approximates the probability of the less probable symbol, which improves the worst-case coding efficiency.
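To illustrate one classic way such an approximation can remove multiplications (not necessarily the quantization rule of this report), the sketch below rounds the less-probable-symbol probability to the nearest dyadic fraction, so the coder's range update R_LPS = R * p_LPS becomes a bit shift R >> k. The function name and values are purely illustrative.

```python
def dyadic_approximation(p_lps, max_shift=8):
    """Approximate the less-probable-symbol probability by a dyadic fraction.

    Returns (k, approx) with approx = 2**-k closest to p_lps; with such an
    approximation the arithmetic-coder range update R_LPS = R * p_lps reduces
    to a bit shift R >> k, removing the multiplication. This is a sketch of
    the general idea only, not the quantization rule of the report.
    """
    best = min(range(1, max_shift + 1), key=lambda k: abs(2.0**-k - p_lps))
    return best, 2.0**-best

for p in (0.4, 0.2, 0.07, 0.01):
    k, approx = dyadic_approximation(p)
    print(p, "->", f"2^-{k} = {approx}")
```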
Weir, Scott M; Scott, David E; Salice, Christopher J; Lance, Stacey L
2016-09-01
Chemical contamination is often suggested as an important contributing factor to amphibian population declines, but direct links are rarely reported. Population modeling provides a quantitative method to integrate toxicity data with demographic data to understand the long-term effects of contaminants on population persistence. In this study we use laboratory-derived embryo and larval toxicity data for two anuran species to investigate the potential for toxicity to contribute to population declines. We use the southern toad (Anaxyrus terrestris) and the southern leopard frog (Lithobates sphenocephalus) as model species to investigate copper (Cu) toxicity. We use matrix models to project populations through time and quantify extinction risk (the probability of quasi-extinction in 35 yr). Life-history parameters for toads and frogs were obtained from previously published literature or unpublished data from a long-term (>35 yr) data set. In addition to Cu toxicity, we investigate the role of climate change on amphibian populations by including the probability of early pond drying that results in catastrophic reproductive failure (CRF, i.e., complete mortality of all larval individuals). Our models indicate that CRF is an important parameter for both species as both were unable to persist when CRF probability was >50% for toads or 40% for frogs. Copper toxicity alone did not result in significant effects on extinction risk unless toxicity was very high (>50% reduction in survival parameters). For toads, Cu toxicity and high probability of CRF both resulted in high extinction risk but no synergistic (or greater than additive) effects between the two stressors occurred. For leopard frogs, in the absence of CRF survival was high even under Cu toxicity, but with CRF Cu toxicity increased extinction risk. Our analyses highlight the importance of considering multiple stressors as well as species differences in response to those stressors. Our models were consistently most sensitive to juvenile and adult survival, further suggesting the importance of terrestrial stages to population persistence. Future models will incorporate multiple wetlands with different combinations of stressors to understand if our results for a single wetland result in a population sink within the landscape. © 2016 by the Ecological Society of America.
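A minimal sketch of how quasi-extinction risk of the kind described above can be estimated by Monte Carlo projection of a stage-structured model with catastrophic reproductive failure (CRF) and a copper multiplier on early survival. All stages, vital rates, and thresholds below are hypothetical placeholders, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def quasi_extinction_risk(p_crf=0.3, cu_survival_factor=1.0, n_sims=2000,
                          years=35, threshold=2.0):
    """Monte Carlo quasi-extinction risk for a simple 3-stage anuran model.

    Stages: metamorphs, juveniles, adults. All vital rates are hypothetical.
    p_crf              -- annual probability of catastrophic reproductive failure
    cu_survival_factor -- multiplier (<1) on embryo/larval survival due to Cu
    threshold          -- quasi-extinction threshold on adult abundance
    """
    fecundity, s_larv, s_meta, s_juv, s_adult = 600.0, 0.02, 0.3, 0.25, 0.45
    extinct = 0
    for _ in range(n_sims):
        n = np.array([500.0, 50.0, 20.0])        # initial stage abundances
        for _ in range(years):
            crf = rng.random() < p_crf           # early pond drying this year?
            recruits = 0.0 if crf else fecundity * s_larv * cu_survival_factor * n[2]
            n = np.array([recruits,              # new metamorphs
                          s_meta * n[0],         # metamorphs -> juveniles
                          s_juv * n[1] + s_adult * n[2]])  # new + surviving adults
            if n[2] < threshold:
                extinct += 1
                break
    return extinct / n_sims

# Low versus high probability of catastrophic reproductive failure
print(quasi_extinction_risk(p_crf=0.2), quasi_extinction_risk(p_crf=0.6))
```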
A probabilistic verification score for contours demonstrated with idealized ice-edge forecasts
NASA Astrophysics Data System (ADS)
Goessling, Helge; Jung, Thomas
2017-04-01
We introduce a probabilistic verification score for ensemble-based forecasts of contours: the Spatial Probability Score (SPS). Defined as the spatial integral of local (Half) Brier Scores, the SPS can be considered the spatial analog of the Continuous Ranked Probability Score (CRPS). Applying the SPS to idealized seasonal ensemble forecasts of the Arctic sea-ice edge in a global coupled climate model, we demonstrate that the SPS responds properly to ensemble size, bias, and spread. When applied to individual forecasts or ensemble means (or quantiles), the SPS is reduced to the 'volume' of mismatch, in case of the ice edge corresponding to the Integrated Ice Edge Error (IIEE).
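A minimal sketch of the SPS as described above, assuming a regular grid and a boolean ice mask per ensemble member; the array names and shapes are illustrative.

```python
import numpy as np

def spatial_probability_score(ensemble, observed, cell_area):
    """Spatial Probability Score (SPS) on a regular grid.

    ensemble  : (n_members, ny, nx) boolean array, True where ice is forecast
    observed  : (ny, nx) boolean array, True where ice is observed
    cell_area : (ny, nx) array of grid-cell areas (e.g., km^2)

    The SPS is the area integral of the local (half) Brier score, i.e. the
    squared difference between the local forecast probability and the
    observation.
    """
    prob = ensemble.mean(axis=0)                  # local forecast probability
    brier = (prob - observed.astype(float)) ** 2  # local half Brier score
    return float(np.sum(brier * cell_area))

# With a single member (a deterministic forecast), the SPS reduces to the
# area of mismatch between forecast and observed ice cover (the IIEE).
```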
Elastic K-means using posterior probability.
Zheng, Aihua; Jiang, Bo; Li, Yan; Zhang, Xuehan; Ding, Chris
2017-01-01
The widely used K-means clustering is a hard clustering algorithm. Here we propose an Elastic K-means clustering model (EKM) using posterior probability, with a soft assignment capability in which each data point can belong to multiple clusters fractionally, and show the benefit of the proposed Elastic K-means. Furthermore, in many applications, besides vector attribute information, pairwise relations (graph information) are also available. Thus we integrate EKM with Normalized Cut graph clustering into a single clustering formulation. Finally, we provide several matrix inequalities which are useful for matrix formulations of learning models. Based on these results, we prove the correctness and the convergence of EKM algorithms. Experimental results on six benchmark datasets demonstrate the effectiveness of the proposed EKM and its integrated model.
Use of collateral information to improve LANDSAT classification accuracies
NASA Technical Reports Server (NTRS)
Strahler, A. H. (Principal Investigator)
1981-01-01
Methods to improve LANDSAT classification accuracies were investigated, including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification that permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system, exercised to model a desired output information layer as a function of input layers of raster-format collateral and image data base layers.
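The sketch below shows the general mechanism behind item (1): collateral prior probabilities enter Gaussian maximum likelihood classification as additive log-prior terms in each class discriminant, so the same pixel can change class when the priors change. The class statistics and priors are hypothetical, not taken from the investigation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def classify_with_priors(x, class_means, class_covs, priors):
    """Maximum a posteriori classification with collateral prior probabilities.

    Each class is modeled by a multivariate normal fit to training pixels;
    collateral data (e.g., terrain type) enter as prior probabilities, adding
    log(prior) to the Gaussian log-likelihood of each class.
    """
    scores = [multivariate_normal.logpdf(x, mean=m, cov=c) + np.log(p)
              for m, c, p in zip(class_means, class_covs, priors)]
    return int(np.argmax(scores))

# Two spectral bands, two cover classes with hypothetical statistics.
means = [np.array([40.0, 60.0]), np.array([70.0, 55.0])]
covs = [np.eye(2) * 25.0, np.eye(2) * 36.0]
pixel = np.array([55.0, 58.0])
print(classify_with_priors(pixel, means, covs, [0.8, 0.2]))  # favors class 0
print(classify_with_priors(pixel, means, covs, [0.2, 0.8]))  # favors class 1
```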
Relative mass distributions of neutron-rich thermally fissile nuclei within a statistical model
NASA Astrophysics Data System (ADS)
Kumar, Bharat; Kannan, M. T. Senthil; Balasubramaniam, M.; Agrawal, B. K.; Patra, S. K.
2017-09-01
We study the binary mass distribution for the recently predicted thermally fissile neutron-rich uranium and thorium nuclei using a statistical model. The level density parameters needed for the study are evaluated from the excitation energies of the temperature-dependent relativistic mean field formalism. The excitation energy and the level density parameter for a given temperature are employed in the convolution integral method to obtain the probability of the particular fragmentation. As representative cases, we present the results for the binary yields of 250U and 254Th. The relative yields are presented for three different temperatures: T =1 , 2, and 3 MeV.
The human mycobiome in health and disease
2013-01-01
The mycobiome, referring primarily to the fungal biota in an environment, is an important component of the human microbiome. Despite its importance, it has remained understudied. New culture-independent approaches to determine microbial diversity, such as next-generation sequencing methods, are greatly broadening our view of fungal importance. An integrative analysis of current studies shows that different body sites harbor specific fungal populations, and that diverse mycobiome patterns are associated with various diseases. By interfacing with other biomes, as well as with the host, the mycobiome probably contributes to the progression of fungus-associated diseases and plays an important role in health and disease. PMID:23899327
An integrated data model to estimate spatiotemporal occupancy, abundance, and colonization dynamics
Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Esslinger, George G.; Bower, Michael R.; Hefley, Trevor J.
2017-01-01
Ecological invasions and colonizations occur dynamically through space and time. Estimating the distribution and abundance of colonizing species is critical for efficient management or conservation. We describe a statistical framework for simultaneously estimating spatiotemporal occupancy and abundance dynamics of a colonizing species. Our method accounts for several issues that are common when modeling spatiotemporal ecological data including multiple levels of detection probability, multiple data sources, and computational limitations that occur when making fine-scale inference over a large spatiotemporal domain. We apply the model to estimate the colonization dynamics of sea otters (Enhydra lutris) in Glacier Bay, in southeastern Alaska.
Random breakup of microdroplets for single-cell encapsulation
NASA Astrophysics Data System (ADS)
Um, Eujin; Lee, Seung-Goo; Park, Je-Kyun
2010-10-01
Microfluidic droplet-based technology enables encapsulation of cells in isolated aqueous chambers surrounded by an immiscible fluid, but single-cell encapsulation efficiency is usually less than 30%. In this letter, we introduce a simple microgroove structure to break droplets into random sizes, which further allows collection of single-cell [Escherichia coli (E. coli)]-containing droplets by their size differences. A pinched-flow separation method is integrated to sort out droplets of certain sizes which have a high probability of containing one cell. Consequently, we were able to obtain more than 50% of droplets having a single E. coli inside, keeping the proportion of multiple-cell-containing droplets below 16%.
Spatial distribution of traffic in a cellular mobile data network
NASA Astrophysics Data System (ADS)
Linnartz, J. P. M. G.
1987-02-01
The use of integral transforms of the probability density function for the received power to analyze the relation between the spatial distributions of offered and throughput packet traffic in a mobile radio network with Rayleigh fading channels and ALOHA multiple access was assessed. A method to obtain the spatial distribution of throughput traffic from a prescribed spatial distribution of offered traffic is presented. Incoherent and coherent addition of interference signals is considered. The channel behavior for heavy traffic loads is studied. In both the incoherent and coherent cases, the spatial distribution of offered traffic required to ensure a prescribed spatially uniform throughput is synthesized numerically.
NASA Astrophysics Data System (ADS)
Lautze, N. C.; Ito, G.; Thomas, D. M.; Hinz, N.; Frazer, L. N.; Waller, D.
2015-12-01
Hawaii offers the opportunity to gain knowledge and develop geothermal energy on the only oceanic hotspot in the U.S. As a remote island state, Hawaii is more dependent on imported fossil fuel than any other state in the U.S., and energy prices are 3 to 4 times higher than the national average. The only proven resource, located on Hawaii Island's active Kilauea volcano, is a region of high geologic risk; other regions of probable resource exist but lack adequate assessment. The last comprehensive statewide geothermal assessment occurred in 1983 and found a potential resource on all islands (Hawaii Institute of Geophysics, 1983). Phase 1 of a Department of Energy funded project to assess the probability of geothermal resource potential statewide in Hawaii was recently completed. The execution of this project was divided into three main tasks: (1) compile all historical and current data for Hawaii that is relevant to geothermal resources into a single Geographic Information System (GIS) project; (2) analyze and rank these datasets in terms of their relevance to the three primary properties of a viable geothermal resource: heat (H), fluid (F), and permeability (P); and (3) develop and apply a Bayesian statistical method to incorporate the ranks and produce probability models that map out Hawaii's geothermal resource potential. Here, we summarize the project methodology and present maps that highlight both high prospect areas as well as areas that lack enough data to make an adequate assessment. We suggest a path for future exploration activities in Hawaii, and discuss how this method of analysis can be adapted to other regions and other types of resources. The figure below shows multiple layers of GIS data for Hawaii Island. Color shades indicate crustal density anomalies produced from inversions of gravity (Flinders et al. 2013). Superimposed on this are mapped calderas, rift zones, volcanic cones, and faults (following Sherrod et al., 2007). These features were used to identify probable locations of intrusive rock (heat) and permeability.
NASA Astrophysics Data System (ADS)
Dahm, Torsten; Cesca, Simone; Hainzl, Sebastian; Braun, Thomas; Krüger, Frank
2015-04-01
Earthquakes occurring close to hydrocarbon fields under production are often under critical view of being induced or triggered. However, clear and testable rules to discriminate the different events have rarely been developed and tested. The unresolved scientific problem may lead to lengthy public disputes with unpredictable impact on the local acceptance of the exploitation and field operations. We propose a quantitative approach to discriminate induced, triggered, and natural earthquakes, which is based on testable input parameters. Maxima of occurrence probabilities are compared for the cases under question, and a single probability of being triggered or induced is reported. The uncertainties of earthquake location and other input parameters are considered in terms of the integration over probability density functions. The probability that events have been human triggered/induced is derived from the modeling of Coulomb stress changes and a rate and state-dependent seismicity model. In our case a 3-D boundary element method has been adapted for the nuclei of strain approach to estimate the stress changes outside the reservoir, which are related to pore pressure changes in the field formation. The predicted rate of natural earthquakes is either derived from the background seismicity or, in case of rare events, from an estimate of the tectonic stress rate. Instrumentally derived seismological information on the event location, source mechanism, and the size of the rupture plane is of advantage for the method. If the rupture plane has been estimated, the discrimination between induced or only triggered events is theoretically possible if probability functions are convolved with a rupture fault filter. We apply the approach to three recent main shock events: (1) the Mw 4.3 Ekofisk 2001, North Sea, earthquake close to the Ekofisk oil field; (2) the Mw 4.4 Rotenburg 2004, Northern Germany, earthquake in the vicinity of the Söhlingen gas field; and (3) the Mw 6.1 Emilia 2012, Northern Italy, earthquake in the vicinity of a hydrocarbon reservoir. The three test cases cover the complete range of possible causes: clearly "human induced," "not even human triggered," and a third case in between both extremes.
Assessing the present and future probability of Hurricane Harvey's rainfall
NASA Astrophysics Data System (ADS)
Emanuel, Kerry
2017-11-01
We estimate, for current and future climates, the annual probability of areally averaged hurricane rain of Hurricane Harvey's magnitude by downscaling large numbers of tropical cyclones from three climate reanalyses and six climate models. For the state of Texas, we estimate that the annual probability of 500 mm of area-integrated rainfall was about 1% in the period 1981–2000 and will increase to 18% over the period 2081–2100 under Intergovernmental Panel on Climate Change (IPCC) AR5 representative concentration pathway 8.5. If the frequency of such an event increases linearly between these two periods, then in 2017 the annual probability would be 6%, a sixfold increase since the late 20th century.
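The 6% figure follows from linear interpolation of the annual probability between the two periods; a quick check, assuming the period midpoints 1990 and 2090 as interpolation nodes:

```python
# Linear interpolation of the annual exceedance probability between the
# midpoints of 1981-2000 (~1990, p = 0.01) and 2081-2100 (~2090, p = 0.18).
p0, p1, t0, t1 = 0.01, 0.18, 1990, 2090
p_2017 = p0 + (p1 - p0) * (2017 - t0) / (t1 - t0)
print(round(p_2017, 3))   # ~0.056, i.e. roughly 6% (a sixfold increase over 1%)
```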
NASA Technical Reports Server (NTRS)
Kastner, S. O.
1976-01-01
Forbidden transition probabilities are given for ground term transitions of ions in the isoelectronic sequences with outer configurations 2s2 2p (B I), 2p5 (F I), 3s2 3p (Al I), and 3p5 (Cl I). Tables give, for each ion, the ground term interval, the associated wavelength, the quadrupole radial integral, the electric quadrupole transition probability, and the magnetic dipole transition probability. Coronal lines due to some of these ions have been observed, while others are yet to be observed. The tables for the Al I and Cl I sequences include elements up to germanium.
A Bayesian Framework of Uncertainties Integration in 3D Geological Model
NASA Astrophysics Data System (ADS)
Liang, D.; Liu, X.
2017-12-01
3D geological models can describe complicated geological phenomena in an intuitive way, but their application may be limited by uncertain factors. Great progress has been made over the years, yet many studies decompose the uncertainties of a geological model and analyze each source separately, ignoring the comprehensive impact of multi-source uncertainties. To evaluate the synthetical uncertainty, we choose probability distributions to quantify uncertainty and propose a Bayesian framework of uncertainty integration. With this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution to evaluate the synthetical uncertainty of the geological model. Uncertainties propagate and accumulate in the modeling process, so the gradual integration of multi-source uncertainty is a kind of simulation of the uncertainty propagation. Bayesian inference accomplishes uncertainty updating in the modeling process. The maximum entropy principle is effective for estimating the prior probability distribution, ensuring that the prior distribution is subject to the constraints supplied by the given information with minimum prejudice. In the end, we obtain a posterior distribution to evaluate the synthetical uncertainty of the geological model. This posterior distribution represents the synthetical impact of all the uncertain factors on the spatial structure of the geological model. The framework provides a solution to evaluate the synthetical impact of multi-source uncertainties on a geological model and an approach for studying the uncertainty propagation mechanism in geological modeling.
Rakkiyappan, R; Sakthivel, N; Cao, Jinde
2015-06-01
This study examines the exponential synchronization of complex dynamical networks with control packet loss and additive time-varying delays. Additionally, a sampled-data controller with a time-varying sampling period is considered and is assumed to switch between m different values in a random way with given probability. Then, a novel Lyapunov-Krasovskii functional (LKF) with triple integral terms is constructed, and by using Jensen's inequality and the reciprocally convex approach, sufficient conditions under which the dynamical network is exponentially mean-square stable are derived. When applying Jensen's inequality to partition double integral terms in the derivation of linear matrix inequality (LMI) conditions, a new kind of linear combination of positive functions weighted by the inverses of squared convex parameters appears. In order to handle such a combination, an effective method is introduced by extending the lower bound lemma. To design the sampled-data controller, the synchronization error system is represented as a switched system. Based on the derived LMI conditions and the average dwell-time method, sufficient conditions for the synchronization of the switched error system are derived in terms of LMIs. Finally, a numerical example is employed to show the effectiveness of the proposed methods. Copyright © 2015 Elsevier Ltd. All rights reserved.
Vigelius, Matthias; Meyer, Bernd
2012-01-01
For many biological applications, a macroscopic (deterministic) treatment of reaction-drift-diffusion systems is insufficient. Instead, one has to properly handle the stochastic nature of the problem and generate true sample paths of the underlying probability distribution. Unfortunately, stochastic algorithms are computationally expensive and, in most cases, the large number of participating particles renders the relevant parameter regimes inaccessible. In an attempt to address this problem we present a genuine stochastic, multi-dimensional algorithm that solves the inhomogeneous, non-linear, drift-diffusion problem on a mesoscopic level. Our method improves on existing implementations in being multi-dimensional and handling inhomogeneous drift and diffusion. The algorithm is well suited for an implementation on data-parallel hardware architectures such as general-purpose graphics processing units (GPUs). We integrate the method into an operator-splitting approach that decouples chemical reactions from the spatial evolution. We demonstrate the validity and applicability of our algorithm with a comprehensive suite of standard test problems that also serve to quantify the numerical accuracy of the method. We provide a freely available, fully functional GPU implementation. Integration into Inchman, a user-friendly web service that allows researchers to perform parallel simulations of reaction-drift-diffusion systems on GPU clusters, is underway. PMID:22506001
NASA Astrophysics Data System (ADS)
Ma, Lijun
2001-11-01
A recent multi-institutional clinical study suggested possible benefits of lowering the prescription isodose lines for stereotactic radiosurgery procedures. In this study, we investigate the dependence of the normal brain integral dose and the normal tissue complication probability (NTCP) on the prescription isodose values for γ-knife radiosurgery. An analytical dose model was developed for γ-knife treatment planning. The dose model was commissioned by fitting the measured dose profiles for each helmet size. The dose model was validated by comparing its results with the Leksell gamma plan (LGP, version 5.30) calculations. The normal brain integral dose and the NTCP were computed and analysed for an ensemble of treatment cases. The functional dependence of the normal brain integral dose and the NTCP versus the prescription isodose values was studied for these cases. We found that the normal brain integral dose and the NTCP increase significantly when lowering the prescription isodose lines from 50% to 35% of the maximum tumour dose. Alternatively, the normal brain integral dose and the NTCP decrease significantly when raising the prescription isodose lines from 50% to 65% of the maximum tumour dose. The results may be used as a guideline for designing future dose escalation studies for γ-knife applications.
Guédon, Gérard; Libante, Virginie; Coluzzi, Charles; Payot, Sophie
2017-01-01
Conjugation is a key mechanism of bacterial evolution that involves mobile genetic elements. Recent findings indicated that the main actors of conjugative transfer are not the well-known conjugative or mobilizable plasmids but are the integrated elements. This paper reviews current knowledge on “integrative and mobilizable elements” (IMEs) that have recently been shown to be highly diverse and highly widespread but are still rarely described. IMEs encode their own excision and integration and use the conjugation machinery of unrelated co-resident conjugative element for their own transfer. Recent studies revealed a much more complex and much more diverse lifecycle than initially thought. Besides their main transmission as integrated elements, IMEs probably use plasmid-like strategies to ensure their maintenance after excision. Their interaction with conjugative elements reveals not only harmless hitchhikers but also hunters that use conjugative elements as target for their integration or harmful parasites that subvert the conjugative apparatus of incoming elements to invade cells that harbor them. IMEs carry genes conferring various functions, such as resistance to antibiotics, that can enhance the fitness of their hosts and that contribute to their maintenance in bacterial populations. Taken as a whole, IMEs are probably major contributors to bacterial evolution. PMID:29165361
Sounds can boost the awareness of visual events through attention without cross-modal integration.
Pápai, Márta Szabina; Soto-Faraco, Salvador
2017-01-31
Cross-modal interactions can lead to enhancement of visual perception, even for visual events below awareness. However, the underlying mechanism is still unclear. Can purely bottom-up cross-modal integration break through the threshold of awareness? We used a binocular rivalry paradigm to measure perceptual switches after brief flashes or sounds which sometimes co-occurred. When flashes at the suppressed eye coincided with sounds, perceptual switches occurred earliest. Yet, contrary to the hypothesis of cross-modal integration, this facilitation never surpassed the assumption of probability summation of independent sensory signals. A follow-up experiment replicated the same pattern of results using silent gaps embedded in continuous noise, instead of sounds. This manipulation should weaken putative sound-flash integration while keeping the gaps salient as bottom-up attention cues. Additional results showed that spatial congruency between flashes and sounds did not determine the effectiveness of cross-modal facilitation, which was again no better than probability summation. Thus, the present findings fail to fully support the hypothesis of bottom-up cross-modal integration, above and beyond the independent contribution of two transient signals, as an account for cross-modal enhancement of visual events below the level of awareness.
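The probability-summation benchmark referred to above is the prediction for two independent transient signals; a minimal sketch, with illustrative probabilities:

```python
def probability_summation(p_visual, p_auditory):
    """Upper bound on facilitation from two independent signals.

    If the visual and auditory transients independently trigger a perceptual
    switch with probabilities p_visual and p_auditory, probability summation
    predicts a combined probability of 1 - (1 - pV)(1 - pA); facilitation
    beyond this bound would indicate genuine cross-modal integration rather
    than statistical facilitation.
    """
    return 1.0 - (1.0 - p_visual) * (1.0 - p_auditory)

print(probability_summation(0.3, 0.2))  # 0.44
```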
Pressman, Alice; Jacobson, Alice; Eguilos, Roderick; Gelfand, Amy; Huynh, Cynthia; Hamilton, Luisa; Avins, Andrew; Bakshi, Nandini; Merikangas, Kathleen
2016-01-01
Introduction The growing availability of electronic health data provides an opportunity to ascertain diagnosis-specific cases via systematic methods for sample recruitment for clinical research and health services evaluation. We developed and implemented a migraine probability algorithm (MPA) to identify migraine from electronic health records (EHR) in an integrated health plan. Methods We identified all migraine outpatient diagnoses and all migraine-specific prescriptions for a five-year period (April 2008–March 2013) from the Kaiser Permanente, Northern California (KPNC) EHR. We developed and evaluated the MPA in two independent samples, and derived prevalence estimates of medically-ascertained migraine in KPNC by age, sex, and race. Results The period prevalence of medically-ascertained migraine among KPNC adults during April 2008–March 2013 was 10.3% (women: 15.5%, men: 4.5%). Estimates peaked with age in women but remained flat for men. Prevalence among Asians was half that of whites. Conclusions We demonstrate the feasibility of an EHR-based algorithm to identify cases of diagnosed migraine and determine that prevalence patterns by our methods yield results comparable to aggregate estimates of treated migraine based on direct interviews in population-based samples. This inexpensive, easily applied EHR-based algorithm provides a new opportunity for monitoring changes in migraine prevalence and identifying potential participants for research studies. PMID:26069243
NASA Astrophysics Data System (ADS)
Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin
2014-07-01
Flow entropy is a measure of the uniformity of pipe flows in water distribution systems. By maximizing flow entropy one can identify reliable layouts or connectivity in networks. In order to overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition of flow entropy, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed by using other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability, and a comparative analysis between the simple flow entropy and the new method is conducted. The results demonstrate that the diameter-sensitive flow entropy consistently shows a much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
Intervals for posttest probabilities: a comparison of 5 methods.
Mossman, D; Berger, J O
2001-01-01
Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
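The article implements the objective Bayesian approach with a spreadsheet simulation; the sketch below does an analogous simulation in Python, assuming Jeffreys Beta(0.5, 0.5) priors on sensitivity, specificity, and pretest probability. The counts are illustrative, and the article's exact prior and implementation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def posttest_interval(tp, fn, fp, tn, pre_pos, pre_n, n_draws=100_000):
    """Simulation interval for the positive predictive value (posttest probability).

    Sensitivity, specificity, and pretest probability each receive a Jeffreys
    Beta(0.5, 0.5) prior updated with the observed counts; the posttest
    probability is computed for each posterior draw via Bayes' theorem.
    """
    sens = rng.beta(tp + 0.5, fn + 0.5, n_draws)
    spec = rng.beta(tn + 0.5, fp + 0.5, n_draws)
    prev = rng.beta(pre_pos + 0.5, pre_n - pre_pos + 0.5, n_draws)
    post = sens * prev / (sens * prev + (1.0 - spec) * (1.0 - prev))
    return np.percentile(post, [2.5, 97.5])

# Hypothetical study: 45/50 diseased test positive, 90/100 non-diseased test
# negative, and 20 of 100 screened subjects have the disorder.
print(posttest_interval(tp=45, fn=5, fp=10, tn=90, pre_pos=20, pre_n=100))
```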
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humberto E. Garcia
This paper illustrates the safeguards benefits that process monitoring (PM) can have as a diversion deterrent and as a complementary safeguards measure to nuclear material accountancy (NMA). Whereas the objective of NMA-based methods, in order to infer the possible existence of proliferation-driven activities, is often to statistically evaluate materials unaccounted for (MUF), computed by solving a given mass balance equation related to a material balance area (MBA) at every material balance period (MBP), a particular objective for a PM-based approach may be to statistically infer and evaluate anomalies unaccounted for (AUF) that may have occurred within an MBP. Although possibly indicative of proliferation-driven activities, the detection and tracking of anomaly patterns is not trivial because some executed events may be unobservable or observed less reliably than others. The proposed similarity between NMA- and PM-based approaches is important because performance metrics utilized for evaluating NMA-based methods, such as detection probability (DP) and false alarm probability (FAP), can also be applied for assessing PM-based safeguards solutions. To this end, AUF count estimates can be translated into significant quantity (SQ) equivalents that may have been diverted within a given MBP. A diversion alarm is reported if this mass estimate is greater than or equal to the selected alarm level (AL), appropriately chosen to optimize DP and FAP based on the particular characteristics of the monitored MBA, the sensors utilized, and the data processing method employed for integrating and analyzing collected measurements. To illustrate the application of the proposed PM approach, a protracted diversion of Pu in a waste stream was selected based on incomplete fuel dissolution in a dissolver unit operation, as this diversion scenario is considered to be problematic for detection using NMA-based methods alone. Results demonstrate the benefits of conducting PM under a system-centric strategy that utilizes data collected from a system of sensors and that effectively exploits known characterizations of sensors and facility operations in order to significantly improve anomaly detection, reduce false alarms, and enhance assessment robustness under unreliable partial sensor information.
Using harmonic oscillators to determine the spot size of Hermite-Gaussian laser beams
NASA Technical Reports Server (NTRS)
Steely, Sidney L.
1993-01-01
The similarity of the functional forms of quantum mechanical harmonic oscillators and the modes of Hermite-Gaussian laser beams is illustrated. This functional similarity provides a direct correlation to investigate the spot size of large-order mode Hermite-Gaussian laser beams. The classical limits of a corresponding two-dimensional harmonic oscillator provide a definition of the spot size of Hermite-Gaussian laser beams. The classical limits of the harmonic oscillator provide integration limits for the photon probability densities of the laser beam modes to determine the fraction of photons detected therein. Mathematica is used to integrate the probability densities for large-order beam modes and to illustrate the functional similarities. The probabilities of detecting photons within the classical limits of Hermite-Gaussian laser beams asymptotically approach unity in the limit of large-order modes, in agreement with the Correspondence Principle. The classical limits for large-order modes include all of the nodes for Hermite Gaussian laser beams; Sturm's theorem provides a direct proof.
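The paper used Mathematica; the sketch below performs the analogous 1-D calculation in Python, integrating the mode-n probability density between the classical turning points of the corresponding harmonic oscillator. It assumes the standard normalized oscillator density and only illustrates numerically the asymptotic behaviour noted above.

```python
import numpy as np
from math import factorial, sqrt, pi
from scipy.special import eval_hermite
from scipy.integrate import quad

def fraction_within_classical_limits(n):
    """Fraction of the mode-n probability found inside the classical turning points.

    The 1-D Hermite-Gaussian intensity profile has the same functional form as
    the oscillator density |psi_n|^2 = H_n(x)^2 exp(-x^2) / (2^n n! sqrt(pi));
    the classical limit of the corresponding oscillator is x_c = sqrt(2n + 1).
    """
    norm = 2.0**n * factorial(n) * sqrt(pi)
    density = lambda x: eval_hermite(n, x)**2 * np.exp(-x**2) / norm
    x_c = sqrt(2 * n + 1)
    frac, _ = quad(density, -x_c, x_c, limit=200)
    return frac

# The fraction grows slowly toward unity with increasing mode order n.
for n in (0, 1, 5, 10, 20):
    print(n, round(fraction_within_classical_limits(n), 4))
```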
NASA Astrophysics Data System (ADS)
Wang, Ershen; Jia, Chaoying; Tong, Gang; Qu, Pingping; Lan, Xiaoyu; Pang, Tao
2018-03-01
The receiver autonomous integrity monitoring (RAIM) is one of the most important parts in an avionic navigation system. Two problems need to be addressed to improve this system, namely, the degeneracy phenomenon and lack of samples for the standard particle filter (PF). However, the number of samples cannot adequately express the real distribution of the probability density function (i.e., sample impoverishment). This study presents a GPS receiver autonomous integrity monitoring (RAIM) method based on a chaos particle swarm optimization particle filter (CPSO-PF) algorithm with a log likelihood ratio. The chaos sequence generates a set of chaotic variables, which are mapped to the interval of optimization variables to improve particle quality. This chaos perturbation overcomes the potential for the search to become trapped in a local optimum in the particle swarm optimization (PSO) algorithm. Test statistics are configured based on a likelihood ratio, and satellite fault detection is then conducted by checking the consistency between the state estimate of the main PF and those of the auxiliary PFs. Based on GPS data, the experimental results demonstrate that the proposed algorithm can effectively detect and isolate satellite faults under conditions of non-Gaussian measurement noise. Moreover, the performance of the proposed novel method is better than that of RAIM based on the PF or PSO-PF algorithm.
Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas
2014-07-01
Probability estimation for binary and multicategory outcomes using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs we review the classification problem and then dichotomous probability estimation. Next we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
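A minimal sketch of probability estimation with a machine learning method versus logistic regression, using scikit-learn's predict_proba on simulated data and the Brier score to compare the estimated probabilities. This illustrates the general workflow only, not the simulation design of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

# Simulated dichotomous outcome; in practice the covariates would be the
# biostatistical predictors of interest.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
}
for name, model in models.items():
    prob = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    # The Brier score measures the accuracy of the estimated probabilities.
    print(name, round(brier_score_loss(y_te, prob), 3))
```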
NASA Technical Reports Server (NTRS)
Schneider, Harold
1959-01-01
This method is investigated for semi-infinite multiple-slab configurations of arbitrary width, composition, and source distribution. Isotropic scattering in the laboratory system is assumed. Isotropic scattering implies that the fraction of neutrons scattered in the i(sup th) volume element or subregion that will make their next collision in the j(sup th) volume element or subregion is the same for all collisions. These so-called "transfer probabilities" between subregions are calculated and used to obtain successive-collision densities from which the flux and transmission probabilities directly follow. For a thick slab with little or no absorption, a successive-collisions technique proves impractical because an unreasonably large number of collisions must be followed in order to obtain the flux. Here the appropriate integral equation is converted into a set of linear simultaneous algebraic equations that are solved for the average total flux in each subregion. When ordinary diffusion theory applies with satisfactory precision in a portion of the multiple-slab configuration, the problem is solved by ordinary diffusion theory, but the flux is plotted only in the region of validity. The angular distribution of neutrons entering the remaining portion is determined from the known diffusion flux and the remaining region is solved by higher order theory. Several procedures for applying the numerical method are presented and discussed. To illustrate the calculational procedure, a symmetrical slab in a vacuum is worked by the numerical, Monte Carlo, and P(sub 3) spherical harmonics methods. In addition, an unsymmetrical double-slab problem is solved by the numerical and Monte Carlo methods. The numerical approach proved faster and more accurate in these examples. Adaptation of the method to anisotropic scattering in slabs is indicated, although no example is included in this paper.
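A schematic sketch of the linear-simultaneous-equations step described above, assuming the transfer probabilities between subregions have already been computed and taking isotropic scattering with a fixed scattering probability per collision. It illustrates only the balance F = S + c T^T F for the collision densities (from which the flux follows); the transfer matrix and source values are hypothetical, and the geometric calculation of the transfer probabilities is not reproduced.

```python
import numpy as np

def collision_densities(transfer, first_collision_source, scatter_ratio):
    """Average collision densities in each slab subregion.

    transfer[i, j]         -- probability that a neutron scattered in subregion i
                              makes its next collision in subregion j
    first_collision_source -- density of first collisions from the external source
    scatter_ratio          -- scattering probability per collision (1.0 = no absorption)

    The collision density F satisfies F = S + c * T^T F, a set of linear
    simultaneous algebraic equations solved directly below.
    """
    n = len(first_collision_source)
    A = np.eye(n) - scatter_ratio * transfer.T
    return np.linalg.solve(A, first_collision_source)

# Hypothetical two-subregion example.
T = np.array([[0.6, 0.3],
              [0.3, 0.6]])
S = np.array([1.0, 0.5])
print(collision_densities(T, S, scatter_ratio=0.9))
```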
Nobusako, Satoshi; Sakai, Ayami; Tsujimoto, Taeko; Shuto, Takashi; Nishi, Yuki; Asano, Daiki; Furukawa, Emi; Zama, Takuro; Osumi, Michihiro; Shimada, Sotaro; Morioka, Shu; Nakai, Akio
2018-01-01
The neurological basis of developmental coordination disorder (DCD) is thought to be deficits in the internal model and mirror-neuron system (MNS) in the parietal lobe and cerebellum. However, it is not clear if the visuo-motor temporal integration in the internal model and automatic-imitation function in the MNS differs between children with DCD and those with typical development (TD). The current study aimed to investigate these differences. Using the manual dexterity test of the Movement Assessment Battery for Children (second edition), the participants were either assigned to the probable DCD (pDCD) group or TD group. The former was comprised of 29 children with clumsy manual dexterity, while the latter consisted of 42 children with normal manual dexterity. Visuo-motor temporal integration ability and automatic-imitation function were measured using the delayed visual feedback detection task and motor interference task, respectively. Further, the current study investigated whether autism-spectrum disorder (ASD) traits, attention-deficit hyperactivity disorder (ADHD) traits, and depressive symptoms differed between the two groups, since these symptoms are frequent comorbidities of DCD. In addition, correlation and multiple regression analyses were performed to extract factors affecting clumsy manual dexterity. In the results, the delay-detection threshold (DDT) and steepness of the delay-detection probability curve, which indicated visuo-motor temporal integration ability, were significantly prolonged and decreased, respectively, in children with pDCD. The interference effect, which indicated automatic-imitation function, was also significantly reduced in this group. These results highlighted that children with clumsy manual dexterity have deficits in visuo-motor temporal integration and automatic-imitation function. There was a significant correlation between manual dexterity and measures of visuo-motor temporal integration, ASD traits, and ADHD traits. Multiple regression analysis revealed that the DDT, which indicated visuo-motor temporal integration, was the greatest predictor of poor manual dexterity. The current results supported and provided further evidence for the internal model deficit hypothesis. Further, they suggested a neurorehabilitation technique that improved visuo-motor temporal integration could be therapeutically effective for children with DCD. PMID:29556211
Method of self-consistent evaluation of absolute emission probabilities of particles and gamma rays
NASA Astrophysics Data System (ADS)
Badikov, Sergei; Chechev, Valery
2017-09-01
Assuming a well-established decay scheme, the method provides (a) exact balance relationships, (b) lower uncertainties (compared to the traditional techniques) of the recommended absolute emission probabilities of particles and gamma rays, and (c) evaluation of correlations between the recommended emission probabilities (for the same and different decay modes). Application of the method to the decay data evaluation for even curium isotopes led to paradoxical results. The multidimensional confidence regions for the probabilities of the most intensive alpha transitions constructed on the basis of the present and the ENDF/B-VII.1, JEFF-3.1, and DDEP evaluations are inconsistent, whereas the confidence intervals for the evaluated probabilities of single transitions agree with each other.
Improving Conceptual Models Using AEM Data and Probability Distributions
NASA Astrophysics Data System (ADS)
Davis, A. C.; Munday, T. J.; Christensen, N. B.
2012-12-01
With emphasis being placed on uncertainty in groundwater modelling and prediction, coupled with questions concerning the value of geophysical methods in hydrogeology, it is important to ask meaningful questions of hydrogeophysical data and inversion results. For example, to characterise aquifers using electromagnetic (EM) data, we ask questions such as "Given that the electrical conductivity of aquifer 'A' is less than x, where is that aquifer elsewhere in the survey area?" The answer may be given by examining inversion models, selecting locations and layers that satisfy the condition 'conductivity <= x', and labelling them as aquifer 'A'. One difficulty with this approach is that the inversion model result is often considered to be the only model for the data. In reality it is just one image of the subsurface that, given the method and the regularisation imposed in the inversion, agrees with the measured data within a given error bound. We have no idea whether the final model realised by the inversion satisfies the global minimum error, or whether it is simply in a local minimum. There is a distribution of inversion models that satisfy the error tolerance condition: the final model is not the only one, nor is it necessarily the correct one. AEM inversions are often linearised in the calculation of the parameter sensitivity: we rely on the second derivatives in the Taylor expansion, so the minimum model has all layer parameters distributed about their mean parameter value with well-defined variance. We investigate the validity of the minimum model, and its uncertainty, by examining the full posterior covariance matrix. We ask questions of the minimum model, and answer them probabilistically. The simplest question we can pose is "What is the probability that all layer resistivity values are <= a cut-off value?", which we can calculate through use of the erf or erfc functions. The covariance values of the inversion become marginalised in the integration: only the main diagonal is used. Complications arise when we ask more specific questions, such as "What is the probability that the resistivity of layer 2 <= x, given that layer 1 <= y?" The probability then becomes conditional, the calculation includes covariance terms, the integration is taken over many dimensions, and the cross-correlation of parameters becomes important. To illustrate, we examine the inversion results of a Tempest AEM survey over the Uley Basin aquifers in the Eyre Peninsula, South Australia. Key aquifers include the unconfined Bridgewater Formation, which overlies the Uley and Wanilla Formations containing Tertiary clays and Tertiary sandstone. These Formations overlie weathered basement, which defines the lower bound of the Uley Basin aquifer systems. By correlating the conductivity of the sub-surface Formation types, we pose questions such as: "What is the probability-depth of the Bridgewater Formation in the Uley South Basin?", "What is the thickness of the Uley Formation?" and "What is the most probable depth to basement?" We use these questions to generate improved conceptual hydrogeological models of the Uley Basin in order to develop better estimates of aquifer extent and the available groundwater resource.
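For the simplest question above, a sketch of the erf-based calculation using only the marginal (diagonal) posterior information, assuming Gaussian marginals in log10(resistivity); the layer values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def prob_all_layers_below(means_log10, stds_log10, cutoff_ohm_m):
    """P(all layer resistivities <= cutoff), using only the marginal posteriors.

    means_log10, stds_log10 : per-layer posterior mean and standard deviation
                              of log10(resistivity) from the AEM inversion
    cutoff_ohm_m            : resistivity cut-off defining the aquifer unit

    Treating the marginals as Gaussian in log10(resistivity) and ignoring
    cross-correlations (i.e., using only the diagonal of the posterior
    covariance), the joint probability is the product of univariate CDFs.
    The conditional questions discussed above require the covariance terms.
    """
    z = (np.log10(cutoff_ohm_m) - np.asarray(means_log10)) / np.asarray(stds_log10)
    return float(np.prod(norm.cdf(z)))

# Hypothetical three-layer result (means and stds in log10 ohm m).
print(prob_all_layers_below([1.2, 1.8, 2.5], [0.2, 0.3, 0.4], cutoff_ohm_m=100.0))
```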
Dynamic Encoding of Speech Sequence Probability in Human Temporal Cortex
Leonard, Matthew K.; Bouchard, Kristofer E.; Tang, Claire
2015-01-01
Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning. PMID:25948269
A probability space for quantum models
NASA Astrophysics Data System (ADS)
Lemmens, L. F.
2017-06-01
A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows the use of the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
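A minimal sketch of a maximum-entropy probability assignment with a single mean-value constraint, which yields an exponential (Boltzmann-like) distribution; this illustrates the general mechanism only, not the specific propositions and constraints chosen in the paper.

```python
import numpy as np
from scipy.optimize import brentq

def maxent_distribution(levels, mean_constraint):
    """Maximum-entropy probabilities over discrete outcomes with a fixed mean.

    Maximizing -sum(p log p) subject to sum(p) = 1 and sum(p * levels) = mean
    gives p_i proportional to exp(-lam * levels_i); the Lagrange multiplier
    lam is found numerically by matching the constraint.
    """
    levels = np.asarray(levels, dtype=float)

    def mean_for(lam):
        w = np.exp(-lam * levels)
        return np.sum(levels * w) / np.sum(w)

    lam = brentq(lambda l: mean_for(l) - mean_constraint, -50.0, 50.0)
    w = np.exp(-lam * levels)
    return w / w.sum()

# Three "energy" levels with a prescribed average of 0.8 (hypothetical values).
print(maxent_distribution([0.0, 1.0, 2.0], 0.8))
```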
Comments on the present state and future directions of PDF methods
NASA Technical Reports Server (NTRS)
Obrien, E. E.
1992-01-01
The one point probability density function (PDF) method is examined in light of its use in actual engineering problems. The PDF method, although relatively complicated, appears to be the only format available to handle the nonlinear stochastic difficulties caused by typical reaction kinetics. Turbulence modeling, if it is to play a central role in combustion modeling, has to be integrated with the chemistry in a way which produces accurate numerical solutions to combustion problems. It is questionable whether the development of turbulent models in isolation from the peculiar statistics of reactant concentrations is a fruitful line of development as far as propulsion is concerned. There are three issues for which additional viewgraphs are prepared: the one point pdf method; the amplitude mapping closure; and a hybrid strategy for replacing a full two point pdf treatment of reacting flows by a single point pdf and correlation functions. An appeal is made for the establishment of an adequate data base for compressible flow with reactions for Mach numbers of unity or higher.
Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories
NASA Astrophysics Data System (ADS)
Burnier, Yannis; Rothkopf, Alexander
2013-11-01
We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements in the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 T_C.
A new method for computing the reliability of consecutive k-out-of-n:F systems
NASA Astrophysics Data System (ADS)
Gökdere, Gökhan; Gürcan, Mehmet; Kılıç, Muhammet Burak
2016-01-01
Consecutive k-out-of-n system models have been applied to reliability evaluation in many physical systems, such as telecommunications, the design of integrated circuits, microwave relay stations, oil pipeline systems, vacuum systems in accelerators, computer ring networks, and spacecraft relay stations. These systems are characterized as logical connections among the components of the systems placed in lines or circles. In the literature, a great deal of attention has been paid to the study of the reliability evaluation of consecutive k-out-of-n systems. In this paper, we propose a new method to compute the reliability of consecutive k-out-of-n:F systems with n linearly and circularly arranged components. The proposed method provides a simple way of determining the system failure probability. We also provide R-Project code based on our proposed method to compute the reliability of linear and circular systems with a great number of components.
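For reference, the reliability of the linear case can be checked with a standard dynamic-programming recursion over the length of the trailing run of failed components, assuming i.i.d. components. This is a well-known baseline written in Python, not the authors' proposed method or their R code.

```python
def linear_consecutive_kn_f(n, k, p):
    """Reliability of a linear consecutive k-out-of-n:F system (i.i.d. components).

    The system fails if at least k consecutive components fail. State j holds
    the probability that exactly the last j components (j = 0..k-1) have failed
    while no run of k failures has yet occurred; p is each component's survival
    probability.
    """
    q = 1.0 - p
    state = [1.0] + [0.0] * (k - 1)       # before any component: run length 0
    for _ in range(n):
        nxt = [0.0] * k
        nxt[0] = p * sum(state)           # component works: run resets to 0
        for j in range(k - 1):
            nxt[j + 1] = q * state[j]     # component fails: run grows by one
        state = nxt                       # runs reaching k are dropped (system failure)
    return sum(state)

# Sanity check against the closed form for n = k + 1: R = 1 - 2q^k + q^(k+1)
p, k = 0.9, 2
print(linear_consecutive_kn_f(3, 2, p), 1 - 2 * (1 - p)**2 + (1 - p)**3)
```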
van der Voet, Hilko; Goedhart, Paul W; Schmidt, Kerstin
2017-11-01
An equivalence testing method is described to assess the safety of regulated products using relevant data obtained in historical studies with assumedly safe reference products. The method is illustrated using data from a series of animal feeding studies with genetically modified and reference maize varieties. Several criteria for quantifying equivalence are discussed, and study-corrected distribution-wise equivalence is selected as being appropriate for the example case study. An equivalence test is proposed based on a high probability of declaring equivalence in a simplified situation, where there is no between-group variation, where the historical and current studies have the same residual variance, and where the current study is assumed to have a sample size as set by a regulator. The method makes use of generalized fiducial inference methods to integrate uncertainties from both the historical and the current data. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Probability Density Functions of Observed Rainfall in Montana
NASA Technical Reports Server (NTRS)
Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.
1995-01-01
The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Image analysis of large samples of radar echoes is possible because of advances in technology. The data provided by such an analysis readily allow development of radar reflectivity factor (and, by extension, rain rate) distributions. Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, one PDF would exist for all cases, or many PDFs would exist that share the same functional form, with only systematic variations in parameters (such as size or shape). Satisfying either of these cases will validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89 percent of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.
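As a concrete illustration of the curve-fitting step, a Pearson Type I distribution is a beta distribution on a finite interval, and its shape parameters can be recovered from sample moments. The sketch below is a minimal method-of-moments fit, assuming the support is taken as the sample range; it is illustrative only and not the authors' processing code.

```python
import numpy as np

def fit_pearson_type1_mom(x):
    """Method-of-moments fit of a Pearson Type I (scaled beta) distribution.

    Assumes the support [a, b] equals the sample range (an illustrative choice);
    returns (a, b, alpha, beta) for the beta density on [a, b].
    """
    x = np.asarray(x, dtype=float)
    a, b = x.min(), x.max()
    y = (x - a) / (b - a)               # rescale to [0, 1]
    m, v = y.mean(), y.var()
    common = m * (1.0 - m) / v - 1.0    # beta moment identities
    return a, b, m * common, (1.0 - m) * common
```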
Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2010-01-01
When facing a conjunction between space objects, decision makers must choose whether or not to maneuver for collision avoidance. We apply a well-known decision procedure, the sequential probability ratio test, to this problem. We propose two approaches to the problem solution, one based on a frequentist method and the other on a Bayesian method. The frequentist method does not require any prior knowledge concerning the conjunction, while the Bayesian method assumes knowledge of prior probability densities. Our results show that both methods achieve the desired missed detection rates, but the frequentist method's false alarm performance is inferior to the Bayesian method's.
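The decision procedure referenced here is Wald's classical sequential probability ratio test. A generic sketch follows; the log-likelihood-ratio increments would come from the conjunction-specific probability models, which are not reproduced here, so the function below is only the test scaffolding, not the paper's frequentist or Bayesian formulation.

```python
import math

def wald_sprt(llr_increments, alpha=0.01, beta=0.01):
    """Generic Wald SPRT: accumulate log-likelihood-ratio increments
    log[p1(x_t)/p0(x_t)] until one of the two thresholds is crossed.

    alpha : desired false-alarm rate, beta : desired missed-detection rate.
    Returns (decision, number_of_observations_used).
    """
    upper = math.log((1.0 - beta) / alpha)   # cross upward -> accept H1 (e.g. maneuver)
    lower = math.log(beta / (1.0 - alpha))   # cross downward -> accept H0 (e.g. no maneuver)
    llr = 0.0
    for t, inc in enumerate(llr_increments, start=1):
        llr += inc
        if llr >= upper:
            return "accept H1", t
        if llr <= lower:
            return "accept H0", t
    return "undecided", None
```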
Crawford, Forrest W.; Suchard, Marc A.
2011-01-01
A birth-death process is a continuous-time Markov chain that counts the number of particles in a system over time. In the general process with n current particles, a new particle is born with instantaneous rate λn and a particle dies with instantaneous rate μn. Currently no robust and efficient method exists to evaluate the finite-time transition probabilities in a general birth-death process with arbitrary birth and death rates. In this paper, we first revisit the theory of continued fractions to obtain expressions for the Laplace transforms of these transition probabilities and make explicit an important derivation connecting transition probabilities and continued fractions. We then develop an efficient algorithm for computing these probabilities that analyzes the error associated with approximations in the method. We demonstrate that this error-controlled method agrees with known solutions and outperforms previous approaches to computing these probabilities. Finally, we apply our novel method to several important problems in ecology, evolution, and genetics. PMID:21984359
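The continued-fraction algorithm itself is not reproduced here, but a brute-force baseline against which such a method can be checked is to truncate the state space and exponentiate the generator matrix. The sketch below implements that baseline under the assumption that probability mass above the truncation level n_max is negligible.

```python
import numpy as np
from scipy.linalg import expm

def bd_transition_probs(lam, mu, t, n_max):
    """Finite-time transition probabilities of a birth-death process by
    exponentiating a truncated generator (a baseline check, NOT the paper's
    continued-fraction algorithm). lam(n) and mu(n) give the birth and death
    rates from state n; states above n_max are discarded."""
    Q = np.zeros((n_max + 1, n_max + 1))
    for n in range(n_max + 1):
        birth = lam(n) if n < n_max else 0.0     # truncation: no births out of n_max
        death = mu(n) if n > 0 else 0.0
        if n < n_max:
            Q[n, n + 1] = birth
        if n > 0:
            Q[n, n - 1] = death
        Q[n, n] = -(birth + death)
    return expm(Q * t)    # entry [i, j] approximates P(X_t = j | X_0 = i)

# example: linear birth-death process with per-capita birth rate 0.5 and death rate 0.3
P = bd_transition_probs(lambda n: 0.5 * n, lambda n: 0.3 * n, t=1.0, n_max=200)
```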
Multiclass Posterior Probability Twin SVM for Motor Imagery EEG Classification.
She, Qingshan; Ma, Yuliang; Meng, Ming; Luo, Zhizeng
2015-01-01
Motor imagery electroencephalography is widely used in brain-computer interface systems. Due to the inherent characteristics of electroencephalography signals, accurate and real-time multiclass classification is always challenging. In order to solve this problem, a multiclass posterior probability solution for the twin SVM is proposed in this paper, based on ranking continuous outputs and pairwise coupling. First, a two-class posterior probability model is constructed to approximate the posterior probability using the ranking continuous output technique and Platt's estimating method. Second, a solution for multiclass probabilistic outputs of the twin SVM is provided by combining every pair of class probabilities according to the method of pairwise coupling. Finally, the proposed method is compared with multiclass SVM and twin SVM via voting, and with multiclass posterior probability SVM using different coupling approaches. The efficacy of the proposed method, in terms of classification accuracy and time complexity, is demonstrated on both UCI benchmark datasets and real-world EEG data from BCI Competition IV Dataset 2a.
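Pairwise coupling can be sketched independently of the twin-SVM step: given pairwise probability estimates r_ij approximating P(class i | class i or j), an iterative scheme recovers a single multiclass posterior. The code below uses the Hastie-Tibshirani coupling iteration as a stand-in; the paper's specific coupling variant and the twin-SVM probability outputs are not reproduced.

```python
import numpy as np

def pairwise_coupling(R, n_iter=200, tol=1e-8):
    """Combine pairwise class probabilities R[i, j] ~ P(i | i or j), with
    R[j, i] = 1 - R[i, j], into a multiclass posterior p (Hastie-Tibshirani
    iteration, used here only as an illustrative stand-in)."""
    K = R.shape[0]
    p = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        p_old = p.copy()
        for i in range(K):
            num = sum(R[i, j] for j in range(K) if j != i)
            den = sum(p[i] / (p[i] + p[j]) for j in range(K) if j != i)
            p[i] *= num / den
        p /= p.sum()                        # renormalize after each sweep
        if np.max(np.abs(p - p_old)) < tol:
            break
    return p
```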
Lai, Yinglei; Zhang, Fanni; Nayak, Tapan K; Modarres, Reza; Lee, Norman H; McCaffrey, Timothy A
2014-01-01
Gene set enrichment analysis (GSEA) is an important approach to the analysis of coordinate expression changes at a pathway level. Although many statistical and computational methods have been proposed for GSEA, the issue of a concordant integrative GSEA of multiple expression data sets has not been well addressed. Among different related data sets collected for the same or similar study purposes, it is important to identify pathways or gene sets with concordant enrichment. We categorize the underlying true states of differential expression into three representative categories: no change, positive change, and negative change. Due to data noise, what we observe from experiments may not indicate the underlying truth. Although these categories are not observed in practice, they can be considered in a mixture model framework. We then define the mathematical concept of concordant gene set enrichment and calculate its related probability based on a three-component multivariate normal mixture model. The related false discovery rate can be calculated and used to rank different gene sets. We used three published lung cancer microarray gene expression data sets to illustrate our proposed method. One analysis based on the first two data sets was conducted to compare our result with a previously published result based on a GSEA conducted separately for each individual data set. This comparison illustrates the advantage of our proposed concordant integrative gene set enrichment analysis. Then, with a relatively new and larger pathway collection, we used our method to conduct an integrative analysis of the first two data sets and also of all three data sets. Both results showed that many gene sets could be identified with low false discovery rates, and consistency between the two results was also observed. A further exploration based on the KEGG cancer pathway collection showed that a majority of these pathways could be identified by our proposed method. This study illustrates that we can improve detection power and discovery consistency through a concordant integrative analysis of multiple large-scale two-sample gene expression data sets.
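To make the notion of concordant enrichment concrete, suppose each data set provides posterior probabilities that a given gene set is in one of the three states (no change, positive change, negative change). Under a simplifying independence assumption across data sets, which is only an illustration and not the paper's three-component multivariate normal mixture, a concordance probability can be computed as follows.

```python
import numpy as np

def concordance_probability(post):
    """post: array of shape (n_datasets, 3) with posterior probabilities of the
    states [no change, positive, negative] for one gene set in each data set.
    Concordant enrichment here means all data sets share the same non-null
    direction (an illustrative definition under an independence assumption)."""
    post = np.asarray(post, dtype=float)
    return np.prod(post[:, 1]) + np.prod(post[:, 2])

# example: two data sets, both leaning toward positive change
print(concordance_probability([[0.1, 0.8, 0.1],
                               [0.2, 0.7, 0.1]]))   # 0.8*0.7 + 0.1*0.1 = 0.57
```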
Off-diagonal long-range order, cycle probabilities, and condensate fraction in the ideal Bose gas.
Chevallier, Maguelonne; Krauth, Werner
2007-11-01
We discuss the relationship between the cycle probabilities in the path-integral representation of the ideal Bose gas, off-diagonal long-range order, and Bose-Einstein condensation. Starting from the Landsberg recursion relation for the canonical partition function, we use elementary considerations to show that in a box of size L^3 the sum of the cycle probabilities of length k > L^2 equals the off-diagonal long-range order parameter in the thermodynamic limit. For arbitrary systems of ideal bosons, the integer derivative of the cycle probabilities π_k is related to the probability of condensing k bosons. We use this relation to derive the precise form of the π_k in the thermodynamic limit. We also determine the function π_k for arbitrary systems. Furthermore, we use the cycle probabilities to compute the probability distribution of the maximum-length cycles both at T=0, where the ideal Bose gas reduces to the study of random permutations, and at finite temperature. We close with comments on the cycle probabilities in interacting Bose gases.
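The Landsberg recursion mentioned above is short enough to state in code. The sketch below builds the canonical partition functions Z_N from a user-supplied single-particle partition function evaluated at integer multiples of beta and returns the cycle probabilities; the harmonic-trap example at the end is an illustrative choice of single-particle spectrum, not the box geometry analyzed in the paper.

```python
import math
import numpy as np

def cycle_probabilities(z, N):
    """Landsberg recursion for the canonical ideal Bose gas.

    z[k] (k = 1..N) must hold the single-particle partition function evaluated
    at k*beta; z[0] is unused. Returns (Z_N, pi) where pi[k-1] is the
    probability that a given particle belongs to a permutation cycle of length k.
    """
    Z = np.zeros(N + 1)
    Z[0] = 1.0
    for n in range(1, N + 1):
        Z[n] = sum(z[k] * Z[n - k] for k in range(1, n + 1)) / n
    pi = np.array([z[k] * Z[N - k] / (N * Z[N]) for k in range(1, N + 1)])
    return Z[N], pi

# illustrative usage: 3D harmonic trap, energies measured from the ground state
beta_hw, N = 0.5, 40
z = [0.0] + [(1.0 - math.exp(-k * beta_hw)) ** -3 for k in range(1, N + 1)]
ZN, pi = cycle_probabilities(z, N)   # pi sums to 1 over k = 1..N
```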
The possible social representations of astronomy by students from integrated high school
NASA Astrophysics Data System (ADS)
Barbosa, J. I. L.; Voelzke, M. R.
2017-12-01
In this paper, we present the possible Social Representations that students of the Integrated High School of the Federal Institute of Alagoas (IFAL) hold about the inducing term Astronomy, and we identify how these representations were probably elaborated. To this end, the Theory of Social Representations is used, following Moscovici (2010).
NASA Astrophysics Data System (ADS)
Zhang, Jiaxin; Shields, Michael D.
2018-01-01
This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probability that each candidate is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied to the uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
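The reweighting idea can be illustrated with a minimal Monte Carlo sketch: draw one sample set from an importance density, then reuse it for every candidate model by attaching model-specific importance weights. Here the importance density is simply the probability-weighted mixture of the candidates, which is an assumption for illustration rather than the optimal importance density constructed in the paper.

```python
import numpy as np
from scipy import stats

def propagate_multimodel(candidates, model_probs, response, n_samples=10_000, seed=0):
    """candidates: list of frozen scipy.stats distributions (plausible input models);
    model_probs: their model probabilities; response: vectorized response function.
    Returns the model-conditional mean response for each candidate, all estimated
    from a single sample set via importance-sampling reweighting (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # sample from the mixture of candidates, in proportion to the model probabilities
    counts = rng.multinomial(n_samples, model_probs)
    x = np.concatenate([d.rvs(size=c, random_state=rng) for d, c in zip(candidates, counts)])
    q = sum(w * d.pdf(x) for w, d in zip(model_probs, candidates))   # mixture density
    y = response(x)
    means = []
    for d in candidates:
        w = d.pdf(x) / q          # importance weights for this candidate model
        w /= w.sum()              # self-normalize
        means.append(float(np.sum(w * y)))
    return means

# example: two plausible models for the same uncertain input
models = [stats.norm(10.0, 1.0), stats.lognorm(s=0.1, scale=10.0)]
print(propagate_multimodel(models, [0.6, 0.4], response=lambda x: x**2))
```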
Predictive probability methods for interim monitoring in clinical trials with longitudinal outcomes.
Zhou, Ming; Tang, Qi; Lang, Lixin; Xing, Jun; Tatsuoka, Kay
2018-04-17
In clinical research and development, interim monitoring is critical for better decision-making and minimizing the risk of exposing patients to possible ineffective therapies. For interim futility or efficacy monitoring, predictive probability methods are widely adopted in practice. Those methods have been well studied for univariate variables. However, for longitudinal studies, predictive probability methods using univariate information from only completers may not be most efficient, and data from on-going subjects can be utilized to improve efficiency. On the other hand, leveraging information from on-going subjects could allow an interim analysis to be potentially conducted once a sufficient number of subjects reach an earlier time point. For longitudinal outcomes, we derive closed-form formulas for predictive probabilities, including Bayesian predictive probability, predictive power, and conditional power and also give closed-form solutions for predictive probability of success in a future trial and the predictive probability of success of the best dose. When predictive probabilities are used for interim monitoring, we study their distributions and discuss their analytical cutoff values or stopping boundaries that have desired operating characteristics. We show that predictive probabilities utilizing all longitudinal information are more efficient for interim monitoring than that using information from completers only. To illustrate their practical application for longitudinal data, we analyze 2 real data examples from clinical trials. Copyright © 2018 John Wiley & Sons, Ltd.
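For orientation, the Bayesian predictive probability of success is easiest to see in the simpler univariate, binary-endpoint setting with a conjugate beta prior; the longitudinal closed forms of the paper are not reproduced here. The prior, threshold, and efficacy criterion in the sketch are illustrative assumptions.

```python
from scipy import stats

def predictive_probability_success(x, n, N, a=1.0, b=1.0, theta0=0.5, eta=0.95):
    """Bayesian predictive probability of trial success for a binary endpoint
    (a simplified univariate sketch, not the paper's longitudinal formulas).

    x successes observed in n current subjects, N planned subjects in total,
    Beta(a, b) prior; 'success' at the final analysis means
    P(theta > theta0 | all data) > eta.
    """
    m = N - n                                   # subjects still to be observed
    pp = 0.0
    for y in range(m + 1):                      # possible numbers of future successes
        prob_y = stats.betabinom.pmf(y, m, a + x, b + n - x)            # predictive dist. of y
        post_tail = 1.0 - stats.beta.cdf(theta0, a + x + y, b + N - x - y)
        pp += prob_y * (post_tail > eta)
    return pp

# example interim look: 12 responders among 20 subjects, 50 planned
print(predictive_probability_success(x=12, n=20, N=50))
```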
Topological and Orthomodular Modeling of Context in Behavioral Science
NASA Astrophysics Data System (ADS)
Narens, Louis
2017-02-01
Two non-Boolean methods are discussed for modeling context in behavioral data and theory. The first is based on intuitionistic logic, which is similar to classical logic except that not every event has a complement. Its probability theory is also similar to classical probability theory, except that the definition of a probability function needs to be generalized to unions of events instead of applying only to unions of disjoint events. The generalization is needed because intuitionistic event spaces may not contain enough disjoint events for the classical definition to be effective. The second method develops a version of quantum logic for its underlying probability theory. It differs in a variety of ways from the Hilbert space logic used in quantum mechanics as a foundation for quantum probability theory. John von Neumann and others have commented on the lack of a relative frequency approach and a rational foundation for this probability theory. This article argues that its version of quantum probability theory does not have such issues. The method based on intuitionistic logic is useful for modeling cognitive interpretations that vary with context, for example, the mood of the decision maker, or the context produced by the influence of other items in a choice experiment. The method based on this article's quantum logic is useful for modeling probabilities across contexts, for example, how probabilities of events from different experiments are related.
Imprecise Probability Methods for Weapons UQ
DOE Office of Scientific and Technical Information (OSTI.GOV)
Picard, Richard Roy; Vander Wiel, Scott Alan
Building on recent work in uncertainty quanti cation, we examine the use of imprecise probability methods to better characterize expert knowledge and to improve on misleading aspects of Bayesian analysis with informative prior distributions. Quantitative approaches to incorporate uncertainties in weapons certi cation are subject to rigorous external peer review, and in this regard, certain imprecise probability methods are well established in the literature and attractive. These methods are illustrated using experimental data from LANL detonator impact testing.
The estimation of tree posterior probabilities using conditional clade probability distributions.
Larget, Bret
2013-07-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.
Autonomous rock detection on mars through region contrast
NASA Astrophysics Data System (ADS)
Xiao, Xueming; Cui, Hutao; Yao, Meibao; Tian, Yang
2017-08-01
In this paper, we present a new autonomous rock detection approach based on region contrast. Unlike current state-of-the-art pixel-level rock segmentation methods, the new method addresses the problem at the region level, which significantly reduces the computational cost. The image is first split into homogeneous regions based on intensity information and spatial layout. Considering the memory constraints of the onboard flight processor, only low-level features, the average intensity and variation of each superpixel, are measured. Region contrast is derived as the integration of intensity contrast and a smoothness measurement. Rocks are then segmented from the resulting contrast map by an adaptive threshold. Since a purely intensity-based method may cause false detections in background areas whose illumination differs from the surroundings, a more reliable method is further proposed by introducing a spatial factor and background similarity into the region contrast. The spatial factor captures the locality of contrast, while the background similarity calculates the probability of each subregion belonging to the background. Our method is efficient in dealing with large images, and only a few parameters are needed. Preliminary experimental results show that our algorithm outperforms edge-based methods on various grayscale rover images.
Methods, apparatus and system for notification of predictable memory failure
Cher, Chen-Yong; Andrade Costa, Carlos H.; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.
2017-01-03
A method for providing notification of a predictable memory failure includes the steps of: obtaining information regarding at least one condition associated with a memory; calculating a memory failure probability as a function of the obtained information; calculating a failure probability threshold; and generating a signal when the memory failure probability exceeds the failure probability threshold, the signal being indicative of a predicted future memory failure.
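The claimed steps map naturally onto a small monitoring routine. The sketch below uses a hypothetical logistic model and a fixed threshold purely as placeholders; the patent does not specify these, so the features, weights, and threshold here are assumptions for illustration.

```python
import math

def memory_failure_probability(corrected_errors, temperature_c, age_hours,
                               weights=(0.002, 0.03, 1e-5), bias=-6.0):
    """Hypothetical logistic model of memory failure probability computed from
    monitored conditions (features and weights are illustrative assumptions,
    not taken from the patent)."""
    z = bias + (weights[0] * corrected_errors
                + weights[1] * temperature_c
                + weights[2] * age_hours)
    return 1.0 / (1.0 + math.exp(-z))

def maybe_notify(conditions, threshold=0.8):
    """Generate a notification signal when the calculated failure probability
    exceeds the (here fixed, illustrative) failure probability threshold."""
    p_fail = memory_failure_probability(**conditions)
    if p_fail > threshold:
        return {"event": "predicted_memory_failure", "probability": p_fail}
    return None

print(maybe_notify({"corrected_errors": 5000, "temperature_c": 70, "age_hours": 40000}))
```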
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baron-Aznar, C.; Moreno-Jimenez, S.; Celis, M. A.
2008-08-11
The integrated dose is the total energy delivered to a radiotherapy target. This physical parameter could be a predictor for complications such as brain edema and radionecrosis after stereotactic radiotherapy treatments for brain tumors. The integrated dose depends on tissue density and volume. Using CT patient images from the National Institute of Neurology and Neurosurgery and the BrainScan(c) software, this work presents the mean density of 21 glioblastoma multiforme tumors, comparative results for normal tissue, and the estimated integrated dose for each case. The relationship between integrated dose and the probability of complications is discussed.
Reliability analysis of redundant systems. [a method to compute transition probabilities
NASA Technical Reports Server (NTRS)
Yeh, H. Y.
1974-01-01
A method is proposed to compute the transition probability (the probability of partial or total failure) of a parallel redundant system. The effects of the geometry of the system, the direction of the load, and the degree of redundancy on the probability of complete survival of a parachute-like system are also studied. The results show that the probability of complete survival of a three-member parachute-like system is very sensitive to variations in the horizontal angle of the load. However, this sensitivity becomes insignificant as the degree of redundancy increases.
Noisy cooperative intermittent processes: From blinking quantum dots to human consciousness
NASA Astrophysics Data System (ADS)
Allegrini, Paolo; Paradisi, Paolo; Menicucci, Danilo; Bedini, Remo; Gemignani, Angelo; Fronzoni, Leone
2011-07-01
We study the superposition of a non-Poisson renewal process with a superimposed Poisson noise. The non-Poisson renewals mark the passage between metastable states in systems with self-organization. We propose methods to measure separately the amount of information due to each of the two independent processes, and we see that a superficial study based on the survival probabilities yields stretched-exponential relaxations. Our method is in fact able to unravel the inverse-power-law relaxation of the isolated non-Poisson process, even when noise is present. We provide examples of this behavior in systems of diverse nature, from blinking nanocrystals to weak turbulence. Finally, we focus our discussion on events extracted from human electroencephalograms, and we discuss their connection with emerging properties of integrated neural dynamics, i.e., consciousness.