Calculation of the Poisson cumulative distribution function
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Nolty, Robert G.; Scheuer, Ernest M.
1990-01-01
A method for calculating the Poisson cdf (cumulative distribution function) is presented. The method avoids computer underflow and overflow during the process. The computer program uses this technique to calculate the Poisson cdf for arbitrary inputs. An algorithm that determines the Poisson parameter required to yield a specified value of the cdf is presented.
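The report itself is not reproduced here; a minimal log-space sketch of the same underflow/overflow-avoidance idea (a hypothetical helper, not the CUMPOIS program) might look like this:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), computed in log space so that
    neither the individual terms nor the running sum underflow or
    overflow, even for large lam. Assumes lam > 0. A sketch of the
    general idea only, not the report's own scaling scheme."""
    if k < 0:
        return 0.0
    # log of each pmf term: log p(i) = -lam + i*log(lam) - log(i!)
    log_terms = [-lam + i * math.log(lam) - math.lgamma(i + 1)
                 for i in range(int(k) + 1)]
    m = max(log_terms)                       # log-sum-exp trick
    s = sum(math.exp(t - m) for t in log_terms)
    return min(1.0, math.exp(m + math.log(s)))

print(poisson_cdf(900, 1000.0))  # a naive term-by-term sum underflows here
```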
NASA Astrophysics Data System (ADS)
Qiang, Ji
2017-10-01
A three-dimensional (3D) Poisson solver with longitudinal periodic and transverse open boundary conditions can have important applications in the beam physics of particle accelerators. In this paper, we present a fast, efficient spectral finite-difference method to solve the Poisson equation. This method uses a computational domain that contains the charged particle beam only and has a computational complexity of O(Nu log Nmode), where Nu is the total number of unknowns and Nmode is the maximum number of longitudinal or azimuthal modes. This saves both the computational time and the memory that would otherwise be spent imposing an artificial boundary condition on a large extended computational domain. The new 3D Poisson solver is parallelized using the message passing interface (MPI) on multi-processor computers and shows reasonable parallel performance up to hundreds of processor cores.
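As a toy analogue of such a spectral solve (fully periodic boundaries in 2D, rather than the paper's longitudinally periodic, transversely open 3D setting), an FFT-based Poisson solver looks like:

```python
import numpy as np

def poisson_periodic_2d(rho, Lx=1.0, Ly=1.0):
    """Spectral solve of  -laplacian(phi) = rho  with fully periodic
    boundaries: divide by k^2 in Fourier space."""
    ny, nx = rho.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)
    k2 = kx[None, :]**2 + ky[:, None]**2
    rho_hat = np.fft.fft2(rho)
    phi_hat = np.zeros_like(rho_hat)
    mask = k2 > 0
    phi_hat[mask] = rho_hat[mask] / k2[mask]  # zero mode: mean-free gauge
    return np.real(np.fft.ifft2(phi_hat))
```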
A coarse-grid projection method for accelerating incompressible flow computations
NASA Astrophysics Data System (ADS)
San, Omer; Staples, Anne E.
2013-01-01
We present a coarse-grid projection (CGP) method for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. After solving the Poisson equation on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. A particular version of the method is applied here to the vorticity-stream function, primitive variable, and vorticity-velocity formulations of incompressible Navier-Stokes equations. We compute several benchmark flow problems on two-dimensional Cartesian and non-Cartesian grids, as well as a three-dimensional flow problem. The method is found to accelerate these computations while retaining a level of accuracy close to that of the fine resolution field, which is significantly better than the accuracy obtained for a similar computation performed solely using a coarse grid. A linear acceleration rate is obtained for all the cases we consider due to the linear-cost elliptic Poisson solver used, with reduction factors in computational time between 2 and 42. The computational savings are larger when a suboptimal Poisson solver is used. We also find that the computational savings increase with increasing distortion ratio on non-Cartesian grids, making the CGP method a useful tool for accelerating generalized curvilinear incompressible flow solvers.
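The CGP idea can be sketched compactly. The fragment below shows one hypothetical pressure-Poisson step under CGP, with `coarse_poisson_solve` standing in for whatever black-box solver the flow code uses; the injection restriction and bilinear prolongation mirror the operators described above, but the grid details are illustrative assumptions.

```python
import numpy as np

def restrict(f):
    # Injection: keep every other point (assumes a (2n+1)-node fine grid).
    return f[::2, ::2]

def prolong(fc, shape):
    # Bilinear interpolation back to the fine grid via two 1D passes.
    ny, nx = shape
    yc = np.linspace(0, 1, fc.shape[0]); xc = np.linspace(0, 1, fc.shape[1])
    yf = np.linspace(0, 1, ny);          xf = np.linspace(0, 1, nx)
    tmp = np.array([np.interp(xf, xc, row) for row in fc])
    return np.array([np.interp(yf, yc, col) for col in tmp.T]).T

def cgp_pressure_step(rhs_fine, coarse_poisson_solve):
    """One CGP step: restrict the Poisson right-hand side, solve on the
    coarse grid with a black-box solver, prolong back to the fine grid."""
    rhs_c = restrict(rhs_fine)
    phi_c = coarse_poisson_solve(rhs_c)
    return prolong(phi_c, rhs_fine.shape)
```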
A coarse-grid projection method for accelerating incompressible flow computations
NASA Astrophysics Data System (ADS)
San, Omer; Staples, Anne
2011-11-01
We present a coarse-grid projection (CGP) algorithm for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. Here, we investigate a particular CGP method for the vorticity-stream function formulation that uses the full weighting operation for mapping from fine to coarse grids, the third-order Runge-Kutta method for time stepping, and finite differences for the spatial discretization. After solving the Poisson equation on a coarsened grid, bilinear interpolation is used to obtain the fine data for subsequent time stepping on the full grid. We compute several benchmark flows: the Taylor-Green vortex, a vortex pair merging, a double shear layer, decaying turbulence and the Taylor-Green vortex on a distorted grid. In all cases we use either FFT-based or V-cycle multigrid linear-cost Poisson solvers. Reducing the number of degrees of freedom of the Poisson solver by powers of two accelerates these computations while, for the first level of coarsening, retaining the same level of accuracy in the fine resolution vorticity field.
Poplová, Michaela; Sovka, Pavel; Cifra, Michal
2017-01-01
Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
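The abstract does not spell out the transform itself. As one hedged illustration of how a nonstationary Poisson signal can be made stationary while remaining Poisson, the binomial-thinning sketch below keeps each event at time t with probability min(trend)/trend(t), assuming the trend is known or already estimated; this is a classical device, not necessarily the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

def thin_to_stationary(counts, trend):
    """Binomial thinning: if counts[t] ~ Poisson(trend[t]) and each event
    is kept independently with probability min(trend)/trend[t], the result
    is Poisson(min(trend)) at every t -- stationary and still Poisson."""
    p = np.min(trend) / np.asarray(trend, dtype=float)
    return rng.binomial(np.asarray(counts), p)

# toy demo: Poisson counts riding on a slow sinusoidal trend
t = np.arange(10_000)
trend = 5.0 + 3.0 * np.sin(2 * np.pi * t / 2000)
x = rng.poisson(trend)
y = thin_to_stationary(x, trend)   # the variance trend is removed as well
```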
Womack, James C; Anton, Lucian; Dziedzic, Jacek; Hasnip, Phil J; Probert, Matt I J; Skylaris, Chris-Kriton
2018-03-13
The solution of the Poisson equation is a crucial step in electronic structure calculations, yielding the electrostatic potential, a key component of the quantum mechanical Hamiltonian. In recent decades, theoretical advances and increases in computer performance have made it possible to simulate the electronic structure of extended systems in complex environments. This requires the solution of more complicated variants of the Poisson equation, featuring nonhomogeneous dielectric permittivities, ionic concentrations with nonlinear dependencies, and diverse boundary conditions. The analytic solutions generally used to solve the Poisson equation in vacuum (or with homogeneous permittivity) are not applicable in these circumstances, and numerical methods must be used. In this work, we present DL_MG, a flexible, scalable, and accurate solver library, developed specifically to tackle the challenges of solving the Poisson equation in modern large-scale electronic structure calculations on parallel computers. Our solver is based on the multigrid approach and uses an iterative high-order defect correction method to improve the accuracy of solutions. Using two chemically relevant model systems, we tested the accuracy and computational performance of DL_MG when solving the generalized Poisson and Poisson-Boltzmann equations, demonstrating excellent agreement with analytic solutions and efficient scaling to ~10^9 unknowns and hundreds of CPU cores. We also applied DL_MG in actual large-scale electronic structure calculations, using the ONETEP linear-scaling electronic structure package to study a 2615 atom protein-ligand complex with routinely available computational resources. In these calculations, the overall execution time with DL_MG was not significantly greater than the time required for calculations using a conventional FFT-based solver.
NASA Technical Reports Server (NTRS)
Fujii, K.
1983-01-01
A method for generating three-dimensional, finite difference grids about complicated geometries by using Poisson equations is developed. The inhomogeneous terms are automatically chosen such that orthogonality and spacing restrictions at the body surface are satisfied. Spherical variables are used to avoid the axis singularity, and an alternating-direction-implicit (ADI) solution scheme is used to accelerate the computations. Computed results are presented that show the capability of the method. Since most of the results presented have been used as grids for flow-field computations, the method has proven a useful tool for generating three-dimensional grids about complicated geometries.
NASA Technical Reports Server (NTRS)
Young, D. P.; Woo, A. C.; Bussoletti, J. E.; Johnson, F. T.
1986-01-01
A general method is developed combining fast direct methods and boundary integral equation methods to solve Poisson's equation on irregular exterior regions. The method requires O(N log N) operations, where N is the number of grid points. Error estimates are given that hold for regions with corners and other boundary irregularities. Computational results are given in the context of computational aerodynamics for a two-dimensional lifting airfoil. Solutions of boundary integral equations for lifting and nonlifting aerodynamic configurations using preconditioned conjugate gradient methods are examined for varying degrees of thinness.
Matrix decomposition graphics processing unit solver for Poisson image editing
NASA Astrophysics Data System (ADS)
Lei, Zhao; Wei, Li
2012-10-01
In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address this problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) with a two-level architecture. These features enable MDGS to generate solutions identical to those of common Poisson methods and to achieve a high convergence rate in most cases. The approach is advantageous in terms of parallelizability and low memory consumption, enabling real-time image editing in a wide range of applications.
1978-12-01
…Poisson processes. The method is valid for Poisson processes with any given intensity function. The basic thinning algorithm is modified to exploit several refinements which reduce computer execution time by approximately one-third. The basic and modified thinning programs are compared with the Poisson decomposition and gap-statistics algorithm, which is easily implemented for Poisson processes with intensity functions of the form exp(a0 + a1*t + a2*t^2). The thinning programs are competitive in both execution…
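The record above is truncated, but the basic thinning algorithm it refers to is the classical Lewis-Shedler scheme; a minimal sketch, with purely illustrative coefficients for the quoted intensity form, is:

```python
import math, random

def thinning_nhpp(intensity, lam_max, t_end):
    """Basic thinning (Lewis-Shedler): simulate a nonhomogeneous Poisson
    process with intensity(t) <= lam_max on [0, t_end]. Candidates come
    from a homogeneous process of rate lam_max and are accepted with
    probability intensity(t)/lam_max."""
    t, events = 0.0, []
    while True:
        t += random.expovariate(lam_max)
        if t > t_end:
            return events
        if random.random() <= intensity(t) / lam_max:
            events.append(t)

# Intensity of the quadratic-exponential form quoted above; the
# coefficients are illustrative, not taken from the report.
a0, a1, a2 = 0.0, 0.5, -0.05
rate = lambda t: math.exp(a0 + a1 * t + a2 * t * t)
t_peak = -a1 / (2 * a2)                 # vertex; the maximum since a2 < 0
lam_max = rate(min(max(t_peak, 0.0), 10.0))
events = thinning_nhpp(rate, lam_max, 10.0)
```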
Computation of solar perturbations with Poisson series
NASA Technical Reports Server (NTRS)
Broucke, R.
1974-01-01
Description of a project for computing first-order perturbations of natural or artificial satellites by integrating the equations of motion on a computer with automatic Poisson series expansions. A basic feature of the method of solution is that the classical variation-of-parameters formulation is used rather than rectangular coordinates. However, the variation-of-parameters formulation uses the three rectangular components of the disturbing force rather than the classical disturbing function, so that there is no problem in expanding the disturbing function in series. Another characteristic of the variation-of-parameters formulation employed is that six rather unusual variables are used in order to avoid singularities at zero eccentricity and zero (or 90 deg) inclination. The integration process starts by assuming that all the orbit elements present on the right-hand sides of the equations of motion are constants. These right-hand sides are then simple Poisson series which can be obtained with the use of the Bessel expansions of the two-body problem in conjunction with certain iteration methods. These Poisson series can then be integrated term by term, and a first-order solution is obtained.
Noise parameter estimation for Poisson corrupted images using variance stabilization transforms.
Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo
2014-03-01
Noise is present in all images captured by real-world image sensors. The Poisson distribution models the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson corrupted images using properties of variance stabilization. With a significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to state-of-the-art methods.
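As a rough illustration of the mean-variance reasoning behind such estimators (not the authors' variance-stabilization procedure itself), one can estimate a scalar gain a in y = a * Poisson(lambda) from the local relation var(y) = a * mean(y):

```python
import numpy as np

def estimate_poisson_gain(img, patch=8):
    """Estimate the gain a in  y = a * Poisson(lambda)  via the local
    mean-variance relation var(y) = a * mean(y). A simple baseline in
    the same spirit as, but not identical to, the paper's estimator."""
    h, w = img.shape
    means, varis = [], []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = img[i:i + patch, j:j + patch]
            means.append(p.mean())
            varis.append(p.var(ddof=1))
    m, v = np.array(means), np.array(varis)
    return float((m @ v) / (m @ m))     # least-squares slope through 0

rng = np.random.default_rng(3)
# piecewise-constant truth so patch variance reflects noise only
truth = np.repeat(np.repeat(rng.uniform(5, 50, (32, 32)), 8, 0), 8, 1)
noisy = 2.5 * rng.poisson(truth)
print(estimate_poisson_gain(noisy))     # should be close to 2.5
```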
Maslov indices, Poisson brackets, and singular differential forms
NASA Astrophysics Data System (ADS)
Esterlis, I.; Haggard, H. M.; Hedeman, A.; Littlejohn, R. G.
2014-06-01
Maslov indices are integers that appear in semiclassical wave functions and quantization conditions. They are often notoriously difficult to compute. We present methods of computing the Maslov index that rely only on typically elementary Poisson brackets and simple linear algebra. We also present a singular differential form, whose integral along a curve gives the Maslov index of that curve. The form is closed but not exact, and transforms by an exact differential under canonical transformations. We illustrate the method with the 6j-symbol, which is important in angular-momentum theory and in quantum gravity.
Fast and Accurate Poisson Denoising With Trainable Nonlinear Diffusion.
Feng, Wensen; Qiao, Peng; Chen, Yunjin
2018-06-01
The degradation of the acquired signal by Poisson noise is a common problem for various imaging applications, such as medical imaging, night vision, and microscopy. Up to now, many state-of-the-art Poisson denoising techniques have mainly concentrated on achieving utmost performance, with little consideration for computational efficiency. Therefore, in this paper we propose a Poisson denoising model with both high computational efficiency and high recovery quality. To this end, we exploit the newly developed trainable nonlinear reaction diffusion (TNRD) model, which has proven to be an extremely fast image restoration approach with performance surpassing recent state-of-the-art methods. However, the straightforward direct gradient descent employed in the original TNRD-based denoising task is not applicable here. To solve this problem, we resort to the proximal gradient descent method. We retrain the model parameters, including the linear filters and influence functions, by taking into account the Poisson noise statistics, and end up with a well-trained nonlinear diffusion model specialized for Poisson denoising. The trained model provides strongly competitive results against state-of-the-art approaches, while retaining a simple structure and high efficiency. Furthermore, our proposed model comes with an additional advantage: the diffusion process is well-suited for parallel computation on graphics processing units (GPUs). For images of size , our GPU implementation takes less than 0.1 s to produce state-of-the-art Poisson denoising performance.
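The proximal map of the pixelwise Poisson negative log-likelihood has a closed form, which is the key ingredient of a proximal gradient scheme like the one described. The sketch below pairs it with a simple quadratic smoothness prior as a stand-in for the trained diffusion filters; all parameter values are illustrative:

```python
import numpy as np

def prox_poisson(v, y, tau):
    """argmin_x  (x - y*log(x)) + (x - v)^2 / (2*tau), elementwise:
    the positive root of  x^2 + (tau - v)*x - tau*y = 0."""
    b = v - tau
    return 0.5 * (b + np.sqrt(b * b + 4.0 * tau * y))

def smooth_grad(x, lam=0.2):
    # Gradient of a quadratic smoothness prior -- a crude stand-in for
    # the trained filters/influence functions of the diffusion model.
    g = np.zeros_like(x)
    g[1:-1, 1:-1] = (4 * x[1:-1, 1:-1] - x[:-2, 1:-1] - x[2:, 1:-1]
                     - x[1:-1, :-2] - x[1:-1, 2:])
    return lam * g

def denoise_poisson(y, tau=0.25, iters=100):
    x = np.maximum(y.astype(float), 1.0)   # positive initialization
    for _ in range(iters):
        x = prox_poisson(x - tau * smooth_grad(x), y, tau)
    return x
```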
A GPU accelerated and error-controlled solver for the unbounded Poisson equation in three dimensions
NASA Astrophysics Data System (ADS)
Exl, Lukas
2017-12-01
An efficient solver for the three dimensional free-space Poisson equation is presented. The underlying numerical method is based on finite Fourier series approximation. While the error of all involved approximations can be fully controlled, the overall computation error is driven by the convergence of the finite Fourier series of the density. For smooth and fast-decaying densities the proposed method will be spectrally accurate. The method scales with O(N log N) operations, where N is the total number of discretization points in the Cartesian grid. The majority of the computational costs come from fast Fourier transforms (FFT), which makes it ideal for GPU computation. Several numerical computations on CPU and GPU validate the method and show efficiency and convergence behavior. Tests are performed using the Vienna Scientific Cluster 3 (VSC3). A free MATLAB implementation for CPU and GPU is provided to the interested community.
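A 2D toy analogue of such an FFT-based free-space solver is the classical zero-padded (Hockney-style) convolution with the Green's function; this sketch is not the paper's Fourier-series method, and the singularity regularization is a crude assumption:

```python
import numpy as np

def freespace_poisson_2d(rho, h):
    """Solve  -laplacian(phi) = rho  with open (free-space) boundaries
    by zero-padded FFT convolution with the 2D 
    Green's function G(r) = -ln(r)/(2*pi)."""
    n = rho.shape[0]
    m = 2 * n                                  # padding kills wrap-around
    idx = np.arange(m)
    d = np.minimum(idx, m - idx) * h           # wrapped grid distances
    r = np.hypot(d[:, None], d[None, :])
    r[0, 0] = 0.5 * h                          # regularize the singularity
    g = -np.log(r) / (2 * np.pi)
    rho_pad = np.zeros((m, m)); rho_pad[:n, :n] = rho
    phi = np.fft.ifft2(np.fft.fft2(rho_pad) * np.fft.fft2(g)).real * h * h
    return phi[:n, :n]
```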
Bajaj, Chandrajit; Chen, Shun-Chuan; Rand, Alexander
2011-01-01
In order to compute polarization energy of biomolecules, we describe a boundary element approach to solving the linearized Poisson-Boltzmann equation. Our approach combines several important features including the derivative boundary formulation of the problem and a smooth approximation of the molecular surface based on the algebraic spline molecular surface. State of the art software for numerical linear algebra and the kernel independent fast multipole method is used for both simplicity and efficiency of our implementation. We perform a variety of computational experiments, testing our method on a number of actual proteins involved in molecular docking and demonstrating the effectiveness of our solver for computing molecular polarization energy. PMID:21660123
Lu, Benzhuo; Zhou, Y C; Huber, Gary A; Bond, Stephen D; Holst, Michael J; McCammon, J Andrew
2007-10-07
A computational framework is presented for the continuum modeling of cellular biomolecular diffusion influenced by electrostatic driving forces. This framework is developed from a combination of state-of-the-art numerical methods, geometric meshing, and computer visualization tools. In particular, a hybrid of (adaptive) finite element and boundary element methods is adopted to solve the Smoluchowski equation (SE), the Poisson equation (PE), and the Poisson-Nernst-Planck equation (PNPE) in order to describe electrodiffusion processes. The finite element method is used because of its flexibility in modeling irregular geometries and complex boundary conditions. The boundary element method is used due to the convenience of treating the singularities in the source charge distribution and its accurate solution to electrostatic problems on molecular boundaries. Nonsteady-state diffusion can be studied using this framework, with the electric field computed using the densities of charged small molecules and mobile ions in the solvent. A solution for mesh generation for biomolecular systems is supplied, which is an essential component for the finite element and boundary element computations. The uncoupled Smoluchowski equation and Poisson-Boltzmann equation are considered as special cases of the PNPE in the numerical algorithm, and therefore can be solved in this framework as well. Two types of computations are reported in the results: stationary PNPE solutions and time-dependent SE or Nernst-Planck solutions. A biological application of the first type is the ionic density distribution around a fragment of DNA determined by the equilibrium PNPE. The stationary PNPE with nonzero flux is also studied for a simple model system, and leads to the observation that the substrate charges' interference with the electrostatic field strongly affects the reaction rate coefficient. The second type is a time-dependent diffusion process: the consumption of the neurotransmitter acetylcholine by acetylcholinesterase, determined by the SE and a single uncoupled solution of the Poisson-Boltzmann equation. The electrostatic effects, counterion compensation, spatiotemporal distribution, and diffusion-controlled reaction kinetics are analyzed and different methods are compared.
NASA Technical Reports Server (NTRS)
Rosenfeld, Moshe; Kwak, Dochan; Vinokur, Marcel
1992-01-01
A fractional step method is developed for solving the time-dependent three-dimensional incompressible Navier-Stokes equations in generalized coordinate systems. The primitive variable formulation uses the pressure, defined at the center of the computational cell, and the volume fluxes across the faces of the cells as the dependent variables, instead of the Cartesian components of the velocity. This choice is equivalent to using the contravariant velocity components in a staggered grid multiplied by the volume of the computational cell. The governing equations are discretized by finite volumes using a staggered mesh system. The solution of the continuity equation is decoupled from the momentum equations by a fractional step method which enforces mass conservation by solving a Poisson equation. This procedure, combined with the consistent approximations of the geometric quantities, is done to satisfy the discretized mass conservation equation to machine accuracy, as well as to gain the favorable convergence properties of the Poisson solver. The momentum equations are solved by an approximate factorization method, and a novel ZEBRA scheme with four-color ordering is devised for the efficient solution of the Poisson equation. Several two- and three-dimensional laminar test cases are computed and compared with other numerical and experimental results to validate the solution method. Good agreement is obtained in all cases.
Complex wet-environments in electronic-structure calculations
NASA Astrophysics Data System (ADS)
Fisicaro, Giuseppe; Genovese, Luigi; Andreussi, Oliviero; Marzari, Nicola; Goedecker, Stefan
The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, including the complex electrostatic screening coming from the solvent. In the present work we present a solver that handles both the generalized Poisson and the Poisson-Boltzmann equations. A preconditioned conjugate gradient (PCG) method has been implemented for the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively within some ten iterations. In addition, a self-consistent procedure enables us to solve the full Poisson-Boltzmann problem. The algorithms take advantage of a preconditioning procedure based on the BigDFT Poisson solver for the standard Poisson equation. They exhibit very high accuracy and parallel efficiency, and allow for different boundary conditions, including surfaces. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes. We present test calculations for large proteins to demonstrate efficiency and performance. This work was done within the PASC and NCCR MARVEL projects. Computer resources were provided by the Swiss National Supercomputing Centre (CSCS) under Project ID s499. LG also acknowledges support from the EXTMOS EU project.
An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-03-08
Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical distribution properties and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing the dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation
NASA Astrophysics Data System (ADS)
Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab
2015-05-01
3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods for different grid sizes and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using a parallel Jacobi (PJ) method is examined in relation to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take prohibitively long to converge. The PJ method, however, reduces the computational time to some extent for large grid sizes.
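For reference, a Jacobi sweep for the discretized 3D Poisson problem updates every interior point from the previous iterate, which is what makes it embarrassingly parallel; a minimal NumPy sketch (Dirichlet u = 0 boundaries assumed):

```python
import numpy as np

def jacobi_poisson_3d(f, h, iters=200):
    """Plain Jacobi sweeps for  -laplacian(u) = f  with u = 0 on the
    boundary of a uniform grid with spacing h. Every interior point is
    computed from the previous iterate, so each sweep parallelizes."""
    u = np.zeros_like(f)
    for _ in range(iters):
        u_new = u.copy()
        u_new[1:-1, 1:-1, 1:-1] = (
            u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1] +
            u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1] +
            u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:] +
            h * h * f[1:-1, 1:-1, 1:-1]) / 6.0
        u = u_new
    return u
```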
NASA Astrophysics Data System (ADS)
Kacem, S.; Eichwald, O.; Ducasse, O.; Renon, N.; Yousfi, M.; Charrada, K.
2012-01-01
Streamer dynamics are characterized by the fast propagation of ionized shock waves at the nanosecond scale under very sharp space-charge variations. Modelling streamer dynamics requires the solution of charged-particle transport equations coupled to the elliptic Poisson equation. The latter has to be solved at each time step of the streamer's evolution in order to follow the propagation of the resulting space-charge electric field. In the present paper, full multigrid (FMG) and multigrid (MG) methods have been adapted to solve Poisson's equation for streamer discharge simulations between asymmetric electrodes. The validity of the FMG method for the computation of the potential field is first shown by direct comparisons with the analytic solution of the Laplacian potential in a point-to-plane geometry. The efficiency of the method is also compared with the classical successive over-relaxation (SOR) method and the MUltifrontal massively parallel solver (MUMPS). The MG method is then applied to the simulation of positive streamer propagation, and its efficiency is evaluated against the SOR and MUMPS methods in the chosen point-to-plane configuration. Very good agreement is obtained between the three methods for all electro-hydrodynamic characteristics of the streamer during its propagation in the inter-electrode gap. With the MG method, however, the computational time to solve Poisson's equation is at least 2 times shorter under our simulation conditions.
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1986-01-01
Various graduate research activities in the field of computer science are reported. Among the topics discussed are: (1) failure probabilities in multi-version software; (2) Gaussian Elimination on parallel computers; (3) three dimensional Poisson solvers on parallel/vector computers; (4) automated task decomposition for multiple robot arms; (5) multi-color incomplete cholesky conjugate gradient methods on the Cyber 205; and (6) parallel implementation of iterative methods for solving linear equations.
Berti, Claudio; Gillespie, Dirk; Bardhan, Jaydeep P; Eisenberg, Robert S; Fiegna, Claudio
2012-07-01
Particle-based simulation represents a powerful approach to modeling physical systems in electronics, molecular biology, and chemical physics. Accounting for the interactions occurring among charged particles requires an accurate and efficient solution of Poisson's equation. For a system of discrete charges with inhomogeneous dielectrics, i.e., a system with discontinuities in the permittivity, the boundary element method (BEM) is frequently adopted. It provides the solution of Poisson's equation, accounting for polarization effects due to the discontinuity in the permittivity by computing the induced charges at the dielectric boundaries. In this framework, the total electrostatic potential is then found by superimposing the elemental contributions from both source and induced charges. In this paper, we present a comparison between two BEMs to solve a boundary-integral formulation of Poisson's equation, with emphasis on the BEMs' suitability for particle-based simulations in terms of solution accuracy and computation speed. The two approaches are the collocation and qualocation methods. Collocation is implemented following the induced-charge computation method of D. Boda et al. [J. Chem. Phys. 125, 034901 (2006)]. The qualocation method is described by J. Tausch et al. [IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20, 1398 (2001)]. These approaches are studied using both flat and curved surface elements to discretize the dielectric boundary, using two challenging test cases: a dielectric sphere embedded in a different dielectric medium and a toy model of an ion channel. Earlier comparisons of the two BEM approaches did not address curved surface elements or semiatomistic models of ion channels. Our results support the earlier findings that for flat-element calculations, qualocation is always significantly more accurate than collocation. On the other hand, when the dielectric boundary is discretized with curved surface elements, the two methods are essentially equivalent; i.e., they have comparable accuracies for the same number of elements. We find that ions in water--charges embedded in a high-dielectric medium--are harder to compute accurately than charges in a low-dielectric medium.
NASA Astrophysics Data System (ADS)
Wan, Tian
This work is motivated by the lack of a fully coupled computational tool that successfully solves the turbulent chemically reacting Navier-Stokes equations, the electron energy conservation equation, and the electric current Poisson equation. In the present work, the abovementioned equations are solved in a fully coupled manner using fully implicit parallel GMRES methods. The system of Navier-Stokes equations is solved using a GMRES method with combined Schwarz and ILU(0) preconditioners. The electron energy equation and the electric current Poisson equation are solved using a GMRES method with combined SOR and Jacobi preconditioners. The fully coupled method has also been implemented successfully in an unstructured solver, US3D, and convergence test results are presented. The new method is shown to be two to five times faster than the original DPLR method. The Poisson solver is validated with analytic test problems. Four problems are then selected; two of them are computed to explore the possibility of onboard MHD control and power generation, and the other two are simulations of experiments. First, the possibility of onboard reentry shock control by a magnetic field is explored. As part of a previous project, MHD power generation onboard a re-entry vehicle is also simulated. Then, the MHD acceleration experiments conducted at NASA Ames Research Center are simulated. Lastly, the MHD power generation experiments known as the HVEPS project are simulated. For code validation, the scramjet experiments at the University of Queensland are simulated first; the generator section of the HVEPS test facility is then computed. The main conclusion is that the computational tool is accurate for different types of problems and flow conditions, and that its accuracy and efficiency are necessary as the flow complexity increases.
Efficiency optimization of a fast Poisson solver in beam dynamics simulation
NASA Astrophysics Data System (ADS)
Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula
2016-01-01
Calculating the solution of Poisson's equation for the space charge force is still the major source of time consumption in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver used in beam dynamics simulations, the integrated Green's function method, and introduce three optimizations of the classical solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy, providing a fast routine for high-performance calculation of space charge effects in accelerators.
Mathematical and Numerical Aspects of the Adaptive Fast Multipole Poisson-Boltzmann Solver
Zhang, Bo; Lu, Benzhuo; Cheng, Xiaolin; ...
2013-01-01
This paper summarizes the mathematical and numerical theories and computational elements of the adaptive fast multipole Poisson-Boltzmann (AFMPB) solver. We introduce and discuss the following components in order: the Poisson-Boltzmann model, boundary integral equation reformulation, surface mesh generation, the node-patch discretization approach, Krylov iterative methods, the new version of fast multipole methods (FMMs), and a dynamic prioritization technique for scheduling parallel operations. For each component, we also remark on feasible approaches for further improvements in efficiency, accuracy and applicability of the AFMPB solver to large-scale long-time molecular dynamics simulations. Lastly, the potential of the solver is demonstrated with preliminary numerical results.
Botello-Smith, Wesley M.; Luo, Ray
2016-01-01
Continuum solvent models have been widely used in biomolecular modeling applications. Recently much attention has been given to the inclusion of an implicit membrane in existing continuum Poisson-Boltzmann solvent models to extend their applications to membrane systems. Inclusion of an implicit membrane complicates numerical solutions of the underlying Poisson-Boltzmann equation due to the dielectric inhomogeneity on the boundary surfaces of a computation grid. This can be alleviated by the use of the periodic boundary condition, a common practice in electrostatic computations in particle simulations. The conjugate gradient and successive over-relaxation methods are relatively straightforward to adapt to periodic calculations, but their convergence rates are quite low, limiting their applications to free energy simulations that require a large number of conformations to be processed. To accelerate convergence, the Incomplete Cholesky preconditioning and the geometric multi-grid methods have been extended to incorporate periodicity for biomolecular applications. Impressive convergence behaviors were found, as in previous applications of these numerical methods to the tested biomolecules and MMPBSA calculations. PMID:26389966
Semi-Lagrangian particle methods for high-dimensional Vlasov-Poisson systems
NASA Astrophysics Data System (ADS)
Cottet, Georges-Henri
2018-07-01
This paper deals with the implementation of high order semi-Lagrangian particle methods to handle high dimensional Vlasov-Poisson systems. It is based on recent developments in the numerical analysis of particle methods, and the paper focuses on specific algorithmic features to handle large dimensions. The methods are tested with uniform particle distributions, in particular against a recent multi-resolution wavelet-based method, on a 4D plasma instability case and a 6D gravitational case. Conservation properties, accuracy and computational costs are monitored. The excellent accuracy/cost trade-off shown by the method opens new perspectives for accurate simulations of high dimensional kinetic equations by particle methods.
pK(A) in proteins solving the Poisson-Boltzmann equation with finite elements.
Sakalli, Ilkay; Knapp, Ernst-Walter
2015-11-05
Knowledge of pK(A) values is essential for understanding the function of proteins in living systems. We present a novel approach demonstrating that the finite element (FE) method of solving the linearized Poisson-Boltzmann equation (lPBE) can successfully be used to compute pK(A) values in proteins with high accuracy, as a possible replacement for the finite difference (FD) method. For this purpose, we implemented the software molecular Finite Element Solver (mFES) in the framework of the Karlsberg+ program to compute pK(A) values. This work focuses on a comparison between pK(A) computations obtained with the well-established FD method and with the newly developed FE method mFES, solving the lPBE using protein crystal structures without conformational changes. Accurate and coarse model systems are set up with mFES using a similar number of unknowns compared with the FD method. Our FE method delivers results for computations of pK(A) values and interaction energies of titratable groups that are comparable in accuracy. We introduce different thermodynamic cycles to evaluate pK(A) values, and we show for the FE method how different parameters influence the accuracy of computed pK(A) values. © 2015 Wiley Periodicals, Inc.
The solution of large multi-dimensional Poisson problems
NASA Technical Reports Server (NTRS)
Stone, H. S.
1974-01-01
The Buneman algorithm for solving Poisson problems can be adapted to solve large Poisson problems on computers with a rotating drum memory so that the computation is done with very little time lost due to rotational latency of the drum.
An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-09-01
Poisson disk sampling has excellent spatial and spectral properties and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies address the problem of generating Poisson disks on surfaces, due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
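A serial Euclidean emulation of the priority rule conveys the idea: visiting candidates in decreasing priority order and rejecting any candidate that conflicts with an already-accepted one reproduces the outcome of the parallel conflict resolution. Distances here are Euclidean in the unit square, whereas the paper works with intrinsic (geodesic) distances on a surface:

```python
import numpy as np

def priority_poisson_disk(n_cand, radius, seed=7):
    """Priority-based dart throwing in the unit square. Each candidate
    gets a random, unique priority; a candidate survives only if no
    surviving higher-priority candidate lies within `radius`."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n_cand, 2))
    accepted = []
    for i in rng.permutation(n_cand):      # visit in priority order
        p = pts[i]
        if all(np.hypot(p[0] - q[0], p[1] - q[1]) >= radius
               for q in accepted):
            accepted.append(p)
    return np.array(accepted)

samples = priority_poisson_disk(5000, 0.05)  # a few hundred spaced points
```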
NASA Technical Reports Server (NTRS)
Sorenson, R. L.
1980-01-01
A method for generating two dimensional finite difference grids about airfoils and other shapes by the use of the Poisson differential equation is developed. The inhomogeneous terms are automatically chosen such that two important effects are imposed on the grid at both the inner and outer boundaries. The first effect is control of the spacing between mesh points along mesh lines intersecting the boundaries. The second effect is control of the angles with which mesh lines intersect the boundaries. A FORTRAN computer program has been written to use this method. A description of the program, a discussion of the control parameters, and a set of sample cases are included.
Algorithm Calculates Cumulative Poisson Distribution
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.
1992-01-01
Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
Zeroth Poisson Homology, Foliated Cohomology and Perfect Poisson Manifolds
NASA Astrophysics Data System (ADS)
Martínez-Torres, David; Miranda, Eva
2018-01-01
We prove that, for compact regular Poisson manifolds, the zeroth homology group is isomorphic to the top foliated cohomology group, and we give some applications. In particular, we show that, for regular unimodular Poisson manifolds, top Poisson and foliated cohomology groups are isomorphic. Inspired by the symplectic setting, we define what a perfect Poisson manifold is. We use these Poisson homology computations to provide families of perfect Poisson manifolds.
A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments.
Fisicaro, G; Genovese, L; Andreussi, O; Marzari, N; Goedecker, S
2016-01-07
The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively within some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.
A coarse-grid-projection acceleration method for finite-element incompressible flow computations
NASA Astrophysics Data System (ADS)
Kashefi, Ali; Staples, Anne; FiN Lab Team
2015-11-01
Coarse grid projection (CGP) methodology provides a framework for accelerating computations by performing part of the computation on a coarsened grid. We apply CGP to pressure projection methods for finite element-based incompressible flow simulations. In this approach, the predicted velocity field data are restricted to a coarsened grid, the pressure is determined by solving the Poisson equation on the coarse grid, and the resulting data are prolonged to the preset fine grid. The contributions of the CGP method to the pressure correction technique are twofold: first, it substantially lessens the computational cost devoted to the Poisson equation, which is the most time-consuming part of the simulation process; second, it preserves the accuracy of the velocity field. The velocity and pressure spaces are approximated by a Galerkin spectral element method using piecewise linear basis functions. A restriction operator is designed so that fine data are directly injected into the coarse grid. The Laplacian and divergence matrices are derived by taking inner products of coarse grid shape functions. Linear interpolation is implemented to construct a prolongation operator. A study of the data accuracy and the CPU time for the CGP-based versus non-CGP computations is presented.
A note on an attempt at more efficient Poisson series evaluation. [for lunar libration
NASA Technical Reports Server (NTRS)
Shelus, P. J.; Jefferys, W. H., III
1975-01-01
A substantial reduction has been achieved in the time necessary to compute lunar libration series. The method eliminates many of the trigonometric function calls by a suitable transformation and applies a short SNOBOL processor to the FORTRAN coding of the transformed series, which obviates many of the multiplication operations during series evaluation. Similar results can be achieved quite easily with other Poisson series.
2011-11-01
…the Poisson form of the equations can also be generated by manipulating the computational space, so forcing functions become superfluous. … Unstructured methods for region discretization have become common in computational fluid dynamics (CFD) analysis because of certain benefits … application of Winslow elliptic smoothing equations to unstructured meshes. It has been shown that it is not necessary for the computational space of…
Low Dose PET Image Reconstruction with Total Variation Using Alternating Direction Method.
Yu, Xingjian; Wang, Chenye; Hu, Hongjie; Liu, Huafeng
2016-01-01
In this paper, a total variation (TV) minimization strategy is proposed to overcome the problem of sparse spatial resolution and large amounts of noise in low dose positron emission tomography (PET) imaging reconstruction. Two types of objective function were established based on two statistical models of measured PET data, least-square (LS) TV for the Gaussian distribution and Poisson-TV for the Poisson distribution. To efficiently obtain high quality reconstructed images, the alternating direction method (ADM) is used to solve these objective functions. As compared with the iterative shrinkage/thresholding (IST) based algorithms, the proposed ADM can make full use of the TV constraint and its convergence rate is faster. The performance of the proposed approach is validated through comparisons with the expectation-maximization (EM) method using synthetic and experimental biological data. In the comparisons, the results of both LS-TV and Poisson-TV are taken into consideration to find which models are more suitable for PET imaging, in particular low-dose PET. To evaluate the results quantitatively, we computed bias, variance, and the contrast recovery coefficient (CRC) and drew profiles of the reconstructed images produced by the different methods. The results show that both Poisson-TV and LS-TV can provide a high visual quality at a low dose level. The bias and variance of the proposed LS-TV and Poisson-TV methods are 20% to 74% less at all counting levels than those of the EM method. Poisson-TV gives the best performance in terms of high-accuracy reconstruction with the lowest bias and variance as compared to the ground truth (14.3% less bias and 21.9% less variance). In contrast, LS-TV gives the best performance in terms of the high contrast of the reconstruction with the highest CRC.
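For context, the EM baseline referred to above is the classical ML-EM iteration for Poisson data; a minimal sketch with a dense system matrix (real PET systems use sparse or on-the-fly projector implementations) is:

```python
import numpy as np

def mlem(A, y, iters=50):
    """Classical ML-EM for Poisson measurements y ~ Poisson(A @ x):
        x <- x * A^T(y / (A x)) / (A^T 1).
    Assumes every column of A has nonzero total sensitivity."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])       # sensitivity image A^T 1
    for _ in range(iters):
        proj = np.maximum(A @ x, 1e-12)    # guard against divide-by-zero
        x *= (A.T @ (y / proj)) / sens
    return x
```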
Wang, Wansheng; Chen, Long; Zhou, Jie
2015-01-01
A postprocessing technique for mixed finite element methods for the Cahn-Hilliard equation is developed and analyzed. Once the mixed finite element approximations have been computed at a fixed time on the coarser mesh, the approximations are postprocessed by solving two decoupled Poisson equations in an enriched finite element space (either on a finer grid or a higher-order space) for which many fast Poisson solvers can be applied. The nonlinear iteration is applied only to a much smaller problem, and the computational cost using Newton and direct solvers is negligible compared with the cost of the linear problem. The analysis presented here shows that this technique retains the optimal rate of convergence for both the concentration and the chemical potential approximations. The corresponding error estimates obtained in our paper, especially the negative norm error estimates, are non-trivial and differ from existing results in the literature. PMID:27110063
Calculated and measured fields in superferric wiggler magnets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blum, E.B.; Solomon, L.
1995-02-01
Although Klaus Halbach is widely known and appreciated as the originator of the computer program POISSON for electromagnetic field calculation, Klaus has always believed that analytical methods can give much more insight into the performance of a magnet than numerical simulation. Analytical approximations readily show how the different aspects of a magnet's design, such as pole dimensions, current, and coil configuration, contribute to the performance. These methods yield accuracies of better than 10%. Analytical methods should therefore be used when conceptualizing a magnet design; computer analysis can then be used for refinement. A simple model is presented for the peak on-axis field of an electromagnetic wiggler with iron poles and superconducting coils. The model is applied to the radiator section of the superconducting wiggler for the BNL Harmonic Generation Free Electron Laser. The predictions of the model are compared to the measured field and the results from POISSON.
Detecting isotopic ratio outliers
NASA Astrophysics Data System (ADS)
Bayne, C. K.; Smith, D. H.
An alternative method is proposed for improving isotopic ratio estimates. This method mathematically models pulse-count data and uses iteratively reweighted Poisson regression to estimate model parameters and calculate the isotopic ratios. This computer-oriented approach provides theoretically better methods than conventional techniques for establishing error limits and identifying outliers.
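Iteratively reweighted Poisson regression is standard IRLS for a Poisson GLM with a log link; a compact sketch with a toy consistency check (the design matrix and coefficients are illustrative) is:

```python
import numpy as np

def poisson_irls(X, y, iters=25):
    """IRLS for Poisson regression with a log link,
    y_i ~ Poisson(exp(x_i . beta)): each pass solves a weighted
    least-squares problem with weights mu = exp(X @ beta)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu          # working response
        XtW = X.T * mu                        # X^T diag(mu)
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# toy check against a known coefficient vector
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = rng.poisson(np.exp(X @ np.array([0.5, 0.8])))
print(poisson_irls(X, y))                     # near [0.5, 0.8]
```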
Probabilistic structural analysis methods for improving Space Shuttle engine reliability
NASA Technical Reports Server (NTRS)
Boyce, L.
1989-01-01
Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.
Accurate solution of the Poisson equation with discontinuities
NASA Astrophysics Data System (ADS)
Nave, Jean-Christophe; Marques, Alexandre; Rosales, Rodolfo
2017-11-01
Solving the Poisson equation in the presence of discontinuities is of great importance in many applications of science and engineering. In many cases, the discontinuities are caused by interfaces between different media, such as in multiphase flows. These interfaces are themselves solutions to differential equations, and can assume complex configurations. For this reason, it is convenient to embed the interface into a regular triangulation or Cartesian grid and solve the Poisson equation in this regular domain. We present an extension of the Correction Function Method (CFM), which was developed to solve the Poisson equation in the context of embedded interfaces. The distinctive feature of the CFM is that it uses partial differential equations to construct smooth extensions of the solution in the vicinity of interfaces. A consequence of this approach is that it can achieve high order of accuracy while maintaining compact discretizations. The extension we present removes the restrictions of the original CFM, and yields a method that can solve the Poisson equation when discontinuities are present in the solution, the coefficients of the equation (material properties), and the source term. We show results computed to fourth order of accuracy in two and three dimensions. This work was partially funded by DARPA, NSF, and NSERC.
Rapid computation of directional wellbore drawdown in a confined aquifer via Poisson resummation
NASA Astrophysics Data System (ADS)
Blumenthal, Benjamin J.; Zhan, Hongbin
2016-08-01
We have derived a rapidly computed analytical solution for drawdown caused by a partially or fully penetrating directional wellbore (vertical, horizontal, or slant) via the Green's function method. The mathematical model assumes an anisotropic, homogeneous, confined, box-shaped aquifer. Any dimension of the box can have one of six possible boundary conditions: (1) both sides no-flux; (2) one side no-flux, one side constant-head; (3) both sides constant-head; (4) one side no-flux; (5) one side constant-head; (6) free boundary conditions. The solution has been optimized for rapid computation via Poisson resummation, derivation of convergence rates, and numerical optimization of integration techniques. Upon application of the Poisson resummation method, we derived two sets of solutions with inverse convergence rates, namely an early-time rapidly convergent series (solution-A) and a late-time rapidly convergent series (solution-B). From this work we were able to link the Green's function method (solution-B) back to image well theory (solution-A). We then derived an equation defining when the convergence rates of solution-A and solution-B coincide, which we termed the switch time. Utilizing the more rapidly convergent solution at the appropriate time, we obtained rapid convergence at all times. We have also shown that each of the three infinite series in the three-dimensional solution may be truncated to 11 terms while still maintaining a maximum relative error of less than 10^-14.
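The mechanism behind the switch time, two series related by Poisson summation whose convergence rates are reciprocal, can be illustrated with the Jacobi theta function; this toy example is ours and is not the authors' aquifer solution.

```python
import numpy as np

def theta_direct(t, n_terms):
    """Late-time form: sum_n exp(-pi n^2 t); converges fast for large t."""
    n = np.arange(-n_terms, n_terms + 1)
    return np.sum(np.exp(-np.pi * n**2 * t))

def theta_resummed(t, n_terms):
    """Early-time form via Poisson summation:
    t^{-1/2} sum_n exp(-pi n^2 / t); converges fast for small t."""
    n = np.arange(-n_terms, n_terms + 1)
    return np.sum(np.exp(-np.pi * n**2 / t)) / np.sqrt(t)

def theta(t, n_terms=5):
    # The two convergence rates coincide at t = 1: the "switch time".
    return theta_direct(t, n_terms) if t >= 1.0 else theta_resummed(t, n_terms)

for t in (0.01, 0.5, 1.0, 2.0):
    print(t, theta(t), theta_direct(t, 2000))   # few terms vs brute force
```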
Evaluating the double Poisson generalized linear model.
Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique
2013-10-01
The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is its normalizing constant (or multiplicative constant), which is not available in closed form. This study proposes a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although the COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similarly to the negative binomial (NB) GLM. Considering that the DP GLM can be estimated with inexpensive computation and that its coefficients are simpler to interpret, it offers a flexible and efficient alternative for researchers modeling count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
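For orientation, the normalizing constant in question can be pinned down by brute-force summation of the unnormalized double Poisson mass function in log space and compared against Efron's closed-form approximation; the sketch below (with hypothetical parameters) is not the paper's refined approximation method.

```python
import numpy as np
from scipy.special import gammaln

def dp_norm_constant(mu, theta, y_max=2000):
    """Normalizing constant c(mu, theta) of Efron's double Poisson:
    f(y) ∝ theta^{1/2} e^{-theta*mu} (e^{-y} y^y / y!) (e*mu/y)^{theta*y}.
    Computed by truncated summation of log-terms for numerical stability."""
    y = np.arange(1, y_max + 1)
    log_terms = (0.5 * np.log(theta) - theta * mu
                 - y + y * np.log(y) - gammaln(y + 1)
                 + theta * y * (1.0 + np.log(mu) - np.log(y)))
    log_y0 = 0.5 * np.log(theta) - theta * mu        # the y = 0 term
    total = np.exp(log_y0) + np.exp(log_terms).sum()
    return 1.0 / total

def dp_norm_approx(mu, theta):
    """Efron's closed-form approximation, for comparison."""
    return 1.0 / (1.0 + (1.0 - theta) / (12.0 * mu * theta)
                  * (1.0 + 1.0 / (mu * theta)))

print(dp_norm_constant(5.0, 0.5), dp_norm_approx(5.0, 0.5))
```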
Statistical error in simulations of Poisson processes: Example of diffusion in solids
NASA Astrophysics Data System (ADS)
Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.
2016-08-01
Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method, but is valid in the context of simulation of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
Evolutionary inference via the Poisson Indel Process.
Bouchard-Côté, Alexandre; Jordan, Michael I
2013-01-22
We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.
A poisson process model for hip fracture risk.
Schechner, Zvi; Luo, Gangming; Kaufman, Jonathan J; Siffert, Robert S
2010-08-01
The assessment of fracture risk in osteoporosis relies primarily on measurement of bone mass. Estimation of fracture risk is most often evaluated using logistic or proportional hazards models. Notwithstanding the success of these models, there is still much uncertainty as to who will or will not suffer a fracture. This has led to a search for other components besides mass that affect bone strength. The purpose of this paper is to introduce a new mechanistic stochastic model that characterizes the risk of hip fracture in an individual. A Poisson process is used to model the occurrence of falls, which are assumed to occur at a rate lambda. The load induced by a fall is assumed to be a random variable that has a Weibull probability distribution. The combination of falls together with loads leads to a compound Poisson process. By retaining only those occurrences of the compound Poisson process that result in a hip fracture, a thinned Poisson process is defined, which is itself a Poisson process. The fall rate is modeled as an affine function of age, and hip strength is modeled as a power law function of bone mineral density (BMD). The risk of hip fracture can then be computed as a function of age and BMD. By extending the analysis to a Bayesian framework, the conditional densities of BMD given a prior fracture and no prior fracture can be computed and shown to be consistent with clinical observations. In addition, the conditional probabilities of fracture given a prior fracture and no prior fracture can be computed, and these also agree with clinical data. The model elucidates the fact that the hip fracture process is inherently random; improvements in hip strength estimation over and above that provided by BMD operate in a highly "noisy" environment and may therefore have little ability to impact clinical practice.
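A minimal Monte Carlo rendering of the model's structure, Poisson falls thinned by a strength threshold with Weibull loads, is sketched below; all parameter values are hypothetical placeholders, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

def fracture_prob(age, bmd, years=1.0, n_sim=50_000):
    """P(at least one hip fracture in `years`) under the structure above:
    falls ~ Poisson(lam*years), fall loads ~ Weibull, fracture when
    load > strength.  lam is affine in age; strength is a power law in BMD."""
    lam = 0.5 + 0.02 * (age - 65)          # falls/year (hypothetical)
    strength = 7000.0 * bmd**1.7           # N, power law in BMD (hypothetical)
    n_falls = rng.poisson(lam * years, size=n_sim)
    fractured = np.zeros(n_sim, dtype=bool)
    for i, k in enumerate(n_falls):
        if k:
            loads = 3000.0 * rng.weibull(2.0, size=k)   # fall loads in N
            fractured[i] = loads.max() > strength       # thinning step
    return fractured.mean()

print(fracture_prob(age=80, bmd=0.7))
```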
A stochastic-dynamic model for global atmospheric mass field statistics
NASA Technical Reports Server (NTRS)
Ghil, M.; Balgovind, R.; Kalnay-Rivas, E.
1981-01-01
A model that yields the spatial correlation structure of atmospheric mass field forecast errors was developed. The model is governed by the potential vorticity equation forced by random noise. Three solution methods were considered. In the first, the equation was expanded in spherical harmonics and the correlation function was computed analytically from the expansion coefficients. In the second, the finite-difference equivalent was solved using a fast Poisson solver and the correlation function was computed using stratified sampling of the individual realizations of F(omega) and hence of phi(omega). In the third, a higher-order equation for gamma was derived and solved directly in finite differences by two successive applications of the fast Poisson solver. The methods were compared for accuracy and efficiency, and the third method was chosen as clearly superior. The results agree well with the latitude dependence of observed atmospheric correlation data. The value of the parameter c_0 which gives the best fit to the data is close to the value expected from dynamical considerations.
Kadam, Shantanu; Vanka, Kumar
2013-02-15
Methods based on the stochastic formulation of chemical kinetics have the potential to accurately reproduce the dynamical behavior of various biochemical systems of interest. However, their computational expense makes them impractical for the study of real systems. Attempts to render these methods practical have led to the development of accelerated methods, where the reaction numbers are modeled by Poisson random numbers. However, for certain systems, such methods give rise to physically unrealistic negative numbers for species populations. Methods that use binomial random variables in place of Poisson random numbers have since become popular, and have been partially successful in addressing this problem. This manuscript discusses the development of two new computational methods based on the representative reaction approach (RRA). The new methods endeavor to solve the problem of negative numbers by making use of tools like the stochastic simulation algorithm and the binomial method, in conjunction with the RRA. These newly developed methods are found to perform better than other binomial methods used for stochastic simulations in resolving the problem of negative populations. Copyright © 2012 Wiley Periodicals, Inc.
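The negative-population pathology that motivates the binomial methods is easy to reproduce: a Poisson leap can draw more reaction firings than there are molecules, while a binomial draw is bounded by the population. A toy sketch of one leap step for the decay reaction S -> 0 follows (it illustrates the underlying problem, not the RRA itself).

```python
import numpy as np

rng = np.random.default_rng(2)

def poisson_leap(n_s, rate, tau):
    """Tau-leap with Poisson firings: unbounded, can drive n_s negative."""
    return n_s - rng.poisson(rate * n_s * tau)

def binomial_leap(n_s, rate, tau):
    """Tau-leap with binomial firings: at most n_s molecules can react."""
    p = 1.0 - np.exp(-rate * tau)      # per-molecule reaction probability
    return n_s - rng.binomial(n_s, p)

# Small population + aggressive step: Poisson leaps occasionally go negative.
results_p = [poisson_leap(5, rate=1.0, tau=0.8) for _ in range(10_000)]
results_b = [binomial_leap(5, rate=1.0, tau=0.8) for _ in range(10_000)]
print(min(results_p), min(results_b))   # e.g. -3 vs 0
```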
Fast, adaptive summation of point forces in the two-dimensional Poisson equation
NASA Technical Reports Server (NTRS)
Van Dommelen, Leon; Rundensteiner, Elke A.
1989-01-01
A comparatively simple procedure is presented for the direct summation of the velocity field induced by point vortices which significantly reduces the required number of operations by replacing selected partial sums by asymptotic series. Tables are presented which demonstrate the speed of this algorithm: the computational time merely doubles when the number of vortices doubles, whereas for current methods it grows by a factor of 4. This procedure need not be restricted to the solution of the Poisson equation, and may be applied to other problems involving groups of points in which the interaction between elements of different groups can be simplified when the distance between groups is sufficiently great.
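The idea of replacing distant partial sums by asymptotic series can be illustrated by its leading (monopole) term, in which a well-separated group of vortices acts as a single vortex of total circulation at its centroid; the paper's algorithm retains higher-order terms and adaptive grouping, which this sketch omits.

```python
import numpy as np

def velocity_direct(z_targets, z_vort, gamma):
    """Conjugate velocity u - i*v at targets from point vortices: O(N^2)."""
    dz = z_targets[:, None] - z_vort[None, :]
    return (gamma[None, :] / dz).sum(axis=1) / (2j * np.pi)

def velocity_far_field(z_targets, z_vort, gamma):
    """Leading asymptotic term: the whole distant group acts as one
    vortex of total circulation located at its centroid."""
    z_c = np.average(z_vort, weights=np.abs(gamma))
    return gamma.sum() / (2j * np.pi * (z_targets - z_c))

rng = np.random.default_rng(3)
z_vort = rng.normal(size=200) * 0.1 + rng.normal(size=200) * 0.1j  # tight group
gamma = rng.uniform(0.5, 1.5, size=200)
z_far = np.array([5.0 + 0.0j, 0.0 + 7.0j])   # targets far from the group
print(velocity_direct(z_far, z_vort, gamma))
print(velocity_far_field(z_far, z_vort, gamma))  # agrees to O(group size / distance)
```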
Li, Xian-Ying; Hu, Shi-Min
2013-02-01
Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.
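On the unit disk, the continuous form of such an interpolant reduces to the classical Poisson integral formula, which is easy to verify numerically; the sketch below is the textbook kernel, not the authors' transfinite construction.

```python
import numpy as np

def poisson_disk(r, theta, boundary_f, n_quad=512):
    """Harmonic interpolation on the unit disk via the Poisson integral:
    u(r, theta) = (1/2pi) * integral of P_r(theta - phi) f(phi) dphi,
    with kernel P_r(psi) = (1 - r^2) / (1 - 2 r cos(psi) + r^2)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_quad, endpoint=False)
    kernel = (1.0 - r**2) / (1.0 - 2.0 * r * np.cos(theta - phi) + r**2)
    return np.mean(kernel * boundary_f(phi))

# Check against a known harmonic function: Re(z^2) = r^2 cos(2 theta).
f = lambda phi: np.cos(2.0 * phi)        # boundary values at r = 1
print(poisson_disk(0.5, 0.3, f))         # quadrature estimate
print(0.25 * np.cos(0.6))                # exact value r^2 cos(2 theta)
```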
Differential expression analysis for RNAseq using Poisson mixed models.
Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang
2017-06-20
Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n <15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu
2016-12-01
We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype sufficiently followed a Poisson distribution. Furthermore, to examine the validity of the Poisson distribution parameters obtained from the experimental bacterial cell counts, we compared them with the parameters of a Poisson distribution estimated using random number generation via computer simulation. The Poisson distribution parameters experimentally obtained from bacterial cell counts were within the range of the parameters estimated using a computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers can be applied to some single-cell studies with a few bacterial cells. In particular, the procedure presented in this study enables us to develop an inactivation model at the single-cell level that can estimate the variability of surviving bacterial numbers during the bacterial death process. Copyright © 2016 Elsevier Ltd. All rights reserved.
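A likelihood-ratio goodness-of-fit check of this kind can be sketched in a few lines; the counts below are synthetic and the construction is a generic G-test against a Poisson MLE fit, not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def poisson_gof_lrt(counts):
    """Likelihood-ratio (G) test that integer counts follow a Poisson
    distribution with MLE parameter lam = mean(counts).  The chi-square
    reference is approximate, especially for sparse categories."""
    counts = np.asarray(counts)
    lam = counts.mean()                       # Poisson MLE
    k = np.arange(0, counts.max() + 1)
    observed = np.bincount(counts, minlength=k.size)
    expected = counts.size * stats.poisson.pmf(k, lam)
    mask = observed > 0
    g = 2.0 * np.sum(observed[mask] * np.log(observed[mask] / expected[mask]))
    dof = mask.sum() - 1 - 1                  # categories - 1 - fitted params
    return g, stats.chi2.sf(g, dof)

# Hypothetical colony counts from a 10-fold dilution targeting ~1 cell/aliquot.
rng = np.random.default_rng(4)
counts = rng.poisson(1.0, size=50)
print(poisson_gof_lrt(counts))
```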
Filipponi, A; Di Cicco, A; Principi, E
2012-12-01
A Bayesian data-analysis approach to data sets of maximum undercooling temperatures recorded in repeated melting-cooling cycles of high-purity samples is proposed. The crystallization phenomenon is described in terms of a nonhomogeneous Poisson process driven by a temperature-dependent sample nucleation rate J(T). The method was extensively tested by computer simulations and applied to real data for undercooled liquid Ge. It proved to be particularly useful in the case of scarce data sets where the usage of binned data would degrade the available experimental information.
AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation
Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; ...
2016-04-19
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
An efficient three-dimensional Poisson solver for SIMD high-performance-computing architectures
NASA Technical Reports Server (NTRS)
Cohl, H.
1994-01-01
We present an algorithm that solves the three-dimensional Poisson equation on a cylindrical grid. The technique uses a finite-difference scheme with operator splitting. This splitting maps the banded structure of the operator matrix into a two-dimensional set of tridiagonal matrices, which are then solved in parallel. Our algorithm couples FFT techniques with the well-known ADI (Alternating Direction Implicit) method for solving elliptic PDEs, and the implementation is extremely well suited for a massively parallel environment like the SIMD architecture of the MasPar MP-1. Due to the highly recursive nature of our problem, we believe that our method is highly efficient, as it avoids excessive interprocessor communication.
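The splitting can be imitated in two dimensions: an FFT along the periodic direction reduces the Poisson problem to independent tridiagonal systems, one per Fourier mode. A minimal serial sketch follows (the paper's algorithm is three-dimensional, cylindrical, and parallel).

```python
import numpy as np
from scipy.linalg import solve_banded

def poisson_fft_tridiag(f, lx, ly):
    """Solve u_xx + u_yy = f, periodic in x, u = 0 on the y-boundaries.
    The FFT in x decouples the problem into one tridiagonal solve per mode:
    (D2_y - k^2 I) u_hat_k = f_hat_k."""
    ny, nx = f.shape
    dy = ly / (ny + 1)
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=lx / nx)
    f_hat = np.fft.fft(f, axis=1)
    u_hat = np.empty_like(f_hat)
    for j, k in enumerate(kx):
        ab = np.zeros((3, ny), dtype=complex)   # banded storage for solve_banded
        ab[0, 1:] = 1.0 / dy**2                 # superdiagonal
        ab[1, :] = -2.0 / dy**2 - k**2          # diagonal
        ab[2, :-1] = 1.0 / dy**2                # subdiagonal
        u_hat[:, j] = solve_banded((1, 1), ab, f_hat[:, j])
    return np.fft.ifft(u_hat, axis=1).real

# Manufactured-solution check: u = sin(2 pi x) sin(pi y) on the unit square.
nx, ny, lx, ly = 64, 63, 1.0, 1.0
x = np.arange(nx) * lx / nx
y = (np.arange(ny) + 1) * ly / (ny + 1)
X, Y = np.meshgrid(x, y)
u_exact = np.sin(2 * np.pi * X) * np.sin(np.pi * Y)
f = -((2 * np.pi) ** 2 + np.pi ** 2) * u_exact
print(np.abs(poisson_fft_tridiag(f, lx, ly) - u_exact).max())  # small FD error
```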
NASA Astrophysics Data System (ADS)
Camporeale, E.; Delzanno, G. L.; Bergen, B. K.; Moulton, J. D.
2016-01-01
We describe a spectral method for the numerical solution of the Vlasov-Poisson system where the velocity space is decomposed by means of an Hermite basis, and the configuration space is discretized via a Fourier decomposition. The novelty of our approach is an implicit time discretization that allows exact conservation of charge, momentum and energy. The computational efficiency and the cost-effectiveness of this method are compared to the fully-implicit PIC method recently introduced by Markidis and Lapenta (2011) and Chen et al. (2011). The following examples are discussed: Langmuir wave, Landau damping, ion-acoustic wave, two-stream instability. The Fourier-Hermite spectral method can achieve solutions that are several orders of magnitude more accurate at a fraction of the cost with respect to PIC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biswas, Debabrata; Singh, Gaurav; Kumar, Raghwendra
2015-09-15
Numerical solution of the Poisson equation in metallic enclosures, open at one or more ends, is important in many practical situations, such as high power microwave or photo-cathode devices. It requires imposition of a suitable boundary condition at the open end. In this paper, methods for solving the Poisson equation are investigated for various charge densities and aspect ratios of the open ends. It is found that a mixture of second order and third order local asymptotic boundary conditions is best suited for large aspect ratios, while a proposed non-local matching method, based on the solution of the Laplace equation, scores well when the aspect ratio is near unity for all charge density variations, including ones where the centre of charge is close to an open end or the charge density is non-localized. The two methods complement each other and can be used in electrostatic calculations where the computational domain needs to be terminated at the open boundaries of the metallic enclosure.
Clinical characterization of 2D pressure field in human left ventricles
NASA Astrophysics Data System (ADS)
Borja, Maria; Rossini, Lorenzo; Martinez-Legazpi, Pablo; Benito, Yolanda; Alhama, Marta; Yotti, Raquel; Perez Del Villar, Candelas; Gonzalez-Mansilla, Ana; Barrio, Alicia; Fernandez-Aviles, Francisco; Bermejo, Javier; Khan, Andrew; Del Alamo, Juan Carlos
2014-11-01
The evaluation of left ventricle (LV) function in the clinical setting remains a challenge. The pressure gradient is a reliable and reproducible indicator of LV function. We obtain the 2D relative pressure field in the LV using in-vivo measurements obtained by processing Doppler-echocardiography images of healthy and dilated hearts. Exploiting mass conservation, we solve the Poisson pressure equation (PPE), dropping the time derivatives and viscous terms. The flow acceleration appears only in the boundary conditions, making our method weakly sensitive to the time resolution of in-vivo acquisitions. To ensure continuity with respect to the discrete operator and grid used, a potential flow correction is applied beforehand, which gives another Poisson equation. The new incompressible velocity field ensures that the compatibility equation for the PPE is satisfied. Both Poisson equations are efficiently solved on a Cartesian grid using a multi-grid method and immersed boundary for the LV wall. The whole process is computationally inexpensive and could play a diagnostic role in the clinical assessment of LV function.
Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai
2016-06-10
Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample's high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use.
An efficient numerical technique for calculating thermal spreading resistance
NASA Technical Reports Server (NTRS)
Gale, E. H., Jr.
1977-01-01
An efficient numerical technique for solving the equations resulting from finite difference analyses of fields governed by Poisson's equation is presented. The method is direct (noniterative) and the computer work required varies with the square of the order of the coefficient matrix. The computational work required varies with the cube of this order for standard inversion techniques, e.g., Gaussian elimination, Jordan, Doolittle, etc.
Recent Developments in Computational Techniques for Applied Hydrodynamics.
1979-12-07
Keywords: numerical methods; fluids; incompressible flow; finite-difference methods; Poisson equation; convective equations. Abstract (fragment): ... weaknesses of the different approaches are analyzed. Finite-difference techniques have particularly attractive properties in this framework. Hence it will be worthwhile to correct, at least partially, the difficulties from which Eulerian and Lagrangian finite-difference techniques suffer, discussed in ...
GRAPE- TWO-DIMENSIONAL GRIDS ABOUT AIRFOILS AND OTHER SHAPES BY THE USE OF POISSON'S EQUATION
NASA Technical Reports Server (NTRS)
Sorenson, R. L.
1994-01-01
The ability to treat arbitrary boundary shapes is one of the most desirable characteristics of a method for generating grids, including those about airfoils. In a grid used for computing aerodynamic flow over an airfoil, or any other body shape, the surface of the body is usually treated as an inner boundary and often cannot be easily represented as an analytic function. The GRAPE computer program was developed to incorporate a method for generating two-dimensional finite-difference grids about airfoils and other shapes by the use of the Poisson differential equation. GRAPE can be used with any boundary shape, even one specified by tabulated points and including a limited number of sharp corners. The GRAPE program has been developed to be numerically stable and computationally fast. GRAPE can provide the aerodynamic analyst with an efficient and consistent means of grid generation. The GRAPE procedure generates a grid between an inner and an outer boundary by utilizing an iterative procedure to solve the Poisson differential equation subject to geometrical restraints. In this method, the inhomogeneous terms of the equation are automatically chosen such that two important effects are imposed on the grid. The first effect is control of the spacing between mesh points along mesh lines intersecting the boundaries. The second effect is control of the angles with which mesh lines intersect the boundaries. Along with the iterative solution to Poisson's equation, a technique of coarse-fine sequencing is employed to accelerate numerical convergence. GRAPE program control cards and input data are entered via the NAMELIST feature. Each variable has a default value such that user supplied data is kept to a minimum. Basic input data consists of the boundary specification, mesh point spacings on the boundaries, and mesh line angles at the boundaries. Output consists of a dataset containing the grid data and, if requested, a plot of the generated mesh. The GRAPE program is written in FORTRAN IV for batch execution and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 135K (octal) of 60 bit words. For plotted output the commercially available DISSPLA graphics software package is required. The GRAPE program was developed in 1980.
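The elliptic grid-generation idea can be sketched in its homogeneous (Winslow) limit, i.e., with the inhomogeneous spacing- and angle-control terms that GRAPE chooses automatically set to zero; everything below is a simplified illustration, not the GRAPE algorithm.

```python
import numpy as np

def winslow_grid(x, y, n_iter=2000):
    """Smooth the interior nodes of an initial grid by point-iterating the
    homogeneous Winslow equations a*u_xixi - 2b*u_xieta + g*u_etaeta = 0
    for u in (x, y); boundary nodes stay fixed."""
    for _ in range(n_iter):
        x_xi  = 0.5 * (x[1:-1, 2:] - x[1:-1, :-2])
        y_xi  = 0.5 * (y[1:-1, 2:] - y[1:-1, :-2])
        x_eta = 0.5 * (x[2:, 1:-1] - x[:-2, 1:-1])
        y_eta = 0.5 * (y[2:, 1:-1] - y[:-2, 1:-1])
        a = x_eta**2 + y_eta**2
        b = x_xi * x_eta + y_xi * y_eta
        g = x_xi**2 + y_xi**2
        for u in (x, y):
            u_cross = 0.25 * (u[2:, 2:] - u[2:, :-2] - u[:-2, 2:] + u[:-2, :-2])
            u[1:-1, 1:-1] = (a * (u[1:-1, 2:] + u[1:-1, :-2])
                             + g * (u[2:, 1:-1] + u[:-2, 1:-1])
                             - 2.0 * b * u_cross) / (2.0 * (a + g))
    return x, y

# Hypothetical use: start from an algebraic grid under a wavy upper wall.
xi, eta = np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 11))
x0 = xi.copy()
y0 = eta * (1.0 + 0.3 * np.sin(np.pi * xi))
x, y = winslow_grid(x0, y0)
```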
NASA Astrophysics Data System (ADS)
Sun, Deyu; Rettmann, Maryam E.; Holmes, David R.; Linte, Cristian A.; Packer, Douglas; Robb, Richard A.
2014-03-01
In this work, we propose a method for intraoperative reconstruction of a left atrial surface model for the application of cardiac ablation therapy. In this approach, the intraoperative point cloud is acquired by a tracked, 2D freehand intra-cardiac echocardiography device, which is registered and merged with a preoperative, high resolution left atrial surface model built from computed tomography data. For the surface reconstruction, we introduce a novel method to estimate the normal vector of the point cloud from the preoperative left atrial model, which is required for the Poisson Equation Reconstruction algorithm. In the current work, the algorithm is evaluated using a preoperative surface model from patient computed tomography data and simulated intraoperative ultrasound data. Factors such as intraoperative deformation of the left atrium, proportion of the left atrial surface sampled by the ultrasound, sampling resolution, sampling noise, and registration error were considered through a series of simulation experiments.
Numerical solution of the Hele-Shaw equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitaker, N.
1987-04-01
An algorithm is presented for approximating the motion of the interface between two immiscible fluids in a Hele-Shaw cell. The interface is represented by a set of volume fractions. We use the Simple Line Interface Calculation method along with the method of fractional steps to transport the interface. The equation of continuity leads to a Poisson equation for the pressure. The Poisson equation is discretized. Near the interface where the velocity field is discontinuous, the discretization is based on a weak formulation of the continuity equation. Interpolation is used on each side of the interface to increase the accuracy of the algorithm. The weak formulation as well as the interpolation are based on the computed volume fractions. This treatment of the interface is new. The discretized equations are solved by a modified conjugate gradient method. Surface tension is included and the curvature is computed through the use of osculating circles. For perturbations of small amplitude, a surprisingly good agreement is found between the numerical results and linearized perturbation theory. Numerical results are presented for the finite amplitude growth of unstable fingers. 62 refs., 13 figs.
Poisson Regression Analysis of Illness and Injury Surveillance Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frome E.L., Watkins J.P., Ellis E.D.
2012-12-12
The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational data base, and are used to obtain stratified tables of health event counts and person time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in a tabular and graphical form and interpretation of model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. In the second example the score test indicates considerable over-dispersion and a more detailed analysis attributes the over-dispersion to extra-Poisson variation. The R open source software environment for statistical computing and graphics is used for analysis. Additional details about R and the data that were used in this report are provided in an Appendix. Information on how to obtain R and utility functions that can be used to duplicate results in this report are provided.
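The model structure described, a log-linear Poisson fit of event counts with person-time at risk as an offset followed by a rough over-dispersion check, translates directly to any GLM library. A hypothetical sketch in Python follows (the report itself uses R, and the data below are invented).

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical stratified table: absence counts and person-years at risk,
# cross-classified by gender, occupational group, and time period.
events = np.array([12, 30, 8, 22, 15, 41, 11, 28])
person_years = np.array([950., 2100., 400., 1300., 870., 2500., 520., 1450.])
gender = np.array([0, 0, 0, 0, 1, 1, 1, 1])
group  = np.array([0, 0, 1, 1, 0, 0, 1, 1])
period = np.array([0, 1, 0, 1, 0, 1, 0, 1])
X = sm.add_constant(np.column_stack([gender, group, period]))

# Log-linear main-effects Poisson model; person-time enters as an offset.
fit = sm.GLM(events, X, family=sm.families.Poisson(),
             offset=np.log(person_years)).fit()
print(fit.summary())
print("dispersion ~", fit.pearson_chi2 / fit.df_resid)  # >> 1 suggests extra-Poisson variation
```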
Scaling the Poisson Distribution
ERIC Educational Resources Information Center
Farnsworth, David L.
2014-01-01
We derive the additive property of Poisson random variables directly from the probability mass function. An important application of the additive property to quality testing of computer chips is presented.
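For reference, the additive property follows from a one-line convolution and the binomial theorem; a standard derivation (not quoted from the article) is:

```latex
P(X+Y=n)
  = \sum_{k=0}^{n} \frac{e^{-\lambda}\lambda^{k}}{k!}\cdot
    \frac{e^{-\mu}\mu^{n-k}}{(n-k)!}
  = \frac{e^{-(\lambda+\mu)}}{n!}\sum_{k=0}^{n}\binom{n}{k}\lambda^{k}\mu^{n-k}
  = \frac{e^{-(\lambda+\mu)}(\lambda+\mu)^{n}}{n!},
```

so the sum of independent Poisson(lambda) and Poisson(mu) variables is Poisson(lambda + mu).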
Green's function enriched Poisson solver for electrostatics in many-particle systems
NASA Astrophysics Data System (ADS)
Sutmann, Godehard
2016-06-01
A highly accurate method is presented for the construction of the charge density for the solution of the Poisson equation in particle simulations. The method is based on an operator adjusted source term which can be shown to produce exact results up to numerical precision in the case of a large support of the charge distribution, therefore compensating the discretization error of finite difference schemes. This is achieved by balancing an exact representation of the known Green's function of the regularized electrostatic problem with a discretized representation of the Laplace operator. It is shown that the exact calculation of the potential is possible independent of the order of the finite difference scheme, but the computational efficiency of higher order methods is found to be superior due to a faster convergence to the exact result as a function of the charge support.
Supercomputer optimizations for stochastic optimal control applications
NASA Technical Reports Server (NTRS)
Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang
1991-01-01
Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.
de Nijs, Robin
2015-07-21
In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and confirmed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
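The distinction the comment verifies is easy to reproduce: binomial thinning of Poisson counts is again exactly Poisson, while redrawing from a distribution with the halved mean perturbs the higher moments. A sketch with hypothetical flood-source counts (not the Co-57 data) follows.

```python
import numpy as np

rng = np.random.default_rng(5)

def half_count(image, method="resample"):
    """Simulate a half-count image from a full-count image.
    'resample': binomial thinning of the recorded counts (Poisson
    resampling); 'poisson'/'gauss': redraw from a distribution with
    half the observed mean."""
    if method == "resample":
        return rng.binomial(image, 0.5)
    if method == "poisson":
        return rng.poisson(0.5 * image)
    if method == "gauss":                 # rounded off to integer counts
        return np.rint(rng.normal(0.5 * image,
                                  np.sqrt(0.5 * image))).astype(int)
    raise ValueError(method)

full = rng.poisson(20.0, size=(256, 256))   # hypothetical flood-source image
for m in ("resample", "poisson", "gauss"):
    half = half_count(full, m)
    print(m, half.mean(), half.var())       # Poisson target: var ~ mean ~ 10
```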
Numerical solutions for patterns statistics on Markov chains.
Nuel, Gregory
2006-01-01
We propose here a review of the methods available to compute pattern statistics on text generated by a Markov source. Theoretical, but also numerical aspects are detailed for a wide range of techniques (exact, Gaussian, large deviations, binomial and compound Poisson). The SPatt package (Statistics for Pattern, free software available at http://stat.genopole.cnrs.fr/spatt) implementing all these methods is then used to compare all these approaches in terms of computational time and reliability in the most complete pattern statistics benchmark available at the present time.
Chen, Junning; Suenaga, Hanako; Hogg, Michael; Li, Wei; Swain, Michael; Li, Qing
2016-01-01
Despite their considerable importance to biomechanics, there are no existing methods available to directly measure the apparent Poisson's ratio and friction coefficient of oral mucosa. This study aimed to develop an inverse procedure to determine these two biomechanical parameters by utilizing an in vivo experiment measuring the contact pressure between a partial denture and the mucosa beneath it, through nonlinear finite element (FE) analysis and a surrogate response surface (RS) modelling technique. First, the in vivo denture-mucosa contact pressure was measured by a tactile electronic sensing sheet. Second, a 3D FE model was constructed based on the patient CT images. Third, a range of apparent Poisson's ratios and coefficients of friction from the literature was considered as the design variables in a series of FE runs for constructing a RS surrogate model. Finally, the discrepancy between computed in silico and measured in vivo results was minimized to identify the best matching Poisson's ratio and coefficient of friction. The established non-invasive methodology was demonstrated to be effective in identifying such biomechanical parameters of oral mucosa and can potentially be used for determining the biomaterial properties of other soft biological tissues.
Multiscale modeling of a rectifying bipolar nanopore: Comparing Poisson-Nernst-Planck to Monte Carlo
NASA Astrophysics Data System (ADS)
Matejczyk, Bartłomiej; Valiskó, Mónika; Wolfram, Marie-Therese; Pietschmann, Jan-Frederik; Boda, Dezső
2017-03-01
In the framework of a multiscale modeling approach, we present a systematic study of a bipolar rectifying nanopore using a continuum and a particle simulation method. The common ground in the two methods is the application of the Nernst-Planck (NP) equation to compute ion transport in the framework of the implicit-water electrolyte model. The difference is that the Poisson-Boltzmann theory is used in the Poisson-Nernst-Planck (PNP) approach, while the Local Equilibrium Monte Carlo (LEMC) method is used in the particle simulation approach (NP+LEMC) to relate the concentration profile to the electrochemical potential profile. Since we consider a bipolar pore which is short and narrow, we perform simulations using two-dimensional PNP. In addition, results of a non-linear version of PNP that takes crowding of ions into account are shown. We observe that the mean field approximation applied in PNP is appropriate to reproduce the basic behavior of the bipolar nanopore (e.g., rectification) for varying parameters of the system (voltage, surface charge, electrolyte concentration, and pore radius). We present current data that characterize the nanopore's behavior as a device, as well as concentration, electrical potential, and electrochemical potential profiles.
FIND: difFerential chromatin INteractions Detection using a spatial Poisson process.
Djekidel, Mohamed Nadhir; Chen, Yang; Zhang, Michael Q
2018-02-12
Polymer-based simulations and experimental studies indicate the existence of a spatial dependency between the adjacent DNA fibers involved in the formation of chromatin loops. However, the existing strategies for detecting differential chromatin interactions assume that the interacting segments are spatially independent from the other segments nearby. To resolve this issue, we developed a new computational method, FIND, which considers the local spatial dependency between interacting loci. FIND uses a spatial Poisson process to detect differential chromatin interactions that show a significant difference in their interaction frequency and the interaction frequency of their neighbors. Simulation and biological data analysis show that FIND outperforms the widely used count-based methods and has a better signal-to-noise ratio. © 2018 Djekidel et al.; Published by Cold Spring Harbor Laboratory Press.
A 2D electrostatic PIC code for the Mark III Hypercube
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferraro, R.D.; Liewer, P.C.; Decyk, V.K.
We have implemented a 2D electrostatic plasma particle-in-cell (PIC) simulation code on the Caltech/JPL Mark IIIfp Hypercube. The code simulates plasma effects by evolving in time the trajectories of thousands to millions of charged particles subject to their self-consistent fields. Each particle's position and velocity is advanced in time using a leap frog method for integrating Newton's equations of motion in electric and magnetic fields. The electric field due to these moving charged particles is calculated on a spatial grid at each time by solving Poisson's equation in Fourier space. These two tasks represent the largest part of the computation. To obtain efficient operation on a distributed memory parallel computer, we are using the General Concurrent PIC (GCPIC) algorithm previously developed for a 1D parallel PIC code.
Cumulative Poisson Distribution Program
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert
1990-01-01
Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for chi-square distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
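The abstract does not spell out the technique; one standard way to avoid both overflow and underflow when the rate parameter is large is to accumulate the cdf in log space, as in this sketch (not necessarily CUMPOIS's exact algorithm).

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), accumulated in log space so that
    neither exp(-lam) nor lam**n over/underflows for large lam."""
    log_pmf = -lam                    # log P(X = 0)
    log_cdf = log_pmf
    for n in range(1, k + 1):
        log_pmf += math.log(lam) - math.log(n)   # p_n = p_{n-1} * lam / n
        # log-sum-exp update: log(cdf + pmf)
        hi, lo = max(log_cdf, log_pmf), min(log_cdf, log_pmf)
        log_cdf = hi + math.log1p(math.exp(lo - hi))
    return math.exp(log_cdf)

print(poisson_cdf(1000, 1000.0))      # ~0.508; exp(-1000) alone would underflow
```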
Computational prediction of new auxetic materials.
Dagdelen, John; Montoya, Joseph; de Jong, Maarten; Persson, Kristin
2017-08-22
Auxetics comprise a rare family of materials that manifest negative Poisson's ratio, which causes an expansion instead of contraction under tension. Most known homogeneously auxetic materials are porous foams or artificial macrostructures and there are few examples of inorganic materials that exhibit this behavior as polycrystalline solids. It is now possible to accelerate the discovery of materials with target properties, such as auxetics, using high-throughput computations, open databases, and efficient search algorithms. Candidates exhibiting features correlating with auxetic behavior were chosen from the set of more than 67 000 materials in the Materials Project database. Poisson's ratios were derived from the calculated elastic tensor of each material in this reduced set of compounds. We report that this strategy results in the prediction of three previously unidentified homogeneously auxetic materials as well as a number of compounds with a near-zero homogeneous Poisson's ratio, which are here denoted "anepirretic materials". There are very few inorganic materials with an auxetic homogeneous Poisson's ratio in polycrystalline form. Here the authors develop an approach to screening materials databases for target properties such as negative Poisson's ratio by using stability and structural motifs to predict new instances of homogeneous auxetic behavior, as well as a number of materials with near-zero Poisson's ratio.
NASA Astrophysics Data System (ADS)
Geng, Weihua; Zhao, Shan
2017-12-01
We present a new Matched Interface and Boundary (MIB) regularization method for treating charge singularity in solvated biomolecules whose electrostatics are described by the Poisson-Boltzmann (PB) equation. In a regularization method, by decomposing the potential function into two or three components, the singular component can be analytically represented by the Green's function, while other components possess a higher regularity. Our new regularization combines the efficiency of two-component schemes with the accuracy of the three-component schemes. Based on this regularization, a new MIB finite difference algorithm is developed for solving both linear and nonlinear PB equations, where the nonlinearity is handled by using the inexact-Newton's method. Compared with the existing MIB PB solver based on a three-component regularization, the present algorithm is simpler to implement by circumventing the work to solve a boundary value Poisson equation inside the molecular interface and to compute related interface jump conditions numerically. Moreover, the new MIB algorithm becomes computationally less expensive, while maintains the same second order accuracy. This is numerically verified by calculating the electrostatic potential and solvation energy on the Kirkwood sphere on which the analytical solutions are available and on a series of proteins with various sizes.
Method for resonant measurement
Rhodes, G.W.; Migliori, A.; Dixon, R.D.
1996-03-05
A method of measurement of objects to determine object flaws, Poisson's ratio (σ), and shear modulus (μ) is shown and described. First, the frequency for expected degenerate responses is determined for one or more input frequencies, and then splitting of degenerate resonant modes is observed to identify the presence of flaws in the object. Poisson's ratio and the shear modulus can be determined by identification of resonances dependent only on the shear modulus, and then using that shear modulus to find Poisson's ratio using other modes dependent on both the shear modulus and Poisson's ratio. 1 fig.
Algorithms for parallel and vector computations
NASA Technical Reports Server (NTRS)
Ortega, James M.
1995-01-01
This is a final report on work performed under NASA grant NAG-1-1112-FOP during the period March 1990 through February 1995. Four major topics are covered: (1) solution of nonlinear Poisson-type equations; (2) the parallel reduced system conjugate gradient method; (3) orderings for conjugate gradient preconditioners; and (4) SOR as a preconditioner.
Use of the negative binomial-truncated Poisson distribution in thunderstorm prediction
NASA Technical Reports Server (NTRS)
Cohen, A. C.
1971-01-01
A probability model is presented for the distribution of thunderstorms over a small area given that thunderstorm events (1 or more thunderstorms) are occurring over a larger area. The model incorporates the negative binomial and truncated Poisson distributions. Probability tables for Cape Kennedy for spring, summer, and fall months and seasons are presented. The computer program used to compute these probabilities is appended.
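The truncated Poisson component conditions on at least one thunderstorm occurring; the following sketch evaluates that piece alone, with assumed rates (the negative binomial layer and the Cape Kennedy parameters are not reproduced here).

```python
import numpy as np
from scipy import stats

def truncated_poisson_pmf(n, lam):
    """Zero-truncated Poisson: P(N = n | N >= 1) for n = 1, 2, ..."""
    return stats.poisson.pmf(n, lam) / (1.0 - np.exp(-lam))

# Hypothetical: given a thunderstorm event over the larger area, probability
# of n thunderstorms over the small area, for a few assumed local rates.
n = np.arange(1, 6)
for lam in (0.5, 1.0, 2.0):
    print(lam, np.round(truncated_poisson_pmf(n, lam), 4))
```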
Gridless particle technique for the Vlasov-Poisson system in problems with high degree of symmetry
NASA Astrophysics Data System (ADS)
Boella, E.; Coppa, G.; D'Angola, A.; Peiretti Paradisi, B.
2018-03-01
In this paper, gridless particle techniques are presented for solving problems involving electrostatic, collisionless plasmas. The method makes use of computational particles having the shape of spherical shells or of rings, and can be used to study cases in which the plasma has spherical or axial symmetry, respectively. As a computational grid is absent, the technique is particularly suitable when the plasma occupies a rapidly changing space region.
Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward
2013-09-01
Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose-response model parameters are estimated using limited epidemiological data is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decision makers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to the analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
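For context, the conventional approximation to the beta-Poisson dose-response model and its exact form in terms of the Kummer confluent hypergeometric function can be compared in a few lines. This is an illustrative sketch, not the authors' OpenBUGS procedure; the parameter names follow the usual alpha/beta convention.

    import numpy as np
    from scipy.special import hyp1f1

    def beta_poisson_exact(dose, alpha, beta):
        # Exact beta-Poisson: P(infection) = 1 - 1F1(alpha, alpha + beta, -dose)
        return 1.0 - hyp1f1(alpha, alpha + beta, -dose)

    def beta_poisson_approx(dose, alpha, beta):
        # Conventional approximation, usually considered valid only
        # when beta >> 1 and beta >> alpha
        return 1.0 - (1.0 + dose / beta) ** (-alpha)

The gap between the two functions is largest exactly where the approximation's validity criteria fail, which mirrors the poorly constrained low-dose region of the posterior discussed above.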
Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro
2015-04-05
The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Reese, O. W.
1972-01-01
The numerical calculation of the steady-state flow of electrons in an axisymmetric, spherical, electrostatic collector is described for a range of boundary conditions. The trajectory equations of motion are solved alternately with Poisson's equation for the potential field until convergence is achieved. A direct (noniterative) numerical technique is used to obtain the solution to Poisson's equation. Space charge effects are included for initial current densities as large as 100 A/sq cm. Ways of dealing successfully with the difficulties associated with these high densities are discussed. A description of the mathematical model, a discussion of numerical techniques, results from two typical runs, and the FORTRAN computer program are included.
Simulation Methods for Poisson Processes in Nonstationary Systems.
1978-08-01
for simulation of nonhomogeneous Poisson processes is stated with log-linear rate function. The method is based on an identity relating the...and relatively efficient new method for simulation of one-dimensional and two-dimensional nonhomogeneous Poisson processes is described. The method is
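The identity-based method itself is only fragmentarily described in the snippet above, so as a generic illustration under the same log-linear assumption, a nonhomogeneous Poisson process with rate lambda(t) = exp(a + b*t) can be simulated by thinning against a bounding homogeneous process. This is a standard alternative technique, not necessarily the method of the report.

    import numpy as np

    def sample_nhpp_loglinear(a, b, t_max, rng=None):
        """Event times of a nonhomogeneous Poisson process with rate
        lambda(t) = exp(a + b*t) on [0, t_max], simulated by thinning."""
        rng = np.random.default_rng() if rng is None else rng
        lam_max = np.exp(a + b * t_max) if b > 0 else np.exp(a)  # rate bound
        t, events = 0.0, []
        while True:
            t += rng.exponential(1.0 / lam_max)  # candidate from bounding HPP
            if t > t_max:
                return np.array(events)
            if rng.uniform() < np.exp(a + b * t) / lam_max:
                events.append(t)                 # accept w.p. lambda(t)/lam_max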
Blind beam-hardening correction from Poisson measurements
NASA Astrophysics Data System (ADS)
Gu, Renliang; Dogandžić, Aleksandar
2016-02-01
We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Da; Zheng, Bin; Lin, Guang
2014-08-29
We have developed efficient numerical algorithms for the solution of 3D steady-state Poisson-Nernst-Planck (PNP) equations with excess chemical potentials described by the classical density functional theory (cDFT). The coupled PNP equations are discretized by a finite difference scheme and solved iteratively by the Gummel method with relaxation. The Nernst-Planck equations are transformed into Laplace equations through the Slotboom transformation. An algebraic multigrid method is then applied to efficiently solve the Poisson equation and the transformed Nernst-Planck equations. A novel strategy for calculating excess chemical potentials through fast Fourier transforms is proposed, which reduces the computational complexity from O(N^2) to O(N log N), where N is the number of grid points. Integrals involving the Dirac delta function are evaluated directly by coordinate transformation, which yields more accurate results compared to applying numerical quadrature to an approximated delta function. Numerical results for ion and electron transport in solid electrolytes for Li-ion batteries are shown to be in good agreement with the experimental data and the results from previous studies.
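The Slotboom transformation mentioned above is what turns the Nernst-Planck equations into Laplace-type equations. In a commonly used nondimensional form (the exact scaling here is an assumption, not taken from the paper), substituting the Slotboom variable \bar{c} = c\,e^{z\psi} into the steady Nernst-Planck equation \nabla\cdot\left[D(\nabla c + z c \nabla\psi)\right] = 0 gives the self-adjoint form

    \nabla \cdot \left( D\, e^{-z\psi}\, \nabla \bar{c} \right) = 0,

where c is the ion concentration, z the valence, \psi the electrostatic potential in units of k_B T / e, and D the diffusivity. The symmetric, positive-definite structure of the transformed operator is what makes algebraic multigrid effective on these equations.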
AQUASOL: An efficient solver for the dipolar Poisson-Boltzmann-Langevin equation.
Koehl, Patrice; Delarue, Marc
2010-02-14
The Poisson-Boltzmann (PB) formalism is among the most popular approaches to modeling the solvation of molecules. It assumes a continuum model for water, leading to a dielectric permittivity that only depends on position in space. In contrast, the dipolar Poisson-Boltzmann-Langevin (DPBL) formalism represents the solvent as a collection of orientable dipoles with nonuniform concentration; this leads to a nonlinear permittivity function that depends both on the position and on the local electric field at that position. The differences in the assumptions underlying these two models lead to significant differences in the equations they generate. The PB equation is a second order, elliptic, nonlinear partial differential equation (PDE). Its response coefficients correspond to the dielectric permittivity and are therefore constant within each subdomain of the system considered (i.e., inside and outside of the molecules considered). While the DPBL equation is also a second order, elliptic, nonlinear PDE, its response coefficients are nonlinear functions of the electrostatic potential. Many solvers have been developed for the PB equation; to our knowledge, none of these can be directly applied to the DPBL equation. The methods they use may adapt to the difference; their implementations, however, are PBE-specific. We adapted the PBE solver originally developed by Holst and Saied [J. Comput. Chem. 16, 337 (1995)] to the problem of solving the DPBL equation. This solver uses a truncated Newton method with a multigrid preconditioner. Numerical evidence suggests that it converges for the DPBL equation and that the convergence is superlinear. It is found, however, to be slow and memory-intensive for problems commonly encountered in computational biology and computational chemistry. To circumvent these problems, we propose two variants: a quasi-Newton solver based on a simplified, inexact Jacobian, and an iterative self-consistent solver that is based directly on the PBE solver. While neither method is guaranteed to converge, numerical evidence suggests that they do and that their convergence is also superlinear. Both variants are significantly faster than the solver based on the exact Jacobian, with a much smaller memory footprint. All three methods have been implemented in a new code named AQUASOL, which is freely available.
Xie, Yang; Ying, Jinyong; Xie, Dexuan
2017-03-30
SMPBS (Size Modified Poisson-Boltzmann Solvers) is a web server for computing biomolecular electrostatics using finite element solvers of the size modified Poisson-Boltzmann equation (SMPBE). SMPBE not only reflects ionic size effects but also includes the classic Poisson-Boltzmann equation (PBE) as a special case. Thus, its web server is expected to have a broader range of applications than a PBE web server. SMPBS is designed with a dynamic, mobile-friendly user interface, and features easily accessible help text, asynchronous data submission, and an interactive, hardware-accelerated molecular visualization viewer based on the 3Dmol.js library. In particular, the viewer allows computed electrostatics to be directly mapped onto an irregular triangular mesh of a molecular surface. Due to this functionality and the fast SMPBE finite element solvers, the web server is very efficient in the calculation and visualization of electrostatics. In addition, SMPBE is reconstructed using a new objective electrostatic free energy, clearly showing that the electrostatics and ionic concentrations predicted by SMPBE are optimal in the sense of minimizing the objective electrostatic free energy. SMPBS is available at the URL: smpbs.math.uwm.edu © 2017 Wiley Periodicals, Inc.
Finite element solution of torsion and other 2-D Poisson equations
NASA Technical Reports Server (NTRS)
Everstine, G. C.
1982-01-01
The NASTRAN structural analysis computer program may be used, without modification, to solve two-dimensional Poisson equations such as arise in the classical Saint-Venant torsion problem. The nonhomogeneous term (the right-hand side) in the Poisson equation can be handled conveniently by specifying a gravitational load in a "structural" analysis. The use of an analogy between the equations of elasticity and those of classical mathematical physics is described in detail.
Modeling laser velocimeter signals as triply stochastic Poisson processes
NASA Technical Reports Server (NTRS)
Mayo, W. T., Jr.
1976-01-01
Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
Exact Dynamics via Poisson Process: a unifying Monte Carlo paradigm
NASA Astrophysics Data System (ADS)
Gubernatis, James
2014-03-01
A common computational task is solving a set of ordinary differential equations (o.d.e.'s). A little-known theorem says that the solution of any set of o.d.e.'s is given exactly by the expectation value, over a set of arbitrary Poisson processes, of a particular function of the elements of the matrix that defines the o.d.e.'s. The theorem thus provides a new starting point for developing real and imaginary-time continuous-time solvers for quantum Monte Carlo algorithms, and several simple observations enable various quantum Monte Carlo techniques and variance reduction methods to transfer to a new context. I will state the theorem, note a transformation to a very simple computational scheme, and illustrate the use of some techniques from the directed-loop algorithm in the context of the wavefunction Monte Carlo method that is used to solve the Lindblad master equation for the dynamics of open quantum systems. I will end by noting that since the theorem does not depend on the o.d.e.'s coming from quantum mechanics, it also enables the transfer of continuous-time methods from quantum Monte Carlo to the simulation of various classical equations of motion heretofore only solved deterministically.
Poisson-Box Sampling algorithms for three-dimensional Markov binary mixtures
NASA Astrophysics Data System (ADS)
Larmier, Coline; Zoia, Andrea; Malvagi, Fausto; Dumonteil, Eric; Mazzolo, Alain
2018-02-01
Particle transport in Markov mixtures can be addressed by the so-called Chord Length Sampling (CLS) methods, a family of Monte Carlo algorithms taking into account the effects of stochastic media on particle propagation by generating on-the-fly the material interfaces crossed by the random walkers during their trajectories. Such methods enable a significant reduction of computational resources as opposed to reference solutions obtained by solving the Boltzmann equation for a large number of realizations of random media. CLS solutions, which neglect correlations induced by the spatial disorder, are faster albeit approximate, and might thus show discrepancies with respect to reference solutions. In this work we propose a new family of algorithms (called 'Poisson Box Sampling', PBS) aimed at improving the accuracy of the CLS approach for transport in d-dimensional binary Markov mixtures. In order to probe the features of PBS methods, we will focus on three-dimensional Markov media and revisit the benchmark problem originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]: for these configurations we will compare reference solutions, standard CLS solutions and the new PBS solutions for scalar particle flux, transmission and reflection coefficients. PBS will be shown to perform better than CLS at the expense of a reasonable increase in computational time.
Analytical solutions for avalanche-breakdown voltages of single-diffused Gaussian junctions
NASA Astrophysics Data System (ADS)
Shenai, K.; Lin, H. C.
1983-03-01
Closed-form solutions of the potential difference between the two edges of the depletion layer of a single diffused Gaussian p-n junction are obtained by integrating Poisson's equation and equating the magnitudes of the positive and negative charges in the depletion layer. By using the closed form solution of the static Poisson's equation and Fulop's average ionization coefficient, the ionization integral in the depletion layer is computed, which yields the correct values of avalanche breakdown voltage, depletion layer thickness at breakdown, and the peak electric field as a function of junction depth. Newton's method is used for rapid convergence. A flowchart to perform the calculations with a programmable hand-held calculator, such as the TI-59, is shown.
Fast Poisson noise removal by biorthogonal Haar domain hypothesis testing
NASA Astrophysics Data System (ADS)
Zhang, B.; Fadili, M. J.; Starck, J.-L.; Digel, S. W.
2008-07-01
Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. When facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong “staircase” artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar. The Bi-Haar filter bank is normalized such that the p-values of Bi-Haar coefficients (p_BH) provide a good approximation to those of Haar (p_H) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that p_BH are essentially upper-bounded by p_H. Thus, we may apply the Haar-based HTs to Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to gain a smooth estimate while always maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of this method is illustrated on an example of hyperspectral-source-flux estimation.
Massively Parallel Solution of Poisson Equation on Coarse Grain MIMD Architectures
NASA Technical Reports Server (NTRS)
Fijany, A.; Weinberger, D.; Roosta, R.; Gulati, S.
1998-01-01
In this paper a new algorithm, designated as Fast Invariant Imbedding algorithm, for solution of Poisson equation on vector and massively parallel MIMD architectures is presented. This algorithm achieves the same optimal computational efficiency as other Fast Poisson solvers while offering a much better structure for vector and parallel implementation. Our implementation on the Intel Delta and Paragon shows that a speedup of over two orders of magnitude can be achieved even for moderate size problems.
Newton/Poisson-Distribution Program
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Scheuer, Ernest M.
1990-01-01
NEWTPOIS, one of two computer programs making calculations involving cumulative Poisson distributions. NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714) used independently of one another. NEWTPOIS determines Poisson parameter for given cumulative probability, from which one obtains percentiles for gamma distributions with integer shape parameters and percentiles for χ² distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Program written in C.
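As an illustration of the underlying computation (a sketch, not the NPO-17715 program itself), Newton's method inverts the Poisson cdf for the parameter using the closed-form derivative dF(k; lam)/dlam = -pmf(k; lam); library routines absorb the underflow/overflow concerns that the original program had to handle explicitly.

    from scipy.stats import poisson

    def poisson_param_for_cdf(k, p, lam0=1.0, tol=1e-10, max_iter=100):
        """Find lam such that P(X <= k) = p for X ~ Poisson(lam)."""
        lam = lam0
        for _ in range(max_iter):
            f = poisson.cdf(k, lam) - p
            if abs(f) < tol:
                break
            lam += f / poisson.pmf(k, lam)  # Newton step: f' = -pmf(k, lam)
            lam = max(lam, 1e-12)           # keep the iterate positive
        return lam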
Applying the compound Poisson process model to the reporting of injury-related mortality rates.
Kegler, Scott R
2007-02-16
Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
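To make the adjustment concrete, the following hedged sketch computes a rate and interval under the compound Poisson model, where the variance of the total death count is estimated by the sum of squared incident sizes rather than by the count itself; with all incidents of size one it reduces to the familiar Poisson log-scale interval. The function and variable names are illustrative, not taken from the paper.

    import numpy as np

    def compound_poisson_rate_ci(incident_sizes, person_time, z=1.96):
        """Rate per unit person-time with an approximate 95% CI under a
        compound Poisson model (normal approximation on the log scale)."""
        incident_sizes = np.asarray(incident_sizes)
        d = incident_sizes.sum()                 # total deaths
        rate = d / person_time
        var_d = np.sum(incident_sizes ** 2)      # Var(D) under compound Poisson
        se_log = np.sqrt(var_d) / d              # delta method on log(rate)
        return rate, rate * np.exp(-z * se_log), rate * np.exp(z * se_log)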
A generalized right truncated bivariate Poisson regression model with applications to health data.
Islam, M Ataharul; Chowdhury, Rafiqul I
2017-01-01
A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and over- or underdispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using a marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on the number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute, and it is evident from the results that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model.
Khan, Asaduzzaman; Western, Mark
The purpose of this study was to explore factors that facilitate or hinder effective use of computers in Australian general medical practice. This study is based on data extracted from a national telephone survey of 480 general practitioners (GPs) across Australia. Clinical functions performed by GPs using computers were examined using zero-inflated Poisson (ZIP) regression modelling. About 17% of GPs were not using a computer for any clinical function, while 18% reported using computers for all clinical functions. The ZIP model showed that computer anxiety was negatively associated with effective computer use, while practitioners' belief about the usefulness of computers was positively associated with effective computer use. Being a female GP or working in a partnership or group practice increased the odds of effectively using computers for clinical functions. To fully capitalise on the benefits of computer technology, GPs need to be convinced that this technology is useful and can make a difference.
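A ZIP model of this kind can be fitted with standard libraries. The sketch below uses statsmodels on synthetic data standing in for the survey; the covariates, coefficients, and data are hypothetical, not the study's.

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    rng = np.random.default_rng(1)
    n = 480                            # matches the survey size
    anxiety = rng.normal(size=n)       # hypothetical covariates
    useful = rng.normal(size=n)
    X = sm.add_constant(np.column_stack([anxiety, useful]))

    # Synthetic counts of clinical functions, with structural zeros
    # for GPs who use a computer for no clinical function (~17%).
    never_user = rng.random(n) < 0.17
    rate = np.exp(0.8 + 0.3 * useful - 0.4 * anxiety)
    y = np.where(never_user, 0, rng.poisson(rate))

    zip_model = ZeroInflatedPoisson(y, X, exog_infl=X, inflation='logit')
    print(zip_model.fit(disp=False).summary())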
A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.
Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua
2017-07-01
Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating directions method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundant genes and edges around regulatory hubs.
Prediction of unsteady transonic flow around missile configurations
NASA Technical Reports Server (NTRS)
Nixon, D.; Reisenthel, P. H.; Torres, T. O.; Klopfer, G. H.
1990-01-01
This paper describes the preliminary development of a method for predicting the unsteady transonic flow around missiles at transonic and supersonic speeds, with the final goal of developing a computer code for use in aeroelastic calculations or during maneuvers. The basic equations derived for this method are an extension of those derived by Klopfer and Nixon (1989) for steady flow and are a subset of the Euler equations. In this approach, the five Euler equations are reduced to an equation similar to the three-dimensional unsteady potential equation, and a two-dimensional Poisson equation. In addition, one of the equations in this method is almost identical to the potential equation for which there are well tested computer codes, allowing the development of a prediction method based in part on proved technology.
NASA Astrophysics Data System (ADS)
Shibata, Hisaichi; Takaki, Ryoji
2017-11-01
A novel method to compute current-voltage characteristics (CVCs) of direct current positive corona discharges is formulated based on a perturbation technique. We use linearized fluid equations coupled with the linearized Poisson's equation. The Townsend relation is assumed in order to predict CVCs away from the linearization point. We choose coaxial cylinders as a test problem, and we have successfully predicted the parameters that determine CVCs for arbitrary inner and outer radii. It is also confirmed that the proposed method essentially does not induce numerical instabilities.
A Method of Poisson's Ratio Imaging Within a Material Part
NASA Technical Reports Server (NTRS)
Roth, Don J. (Inventor)
1994-01-01
The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the data.
Method of Poisson's ratio imaging within a material part
NASA Technical Reports Server (NTRS)
Roth, Don J. (Inventor)
1996-01-01
The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the image.
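In ultrasonic practice, the pointwise Poisson's ratio in such an image is computed from the measured longitudinal and shear wave speeds under an isotropy assumption. A minimal sketch of that relation (illustrative, not the patented procedure):

    import numpy as np

    def poisson_ratio_from_velocities(v_long, v_shear):
        """Poisson's ratio from longitudinal and shear wave speeds,
        assuming an isotropic, homogeneous elastic medium."""
        r2 = (np.asarray(v_long) / np.asarray(v_shear)) ** 2
        return (r2 - 2.0) / (2.0 * (r2 - 1.0))

Applied elementwise to co-registered longitudinal and shear velocity maps, this yields the Poisson's ratio image described in the patent.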
Simulation methods with extended stability for stiff biochemical kinetics.
Rué, Pau; Villà-Freixa, Jordi; Burrage, Kevin
2010-08-11
With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, tau, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where tau can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called tau-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as tau grows. In this paper we extend Poisson tau-leap methods to a general class of Runge-Kutta (RK) tau-leap methods. We show that with the proper selection of the coefficients, the variance of the extended tau-leap can be well-behaved, leading to significantly larger step sizes. The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original tau-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
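For reference, a single step of the basic Poisson tau-leap that the paper generalizes to Runge-Kutta frameworks looks like the following; the stoichiometry matrix and propensity function are problem-specific and the names here are illustrative.

    import numpy as np

    def tau_leap_step(x, tau, stoich, propensities, rng):
        """One Poisson tau-leap update: each reaction channel j fires
        k_j ~ Poisson(a_j(x) * tau) times within the step of size tau."""
        a = propensities(x)          # propensity of each channel at state x
        k = rng.poisson(a * tau)     # firings per channel over the leap
        return x + stoich @ k        # stoich: (n_species, n_reactions)

As the abstract notes, only one set of Poisson draws is needed per time step, which is the property the Runge-Kutta extension preserves.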
Partial-Interval Estimation of Count: Uncorrected and Poisson-Corrected Error Levels
ERIC Educational Resources Information Center
Yoder, Paul J.; Ledford, Jennifer R.; Harbison, Amy L.; Tapp, Jon T.
2018-01-01
A simulation study that used 3,000 computer-generated event streams with known behavior rates, interval durations, and session durations was conducted to test whether the main and interaction effects of true rate and interval duration affect the error level of uncorrected and Poisson-transformed (i.e., "corrected") count as estimated by…
Yelland, Lisa N; Salter, Amy B; Ryan, Philip
2011-10-15
Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
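In practice, the approach evaluated above amounts to fitting a log-link Poisson model to a binary outcome with GEE, which reports robust (sandwich) standard errors by default. The sketch below uses statsmodels on synthetic clustered data; the variable names and data are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    df = pd.DataFrame({
        "cluster": np.repeat(np.arange(50), 10),   # 50 clusters of size 10
        "x": rng.binomial(1, 0.5, 500),            # binary exposure
    })
    df["y"] = rng.binomial(1, np.where(df["x"] == 1, 0.30, 0.15))  # true RR = 2

    # Modified Poisson regression for clustered data: Poisson family,
    # log link, clustering handled via GEE with an exchangeable structure.
    fit = smf.gee("y ~ x", groups="cluster", data=df,
                  family=sm.families.Poisson(),
                  cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(np.exp(fit.params["x"]))                 # estimated relative risk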
A minimally-resolved immersed boundary model for reaction-diffusion problems
NASA Astrophysics Data System (ADS)
Pal Singh Bhalla, Amneet; Griffith, Boyce E.; Patankar, Neelesh A.; Donev, Aleksandar
2013-12-01
We develop an immersed boundary approach to modeling reaction-diffusion processes in dispersions of reactive spherical particles, from the diffusion-limited to the reaction-limited setting. We represent each reactive particle with a minimally-resolved "blob" using many fewer degrees of freedom per particle than standard discretization approaches. More complicated or more highly resolved particle shapes can be built out of a collection of reactive blobs. We demonstrate numerically that the blob model can provide an accurate representation at low to moderate packing densities of the reactive particles, at a cost not much larger than solving a Poisson equation in the same domain. Unlike multipole expansion methods, our method does not require analytically computed Green's functions, but rather, computes regularized discrete Green's functions on the fly by using a standard grid-based discretization of the Poisson equation. This allows for great flexibility in implementing different boundary conditions, coupling to fluid flow or thermal transport, and the inclusion of other effects such as temporal evolution and even nonlinearities. We develop multigrid-based preconditioners for solving the linear systems that arise when using implicit temporal discretizations or studying steady states. In the diffusion-limited case the resulting linear system is a saddle-point problem, the efficient solution of which remains a challenge for suspensions of many particles. We validate our method by comparing to published results on reaction-diffusion in ordered and disordered suspensions of reactive spheres.
NASA Astrophysics Data System (ADS)
Reimer, Ashton S.; Cheviakov, Alexei F.
2013-03-01
A Matlab-based finite-difference numerical solver for the Poisson equation for a rectangle and a disk in two dimensions, and a spherical domain in three dimensions, is presented. The solver is optimized for handling an arbitrary combination of Dirichlet and Neumann boundary conditions, and allows for full user control of mesh refinement. The solver routines utilize effective and parallelized sparse vector and matrix operations. Computations exhibit high speeds, numerical stability with respect to mesh size and mesh refinement, and acceptable error values even on desktop computers. Catalogue identifier: AENQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License v3.0 No. of lines in distributed program, including test data, etc.: 102793 No. of bytes in distributed program, including test data, etc.: 369378 Distribution format: tar.gz Programming language: Matlab 2010a. Computer: PC, Macintosh. Operating system: Windows, OSX, Linux. RAM: 8 GB (8,589,934,592 bytes) Classification: 4.3. Nature of problem: To solve the Poisson problem in a standard domain with “patchy surface”-type (strongly heterogeneous) Neumann/Dirichlet boundary conditions. Solution method: Finite difference with mesh refinement. Restrictions: Spherical domain in 3D; rectangular domain or a disk in 2D. Unusual features: Choice between mldivide/iterative solver for the solution of the large systems of linear algebraic equations that arise. Full user control of Neumann/Dirichlet boundary conditions and mesh refinement. Running time: Depending on the number of points taken and the geometry of the domain, the routine may take from less than a second to several hours to execute.
Poisson point process modeling for polyphonic music transcription.
Peeling, Paul; Li, Chung-fai; Godsill, Simon
2007-04-01
Peaks detected in the frequency domain spectrum of a musical chord are modeled as realizations of a nonhomogeneous Poisson point process. When several notes are superimposed to make a chord, the processes for individual notes combine to give another Poisson process, whose likelihood is easily computable. This avoids a data association step linking individual harmonics explicitly with detected peaks in the spectrum. The likelihood function is ideal for Bayesian inference about the unknown note frequencies in a chord. Here, maximum likelihood estimation of fundamental frequencies shows very promising performance on real polyphonic piano music recordings.
Computational Aspects of N-Mixture Models
Dennis, Emily B; Morgan, Byron JT; Ridout, Martin S
2015-01-01
The N-mixture model is widely used to estimate the abundance of a population in the presence of unknown detection probability from only a set of counts subject to spatial and temporal replication (Royle, 2004, Biometrics 60, 105–115). We explain and exploit the equivalence of N-mixture and multivariate Poisson and negative-binomial models, which provides powerful new approaches for fitting these models. We show that particularly when detection probability and the number of sampling occasions are small, infinite estimates of abundance can arise. We propose a sample covariance as a diagnostic for this event, and demonstrate its good performance in the Poisson case. Infinite estimates may be missed in practice, due to numerical optimization procedures terminating at arbitrarily large values. It is shown that the use of a bound, K, for an infinite summation in the N-mixture likelihood can result in underestimation of abundance, so that default values of K in computer packages should be avoided. Instead we propose a simple automatic way to choose K. The methods are illustrated by analysis of data on Hermann's tortoise Testudo hermanni.
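To make the role of the bound K concrete, here is a sketch of the N-mixture likelihood at a single site with a Poisson abundance prior (illustrative, not the authors' implementation). The underestimation issue discussed above arises when K is too small for the plausible abundances, so rather than trusting a package default, K can be increased until the log-likelihood stabilizes.

    import numpy as np
    from scipy.stats import poisson, binom

    def nmixture_site_loglik(counts, lam, p, K):
        """Log-likelihood of repeated counts at one site under the
        N-mixture model, truncating the sum over abundance N at K."""
        counts = np.asarray(counts)
        N = np.arange(counts.max(), K + 1)     # feasible abundances
        prior = poisson.pmf(N, lam)            # Poisson(lam) prior on N
        detect = np.prod([binom.pmf(y, N, p) for y in counts], axis=0)
        return np.log(np.sum(prior * detect))

    print(nmixture_site_loglik([3, 5, 2], lam=6.0, p=0.5, K=200))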
Anomaly Detection in Dynamic Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turcotte, Melissa
2014-10-14
Anomaly detection in dynamic communication networks has many important security applications. These networks can be extremely large and so detecting any changes in their structure can be computationally challenging; hence, computationally fast, parallelisable methods for monitoring the network are paramount. For this reason the methods presented here use independent node and edge based models to detect locally anomalous substructures within communication networks. As a first stage, the aim is to detect changes in the data streams arising from node or edge communications. Throughout the thesis simple, conjugate Bayesian models for counting processes are used to model these data streams. A second stage of analysis can then be performed on a much reduced subset of the network comprising nodes and edges which have been identified as potentially anomalous in the first stage. The first method assumes communications in a network arise from an inhomogeneous Poisson process with piecewise constant intensity. Anomaly detection is then treated as a changepoint problem on the intensities. The changepoint model is extended to incorporate seasonal behavior inherent in communication networks. This seasonal behavior is also viewed as a changepoint problem acting on a piecewise constant Poisson process. In a static time frame, inference is made on this extended model via a Gibbs sampling strategy. In a sequential time frame, where the data arrive as a stream, a novel, fast Sequential Monte Carlo (SMC) algorithm is introduced to sample from the sequence of posterior distributions of the change points over time. A second method is considered for monitoring communications in a large scale computer network. The usage patterns in these types of networks are very bursty in nature and do not fit a Poisson process model. For tractable inference, discrete time models are considered, where the data are aggregated into discrete time periods and probability models are fitted to the communication counts. In a sequential analysis, anomalous behavior is then identified from outlying behavior with respect to the fitted predictive probability models. Seasonality is again incorporated into the model and is treated as a changepoint model on the transition probabilities of a discrete time Markov process. Second stage analytics are then developed which combine anomalous edges to identify anomalous substructures in the network.
A test of inflated zeros for Poisson regression models.
He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan
2017-01-01
Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalashnikova, Irina
2012-05-01
A numerical study aimed at evaluating different preconditioners within the Trilinos Ifpack and ML packages for the Quantum Computer Aided Design (QCAD) non-linear Poisson problem implemented within the Albany code base and posed on the Ottawa Flat 270 design geometry is performed. This study led to some new development of Albany that allows the user to select an ML preconditioner with Zoltan repartitioning based on nodal coordinates, which is summarized. Convergence of the numerical solutions computed within the QCAD computational suite with successive mesh refinement is examined in two metrics: the mean value of the solution (an L^1 norm) and the field integral of the solution (an L^2 norm).
Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.
Mao, Tianqi; Wang, Zhaocheng; Wang, Qi
2017-01-23
Single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak illuminance environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, the existing literature deals only with a simplified channel model that considers the Poisson noise introduced by the SPAD but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude modulation (PAM) under a Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and both detectors are capable of accurately demodulating the SPAD-based PAM signals.
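For reference, one standard parameterization of the generalized Anscombe transform maps Poisson-Gaussian samples to approximately unit-variance Gaussian ones; the gain/mean/sigma convention below is a common one and may differ from the paper's exact formulation.

    import numpy as np

    def generalized_anscombe(z, gain=1.0, sigma=0.0, mu=0.0):
        """Approximately variance-stabilize z = gain*Poisson + N(mu, sigma^2)
        so the result can be treated as unit-variance AWGN."""
        arg = gain * z + 0.375 * gain ** 2 + sigma ** 2 - gain * mu
        return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))

After this transform, conventional AWGN-channel hard-decision detection can be applied, which is the source of the complexity reduction reported above.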
ColDICE: A parallel Vlasov–Poisson solver using moving adaptive simplicial tessellation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sousbie, Thierry, E-mail: tsousbie@gmail.com; Department of Physics, The University of Tokyo, Tokyo 113-0033; Research Center for the Early Universe, School of Science, The University of Tokyo, Tokyo 113-0033
2016-09-15
Resolving numerically Vlasov–Poisson equations for initially cold systems can be reduced to following the evolution of a three-dimensional sheet evolving in six-dimensional phase-space. We describe a public parallel numerical algorithm consisting in representing the phase-space sheet with a conforming, self-adaptive simplicial tessellation of which the vertices follow the Lagrangian equations of motion. The algorithm is implemented both in six- and four-dimensional phase-space. Refinement of the tessellation mesh is performed using the bisection method and a local representation of the phase-space sheet at second order relying on additional tracers created when needed at runtime. In order to preserve in the best way the Hamiltonian nature of the system, refinement is anisotropic and constrained by measurements of local Poincaré invariants. Resolution of the Poisson equation is performed using the fast Fourier method on a regular rectangular grid, similarly to particle-in-cell codes. To compute the density projected onto this grid, the intersection of the tessellation and the grid is calculated using the method of Franklin and Kankanhalli [65–67] generalised to linear order. As preliminary tests of the code, we study in four-dimensional phase-space the evolution of an initially small patch in a chaotic potential and the cosmological collapse of a fluctuation composed of two sinusoidal waves. We also perform a “warm” dark matter simulation in six-dimensional phase-space that we use to check the parallel scaling of the code.
Anisotropic norm-oriented mesh adaptation for a Poisson problem
NASA Astrophysics Data System (ADS)
Brèthes, Gautier; Dervieux, Alain
2016-10-01
We present a novel formulation for the mesh adaptation of the approximation of a Partial Differential Equation (PDE). The discussion is restricted to a Poisson problem. The proposed norm-oriented formulation extends the goal-oriented formulation, since it is equation-based and uses an adjoint. At the same time, the norm-oriented formulation somewhat supersedes the goal-oriented one, since it is basically a solution-convergent method. Indeed, goal-oriented methods rely on the reduction of the error in evaluating a chosen scalar output, with the consequence that, as mesh size is increased (more degrees of freedom), only this output is proven to tend to its continuous analog while the solution field itself may not converge. A remarkable quality of goal-oriented metric-based adaptation is the mathematical formulation of the mesh adaptation problem in the form of an optimization, over the well-identified set of metrics, of a well-defined functional. In the new proposed formulation, we amplify this advantage. We search, in the same well-identified set of metrics, for the minimum of a norm of the approximation error. The norm is prescribed by the user, and the method allows addressing the case of multi-objective adaptation, for example in aerodynamics, adapting the mesh for drag, lift, and moment in one shot. In this work, we consider the basic linear finite-element approximation and restrict our study to the L2 norm in order to enjoy second-order convergence. Numerical examples for the Poisson problem are computed.
Filtering with Marked Point Process Observations via Poisson Chaos Expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun Wei, E-mail: wsun@mathstat.concordia.ca; Zeng Yong, E-mail: zengy@umkc.edu; Zhang Shu, E-mail: zhangshuisme@hotmail.com
2013-06-15
We study a general filtering problem with marked point process observations. The motivation comes from modeling financial ultra-high frequency data. First, we rigorously derive the unnormalized filtering equation with marked point process observations under mild assumptions, especially relaxing the bounded condition on the stochastic intensity. Then, we derive the Poisson chaos expansion for the unnormalized filter. Based on the chaos expansion, we establish the uniqueness of solutions of the unnormalized filtering equation. Moreover, we derive the Poisson chaos expansion for the unnormalized filter density under additional conditions. To explore the computational advantage, we further construct a new consistent recursive numerical scheme based on the truncation of the chaos density expansion for a simple case. The new algorithm divides the computations into those containing solely system coefficients and those including the observations, and assigns the former off-line.
De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos
2014-06-01
Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool to monitor the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method on the basis of Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR by use of dilutions of an integration standard and on samples of 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need of a standard dilution curve. Implementation of the CI estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates. © 2014 The American Association for Clinical Chemistry.
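The core of the Poisson-based analysis is the zero term of the distribution: if each replicate is negative with probability exp(-lam), the mean number of detectable proviral copies per reaction follows from the fraction of negative replicates alone, with no standard dilution curve. A hedged sketch (the delta-method interval below is illustrative and not necessarily the paper's CI estimator):

    import numpy as np
    from scipy.stats import norm

    def poisson_copies_per_reaction(n_negative, n_total, conf=0.95):
        """Mean copies per reaction from the fraction of negative
        replicates, via the Poisson zero term P(0) = exp(-lam)."""
        p0 = n_negative / n_total
        lam = -np.log(p0)
        z = norm.ppf(0.5 + conf / 2.0)
        se = np.sqrt(p0 * (1.0 - p0) / n_total)   # binomial error on p0
        lo = -np.log(min(p0 + z * se, 1.0 - 1e-12))
        hi = -np.log(max(p0 - z * se, 1e-12))
        return lam, lo, hi

    print(poisson_copies_per_reaction(n_negative=18, n_total=42))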
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cristina, S.; Feliziani, M.
1995-11-01
This paper describes a new procedure for the numerical computation of the electric field and current density distributions in a dc electrostatic precipitator in the presence of dust, taking into account the particle-size distribution. Poisson's and continuity equations are numerically solved by supposing that the coronating conductors satisfy Kaptzov's assumption on the emitter surfaces. Two iterative numerical procedures, both based on the finite element method (FEM), are implemented for evaluating, respectively, the unknown ionic charge density and the particle charge density distributions. The V-I characteristic and the precipitation efficiencies for the individual particle-size classes, calculated with reference to the pilot precipitator installed by ENEL (Italian Electricity Board) at its Marghera (Venice) coal-fired power station, are found to be very close to those measured experimentally.
NASA Astrophysics Data System (ADS)
Long, Kai; Yuan, Philip F.; Xu, Shanqing; Xie, Yi Min
2018-04-01
Most studies on composites assume that the constituent phases have different values of stiffness. Little attention has been paid to the effect of constituent phases having distinct Poisson's ratios. This research focuses on a concurrent optimization method for simultaneously designing composite structures and materials with distinct Poisson's ratios. The proposed method aims to minimize the mean compliance of the macrostructure with a given mass of base materials. In contrast to the traditional interpolation of the stiffness matrix through numerical results, an interpolation scheme for the Young's modulus and Poisson's ratio using different parameters is adopted. The numerical results demonstrate that the Poisson effect plays a key role in reducing the mean compliance of the final design. An important contribution of the present study is that the proposed concurrent optimization method can automatically distribute base materials with distinct Poisson's ratios between the macrostructural and microstructural levels under a single constraint on the total mass.
On-the-fly Numerical Surface Integration for Finite-Difference Poisson-Boltzmann Methods.
Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray
2011-11-01
Most implicit solvation models require the definition of a molecular surface as the interface that separates the solute in atomic detail from the solvent approximated as a continuous medium. Commonly used surface definitions include the solvent accessible surface (SAS), the solvent excluded surface (SES), and the van der Waals surface. In this study, we present an efficient numerical algorithm to compute the SES and SAS areas to facilitate the applications of finite-difference Poisson-Boltzmann methods in biomolecular simulations. Unlike previous numerical approaches, our algorithm is physics-inspired and intimately coupled to the finite-difference Poisson-Boltzmann methods to fully take advantage of their existing data structures. Our analysis shows that the algorithm can achieve very good agreement with the analytical method in the calculation of the SES and SAS areas. Specifically, in our comprehensive test of 1,555 molecules, the average unsigned relative error is 0.27% in the SES area calculations and 1.05% in the SAS area calculations at a grid spacing of 0.5 Å. In addition, a systematic correction analysis can be used to improve the accuracy of the coarse-grid SES area calculations, with the average unsigned relative error in the SES areas reduced to 0.13%. These validation studies indicate that the proposed algorithm can be applied to biomolecules over a broad range of sizes and structures. Finally, the numerical algorithm can also be adapted to evaluate the surface integral of either a vector field or a scalar field defined on the molecular surface for additional solvation energetics and force calculations.
Intertime jump statistics of state-dependent Poisson processes.
Daly, Edoardo; Porporato, Amilcare
2007-01-01
A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. Such a method uses the survivor function obtained by a modified version of the master equation associated to the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model for neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models.
Ion flux through membrane channels--an enhanced algorithm for the Poisson-Nernst-Planck model.
Dyrka, Witold; Augousti, Andy T; Kotulska, Malgorzata
2008-09-01
A novel algorithmic scheme for the numerical solution of the 3D Poisson-Nernst-Planck model is proposed. The algorithmic improvements are universal and independent of the detailed physical model. They include three major steps: an adjustable gradient-based step value, an adjustable relaxation coefficient, and an optimized segmentation of the modeled space. The enhanced algorithm significantly accelerates computation and reduces the computational demands. The theoretical model was tested on a regular artificial channel and validated on a real protein channel, alpha-hemolysin, proving its efficiency. (c) 2008 Wiley Periodicals, Inc.
Online sequential Monte Carlo smoother for partially observed diffusion processes
NASA Astrophysics Data System (ADS)
Gloaguen, Pierre; Étienne, Marie-Pierre; Le Corff, Sylvain
2018-12-01
This paper introduces a new algorithm to approximate smoothed additive functionals of partially observed diffusion processes. This method relies on a new sequential Monte Carlo method which allows such approximations to be computed online, i.e., as the observations are received, and with a computational complexity growing linearly with the number of Monte Carlo samples. The original algorithm cannot be used in the case of partially observed stochastic differential equations, since the transition density of the latent data is usually unknown. We prove that it may be extended to partially observed continuous processes by replacing this unknown quantity with an unbiased estimator obtained, for instance, using general Poisson estimators. This estimator is proved to be consistent, and its performance is illustrated using data from two models.
Adjusting for sampling variability in sparse data: geostatistical approaches to disease mapping.
Hampton, Kristen H; Serre, Marc L; Gesink, Dionne C; Pilcher, Christopher D; Miller, William C
2011-10-06
Disease maps of crude rates from routinely collected health data indexed at a small geographical resolution pose specific statistical problems due to the sparse nature of the data. Spatial smoothers allow areas to borrow strength from neighboring regions to produce a more stable estimate of the areal value. Geostatistical smoothers are able to quantify the uncertainty in smoothed rate estimates without a high computational burden. In this paper, we introduce a uniform model extension of Bayesian Maximum Entropy (UMBME) and compare its performance to that of Poisson kriging in measures of smoothing strength and estimation accuracy as applied to simulated data and the real data example of HIV infection in North Carolina. The aim is to produce more reliable maps of disease rates in small areas to improve identification of spatial trends at the local level. In all data environments, Poisson kriging exhibited greater smoothing strength than UMBME. With the simulated data where the true latent rate of infection was known, Poisson kriging resulted in greater estimation accuracy with data that displayed low spatial autocorrelation, while UMBME provided more accurate estimators with data that displayed higher spatial autocorrelation. With the HIV data, UMBME performed slightly better than Poisson kriging in cross-validatory predictive checks, with both models performing better than the observed data model with no smoothing. Smoothing methods have different advantages depending upon both internal model assumptions that affect smoothing strength and external data environments, such as spatial correlation of the observed data. Further model comparisons in different data environments are required to provide public health practitioners with guidelines needed in choosing the most appropriate smoothing method for their particular health dataset.
Computational modeling and analysis of thermoelectric properties of nanoporous silicon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, H.; Yu, Y.; Li, G., E-mail: gli@clemson.edu
2014-03-28
In this paper, thermoelectric properties of nanoporous silicon are modeled and studied using a computational approach. The approach combines a quantum non-equilibrium Green's function (NEGF) coupled with the Poisson equation for electrical transport analysis, a phonon Boltzmann transport equation (BTE) for phonon thermal transport analysis, and the Wiedemann-Franz law for calculating the electronic thermal conductivity. By solving the NEGF/Poisson equations self-consistently using a finite difference method, the electrical conductivity σ and Seebeck coefficient S of the material are computed numerically. The BTE is solved using a finite volume method to obtain the phonon thermal conductivity k_p, and the Wiedemann-Franz law is used to obtain the electronic thermal conductivity k_e. The figure of merit of nanoporous silicon is calculated as ZT = S²σT/(k_p + k_e). The effects of doping density, porosity, temperature, and nanopore size on the thermoelectric properties of nanoporous silicon are investigated. It is confirmed that nanoporous silicon has significantly higher thermoelectric energy conversion efficiency than its nonporous counterpart. Specifically, this study shows that, with an n-type doping density of 10²⁰ cm⁻³, a porosity of 36%, and a nanopore size of 3 nm × 3 nm, the figure of merit ZT can reach 0.32 at 600 K. The results also show that the degradation of the electrical conductivity of nanoporous Si due to the inclusion of nanopores is compensated by the large reduction in the phonon thermal conductivity and the increase in the absolute value of the Seebeck coefficient, resulting in a significantly improved ZT.
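As a quick sanity check of the quoted figure of merit, the relation ZT = S²σT/(k_p + k_e) can be evaluated directly; the transport coefficients below are placeholder values chosen only to reproduce ZT ≈ 0.32 at 600 K, not numbers taken from the paper.

```python
# Worked check of ZT = S^2 * sigma * T / (k_p + k_e).
# All coefficient values below are assumed placeholders.
S     = 2.4e-4      # Seebeck coefficient, V/K (assumed)
sigma = 2.0e4       # electrical conductivity, S/m (assumed)
T     = 600.0       # temperature, K
k_p   = 1.6         # phonon thermal conductivity, W/(m K) (assumed)
k_e   = 0.56        # electronic thermal conductivity, W/(m K) (assumed)

ZT = S**2 * sigma * T / (k_p + k_e)
print(f"ZT = {ZT:.2f}")   # ~0.32 with these placeholder inputs
```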
Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A
2016-09-06
Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability, and diffusion coefficient from experimental data in various types of biological tissues. The major limitation of analytical techniques is that they are often applicable only to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, minimize the need for simplifying assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) and enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variation during the training process. The required level of stochastic variation is determined by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain fast and accurate estimates of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined by direct scanning of the parameter space as the optimization approach, and both approaches are shown to yield comparable results.
NASA Astrophysics Data System (ADS)
Yang, Xiao; Li, Huijian; Hu, Minzheng; Liu, Zeliang; Wärnå, John; Cao, Yuying; Ahuja, Rajeev; Luo, Wei
2018-04-01
A method to obtain the equivalent Poisson's ratio of chemical bonds, treated as classical beams in the finite element method, was proposed based on experimental data. The UFF (Universal Force Field) method was employed to calculate the elastic force constants of Zr-O bonds. By applying the equivalent Poisson's ratio, the mechanical properties of single-wall ZrNTs (ZrO2 nanotubes) were investigated by finite element analysis. The Young's modulus (Y) and Poisson's ratio (ν) of ZrNTs are discussed as functions of diameter, length, and chirality. The Young's modulus of single-wall ZrNTs is calculated to be between 350 and 420 GPa.
Method for resonant measurement
Rhodes, George W.; Migliori, Albert; Dixon, Raymond D.
1996-01-01
A method of measuring objects to determine object flaws, Poisson's ratio (σ), and shear modulus (μ) is shown and described. First, the frequency of expected degenerate responses is determined for one or more input frequencies, and then the splitting of degenerate resonant modes is observed to identify the presence of flaws in the object. Poisson's ratio and the shear modulus can be determined by identifying resonances dependent only on the shear modulus, and then using that shear modulus to find Poisson's ratio from other modes dependent on both the shear modulus and Poisson's ratio.
Transport Equation Based Wall Distance Computations Aimed at Flows With Time-Dependent Geometry
NASA Technical Reports Server (NTRS)
Tucker, Paul G.; Rumsey, Christopher L.; Bartels, Robert E.; Biedron, Robert T.
2003-01-01
Eikonal, Hamilton-Jacobi and Poisson equations can be used for economical nearest wall distance computation and modification. Economical computations may be especially useful for aeroelastic and adaptive grid problems, for which the grid deforms and the nearest wall distance needs to be repeatedly computed. Modifications are directed at remedying turbulence model defects. For complex grid structures, implementation of the Eikonal and Hamilton-Jacobi approaches is not straightforward, which prohibits their use in industrial CFD solvers. However, the Eikonal and Hamilton-Jacobi equations can be written in advection and advection-diffusion forms, respectively. These, like the Laplacian of the Poisson equation, are commonly occurring elements of industrial CFD solvers. Use of the NASA CFL3D code to solve the Eikonal and Hamilton-Jacobi equations in advection-based forms is explored. The advection-based distance equations are found to have robust convergence. Geometries studied include single- and two-element airfoils, wing-body and double-delta configurations, and a complex electronics system. It is shown that for Eikonal accuracy, upwind metric differences are required. The Poisson approach is found effective and, since it does not require offset metric evaluations, easiest to implement. The sensitivity of flow solutions to wall distance assumptions is explored. Generally, results are not greatly affected by wall distance traits.
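To make the Poisson variant concrete: one solves ∇²φ = -1 with φ = 0 on the walls and recovers the wall distance from d = sqrt(|∇φ|² + 2φ) - |∇φ|. The sketch below applies this to a 1D channel, where the formula is exact; the grid size and the Jacobi solver are illustrative choices, not the CFL3D implementation.

```python
# Poisson wall-distance sketch on a 1D channel: solve phi'' = -1 with
# phi = 0 at both walls, then d = sqrt(|grad phi|^2 + 2*phi) - |grad phi|.
import numpy as np

n, L = 101, 1.0
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Jacobi iterations for phi'' = -1, phi(0) = phi(L) = 0
phi = np.zeros(n)
for _ in range(20000):
    phi[1:-1] = 0.5 * (phi[2:] + phi[:-2] + h * h)

grad = np.gradient(phi, h)
d = np.sqrt(grad**2 + 2.0 * phi) - np.abs(grad)

exact = np.minimum(x, L - x)      # true wall distance in a straight channel
print(f"max wall-distance error: {np.abs(d - exact).max():.2e}")
```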
A novel method for the accurate evaluation of Poisson's ratio of soft polymer materials.
Lee, Jae-Hoon; Lee, Sang-Soo; Chang, Jun-Dong; Thompson, Mark S; Kang, Dong-Joong; Park, Sungchan; Park, Seonghun
2013-01-01
A new method with a simple algorithm was developed to accurately measure Poisson's ratio of soft materials such as polyvinyl alcohol hydrogel (PVA-H), using a custom experimental apparatus consisting of a tension device, a micro X-Y stage, an optical microscope, and a charge-coupled device camera. In the proposed method, the initial positions of the four vertices of an arbitrarily selected quadrilateral on the sample surface are first measured to generate a 2D first-order 4-node quadrilateral element for finite element numerical analysis. Next, the minimum and maximum principal strains are calculated from the differences between the initial and deformed shapes of the quadrilateral under tension. Finally, Poisson's ratio of PVA-H is determined as the ratio of the minimum principal strain to the maximum principal strain. This method has the advantage of evaluating Poisson's ratio accurately despite misalignment between specimens and experimental devices. In this study, Poisson's ratio of PVA-H was 0.44 ± 0.025 (n = 6) for 2.6-47.0% elongations, with a tendency to decrease with increasing elongation. The evaluation method, with its simple measurement system, can be employed in a real-time automated vision-tracking system to accurately evaluate the material properties of various soft materials.
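The vertex-tracking step can be illustrated in a few lines: fit an affine deformation to the four tracked corners, take the small-strain tensor, and form the ratio of principal strains. The corner coordinates below are synthetic, generated with an assumed Poisson's ratio of 0.44; this is a sketch of the idea, not the authors' code.

```python
# Minimal sketch: least-squares affine fit x' = F x + c to four tracked
# corners, then nu = -eps_min / eps_max from the small-strain tensor.
import numpy as np

X = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # initial corners
nu_true, stretch = 0.44, 0.10                                    # assumed values
F_true = np.diag([1.0 + stretch, 1.0 - nu_true * stretch])
x = X @ F_true.T + np.array([0.3, -0.1])                         # deformed + rigid shift

A = np.hstack([X, np.ones((4, 1))])        # solve [X 1] @ [F^T; c^T] = x
sol, *_ = np.linalg.lstsq(A, x, rcond=None)
F = sol[:2].T

eps = 0.5 * (F + F.T) - np.eye(2)          # small-strain tensor
e_min, e_max = np.sort(np.linalg.eigvalsh(eps))
print(f"Poisson's ratio = {-e_min / e_max:.3f}")   # recovers ~0.44
```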
Understanding Poisson regression.
Hayat, Matthew J; Higgins, Melinda
2014-04-01
Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically treated count data either as if they were continuous and normally distributed or by dichotomizing the counts into the categories of occurred or did not occur. These outdated methods have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is well suited to count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Violations of the assumptions of the standard Poisson regression model are addressed with alternative approaches, including the addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes.
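As a minimal worked example of the model the article describes, the sketch below fits a Poisson regression with a log link by iteratively reweighted least squares on synthetic data; the covariate and coefficients are invented for illustration.

```python
# Poisson regression (log link) by iteratively reweighted least squares.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([0.5, 0.8])                 # assumed coefficients
y = rng.poisson(np.exp(X @ beta_true))

beta = np.zeros(2)
for _ in range(25):                              # IRLS / Fisher scoring
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu                 # working response
    W = mu                                       # Poisson variance = mean
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print("estimated coefficients:", np.round(beta, 3))  # close to [0.5, 0.8]
```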
Symplectic discretization for spectral element solution of Maxwell's equations
NASA Astrophysics Data System (ADS)
Zhao, Yanmin; Dai, Guidong; Tang, Yifa; Liu, Qinghuo
2009-08-01
Applying the spectral element method (SEM) based on the Gauss-Lobatto-Legendre (GLL) polynomial to discretize Maxwell's equations, we obtain a Poisson system or a Poisson system with at most a perturbation. For the system, we prove that any symplectic partitioned Runge-Kutta (PRK) method preserves the Poisson structure and its implied symplectic structure. Numerical examples show the high accuracy of SEM and the benefit of conserving energy due to the use of symplectic methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Bin; Pettitt, Bernard M.
Electrostatic free energies of solvation for 15 neutral amino acid side chain analogs are computed. We compare three methods of varying computational complexity and accuracy, namely free energy simulations, Poisson-Boltzmann (PB) theory, and the linear response approximation (LRA), for three force fields: AMBER, CHARMM, and OPLSAA. We find that deviations from simulation start at low charges for the solutes. The approximate PB and LRA methods overestimate the electrostatic solvation free energies for most of the molecules studied here, and these deviations are remarkably systematic. The variations among force fields are almost as large as the variations among methods. Our study confirms that the success of the approximate methods for electrostatic solvation free energies comes from their ability to evaluate free energy differences accurately.
Bayesian Computation for Log-Gaussian Cox Processes: A Comparative Analysis of Methods
Teng, Ming; Nathoo, Farouk S.; Johnson, Timothy D.
2017-01-01
The log-Gaussian Cox process is a commonly used model for the analysis of spatial point pattern data. Fitting this model is difficult because of its doubly stochastic property, i.e., it is a hierarchical combination of a Poisson process at the first level and a Gaussian process at the second level. Various methods have been proposed to estimate such a process, including traditional likelihood-based approaches as well as Bayesian methods. We focus here on Bayesian methods and several approaches that have been considered for model fitting within this framework, including Hamiltonian Monte Carlo, the integrated nested Laplace approximation, and variational Bayes. We compare these approaches with respect to statistical and computational efficiency, through several simulation studies as well as through two applications, the first examining ecological data and the second involving neuroimaging data.
Higueras, Manuel; González, J E; Di Giorgio, Marina; Barquinero, J F
2018-06-13
We present Poisson exact goodness-of-fit tests as alternatives and complements to the asymptotic u-test, which is the most widely used test in cytogenetic biodosimetry for deciding whether a sample of chromosomal aberrations in blood cells comes from a homogeneous or an inhomogeneous exposure. Three Poisson exact goodness-of-fit tests from the literature are introduced and implemented in the R environment. A Shiny R Studio application, named GOF Poisson, has been updated to support this work. The three exact tests and the u-test are applied to chromosomal aberration data from patients with clinical and accidental radiation exposures. The u-test is shown to be an inappropriate approximation for small samples with a low yield of chromosomal aberrations. Tools are provided to compute the three exact tests, whose implementation is not as trivial as that of the u-test. Poisson exact goodness-of-fit tests should be considered jointly with the u-test for detecting inhomogeneous exposures in cytogenetic biodosimetry practice.
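The "exact" idea can be illustrated without the paper's specific tests: conditional on the total count, homogeneous Poisson counts across N cells are multinomial with equal cell probabilities, so a null distribution for a dispersion statistic can be sampled exactly by Monte Carlo. The counts below are invented, and this conditional Monte Carlo test is a generic stand-in, not one of the three tests referenced above.

```python
# Exact conditional Monte Carlo test for Poisson homogeneity: given the
# total T over N cells, null counts are multinomial with equal probs.
import numpy as np

rng = np.random.default_rng(2)
counts = np.array([0]*180 + [1]*15 + [2]*3 + [5]*2)   # aberrations per cell (toy)
N, T = len(counts), counts.sum()

def dispersion(c):
    return c.var(ddof=1) / c.mean()                   # variance-to-mean ratio

obs = dispersion(counts)
null = np.array([
    dispersion(rng.multinomial(T, np.full(N, 1.0 / N)))
    for _ in range(20000)
])
p = (1 + np.sum(null >= obs)) / (1 + len(null))       # one-sided (overdispersion)
print(f"dispersion index {obs:.2f}, Monte Carlo p = {p:.4f}")
```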
Quantization of Poisson Manifolds from the Integrability of the Modular Function
NASA Astrophysics Data System (ADS)
Bonechi, F.; Ciccoli, N.; Qiu, J.; Tarlini, M.
2014-10-01
We discuss a framework for quantizing a Poisson manifold via the quantization of its symplectic groupoid, combining the tools of geometric quantization with the results of Renault's theory of groupoid C*-algebras. This setting allows very singular polarizations. In particular, we consider the case when the modular function is multiplicatively integrable, i.e., when the space of leaves of the polarization inherits a groupoid structure. If suitable regularity conditions are satisfied, then one can define the quantum algebra as the convolution algebra of the subgroupoid of leaves satisfying the Bohr-Sommerfeld conditions. We apply this procedure to the case of a family of Poisson structures on the complex projective space CP^n, seen as Poisson homogeneous spaces of the standard Poisson-Lie group SU(n + 1). We show that a bihamiltonian system on CP^n defines a multiplicative integrable model on the symplectic groupoid; we compute the Bohr-Sommerfeld groupoid and show that it satisfies the properties needed for applying Renault's theory. We recover and extend Sheu's description of quantum homogeneous spaces as groupoid C*-algebras.
NASA Astrophysics Data System (ADS)
Koehl, Patrice; Orland, Henri; Delarue, Marc
2011-08-01
We present an extension of the self-consistent mean field theory for protein side-chain modeling in which solvation effects are included based on the Poisson-Boltzmann (PB) theory. In this approach, the protein is represented with multiple copies of its side chains. Each copy is assigned a weight that is refined iteratively based on the mean field energy generated by the rest of the protein, until self-consistency is reached. At each cycle, the variational free energy of the multi-copy system is computed; this free energy includes the internal energy of the protein, which accounts for vdW and electrostatic interactions, and a solvation free energy term that is computed using the PB equation. The method converges in only a few cycles and takes only minutes of central processing unit time on a commodity personal computer. The predicted conformation of each residue is then set to be its copy with the highest weight after convergence. We have tested this method on a database of a hundred highly refined NMR structures to circumvent the problems of crystal packing inherent to x-ray structures. The use of the PB-derived solvation free energy significantly improves prediction accuracy for surface side chains. For example, the prediction accuracies for χ1 for surface cysteine, serine, and threonine residues improve from 68%, 35%, and 43% to 80%, 53%, and 57%, respectively. A comparison with other side-chain prediction algorithms demonstrates that our approach is consistently better in predicting the conformations of exposed side chains.
NASA Astrophysics Data System (ADS)
Caillol, J. M.; Levesque, D.
1992-01-01
The reliability and efficiency of a new method suitable for simulations of dielectric fluids and ionic solutions are established by numerical computations. The efficiency derives from the use of a simulation cell which is the surface of a four-dimensional sphere. The reliability originates from a charge-charge potential that is a solution of the Poisson equation in this confining volume. The computation time for systems of a few hundred molecules is reduced by a factor of 2 or 3 compared to that of a simulation performed in a cubic volume with periodic boundary conditions and the Ewald charge-charge potential.
Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.
Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram
2017-02-01
In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of them are "true zeros", indicating that the drug-adverse event pair cannot occur; these are distinguished from the other zero counts, which are modeled zeros and simply indicate that the drug-adverse event pair has not occurred yet or has not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs with disproportionately high reporting rates, also called signals. The maximum likelihood estimates of the model parameters are obtained using the expectation-maximization algorithm. The test is also modified to handle stratified analyses for binary and categorical covariates (e.g., gender and age) in the data. The proposed method is shown to asymptotically control the type I error and false discovery rate, and its finite-sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test performs similarly to the Poisson model based likelihood ratio test when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
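To ground the model, here is a minimal sketch of fitting a zero-inflated Poisson by EM and forming a likelihood ratio statistic against a plain Poisson fit on synthetic data; the mixing proportion and rate are invented, and the stratified signal-detection machinery of the paper is not reproduced.

```python
# Zero-inflated Poisson fit by EM, plus a likelihood ratio statistic
# against a plain Poisson model (synthetic data, assumed parameters).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, pi_true, lam_true = 2000, 0.3, 1.5
y = rng.poisson(lam_true, n) * (rng.random(n) > pi_true)   # extra zeros

pi, lam = 0.5, y.mean()
for _ in range(200):                       # EM iterations
    w = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
    pi = w.mean()                          # posterior structural-zero weight
    lam = ((1 - w) * y).sum() / (1 - w).sum()

def zip_loglik(y, pi, lam):
    p0 = pi + (1 - pi) * np.exp(-lam)
    return np.where(y == 0, np.log(p0),
                    np.log(1 - pi) + stats.poisson.logpmf(y, lam)).sum()

ll_zip = zip_loglik(y, pi, lam)
ll_poi = stats.poisson.logpmf(y, y.mean()).sum()
print(f"pi={pi:.3f}, lam={lam:.3f}, LRT statistic = {2*(ll_zip - ll_poi):.1f}")
```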
A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise
NASA Astrophysics Data System (ADS)
Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno
2017-09-01
While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquire energy-resolved data, which allows material decomposition of the object of interest. Decompo- sitions in the projection domain allow creating projection mass density (PMD) per materials. From decomposed projections, a tomographic reconstruction creates 3D material density volume. The decomposition is made pos- sible by minimizing a cost function. The variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing data. That is why in this paper, a new data fidelity term is used to take into account of the photonic noise. In this work two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods decompose materials from a numerical phantom of a mouse. Soft tissues and bones are decomposed in the projection domain; then a tomographic reconstruction creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
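The two data-fidelity terms can be written out in a few lines for a generic forward model F(x) and measured counts m; the toy values below are illustrative, and this is not the paper's regularized Gauss-Newton implementation.

```python
# WLS (Gaussian noise) vs. generalized KL (Poisson noise) data fidelity.
import numpy as np

def wls(Fx, m, var):
    # weighted least squares, weights = inverse noise variance
    return 0.5 * np.sum((Fx - m) ** 2 / var)

def kl(Fx, m, eps=1e-12):
    # Kullback-Leibler divergence matching the Poisson log-likelihood
    Fx = np.maximum(Fx, eps)
    return np.sum(Fx - m + m * np.log(np.maximum(m, eps) / Fx))

m  = np.array([3.0, 0.0, 12.0, 55.0])    # low photon counts (toy)
Fx = np.array([4.0, 1.0, 10.0, 50.0])    # current model prediction (toy)
print(f"WLS = {wls(Fx, m, np.maximum(m, 1.0)):.3f}, KL = {kl(Fx, m):.3f}")
```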
Spatial event cluster detection using an approximate normal distribution.
Torabi, Mahmoud; Rosychuk, Rhonda J
2008-12-12
In geographic surveillance of disease, areas with large numbers of disease cases are to be identified so that investigations of the causes of high disease rates can be pursued. Areas with high rates are called disease clusters and statistical cluster detection tests are used to identify geographic areas with higher disease rates than expected by chance alone. Typically cluster detection tests are applied to incident or prevalent cases of disease, but surveillance of disease-related events, where an individual may have multiple events, may also be of interest. Previously, a compound Poisson approach that detects clusters of events by testing individual areas that may be combined with their neighbours has been proposed. However, the relevant probabilities from the compound Poisson distribution are obtained from a recursion relation that can be cumbersome if the number of events are large or analyses by strata are performed. We propose a simpler approach that uses an approximate normal distribution. This method is very easy to implement and is applicable to situations where the population sizes are large and the population distribution by important strata may differ by area. We demonstrate the approach on pediatric self-inflicted injury presentations to emergency departments and compare the results for probabilities based on the recursion and the normal approach. We also implement a Monte Carlo simulation to study the performance of the proposed approach. In a self-inflicted injury data example, the normal approach identifies twelve out of thirteen of the same clusters as the compound Poisson approach, noting that the compound Poisson method detects twelve significant clusters in total. Through simulation studies, the normal approach well approximates the compound Poisson approach for a variety of different population sizes and case and event thresholds. A drawback of the compound Poisson approach is that the relevant probabilities must be determined through a recursion relation and such calculations can be computationally intensive if the cluster size is relatively large or if analyses are conducted with strata variables. On the other hand, the normal approach is very flexible, easily implemented, and hence, more appealing for users. Moreover, the concepts may be more easily conveyed to non-statisticians interested in understanding the methodology associated with cluster detection test results.
Conditional Poisson models: a flexible alternative to conditional logistic case cross-over analysis.
Armstrong, Ben G; Gasparrini, Antonio; Tobias, Aurelio
2014-11-24
The time-stratified case cross-over approach is a popular alternative to conventional time series regression for analysing associations between time series of environmental exposures (air pollution, weather) and counts of health outcomes. These are almost always analyzed using conditional logistic regression on data expanded to case-control (case crossover) format, but this has some limitations; in particular, adjusting for overdispersion and autocorrelation in the counts is not possible. It has been established that a Poisson model for counts with stratum indicators gives identical estimates to those from conditional logistic regression and does not have these limitations, but it is little used, probably because of the overhead of estimating many stratum parameters. The conditional Poisson model avoids estimating stratum parameters by conditioning on the total event count in each stratum, thus simplifying the computation and increasing the number of strata for which fitting is feasible compared with the standard unconditional Poisson model. Unlike the conditional logistic model, the conditional Poisson model does not require expanding the data, and it can adjust for overdispersion and autocorrelation. It is available in Stata, R, and other packages. By applying the models to real data and using simulations, we demonstrate that conditional Poisson models are simpler to code and shorter to run than conditional logistic analyses and can be fitted to larger data sets than is possible with standard Poisson models. Allowing for overdispersion or autocorrelation was possible with the conditional Poisson model, but when not required this model gave identical estimates to those from conditional logistic regression. Conditional Poisson regression models provide an alternative to case crossover analysis of stratified time series data with some advantages. The conditional Poisson model can also be used in other contexts in which primary control for confounding is by fine stratification.
Fractional Poisson-Nernst-Planck Model for Ion Channels I: Basic Formulations and Algorithms.
Chen, Duan
2017-11-01
In this work, we propose a fractional Poisson-Nernst-Planck model to describe ion permeation in gated ion channels. Due to intrinsic conformational changes, crowdedness in narrow channel pores, and the binding and trapping introduced by the functional units of channel proteins, ionic transport in the channel exhibits power-law-like anomalous diffusion dynamics. We start from a continuous-time random walk model for a single ion and use a long-tailed density distribution for the particle jump waiting time to derive the fractional Fokker-Planck equation. This is then generalized to the macroscopic fractional Poisson-Nernst-Planck model for ionic concentrations. The necessary computational algorithms are designed to implement numerical simulations for the proposed model, and the dynamics of the gating current is investigated. Numerical simulations show that the fractional PNP model provides a qualitatively more reasonable match to the profile of gating currents from experimental observations. Meanwhile, the proposed model raises new challenges in terms of mathematical modeling and computation.
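The microscopic picture behind the fractional model, a continuous-time random walk with heavy-tailed waiting times, is easy to simulate. The sketch below uses a Pareto waiting-time density with tail exponent alpha < 1 and unit jumps (all illustrative parameters) and shows the sublinear growth of the mean-squared displacement.

```python
# Toy CTRW with power-law waiting times: MSD grows like T^alpha for alpha < 1.
import numpy as np

rng = np.random.default_rng(4)
alpha, n_walkers, n_jumps = 0.7, 500, 2000

wait = rng.pareto(alpha, size=(n_walkers, n_jumps)) + 1.0   # heavy-tailed waits
t = np.cumsum(wait, axis=1)                                 # jump epochs
step = rng.choice([-1.0, 1.0], size=(n_walkers, n_jumps))
x = np.cumsum(step, axis=1)                                 # positions after jumps

for T in (1e2, 1e3, 1e4):
    k = (t <= T).sum(axis=1)                 # jumps completed by time T
    pos = np.where(k > 0, x[np.arange(n_walkers), np.maximum(k - 1, 0)], 0.0)
    print(f"T={T:8.0f}  MSD={np.mean(pos**2):9.2f}")
# For alpha < 1 the MSD scales like T^alpha rather than linearly in T.
```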
Solving the Fluid Pressure Poisson Equation Using Multigrid-Evaluation and Improvements.
Dick, Christian; Rogowsky, Marcus; Westermann, Rudiger
2016-11-01
In many numerical simulations of fluids governed by the incompressible Navier-Stokes equations, the pressure Poisson equation needs to be solved to enforce mass conservation. Multigrid solvers show excellent convergence in simple scenarios, yet they can converge slowly in domains where physically separated regions are combined at coarser scales. Moreover, existing multigrid solvers are tailored to specific discretizations of the pressure Poisson equation, and they cannot easily be adapted to other discretizations. In this paper we analyze the convergence properties of existing multigrid solvers for the pressure Poisson equation in different simulation domains, and we show how to further improve the multigrid convergence rate by using a graph-based extension to determine the coarse grid hierarchy. The proposed multigrid solver is generic in that it can be applied to different kinds of discretizations of the pressure Poisson equation, by using solely the specification of the simulation domain and pre-assembled computational stencils. We analyze the proposed solver in combination with finite difference and finite volume discretizations of the pressure Poisson equation. Our evaluations show that, despite the common assumption, multigrid schemes can exploit their potential even in the most complicated simulation scenarios, yet this behavior is obtained at the price of higher memory consumption.
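To make the smoothing, restriction, prolongation, and correction loop concrete, here is a bare-bones V-cycle for the 1D Poisson equation -u'' = f with zero Dirichlet ends; the grid size, damped Jacobi smoother, and full-weighting restriction are textbook choices, not the graph-based solver of the paper.

```python
# Minimal multigrid V-cycle for -u'' = f on [0, 1], u(0) = u(1) = 0.
import numpy as np

def smooth(u, f, h, iters=3):                 # damped Jacobi smoother
    for _ in range(iters):
        u[1:-1] += 0.8 * (0.5 * (u[2:] + u[:-2] + h*h*f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[2:] - 2*u[1:-1] + u[:-2]) / (h*h)
    return r

def v_cycle(u, f, h):
    u = smooth(u, f, h)
    if len(u) <= 3:
        return u
    r = residual(u, f, h)
    r2 = r[::2].copy()                        # restriction by full weighting
    r2[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    e2 = v_cycle(np.zeros_like(r2), r2, 2*h)  # coarse-grid correction
    e = np.zeros_like(u)                      # prolongation: linear interpolation
    e[::2] = e2
    e[1::2] = 0.5 * (e2[:-1] + e2[1:])
    return smooth(u + e, f, h)

n = 129                                       # 2^k + 1 nodes
x = np.linspace(0, 1, n); h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)              # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print(f"max error: {np.abs(u - np.sin(np.pi*x)).max():.2e}")
```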
Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil
2012-01-01
The Gauss-Seidel method is a standard iterative numerical method widely used to solve systems of equations and, in general, is more efficient than other iterative methods such as the Jacobi method. However, the standard implementation of the Gauss-Seidel method restricts its use in parallel computing, because it requires using updated neighboring values (i.e., from the current iteration) as soon as they are available. Here we report an efficient and exact (assumption-free) method to parallelize the iterations and to reduce the computational time as a linear or nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable to solving linear and nonlinear equations. The approach is implemented in the DelPhi program, a finite difference Poisson-Boltzmann equation solver used to model electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution in the parallelized DelPhi several fold faster than in the serial code. We further demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures.
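For contrast, a classical way to expose parallelism in Gauss-Seidel is red-black ordering, in which points of one color depend only on points of the other color, so each half-sweep is fully data-parallel. The vectorized sketch below solves -∇²u = 1 on the unit square this way; it is a generic illustration, not the assumption-free DelPhi scheme described above.

```python
# Red-black Gauss-Seidel for -lap(u) = 1 on the unit square, zero Dirichlet BC.
import numpy as np

n = 65
u = np.zeros((n, n))
f = np.ones((n, n))
h2 = (1.0 / (n - 1)) ** 2

checker = (np.add.outer(np.arange(n), np.arange(n)) % 2).astype(bool)
inner = np.zeros((n, n), dtype=bool)
inner[1:-1, 1:-1] = True

for sweep in range(2000):
    for color in (checker & inner, ~checker & inner):   # red then black half-sweep
        nb = np.zeros_like(u)
        nb[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                 u[1:-1, :-2] + u[1:-1, 2:] + h2 * f[1:-1, 1:-1])
        u[color] = nb[color]                            # update one color at a time

print(f"center value: {u[n//2, n//2]:.5f}")   # ~0.0737 for this problem
```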
Irreversible thermodynamics of Poisson processes with reaction.
Méndez, V; Fort, J
1999-11-01
A kinetic model is derived to study the successive movements of particles, described by a Poisson process, as well as their generation. The irreversible thermodynamics of this system is also studied from the kinetic model. This makes it possible to evaluate the differences between thermodynamic quantities computed exactly and up to second order. Such differences determine the range of validity of the second-order approximation to extended irreversible thermodynamics.
APBSmem: a graphical interface for electrostatic calculations at the membrane.
Callenberg, Keith M; Choudhary, Om P; de Forest, Gabriel L; Gohara, David W; Baker, Nathan A; Grabe, Michael
2010-09-29
Electrostatic forces are one of the primary determinants of molecular interactions. They help guide the folding of proteins, increase the binding of one protein to another and facilitate protein-DNA and protein-ligand binding. A popular method for computing the electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation, and there are several easy-to-use software packages available that solve the PB equation for soluble proteins. Here we present a freely available program, called APBSmem, for carrying out these calculations in the presence of a membrane. The Adaptive Poisson-Boltzmann Solver (APBS) is used as a back-end for solving the PB equation, and a Java-based graphical user interface (GUI) coordinates a set of routines that introduce the influence of the membrane, determine its placement relative to the protein, and set the membrane potential. The software Jmol is embedded in the GUI to visualize the protein inserted in the membrane before the calculation and the electrostatic potential after completing the computation. We expect that the ease with which the GUI allows one to carry out these calculations will make this software a useful resource for experimenters and computational researchers alike. Three examples of membrane protein electrostatic calculations are carried out to illustrate how to use APBSmem and to highlight the different quantities of interest that can be calculated.
Goovaerts, Pierre
2006-01-01
Boundary analysis of cancer maps may highlight areas where causative exposures change through geographic space, the presence of local populations with distinct cancer incidences, or the impact of different cancer control methods. Too often, such analysis ignores the spatial pattern of incidence or mortality rates and overlooks the fact that rates computed from sparsely populated geographic entities can be very unreliable. This paper proposes a new methodology that accounts for the uncertainty and spatial correlation of rate data in the detection of significant edges between adjacent entities or polygons. Poisson kriging is first used to estimate the risk value and the associated standard error within each polygon, accounting for the population size and the risk semivariogram computed from raw rates. The boundary statistic is then defined as half the absolute difference between kriged risks. Its reference distribution, under the null hypothesis of no boundary, is derived through the generation of multiple realizations of the spatial distribution of cancer risk values. This paper presents three types of neutral models generated using methods of increasing complexity: a simple random shuffle of the estimated risk values, a spatial re-ordering of these risks, or p-field simulation that accounts for the population size within each polygon. The approach is illustrated using age-adjusted pancreatic cancer mortality rates for white females in 295 US counties of the Northeast (1970-1994). Simulation studies demonstrate that Poisson kriging yields more accurate estimates of the cancer risk, and of how its value changes between polygons (i.e., the boundary statistic), than the use of raw rates or the local empirical Bayes smoother. When used in conjunction with spatial neutral models generated by p-field simulation, boundary analysis based on Poisson kriging estimates minimizes the proportion of type I errors (i.e., edges wrongly declared significant), while the frequency of these errors is predicted well by the p-value of the statistical test.
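A stripped-down version of the boundary statistic with the simplest neutral model (random shuffling of estimated risks) fits in a few lines; the risk values and adjacency structure below are invented, and the kriging step itself is not reproduced.

```python
# Boundary statistic (half the absolute difference between risks on each
# edge) with a random-shuffle neutral model for the reference distribution.
import numpy as np

rng = np.random.default_rng(5)
risk = rng.gamma(5.0, 2e-5, size=30)           # estimated risk per polygon (toy)
edges = [(i, j) for i in range(30) for j in range(i + 1, 30)
         if rng.random() < 0.15]               # invented adjacency

def boundary_stat(r, e):
    return np.array([0.5 * abs(r[i] - r[j]) for i, j in e])

obs = boundary_stat(risk, edges)
null = np.array([boundary_stat(rng.permutation(risk), edges)
                 for _ in range(5000)])        # shuffle neutral model
pvals = (1 + (null >= obs).sum(axis=0)) / (1 + len(null))
print(f"{(pvals < 0.05).sum()} of {len(edges)} edges significant at 5%")
```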
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, W.J.; Cox, D.D.; Martz, H.F.
1997-12-01
When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs, which are provided.
Anandakrishnan, Ramu; Scogland, Tom R. W.; Fenley, Andrew T.; Gordon, John C.; Feng, Wu-chun; Onufriev, Alexey V.
2010-01-01
Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that, for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method and parallelized on an ATI Radeon 4870 graphics processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040-atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone.
IETI – Isogeometric Tearing and Interconnecting
Kleiss, Stefan K.; Pechstein, Clemens; Jüttler, Bert; Tomar, Satyendra
2012-01-01
Finite Element Tearing and Interconnecting (FETI) methods are a powerful approach to designing solvers for large-scale problems in computational mechanics. The numerical simulation problem is subdivided into a number of independent sub-problems, which are then coupled in appropriate ways. NURBS- (Non-Uniform Rational B-spline) based isogeometric analysis (IGA) applied to complex geometries requires representing the computational domain as a collection of several NURBS geometries. Since there is a natural decomposition of the computational domain into several subdomains, NURBS-based IGA is particularly well suited to FETI methods. This paper proposes the new IsogEometric Tearing and Interconnecting (IETI) method, which combines the advanced solver design of FETI with the exact geometry representation of IGA. We describe the IETI framework for two classes of simple model problems (Poisson and linearized elasticity) and discuss the coupling of the subdomains along interfaces (both for matching interfaces and for interfaces with T-joints, i.e., hanging nodes). Special attention is paid to the construction of a suitable preconditioner for the iterative linear solver used for the interface problem. We report several computational experiments to demonstrate the performance of the proposed IETI method.
Electronic health record analysis via deep poisson factor models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henao, Ricardo; Lu, James T.; Lucas, Joseph E.
2016-01-01
Electronic Health Record (EHR) phenotyping utilizes patient data captured through normal medical practice to identify features that may represent computational medical phenotypes. These features may be used to identify at-risk patients and improve prediction of patient morbidity and mortality. We present a novel deep multi-modality architecture for EHR analysis (applicable to joint analysis of multiple forms of EHR data), based on Poisson Factor Analysis (PFA) modules. Each modality, composed of observed counts, is represented as a Poisson distribution, parameterized in terms of hidden binary units. Information from different modalities is shared via a deep hierarchy of common hidden units. Activation of these binary units occurs with probability characterized by Bernoulli-Poisson link functions, instead of the more traditional logistic link functions. In addition, we demonstrate that PFA modules can be adapted to discriminative modalities. To compute model parameters, we derive efficient Markov chain Monte Carlo (MCMC) inference that scales efficiently, with significant computational gains when compared to related models based on logistic link functions. To explore the utility of these models, we apply them to a subset of patients from the Duke-Durham patient cohort. We identified a cohort of over 12,000 patients with Type 2 Diabetes Mellitus (T2DM), based on diagnosis codes and laboratory tests, out of a patient population of over 240,000. Examining the common hidden units uniting the PFA modules, we identify patient features that represent medical concepts. Experiments indicate that the learned features are better able to predict mortality and morbidity than clinical features identified previously in a large-scale clinical trial.
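The Bernoulli-Poisson link mentioned above is simple to state in isolation: a binary unit is active with probability 1 - exp(-λ), which is exactly the probability that a Poisson(λ) count is nonzero. A tiny numerical check of that equivalence:

```python
# Bernoulli-Poisson link: h = 1{n > 0} with n ~ Poisson(lam),
# so P(h = 1) = 1 - exp(-lam).
import numpy as np

rng = np.random.default_rng(6)
lam = 0.8
n = rng.poisson(lam, size=100000)
print(f"P(h=1) empirical: {(n > 0).mean():.4f}  "
      f"analytic 1-exp(-lam): {1 - np.exp(-lam):.4f}")
```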
An implicit boundary integral method for computing electric potential of macromolecules in solvent
NASA Astrophysics Data System (ADS)
Zhong, Yimin; Ren, Kui; Tsai, Richard
2018-04-01
A numerical method using implicit surface representations is proposed to solve the linearized Poisson-Boltzmann equation that arises in mathematical models for the electrostatics of molecules in solvent. The proposed method uses an implicit boundary integral formulation to derive a linear system defined on Cartesian nodes in a narrow band surrounding the closed surface that separates the molecule and the solvent. The required implicit surface is constructed from the given atomic description of the molecules by a sequence of standard level set algorithms. A fast multipole method is applied to accelerate the solution of the linear system. A few numerical studies involving some standard test cases are presented and compared to other existing results.
Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.
Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence
2012-12-01
A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.
Poisson's ratio of fiber-reinforced composites
NASA Astrophysics Data System (ADS)
Christiansson, Henrik; Helsing, Johan
1996-05-01
Poisson's ratio flow diagrams, that is, the Poisson's ratio versus the fiber fraction, are obtained numerically for hexagonal arrays of elastic circular fibers in an elastic matrix. High numerical accuracy is achieved through the use of an interface integral equation method. Questions concerning fixed point theorems and the validity of existing asymptotic relations are investigated and partially resolved. Our findings for the transverse effective Poisson's ratio, together with earlier results for random systems by other authors, make it possible to formulate a general statement for Poisson's ratio flow diagrams: For composites with circular fibers and where the phase Poisson's ratios are equal to 1/3, the system with the lowest stiffness ratio has the highest Poisson's ratio. For other choices of the elastic moduli for the phases, no simple statement can be made.
Computational wave dynamics for innovative design of coastal structures
GOTOH, Hitoshi; OKAYASU, Akio
2017-01-01
For innovative designs of coastal structures, Numerical Wave Flumes (NWFs), which are solvers of the Navier-Stokes equations for free-surface flows, are key tools. In this article, various methods and techniques for NWFs are reviewed. In the first half, key techniques of NWFs, namely interface capturing (MAC, VOF, C-CUP), and the significance of NWFs in comparison with conventional wave models are described. In the latter part of the article, recent improvements of the particle method, one of the cores of NWFs, are presented. Methods for attenuating unphysical pressure fluctuation and improving accuracy, such as the CMPS method for momentum conservation, the Higher-order Source of the Poisson Pressure Equation (PPE), the Higher-order Laplacian, the Error-Compensating Source in the PPE, and Gradient Correction for ensuring Taylor-series consistency, are reviewed briefly. Finally, the latest frontier of accurate particle methods is described, including Dynamic Stabilization, which provides the minimum-required artificial repulsive force to improve the stability of the computation, and Space Potential Particles, which describe the exact free-surface boundary condition.
Radial mixing in turbomachines
NASA Astrophysics Data System (ADS)
Segaert, P.; Hirsch, Ch.; Deruyck, J.
1991-03-01
A method for computing the effects of radial mixing in a turbomachinery blade row has been developed. The method fits in the framework of a quasi-3D flow computation and hence is applied in a corrective fashion to through-flow distributions. It takes into account both secondary flows and turbulent diffusion as possible sources of mixing. Secondary flow velocities determine the magnitude of the convection terms in the energy redistribution equation, while a turbulent diffusion coefficient determines the magnitude of the diffusion terms. Secondary flows are computed by solving a Poisson equation for a secondary streamfunction on a transversal S3-plane, whereby the right-hand-side axial vorticity is composed of different contributions, each associated with a particular flow region: inviscid core flow, end-wall boundary layers, profile boundary layers, and wakes. The turbulent mixing coefficient is estimated by a semi-empirical correlation. Secondary flow theory is applied to the VUB cascade test case, and comparisons are made between the computational results and the extensive experimental data available for this test case. This comparison shows that the secondary flow computations yield reliable predictions of the secondary flow pattern, both qualitatively and quantitatively, taking into account the limitations of the model. However, the computations show that the use of a uniform mixing coefficient has to be replaced by a more sophisticated approach.
Fractional Poisson--a simple dose-response model for human norovirus.
Messner, Michael J; Berger, Philip; Nappier, Sharon P
2014-10-01
This study utilizes old and new Norovirus (NoV) human challenge data to model the dose-response relationship for human NoV infection. The combined data set is used to update estimates from a previously published beta-Poisson dose-response model that includes parameters for virus aggregation and for a beta-distribution that describes variable susceptibility among hosts. The quality of the beta-Poisson model is examined and a simpler model is proposed. The new model (fractional Poisson) characterizes hosts as either perfectly susceptible or perfectly immune, requiring a single parameter (the fraction of perfectly susceptible hosts) in place of the two-parameter beta-distribution. A second parameter is included to account for virus aggregation in the same fashion as it is added to the beta-Poisson model. Infection probability is simply the product of the probability of nonzero exposure (at least one virus or aggregate is ingested) and the fraction of susceptible hosts. The model is computationally simple and appears to be well suited to the data from the NoV human challenge studies. The model's deviance is similar to that of the beta-Poisson, but with one parameter, rather than two. As a result, the Akaike information criterion favors the fractional Poisson over the beta-Poisson model. At low, environmentally relevant exposure levels (<100), estimation error is small for the fractional Poisson model; however, caution is advised because no subjects were challenged at such a low dose. New low-dose data would be of great value to further clarify the NoV dose-response relationship and to support improved risk assessment for environmentally relevant exposures.
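The fractional Poisson idea translates directly into code: infection probability is the fraction of fully susceptible hosts times the probability of ingesting at least one organism under a Poisson-distributed dose. The aggregation handling below (mean dose divided by a mean aggregate size) is a simplification for illustration, not necessarily the paper's exact parameterization, and the susceptible fraction is an assumed value.

```python
# Fractional Poisson dose-response sketch:
# P(infection) = p_susceptible * P(at least one aggregate ingested).
import numpy as np

def p_infection(dose, p_susceptible, aggregate_size=1.0):
    # dose / aggregate_size = mean number of aggregates ingested (assumed)
    return p_susceptible * (1.0 - np.exp(-np.asarray(dose) / aggregate_size))

for d in (0.1, 1.0, 10.0, 100.0):
    print(f"dose {d:6.1f}: P(infection) = {p_infection(d, 0.72):.3f}")
# At high doses the curve plateaus at p_susceptible, the hallmark of the model.
```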
NASA Technical Reports Server (NTRS)
Sorenson, R. L.; Steger, J. L.
1983-01-01
An algorithm for generating computational grids about arbitrary three-dimensional bodies is developed. The elliptic partial differential equation (PDE) approach developed by Steger and Sorenson and used in the NASA computer program GRAPE is extended from two to three dimensions. Forcing functions which are found automatically by the algorithm give the user the ability to control mesh cell size and skewness at boundary surfaces. This algorithm, as is typical of PDE grid generators, gives smooth grid lines and spacing in the interior of the grid. The method is applied to a rectilinear wind-tunnel case and to two body shapes in spherical coordinates.
Zhou, Y C; Lu, Benzhuo; Huber, Gary A; Holst, Michael J; McCammon, J Andrew
2008-01-17
The Poisson-Nernst-Planck (PNP) equation provides a continuum description of electrostatic-driven diffusion and is used here to model the diffusion and reaction of acetylcholine (ACh) with acetylcholinesterase (AChE) enzymes. This study focuses on the effects of ion and substrate concentrations on the reaction rate and rate coefficient. To this end, the PNP equations are numerically solved with a hybrid finite element and boundary element method at a wide range of ion and substrate concentrations, and the results are compared with the partially coupled Smoluchowski-Poisson-Boltzmann model. The reaction rate is found to depend strongly on the concentrations of both the substrate and ions; this is explained by the competition between the intersubstrate repulsion and the ionic screening effects. The reaction rate coefficient is independent of the substrate concentration only at very high ion concentrations, whereas at low ion concentrations the behavior of the rate depends strongly on the substrate concentration. Moreover, at physiological ion concentrations, variations in substrate concentration significantly affect the transient behavior of the reaction. Our results offer a reliable estimate of reaction rates at various conditions and imply that the concentrations of charged substrates must be coupled with the electrostatic computation to provide a more realistic description of neurotransmission and other electrodiffusion and reaction processes.
Poisson denoising on the sphere
NASA Astrophysics Data System (ADS)
Schmitt, J.; Starck, J. L.; Fadili, J.; Grenier, I.; Casandjian, J. M.
2009-08-01
In the scope of the Fermi mission, Poisson noise removal should improve data quality and make source detection easier. This paper presents a method for Poisson data denoising on the sphere, called the Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS). This method is based on a Variance Stabilizing Transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has an (asymptotically) constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. Thus, MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary (wavelets, curvelets, ridgelets...), and then applying a VST to the coefficients in order to obtain quasi-Gaussian stabilized coefficients. In the present article, the multi-scale transform used is the Isotropic Undecimated Wavelet Transform. Then, hypothesis tests are made to detect significant coefficients, and the denoised image is reconstructed with an iterative method based on Hybrid Steepest Descent (HSD). The method is tested on simulated Fermi data.
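The exact VST in MS-VSTS is coupled to the wavelet bands, but the classical Anscombe transform conveys the core idea of variance stabilization; a quick numerical check, assuming nothing beyond NumPy:

```python
import numpy as np

def anscombe(x):
    # Classical Anscombe VST: Poisson counts -> approximately Gaussian samples
    # with unit variance (the approximation improves as the intensity grows).
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

rng = np.random.default_rng(0)
for lam in (2, 10, 50):
    counts = rng.poisson(lam, size=200_000)
    # raw variance grows with lam; stabilized variance stays near 1
    print(lam, round(counts.var(), 2), round(anscombe(counts).var(), 3))
```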
Estimating random errors due to shot noise in backscatter lidar observations.
Liu, Zhaoyan; Hunt, William; Vaughan, Mark; Hostetler, Chris; McGill, Matthew; Powell, Kathleen; Winker, David; Hu, Yongxiang
2006-06-20
We discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations acquired at 532 nm with a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root mean square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF, uncertainties can be reliably calculated from or for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations lidar and tested using data from the Lidar In-space Technology Experiment.
Estimating Random Errors Due to Shot Noise in Backscatter Lidar Observations
NASA Technical Reports Server (NTRS)
Liu, Zhaoyan; Hunt, William; Vaughan, Mark A.; Hostetler, Chris A.; McGill, Matthew J.; Powell, Kathy; Winker, David M.; Hu, Yongxiang
2006-01-01
In this paper, we discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations acquired at 532 nm with a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root-mean-square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF uncertainties can be reliably calculated from/for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar and tested using data from the Lidar In-space Technology Experiment (LITE).
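A minimal sketch of the NSF idea from the two records above: for repeated measurements of a nominally constant signal, the NSF is the RMS noise divided by the square root of the mean, so a pure Poisson signal gives NSF close to 1 and analog gain inflates it. The gain value used here is illustrative only.

```python
import numpy as np

def noise_scale_factor(samples):
    # NSF = RMS(noise) / sqrt(mean signal), estimated from repeated
    # measurements of a nominally constant signal.
    samples = np.asarray(samples, dtype=float)
    return samples.std(ddof=1) / np.sqrt(samples.mean())

rng = np.random.default_rng(1)
counts = rng.poisson(400.0, size=10_000)   # photon counting: NSF ~ 1
analog = 25.0 * counts                     # a gain of 25 scales the NSF by 5
print(noise_scale_factor(counts), noise_scale_factor(analog))
```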
Numerical simulations of microwave heating of liquids: enhancements using Krylov subspace methods
NASA Astrophysics Data System (ADS)
Lollchund, M. R.; Dookhitram, K.; Sunhaloo, M. S.; Boojhawon, R.
2013-04-01
In this paper, we compare the performance of three iterative solvers for large sparse linear systems arising in the numerical computation of the incompressible Navier-Stokes (NS) equations. These equations are employed mainly in the simulation of microwave heating of liquids. The emphasis of this work is on the application of Krylov projection techniques such as the Generalized Minimal Residual (GMRES) method to solve the pressure Poisson equations that result from discretisation of the NS equations. The performance of the GMRES method is compared with the traditional Gauss-Seidel (GS) and point successive over-relaxation (PSOR) techniques through their application to simulate the dynamics of water housed inside a vertical cylindrical vessel which is subjected to microwave radiation. It is found that as the mesh size increases, GMRES gives the fastest convergence rate in terms of computational time and number of iterations.
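A self-contained comparison in the same spirit, on the standard 5-point pressure Poisson matrix, using SciPy. Note that recent SciPy spells the GMRES tolerance rtol (older releases call it tol); in practice a preconditioner would normally be supplied.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 64                                           # interior points per direction
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()      # 5-point Poisson matrix
b = np.ones(n * n)

# Krylov solve (GMRES), counting inner iterations through the callback
iters = []
x, info = spla.gmres(A, b, rtol=1e-8, restart=50,
                     callback=lambda rk: iters.append(rk),
                     callback_type="pr_norm")
print("GMRES converged:", info == 0, "iterations:", len(iters))

# Gauss-Seidel sweep: solve (D + L) x_new = b - U x via a triangular solve
DL = sp.tril(A, 0).tocsr()
U = sp.triu(A, 1).tocsr()
x_gs = np.zeros_like(b)
for _ in range(200):
    x_gs = spla.spsolve_triangular(DL, b - U @ x_gs, lower=True)
print("GS residual after 200 sweeps:", np.linalg.norm(b - A @ x_gs))
```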
Ihmsen, Markus; Cornelis, Jens; Solenthaler, Barbara; Horvath, Christopher; Teschner, Matthias
2013-07-25
We propose a novel formulation of the projection method for Smoothed Particle Hydrodynamics (SPH). We combine a symmetric SPH pressure force and an SPH discretization of the continuity equation to obtain a discretized form of the pressure Poisson equation (PPE). In contrast to previous projection schemes, our system does consider the actual computation of the pressure force. This incorporation improves the convergence rate of the solver. Furthermore, we propose to compute the density deviation based on velocities instead of positions as this formulation improves the robustness of the time-integration scheme. We show that our novel formulation outperforms previous projection schemes and state-of-the-art SPH methods. Large time steps and small density deviations of down to 0.01% can be handled in typical scenarios. The practical relevance of the approach is illustrated by scenarios with up to 40 million SPH particles.
Ihmsen, Markus; Cornelis, Jens; Solenthaler, Barbara; Horvath, Christopher; Teschner, Matthias
2014-03-01
We propose a novel formulation of the projection method for Smoothed Particle Hydrodynamics (SPH). We combine a symmetric SPH pressure force and an SPH discretization of the continuity equation to obtain a discretized form of the pressure Poisson equation (PPE). In contrast to previous projection schemes, our system does consider the actual computation of the pressure force. This incorporation improves the convergence rate of the solver. Furthermore, we propose to compute the density deviation based on velocities instead of positions as this formulation improves the robustness of the time-integration scheme. We show that our novel formulation outperforms previous projection schemes and state-of-the-art SPH methods. Large time steps and small density deviations of down to 0.01 percent can be handled in typical scenarios. The practical relevance of the approach is illustrated by scenarios with up to 40 million SPH particles.
Poisson process stimulation of an excitable membrane cable model.
Goldfinger, M D
1986-01-01
The convergence of multiple inputs within a single-neuronal substrate is a common design feature of both peripheral and central nervous systems. Typically, the result of such convergence impinges upon an intracellularly contiguous axon, where it is encoded into a train of action potentials. The simplest representation of the result of convergence of multiple inputs is a Poisson process; a general representation of axonal excitability is the Hodgkin-Huxley/cable theory formalism. The present work addressed multiple input convergence upon an axon by applying Poisson process stimulation to the Hodgkin-Huxley axonal cable. The results showed that both absolute and relative refractory periods yielded in the axonal output a random but non-Poisson process. While smaller amplitude stimuli elicited a type of short-interval conditioning, larger amplitude stimuli elicited impulse trains approaching Poisson criteria except for the effects of refractoriness. These results were obtained for stimulus trains consisting of pulses of constant amplitude and constant or variable durations. By contrast, with or without stimulus pulse shape variability, the post-impulse conditional probability for impulse initiation in the steady-state was a Poisson-like process. For stimulus variability consisting of randomly smaller amplitudes or randomly longer durations, mean impulse frequency was attenuated or potentiated, respectively. Limitations and implications of these computations are discussed. PMID:3730505
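Generating the Poisson-process stimulus used to drive such a cable model reduces to drawing exponential inter-pulse intervals; a minimal sketch, where the rate and duration are illustrative rather than taken from the paper:

```python
import numpy as np

def poisson_stimulus(rate_hz, t_max_s, rng=None):
    # Onset times of a homogeneous Poisson pulse train: exponential
    # inter-pulse intervals accumulated until t_max_s is reached.
    rng = rng or np.random.default_rng()
    # draw ~2x the expected number of gaps so the cumulative sum covers t_max_s
    gaps = rng.exponential(1.0 / rate_hz, size=int(2 * rate_hz * t_max_s) + 10)
    onsets = np.cumsum(gaps)
    return onsets[onsets < t_max_s]

onsets = poisson_stimulus(rate_hz=100.0, t_max_s=1.0)
print(len(onsets), "pulses; mean interval =", np.diff(onsets).mean())
```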
On the validity of the Poisson assumption in sampling nanometer-sized aerosols
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damit, Brian E; Wu, Dr. Chang-Yu; Cheng, Mengdawn
2014-01-01
A Poisson process is traditionally believed to apply to the sampling of aerosols. For a constant aerosol concentration, it is assumed that a Poisson process describes the fluctuation in the measured concentration because aerosols are stochastically distributed in space. Recent studies, however, have shown that sampling of micrometer-sized aerosols has non-Poissonian behavior with positive correlations. The validity of the Poisson assumption for nanometer-sized aerosols has not been examined and thus was tested in this study. Its validity was tested for four particle sizes - 10 nm, 25 nm, 50 nm and 100 nm - by sampling from indoor air with a DMA-CPC setup to obtain a time series of particle counts. Five metrics were calculated from the data: pair-correlation function (PCF), time-averaged PCF, coefficient of variation, probability of measuring a concentration at least 25% greater than average, and posterior distributions from Bayesian inference. To identify departures from Poissonian behavior, these metrics were also calculated for 1,000 computer-generated Poisson time series with the same mean as the experimental data. For nearly all comparisons, the experimental data fell within the range of 80% of the Poisson-simulation values. Essentially, the metrics for the experimental data were indistinguishable from a simulated Poisson process. The greater influence of Brownian motion for nanometer-sized aerosols may explain the Poissonian behavior observed for smaller aerosols. Although the Poisson assumption was found to be valid in this study, it must be carefully applied as the results here do not definitively prove applicability in all sampling situations.
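Two of the five metrics above are easy to reproduce for any count time series. This sketch compares a series (here synthetic) against the central 80% band of simulated Poisson series with the same mean, mirroring the study's test:

```python
import numpy as np

rng = np.random.default_rng(2)
counts = rng.poisson(12.0, size=5000)        # stand-in for a measured count series

# coefficient of variation and P(count > 1.25 * mean), two metrics from the study
cv = counts.std(ddof=1) / counts.mean()
p_exceed = (counts > 1.25 * counts.mean()).mean()

# reference band from 1,000 simulated Poisson series with the same mean
sims = rng.poisson(counts.mean(), size=(1000, counts.size))
cv_sims = sims.std(ddof=1, axis=1) / sims.mean(axis=1)
lo, hi = np.percentile(cv_sims, [10, 90])    # central 80% band
print(f"CV = {cv:.4f}, Poisson band = [{lo:.4f}, {hi:.4f}], "
      f"P(>1.25*mean) = {p_exceed:.3f}")
```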
NASA Astrophysics Data System (ADS)
Coons, Marc P.; Herbert, John M.
2018-06-01
Widely used continuum solvation models for electronic structure calculations, including popular polarizable continuum models (PCMs), usually assume that the continuum environment is isotropic and characterized by a scalar dielectric constant, ɛ. This assumption is invalid at a liquid/vapor interface or any other anisotropic solvation environment. To address such scenarios, we introduce a more general formalism based on solution of Poisson's equation for a spatially varying dielectric function, ɛ(r). Inspired by nonequilibrium versions of PCMs, we develop a similar formalism within the context of Poisson's equation that includes the out-of-equilibrium dielectric response that accompanies a sudden change in the electron density of the solute, such as that which occurs in a vertical ionization process. A multigrid solver for Poisson's equation is developed to accommodate the large spatial grids necessary to discretize the three-dimensional electron density. We apply this methodology to compute vertical ionization energies (VIEs) of various solutes at the air/water interface and compare them to VIEs computed in bulk water, finding only very small differences between the two environments. VIEs computed using approximately two solvation shells of explicit water molecules are in excellent agreement with experiment for F-(aq), Cl-(aq), neat liquid water, and the hydrated electron, although errors for Li+(aq) and Na+(aq) are somewhat larger. Nonequilibrium corrections modify VIEs by up to 1.2 eV, relative to models based only on the static dielectric constant, and are therefore essential to obtain agreement with experiment. Given that the experiments (liquid microjet photoelectron spectroscopy) may be more sensitive to solutes situated at the air/water interface as compared to those in bulk water, our calculations provide some confidence that these experiments can indeed be interpreted as measurements of VIEs in bulk water.
Coons, Marc P; Herbert, John M
2018-06-14
Widely used continuum solvation models for electronic structure calculations, including popular polarizable continuum models (PCMs), usually assume that the continuum environment is isotropic and characterized by a scalar dielectric constant, ε. This assumption is invalid at a liquid/vapor interface or any other anisotropic solvation environment. To address such scenarios, we introduce a more general formalism based on solution of Poisson's equation for a spatially varying dielectric function, ε(r). Inspired by nonequilibrium versions of PCMs, we develop a similar formalism within the context of Poisson's equation that includes the out-of-equilibrium dielectric response that accompanies a sudden change in the electron density of the solute, such as that which occurs in a vertical ionization process. A multigrid solver for Poisson's equation is developed to accommodate the large spatial grids necessary to discretize the three-dimensional electron density. We apply this methodology to compute vertical ionization energies (VIEs) of various solutes at the air/water interface and compare them to VIEs computed in bulk water, finding only very small differences between the two environments. VIEs computed using approximately two solvation shells of explicit water molecules are in excellent agreement with experiment for F-(aq), Cl-(aq), neat liquid water, and the hydrated electron, although errors for Li+(aq) and Na+(aq) are somewhat larger. Nonequilibrium corrections modify VIEs by up to 1.2 eV, relative to models based only on the static dielectric constant, and are therefore essential to obtain agreement with experiment. Given that the experiments (liquid microjet photoelectron spectroscopy) may be more sensitive to solutes situated at the air/water interface as compared to those in bulk water, our calculations provide some confidence that these experiments can indeed be interpreted as measurements of VIEs in bulk water.
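The solver in these two records is a 3D multigrid code; a minimal 1D analogue of the variable-dielectric Poisson problem, d/dx(ε(x) dφ/dx) = -ρ, shows the conservative face-centered discretization. All grid sizes and material values below are illustrative.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson_variable_eps_1d(eps, rho, h):
    # Solve d/dx( eps(x) dphi/dx ) = -rho with phi = 0 at both ends.
    # eps is sampled at the n+1 cell faces, rho at the n interior nodes;
    # conservative flux differencing yields a tridiagonal SPD system.
    n = rho.size
    lower = eps[1:-1] / h**2                      # coupling to the left neighbor
    upper = eps[1:-1] / h**2                      # coupling to the right neighbor
    diag = -(eps[:-1] + eps[1:]) / h**2
    A = sp.diags([lower, diag, upper], [-1, 0, 1], shape=(n, n), format="csc")
    return spla.spsolve(A, -rho)

n, L = 200, 1.0
h = L / (n + 1)
x_faces = np.linspace(0.5 * h, L - 0.5 * h, n + 1)
eps = np.where(x_faces < 0.5, 2.0, 80.0)          # sharp low/high-dielectric step
phi = poisson_variable_eps_1d(eps, np.ones(n), h)
print("max potential:", phi.max())
```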
Simoneau, Gabrielle; Levis, Brooke; Cuijpers, Pim; Ioannidis, John P A; Patten, Scott B; Shrier, Ian; Bombardier, Charles H; de Lima Osório, Flavia; Fann, Jesse R; Gjerdingen, Dwenda; Lamers, Femke; Lotrakul, Manote; Löwe, Bernd; Shaaban, Juwita; Stafford, Lesley; van Weert, Henk C P M; Whooley, Mary A; Wittkampf, Karin A; Yeung, Albert S; Thombs, Brett D; Benedetti, Andrea
2017-11-01
Individual patient data (IPD) meta-analyses are increasingly common in the literature. In the context of estimating the diagnostic accuracy of ordinal or semi-continuous scale tests, sensitivity and specificity are often reported for a given threshold or a small set of thresholds, and a meta-analysis is conducted via a bivariate approach to account for their correlation. When IPD are available, sensitivity and specificity can be pooled for every possible threshold. Our objective was to compare the bivariate approach, which can be applied separately at every threshold, to two multivariate methods: the ordinal multivariate random-effects model and the Poisson correlated gamma-frailty model. Our comparison was empirical, using IPD from 13 studies that evaluated the diagnostic accuracy of the 9-item Patient Health Questionnaire depression screening tool, and included simulations. The empirical comparison showed that the implementation of the two multivariate methods is more laborious in terms of computational time and sensitivity to user-supplied values compared to the bivariate approach. Simulations showed that ignoring the within-study correlation of sensitivity and specificity across thresholds did not worsen inferences with the bivariate approach compared to the Poisson model. The ordinal approach was not suitable for simulations because the model was highly sensitive to user-supplied starting values. We tentatively recommend the bivariate approach rather than more complex multivariate methods for IPD diagnostic accuracy meta-analyses of ordinal scale tests, although the limited type of diagnostic data considered in the simulation study restricts the generalization of our findings. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Exact solution for the Poisson field in a semi-infinite strip.
Cohen, Yossi; Rothman, Daniel H
2017-04-01
The Poisson equation is associated with many physical processes. Yet exact analytic solutions for the two-dimensional Poisson field are scarce. Here we derive an analytic solution for the Poisson equation with constant forcing in a semi-infinite strip. We provide a method that can be used to solve the field in other intricate geometries. We show that the Poisson flux reveals an inverse square-root singularity at the tip of a slit, and identify a characteristic length scale at which a small perturbation, in the form of a new slit, is screened by the field. We suggest that this length scale expresses itself as a characteristic spacing between tips in real Poisson networks that grow in response to fluxes at tips.
Adaptive and iterative methods for simulations of nanopores with the PNP-Stokes equations
NASA Astrophysics Data System (ADS)
Mitscha-Baude, Gregor; Buttinger-Kreuzhuber, Andreas; Tulzer, Gerhard; Heitzinger, Clemens
2017-06-01
We present a 3D finite element solver for the nonlinear Poisson-Nernst-Planck (PNP) equations for electrodiffusion, coupled to the Stokes system of fluid dynamics. The model serves as a building block for the simulation of macromolecule dynamics inside nanopore sensors. The source code is released online at http://github.com/mitschabaude/nanopores. We add to existing numerical approaches by deploying goal-oriented adaptive mesh refinement. To reduce the computation overhead of mesh adaptivity, our error estimator uses the much cheaper Poisson-Boltzmann equation as a simplified model, which is justified on heuristic grounds but shown to work well in practice. To address the nonlinearity in the full PNP-Stokes system, three different linearization schemes are proposed and investigated, with two segregated iterative approaches both outperforming a naive application of Newton's method. Numerical experiments are reported on a real-world nanopore sensor geometry. We also investigate two different models for the interaction of target molecules with the nanopore sensor through the PNP-Stokes equations. In one model, the molecule is of finite size and is explicitly built into the geometry; while in the other, the molecule is located at a single point and only modeled implicitly - after solution of the system - which is computationally favorable. We compare the resulting force profiles of the electric and velocity fields acting on the molecule, and conclude that the point-size model fails to capture important physical effects such as the dependence of charge selectivity of the sensor on the molecule radius.
Poisson Spot with Magnetic Levitation
ERIC Educational Resources Information Center
Hoover, Matthew; Everhart, Michael; D'Arruda, Jose
2010-01-01
In this paper we describe a unique method for obtaining the famous Poisson spot without adding obstacles to the light path, which could interfere with the effect. A Poisson spot is the interference effect from parallel rays of light diffracting around a solid spherical object, creating a bright spot in the center of the shadow.
Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fijany, A.; Milman, M.; Redding, D.
1994-12-31
In this paper massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, due to the large size of the system and the high sampling rate requirement, the implementation of this control algorithm poses a computationally challenging problem since it demands a sustained computational throughput of the order of 10 GFlops. They develop a novel algorithm, designated as the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other fast Poisson solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.
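For the regular-domain discrete Poisson equation mentioned above, FFT-type fast solvers are the standard baseline; a DST-I sketch for homogeneous Dirichlet boundaries follows (this illustrates the class of fast Poisson solvers being compared against, not the Fast Invariant Imbedding algorithm itself):

```python
import numpy as np
from scipy.fft import dstn, idstn

def fast_poisson_dirichlet(f, h):
    # DST-I (FFT-based) solve of the 5-point discretization of -Laplace(u) = f
    # with homogeneous Dirichlet boundaries; O(N log N) work on an n x n grid.
    n = f.shape[0]
    lam = 2.0 - 2.0 * np.cos(np.pi * np.arange(1, n + 1) / (n + 1))
    denom = (lam[:, None] + lam[None, :]) / h**2   # eigenvalues of -Laplacian
    f_hat = dstn(f, type=1, norm="ortho")          # expand in sine modes
    return idstn(f_hat / denom, type=1, norm="ortho")

n = 127
h = 1.0 / (n + 1)
u = fast_poisson_dirichlet(np.ones((n, n)), h)
print(u[n // 2, n // 2])   # ~0.0737, the known center value on the unit square
```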
A Conway-Maxwell-Poisson (CMP) model to address data dispersion on positron emission tomography.
Santarelli, Maria Filomena; Della Latta, Daniele; Scipioni, Michele; Positano, Vincenzo; Landini, Luigi
2016-10-01
Positron emission tomography (PET) in medicine exploits the properties of positron-emitting unstable nuclei. The pairs of γ-rays emitted after annihilation are revealed by coincidence detectors and stored as projections in a sinogram. It is well known that radioactive decay follows a Poisson distribution; however, deviation from Poisson statistics occurs in PET projection data prior to reconstruction due to physical effects, measurement errors, and corrections for dead time, scatter, and random coincidences. A model that describes the statistical behavior of measured and corrected PET data can aid in understanding the statistical nature of the data: it is a prerequisite to develop efficient reconstruction and processing methods and to reduce noise. The deviation from Poisson statistics in PET data can be described by the Conway-Maxwell-Poisson (CMP) distribution model, which is characterized by the centring parameter λ and the dispersion parameter ν, the latter quantifying the deviation from a Poisson distribution model. In particular, the parameter ν allows quantifying over-dispersion (ν<1) or under-dispersion (ν>1) of data. A simple and efficient method for λ and ν parameter estimation is introduced and assessed using Monte Carlo simulation for a wide range of activity values. The application of the method to simulated and experimental PET phantom data demonstrated that the CMP distribution parameters can detect deviation from the Poisson distribution both in raw and corrected PET data. It may be usefully implemented in image reconstruction algorithms and quantitative PET data analysis, especially in low-count emission data, as in dynamic PET data, where the method demonstrated the best accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
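A minimal sketch of the CMP probability mass function, with the normalizing constant approximated by a truncated series; ν = 1 recovers the Poisson case, which the check against scipy.stats confirms:

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import poisson

def cmp_pmf(k, lam, nu, kmax=500):
    # Conway-Maxwell-Poisson pmf: P(K=k) proportional to lam^k / (k!)^nu.
    # nu = 1 is Poisson; nu < 1 over-dispersion, nu > 1 under-dispersion.
    ks = np.arange(kmax + 1)
    log_terms = ks * np.log(lam) - nu * gammaln(ks + 1.0)
    log_z = np.logaddexp.reduce(log_terms)         # stable normalizing constant
    return np.exp(k * np.log(lam) - nu * gammaln(k + 1.0) - log_z)

print(cmp_pmf(3, lam=4.0, nu=1.0), poisson.pmf(3, 4.0))  # the two should match
```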
Computations of Wall Distances Based on Differential Equations
NASA Technical Reports Server (NTRS)
Tucker, Paul G.; Rumsey, Chris L.; Spalart, Philippe R.; Bartels, Robert E.; Biedron, Robert T.
2004-01-01
The use of differential equations such as the Eikonal, Hamilton-Jacobi and Poisson equations for the economical calculation of the nearest wall distance d, which is needed by some turbulence models, is explored. Modifications that could palliate some turbulence-modeling anomalies are also discussed. Economy is of special value for deforming/adaptive grid problems, for which, ideally, d is repeatedly computed. It is shown that the Eikonal and Hamilton-Jacobi equations can be easy to implement when written in implicit (or iterated) advection and advection-diffusion equation analogous forms, respectively. These, like the Poisson Laplacian term, commonly occur in CFD solvers, allowing the re-use of efficient algorithms and code components. The use of the NASA CFL3D CFD program to solve the implicit Eikonal and Hamilton-Jacobi equations is explored. The re-formulated d equations are easy to implement and are found to have robust convergence. For accurate Eikonal solutions, upwind metric differences are required. The Poisson approach is also found effective, and easiest to implement. Modified distances are not found to affect global outputs such as lift and drag significantly, at least in common situations such as airfoil flows.
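The commonly quoted Poisson wall-distance reconstruction solves Laplace(phi) = -1 with phi = 0 on walls and recovers d = -|grad phi| + sqrt(|grad phi|^2 + 2 phi); for a plane channel this reconstruction is exact, as the sketch verifies:

```python
import numpy as np

# 1D channel of height 1 with walls at y = 0 and y = 1: phi'' = -1 with
# phi = 0 at the walls has the exact solution phi = y(1-y)/2.
y = np.linspace(0.0, 1.0, 101)
phi = 0.5 * y * (1.0 - y)
grad = np.gradient(phi, y)
d = -np.abs(grad) + np.sqrt(grad**2 + 2.0 * phi)
# for this geometry the reconstruction equals the true distance min(y, 1-y)
print(np.max(np.abs(d - np.minimum(y, 1.0 - y))))   # ~ machine precision
```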
Ii, Satoshi; Adib, Mohd Azrul Hisham Mohd; Watanabe, Yoshiyuki; Wada, Shigeo
2018-01-01
This paper presents a novel data assimilation method for patient-specific blood flow analysis based on feedback control theory, called the physically consistent feedback-control-based data assimilation (PFC-DA) method. In the PFC-DA method, the signal, which is the residual error term of the velocity when comparing the numerical and reference measurement data, is cast as a source term in a Poisson equation for the scalar potential field that induces flow in a closed system. The pressure values at the inlet and outlet boundaries are recursively calculated from this scalar potential field. Hence, the flow field is physically consistent because it is driven by the calculated inlet and outlet pressures, without any artificial body forces. Compared with existing variational approaches, although the PFC-DA method does not guarantee an optimal solution, only one additional Poisson equation for the scalar potential field is required, providing a remarkable improvement at a small additional computational cost per iteration. Through numerical examples for 2D and 3D exact flow fields, with both noise-free and noisy reference data, as well as a blood flow analysis on a cerebral aneurysm using actual patient data, the robustness and accuracy of this approach are shown. Moreover, the feasibility of a patient-specific practical blood flow analysis is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
On some Aitken-like acceleration of the Schwarz method
NASA Astrophysics Data System (ADS)
Garbey, M.; Tromeur-Dervout, D.
2002-12-01
In this paper we present a family of domain decomposition methods based on Aitken-like acceleration of the Schwarz method, seen as an iterative procedure with a linear rate of convergence. We first present the so-called Aitken-Schwarz procedure for linear differential operators. The solver can be a direct solver when applied to the Helmholtz problem with a five-point finite difference scheme on regular grids. We then introduce the Steffensen-Schwarz variant, which is an iterative domain decomposition solver that can be applied to linear and nonlinear problems. We show that these solvers have reasonable numerical efficiency compared to classical fast solvers for the Poisson problem or to multigrid for more general linear and nonlinear elliptic problems. However, the salient feature of our method is that our algorithm has high tolerance to slow networks in the context of distributed parallel computing and is attractive, generally speaking, for computer architectures whose performance is limited by memory bandwidth rather than by the flop performance of the CPU. This is nowadays the case for most parallel computers using the RISC processor architecture. We will illustrate this highly desirable property of our algorithm with large-scale computing experiments.
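The acceleration principle behind the Aitken-Schwarz procedure is easiest to see in its scalar form, Aitken's delta-squared extrapolation of a linearly converging sequence; the fixed-point iteration below is just a stand-in for Schwarz interface iterates:

```python
import numpy as np

def aitken_delta2(x):
    # Aitken's delta-squared extrapolation of a linearly converging sequence:
    # x_hat_k = x_k - (dx_k)^2 / (x_{k+2} - 2 x_{k+1} + x_k)
    x = np.asarray(x, dtype=float)
    d1 = x[1:-1] - x[:-2]
    d2 = x[2:] - 2.0 * x[1:-1] + x[:-2]
    return x[:-2] - d1**2 / d2

# x_{k+1} = cos(x_k) converges linearly to ~0.739085
x = [1.0]
for _ in range(8):
    x.append(np.cos(x[-1]))
print(x[-1], aitken_delta2(x)[-1])   # the extrapolated value is far more accurate
```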
Calculation of protein-ligand binding affinities.
Gilson, Michael K; Zhou, Huan-Xiang
2007-01-01
Accurate methods of computing the affinity of a small molecule with a protein are needed to speed the discovery of new medications and biological probes. This paper reviews physics-based models of binding, beginning with a summary of the changes in potential energy, solvation energy, and configurational entropy that influence affinity, and a theoretical overview to frame the discussion of specific computational approaches. Important advances are reported in modeling protein-ligand energetics, such as the incorporation of electronic polarization and the use of quantum mechanical methods. Recent calculations suggest that changes in configurational entropy strongly oppose binding and must be included if accurate affinities are to be obtained. The linear interaction energy (LIE) and molecular mechanics Poisson-Boltzmann surface area (MM-PBSA) methods are analyzed, as are free energy pathway methods, which show promise and may be ready for more extensive testing. Ultimately, major improvements in modeling accuracy will likely require advances on multiple fronts, as well as continued validation against experiment.
NASA Astrophysics Data System (ADS)
Kashefi, Ali; Staples, Anne
2016-11-01
Coarse grid projection (CGP) methodology is a novel multigrid method for systems involving decoupled nonlinear evolution equations and linear elliptic equations. The nonlinear equations are solved on a fine grid and the linear equations are solved on a corresponding coarsened grid. Mapping functions transfer data between the two grids. Here we propose a version of CGP for incompressible flow computations using incremental pressure correction methods, called IFEi-CGP (implicit-time-integration, finite-element, incremental coarse grid projection). Incremental pressure correction schemes solve Poisson's equation for an intermediate variable and not the pressure itself. This fact contributes to IFEi-CGP's efficiency in two ways. First, IFEi-CGP preserves the velocity field accuracy even for a high level of pressure field grid coarsening and thus significant speedup is achieved. Second, because incremental schemes reduce the errors that arise from boundaries with artificial homogeneous Neumann conditions, CGP generates undamped flows for simulations with velocity Dirichlet boundary conditions. Comparisons of the data accuracy and CPU times for the incremental-CGP versus non-incremental-CGP computations are presented.
Reference manual for the POISSON/SUPERFISH Group of Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-01-01
The POISSON/SUPERFISH Group codes were set up to solve two separate problems: the design of magnets and the design of rf cavities in a two-dimensional geometry. The first stage of either problem is to describe the layout of the magnet or cavity in a way that can be used as input to solve the generalized Poisson equation for magnets or the Helmholtz equations for cavities. The computer codes require that the problems be discretized by replacing the differentials (dx, dy) by finite differences (Δx, Δy). Instead of defining the function everywhere in a plane, the function is defined only at a finite number of points on a mesh in the plane.
Fast immersed interface Poisson solver for 3D unbounded problems around arbitrary geometries
NASA Astrophysics Data System (ADS)
Gillis, T.; Winckelmans, G.; Chatelain, P.
2018-02-01
We present a fast and efficient Fourier-based solver for the Poisson problem around an arbitrary geometry in an unbounded 3D domain. This solver merges two rewarding approaches, the lattice Green's function method and the immersed interface method, using the Sherman-Morrison-Woodbury decomposition formula. The method is intended to be second order up to the boundary. This is verified on two potential flow benchmarks. We also further analyse the iterative process and the convergence behavior of the proposed algorithm. The method is applicable to a wide range of problems involving a Poisson equation around inner bodies, which goes well beyond the present validation on potential flows.
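The Sherman-Morrison-Woodbury identity used above lets a fast solver for the unperturbed operator absorb a low-rank boundary correction at small extra cost; a numerical check, with a diagonal matrix standing in for the fast (lattice Green's function) solve:

```python
import numpy as np

# (A + U C V)^{-1} b = A^{-1} b - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1} b
rng = np.random.default_rng(3)
n, r = 50, 4
A = np.diag(2.0 + rng.random(n))               # "easy" operator with a fast solve
U, V = rng.standard_normal((n, r)), rng.standard_normal((r, n))
C = np.eye(r)
b = rng.standard_normal(n)

Ainv_b = b / np.diag(A)                        # fast solve with A
Ainv_U = U / np.diag(A)[:, None]
small = np.linalg.solve(np.linalg.inv(C) + V @ Ainv_U, V @ Ainv_b)
x = Ainv_b - Ainv_U @ small

print(np.linalg.norm((A + U @ C @ V) @ x - b))  # ~1e-13: matches a direct solve
```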
Bayesian methods in reliability
NASA Astrophysics Data System (ADS)
Sander, P.; Badoux, R.
1991-11-01
The present proceedings from a course on Bayesian methods in reliability encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
A Method for Large Eddy Simulation of Acoustic Combustion Instabilities
NASA Astrophysics Data System (ADS)
Wall, Clifton; Moin, Parviz
2003-11-01
A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number, and the acoustic frequencies of interest are usually low. Additionally, new boundary conditions based on the work of Poinsot and Lele have been developed to model the acoustic effect of a long channel upstream of the computational inlet, thus avoiding the need to include such a channel in the computational domain. The turbulent combustion model used is the Level Set model of Duchamp de Lageneste and Pitsch for premixed combustion. Comparison of LES results to the reacting experiments of Besson et al. will be presented.
NASA Astrophysics Data System (ADS)
Briscese, Fabio
2017-09-01
In this paper it is argued that the dynamics of the classical Newtonian N-body system can be described in terms of the Schrödinger-Poisson equations in the large N limit. This result is based on the stochastic quantization introduced by Nelson and on the Calogero conjecture. According to the Calogero conjecture, the emerging effective Planck constant is computed in terms of the parameters of the N-body system as ħ ∼ M^{5/3} G^{1/2} (N/⟨ρ⟩)^{1/6}, where G is the gravitational constant, N and M are the number and the mass of the bodies, and ⟨ρ⟩ is their average density. The relevance of this result in the context of large-scale structure formation is discussed. In particular, this finding gives a further argument in support of the validity of the Schrödinger method as a numerical double of the N-body simulations of dark matter dynamics at large cosmological scales.
Galerkin methods for Boltzmann-Poisson transport with reflection conditions on rough boundaries
NASA Astrophysics Data System (ADS)
Morales Escalante, José A.; Gamba, Irene M.
2018-06-01
We consider in this paper the mathematical and numerical modeling of reflective boundary conditions (BCs) associated with Boltzmann-Poisson systems, including diffusive reflection in addition to specularity, in the context of electron transport in semiconductor device modeling at nano scales, and their implementation in Discontinuous Galerkin (DG) schemes. We study these BCs on the physical boundaries of the device and develop a numerical approximation to model an insulating boundary condition, or equivalently, a pointwise zero-flux mathematical condition for the electron transport equation. Such a condition balances the incident and reflective momentum flux at the microscopic level, pointwise at the boundary, in the case of a more general mixed reflection with momentum-dependent specularity probability p(k⃗). We compare the computational prediction of physical observables given by the numerical implementation of these different reflection conditions in our DG scheme for BP models, and observe that the diffusive condition influences the kinetic moments over the whole domain in position space.
Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V
2010-06-01
Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed-up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Stabilized finite element methods to simulate the conductances of ion channels
NASA Astrophysics Data System (ADS)
Tu, Bin; Xie, Yan; Zhang, Linbo; Lu, Benzhuo
2015-03-01
We have previously developed a finite element simulator, ichannel, to simulate ion transport through three-dimensional ion channel systems via solving the Poisson-Nernst-Planck (PNP) equations and the Size-modified Poisson-Nernst-Planck (SMPNP) equations, and succeeded in simulating some ion channel systems. However, the iterative solution between the coupled Poisson equation and the Nernst-Planck equations has difficulty converging for some large systems. One reason we found is that the NP equations are advection-dominated diffusion equations, which causes trouble for the usual finite element solution. Stabilized schemes have been applied to compute fluid flow in various research fields. However, they have not been studied in the simulation of ion transport through three-dimensional models based on experimentally determined ion channel structures. In this paper, two stabilized techniques, the SUPG method and the Pseudo Residual-Free Bubble (PRFB) function, are introduced to enhance the numerical robustness and convergence performance of the finite element algorithm in ichannel. The conductances of the voltage dependent anion channel (VDAC) and the anthrax toxin protective antigen pore (PA) are simulated to validate the stabilization techniques. Those two stabilized schemes give reasonable results for the two proteins, with decent agreement with both experimental data and Brownian dynamics (BD) simulations. For a variety of numerical tests, it is found that the simulator effectively avoids previous numerical instability after introducing the stabilization methods. Comparison based on our test data set between the two stabilized schemes indicates that both SUPG and PRFB have similar performance (the latter is slightly more accurate and stable), while SUPG is relatively more convenient to implement.
A convergent 2D finite-difference scheme for the Dirac–Poisson system and the simulation of graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brinkman, D., E-mail: Daniel.Brinkman@asu.edu; School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ 85287; Heitzinger, C., E-mail: Clemens.Heitzinger@asu.edu
2014-01-15
We present a convergent finite-difference scheme of second order in both space and time for the 2D electromagnetic Dirac equation. We apply this method in the self-consistent Dirac–Poisson system to the simulation of graphene. The model is justified for low energies, where the particles have wave vectors sufficiently close to the Dirac points. In particular, we demonstrate that our method can be used to calculate solutions of the Dirac–Poisson system where potentials act as beam splitters or Veselago lenses.
Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P
2014-06-26
To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
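A minimal version of the comparison, using statsmodels; the variable names and simulated effect sizes are illustrative, and the log-binomial fit can fail to converge when fitted risks approach 1, which is part of the motivation for the robust Poisson approach:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=n)
p = np.minimum(0.9, np.exp(-1.5 + 0.4 * x))     # approximately log-linear risk
y = rng.binomial(1, p)
X = sm.add_constant(x)

# Robust ("modified") Poisson: Poisson GLM with sandwich (HC) standard errors
rr_pois = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")

# Log-binomial: binomial GLM with a (non-canonical) log link
rr_logbin = sm.GLM(y, X, family=sm.families.Binomial(
    link=sm.families.links.Log())).fit()

# both estimate the risk ratio per unit of x, here ~exp(0.4)
print(np.exp(rr_pois.params[1]), np.exp(rr_logbin.params[1]))
```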
Kinetics of the electric double layer formation modelled by the finite difference method
NASA Astrophysics Data System (ADS)
Valent, Ivan
2017-11-01
The dynamics of the electric double layer formation in 100 mM NaCl solution for sudden potential steps of 10 and 20 mV were simulated using the Poisson-Nernst-Planck theory and the VLUGR2 solver for partial differential equations. The approach was verified by comparing the obtained steady-state solution with the available exact solution. The simulations allowed for detailed analysis of the relaxation processes of the individual ions and the electric potential. Some computational aspects of the problem were discussed.
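For orientation, the equilibrium double layer in 100 mM NaCl decays over about one Debye length, computed below from standard physical constants; this is the analytic length scale of the problem, not the PNP relaxation dynamics themselves:

```python
import numpy as np

# Debye screening length for a 1:1 electrolyte at 298 K
eps0, eps_r = 8.8541878128e-12, 78.4      # F/m; relative permittivity of water
kB, T = 1.380649e-23, 298.15              # J/K; K
e, NA = 1.602176634e-19, 6.02214076e23    # C; 1/mol
c = 100.0                                  # mol/m^3 (= 100 mM)
ionic_strength = c                         # for a 1:1 salt, I = c
lam_D = np.sqrt(eps0 * eps_r * kB * T / (2.0 * NA * e**2 * ionic_strength))
print(lam_D * 1e9, "nm")                   # ~0.96 nm
```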
Assessment of autonomic response by broad-band respiration
NASA Technical Reports Server (NTRS)
Berger, R. D.; Saul, J. P.; Cohen, R. J.
1989-01-01
We present a technique for introducing broad-band respiratory perturbations so that the response characteristics of the autonomic nervous system can be determined noninvasively over a wide range of physiologically relevant frequencies. A subject's respiratory bandwidth was broadened by breathing on cue to a sequence of audible tones spaced by Poisson intervals. The transfer function between the respiratory input and the resulting instantaneous heart rate was then computed using spectral analysis techniques. Results using this method are comparable to those found using traditional techniques, but are obtained with an economy of data collection.
Elastic properties of rigid fiber-reinforced composites
NASA Astrophysics Data System (ADS)
Chen, J.; Thorpe, M. F.; Davis, L. C.
1995-05-01
We study the elastic properties of rigid fiber-reinforced composites with perfect bonding between fibers and matrix, and also with sliding boundary conditions. In the dilute region, there exists an exact analytical solution. Around the rigidity threshold we find the elastic moduli and Poisson's ratio by decomposing the deformation into a compression mode and a rotation mode. For perfect bonding, both modes are important, whereas only the compression mode is operative for sliding boundary conditions. We employ the digital-image-based method and a finite element analysis to perform computer simulations which confirm our analytical predictions.
NASA Astrophysics Data System (ADS)
Boyarshinov, Michael G.; Vaismana, Yakov I.
2016-10-01
The following methods were used in order to identify the pollution fields of urban air caused by motor transport exhaust gases: a mathematical model, which enables consideration of the main factors that determine pollution field formation in a complex spatial domain; software written by the authors for computational modeling of the gas flow generated by numerous mobile point sources; and the results of computing experiments on pollutant spread analysis and the evolution of their concentration fields. The computational model of exhaust gas distribution and dispersion in a spatial domain, which includes urban buildings, structures and main traffic arteries, takes into account the stochastic appearance of cars at the borders of the examined territory, modeled as a Poisson process. The model also considers traffic light switching and permits determination of the velocity, pressure and temperature fields of the discharged gases in urban air. Verification of the mathematical model and software confirmed their satisfactory fit to in-situ measurement data and the possibility of using the obtained computing results for assessment and prediction of urban air pollution caused by motor transport exhaust gases.
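Car appearances at the domain boundary are modeled above as a Poisson process; one standard way to superimpose a traffic-light cycle on such arrivals is Lewis-Shedler thinning. The rates and signal cycle below are illustrative, not taken from the paper.

```python
import numpy as np

def thinned_arrivals(rate_fn, rate_max, t_max, rng=None):
    # Nonhomogeneous Poisson arrivals via thinning: candidates generated at
    # rate_max are kept with probability rate_fn(t) / rate_max.
    rng = rng or np.random.default_rng()
    t, out = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t >= t_max:
            return np.array(out)
        if rng.random() < rate_fn(t) / rate_max:
            out.append(t)

# arrival intensity alternating with a 60 s signal cycle
rate = lambda t: 0.5 if (t % 60.0) < 30.0 else 0.05   # cars/s: green vs red
arrivals = thinned_arrivals(rate, rate_max=0.5, t_max=3600.0)
print(len(arrivals), "arrivals in one hour")
```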
Stable and Spectrally Accurate Schemes for the Navier-Stokes Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, Jun; Liu, Jie
2011-01-01
In this paper, we present an accurate, efficient and stable numerical method for the incompressible Navier-Stokes equations (NSEs). The method is based on (1) an equivalent pressure Poisson equation formulation of the NSE with proper pressure boundary conditions, which facilitates the design of high-order and stable numerical methods, and (2) the Krylov deferred correction (KDC) accelerated method of lines transpose (MoL^T), which is very stable, efficient, and of arbitrary order in time. Numerical tests with known exact solutions in three dimensions show that the new method is spectrally accurate in time, and a numerical order of convergence of 9 was observed. Two-dimensional computational results of flow past a cylinder and flow in a bifurcated tube are also reported.
Nonlinear Poisson Equation for Heterogeneous Media
Hu, Langhua; Wei, Guo-Wei
2012-01-01
The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarization in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intensive charges and possibly nonlinear, anisotropic, and heterogeneous media. The variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation for the solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using the geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for a comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical prediction, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. PMID:22947937
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Yongge; Xu, Wei, E-mail: weixu@nwpu.edu.cn; Yang, Guidong
The Poisson white noise, as a typical non-Gaussian excitation, has attracted much attention recently. However, little work has addressed stochastic systems with fractional derivatives under Poisson white noise excitation. This paper investigates the stationary response of a class of quasi-linear systems with a fractional derivative excited by Poisson white noise. The equivalent stochastic system of the original stochastic system is obtained. Then, approximate stationary solutions are obtained with the help of the perturbation method. Finally, two typical examples are discussed in detail to demonstrate the effectiveness of the proposed method. The analysis also shows that the fractional order and the fractional coefficient significantly affect the responses of the stochastic systems with fractional derivative.
Computation of the dipole moments of proteins.
Antosiewicz, J
1995-10-01
A simple and computationally feasible procedure for the calculation of net charges and dipole moments of proteins at arbitrary pH and salt conditions is described. The method is intended to provide data that may be compared to the results of transient electric dichroism experiments on protein solutions. The procedure consists of three major steps: (i) calculation of self energies and interaction energies for ionizable groups in the protein by using the finite-difference Poisson-Boltzmann method, (ii) determination of the position of the center of diffusion (to which the calculated dipole moment refers) and the extinction coefficient tensor for the protein, and (iii) generation of the equilibrium distribution of protonation states of the protein by a Monte Carlo procedure, from which mean and root-mean-square dipole moments and optical anisotropies are calculated. The procedure is applied to 12 proteins. It is shown that it gives hydrodynamic and electrical parameters for proteins in good agreement with experimental data.
Parallel multigrid smoothing: polynomial versus Gauss-Seidel
NASA Astrophysics Data System (ADS)
Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray
2003-07-01
Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
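A minimal Chebyshev smoother in the Saad-style recurrence; it needs only matrix-vector products, which is exactly why it parallelizes where Gauss-Seidel does not. The eigenvalue bounds are assumed given (in practice the upper bound is estimated and the lower bound chosen as a fraction of it).

```python
import numpy as np
import scipy.sparse as sp

def chebyshev_smooth(A, b, x, lmin, lmax, steps=5):
    # Chebyshev polynomial smoother for SPD A, damping error modes with
    # eigenvalues in [lmin, lmax]; only mat-vecs, no sequential sweeps.
    theta, delta = 0.5 * (lmax + lmin), 0.5 * (lmax - lmin)
    sigma = theta / delta
    rho = 1.0 / sigma
    d = (b - A @ x) / theta
    x = x + d
    for _ in range(steps - 1):
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * (b - A @ x)
        x = x + d
        rho = rho_new
    return x

n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
x0 = np.random.default_rng(5).random(n)          # with b = 0, x is the error
x1 = chebyshev_smooth(A, np.zeros(n), x0, lmin=4.0 / 30.0, lmax=4.0)
print(np.linalg.norm(x0), "->", np.linalg.norm(x1))
```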
Monitoring Poisson observations using combined applications of Shewhart and EWMA charts
NASA Astrophysics Data System (ADS)
Abujiya, Mu'azu Ramat
2017-11-01
The Shewhart and exponentially weighted moving average (EWMA) charts for nonconformities are the most widely used procedures for monitoring Poisson observations in modern industries. Individually, the Shewhart and EWMA charts are sensitive only to large and small shifts, respectively. To enhance the detection ability of the two schemes in monitoring all kinds of shifts in Poisson count data, this study examines the performance of combined applications of the Shewhart and EWMA Poisson control charts. Furthermore, the study proposes modifications based on a well-structured statistical data collection technique, ranked set sampling (RSS), to detect shifts in the mean of a Poisson process more quickly. The relative performance of the proposed Shewhart-EWMA Poisson location charts is evaluated in terms of the average run length (ARL), standard deviation of the run length (SDRL), median run length (MRL), average ratio ARL (ARARL), average extra quadratic loss (AEQL) and performance comparison index (PCI). Consequently, the new Poisson control charts based on the RSS method are generally superior to most of the existing schemes for monitoring Poisson processes. The use of these combined Shewhart-EWMA Poisson charts is illustrated with an example to demonstrate the practical implementation of the design procedure.
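The EWMA half of such a combined scheme reduces to a one-line recursion with asymptotic control limits; a sketch for Poisson counts, where the chart parameters and the simulated shift are illustrative:

```python
import numpy as np

def poisson_ewma(counts, lam0, weight=0.2, L=3.0):
    # EWMA chart for Poisson counts: z_i = w*x_i + (1-w)*z_{i-1}, with
    # asymptotic limits lam0 +/- L * sqrt(w * lam0 / (2 - w)).
    z, signals = lam0, []
    sigma = np.sqrt(weight * lam0 / (2.0 - weight))
    ucl, lcl = lam0 + L * sigma, max(0.0, lam0 - L * sigma)
    for i, x in enumerate(counts):
        z = weight * x + (1.0 - weight) * z
        if not (lcl <= z <= ucl):
            signals.append(i)
    return signals, (lcl, ucl)

rng = np.random.default_rng(6)
counts = np.concatenate([rng.poisson(4.0, 50), rng.poisson(6.0, 50)])  # shift at 50
signals, limits = poisson_ewma(counts, lam0=4.0)
print("limits:", limits, "first signal at sample:", signals[0] if signals else None)
```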
A System of Poisson Equations for a Nonconstant Varadhan Functional on a Finite State Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cavazos-Cadena, Rolando; Hernandez-Hernandez, Daniel
2006-01-15
Given a discrete-time Markov chain with finite state space and a stationary transition matrix, a system of 'local' Poisson equations characterizing the (exponential) Varadhan functional J(.) is given. The main results, which are derived for an arbitrary transition structure so that J(.) may be nonconstant, are as follows: (i) any solution to the local Poisson equations immediately renders Varadhan's functional, and (ii) a solution of the system always exists. The proof of this latter result is constructive and suggests a method to solve the local Poisson equations.
Marginalized zero-inflated Poisson models with missing covariates.
Benecha, Habtamu K; Preisser, John S; Divaris, Kimon; Herring, Amy H; Das, Kalyan
2018-05-11
Unlike zero-inflated Poisson regression, marginalized zero-inflated Poisson (MZIP) models for counts with excess zeros provide estimates with direct interpretations for the overall effects of covariates on the marginal mean. In the presence of missing covariates, MZIP and many other count data models are ordinarily fitted using complete case analysis methods due to lack of appropriate statistical methods and software. This article presents an estimation method for MZIP models with missing covariates. The method, which is applicable to other missing data problems, is illustrated and compared with complete case analysis by using simulations and dental data on the caries preventive effects of a school-based fluoride mouthrinse program. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Elghafghuf, Adel; Dufour, Simon; Reyher, Kristen; Dohoo, Ian; Stryhn, Henrik
2014-12-01
Mastitis is a complex disease affecting dairy cows and is considered to be the most costly disease of dairy herds. The hazard of mastitis is a function of many factors, both managerial and environmental, making its control a difficult issue for milk producers. Observational studies of clinical mastitis (CM) often generate datasets with a number of characteristics which influence the analysis of those data: the outcome of interest may be the time to occurrence of a case of mastitis, predictors may change over time (time-dependent predictors), the effects of factors may change over time (time-dependent effects), there are usually multiple hierarchical levels, and datasets may be very large. Analysis of such data often requires expansion of the data into the counting-process format - leading to larger datasets - thus complicating the analysis and requiring excessive computing time. In this study, a nested frailty Cox model with time-dependent predictors and effects was applied to Canadian Bovine Mastitis Research Network data in which 10,831 lactations of 8035 cows from 69 herds were followed through lactation until the first occurrence of CM. The model was fit to the data as a Poisson model with nested normally distributed random effects at the cow and herd levels. Risk factors associated with the hazard of CM during the lactation were identified, such as parity, calving season, herd somatic cell score, pasture access, fore-stripping, and proportion of treated cases of CM in a herd. The analysis showed that most of the predictors had a strong effect early in lactation and also demonstrated substantial variation in the baseline hazard among cows and between herds. A small simulation study for a setting similar to the real data was conducted to evaluate the Poisson maximum likelihood estimation approach with both the Gaussian quadrature method and the Laplace approximation. Further, the performance of the two methods was compared with the performance of a widely used estimation approach for frailty Cox models based on the penalized partial likelihood. The simulation study showed good performance for the Poisson maximum likelihood approach with Gaussian quadrature, and biased variance component estimates for both the Poisson maximum likelihood with Laplace approximation and the penalized partial likelihood approaches. Copyright © 2014. Published by Elsevier B.V.
High order solution of Poisson problems with piecewise constant coefficients and interface jumps
NASA Astrophysics Data System (ADS)
Marques, Alexandre Noll; Nave, Jean-Christophe; Rosales, Rodolfo Ruben
2017-04-01
We present a fast and accurate algorithm to solve Poisson problems in complex geometries, using regular Cartesian grids. We consider a variety of configurations, including Poisson problems with interfaces across which the solution is discontinuous (of the type arising in multi-fluid flows). The algorithm is based on a combination of the Correction Function Method (CFM) and Boundary Integral Methods (BIM). Interface and boundary conditions can be treated in a fast and accurate manner using boundary integral equations, and the associated BIM. Unfortunately, BIM can be costly when the solution is needed everywhere in a grid, e.g. fluid flow problems. We use the CFM to circumvent this issue. The solution from the BIM is used to rewrite the problem as a series of Poisson problems in rectangular domains-which requires the BIM solution at interfaces/boundaries only. These Poisson problems involve discontinuities at interfaces, of the type that the CFM can handle. Hence we use the CFM to solve them (to high order of accuracy) with finite differences and a Fast Fourier Transform based fast Poisson solver. We present 2-D examples of the algorithm applied to Poisson problems involving complex geometries, including cases in which the solution is discontinuous. We show that the algorithm produces solutions that converge with either 3rd or 4th order of accuracy, depending on the type of boundary condition and solution discontinuity.
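The "Fast Fourier Transform based fast Poisson solver" ingredient mentioned above is standard; a minimal sketch for the 5-point Laplacian on a rectangle with homogeneous Dirichlet conditions, diagonalized by the type-I discrete sine transform, is shown below. This illustrates only the rectangular-domain solve, not the CFM corrections or the BIM.

```python
import numpy as np
from scipy.fft import dstn, idstn

def fft_poisson_dirichlet(f, h):
    """Solve -Laplace(u) = f on the unit square, u = 0 on the boundary,
    via the type-I discrete sine transform that diagonalizes the
    5-point Laplacian; f holds the interior grid values only."""
    n, m = f.shape
    lam_x = (2.0 - 2.0 * np.cos(np.pi * np.arange(1, n + 1) / (n + 1))) / h**2
    lam_y = (2.0 - 2.0 * np.cos(np.pi * np.arange(1, m + 1) / (m + 1))) / h**2
    f_hat = dstn(f, type=1)
    u_hat = f_hat / (lam_x[:, None] + lam_y[None, :])
    return idstn(u_hat, type=1)

# Manufactured solution u = sin(pi x) sin(pi y): the recovered field should
# match it to second order in h.
n = 127
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
f = 2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
u = fft_poisson_dirichlet(f, h)
print(np.max(np.abs(u - np.sin(np.pi * X) * np.sin(np.pi * Y))))
```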
Poisson-Gaussian Noise Analysis and Estimation for Low-Dose X-ray Images in the NSCT Domain.
Lee, Sangyoon; Lee, Min Seok; Kang, Moon Gi
2018-03-29
The noise distribution of images obtained by X-ray sensors in low-dosage situations can be analyzed using the Poisson and Gaussian mixture model. Multiscale conversion is one of the most popular noise reduction methods used in recent years. Estimation of the noise distribution of each subband in the multiscale domain is the most important factor in performing noise reduction, with the non-subsampled contourlet transform (NSCT) representing an effective method for scale and direction decomposition. In this study, we use artificially generated noise to analyze and estimate the Poisson-Gaussian noise of low-dose X-ray images in the NSCT domain. The noise distribution of the subband coefficients is analyzed using the noiseless low-band coefficients and the variance of the noisy subband coefficients. The noise after the transform also follows a Poisson-Gaussian distribution, and the relationship between the noise parameters of the subband and the full-band image is identified. We then analyze the noise of actual images to validate the theoretical analysis. Comparison of the proposed noise estimation method with an existing noise reduction method confirms that the proposed method outperforms traditional methods.
NASA Astrophysics Data System (ADS)
Becker, Matthew R.
2013-10-01
I present a new algorithm, Curved-sky grAvitational Lensing for Cosmological Light conE simulatioNS (CALCLENS), for efficiently computing weak gravitational lensing shear signals from large N-body light cone simulations over a curved sky. This new algorithm properly accounts for the sky curvature and boundary conditions, is able to produce redshift-dependent shear signals including corrections to the Born approximation by using multiple-plane ray tracing and properly computes the lensed images of source galaxies in the light cone. The key feature of this algorithm is a new, computationally efficient Poisson solver for the sphere that combines spherical harmonic transform and multigrid methods. As a result, large areas of sky (˜10 000 square degrees) can be ray traced efficiently at high resolution using only a few hundred cores. Using this new algorithm and curved-sky calculations that only use a slower but more accurate spherical harmonic transform Poisson solver, I study the convergence, shear E-mode, shear B-mode and rotation mode power spectra. Employing full-sky E/B-mode decompositions, I confirm that the numerically computed shear B-mode and rotation mode power spectra are equal at high accuracy (≲1 per cent) as expected from perturbation theory up to second order. Coupled with realistic galaxy populations placed in large N-body light cone simulations, this new algorithm is ideally suited for the construction of synthetic weak lensing shear catalogues to be used to test for systematic effects in data analysis procedures for upcoming large-area sky surveys. The implementation presented in this work, written in C and employing widely available software libraries to maintain portability, is publicly available at http://code.google.com/p/calclens.
Extending fields in a level set method by solving a biharmonic equation
NASA Astrophysics Data System (ADS)
Moroney, Timothy J.; Lusmore, Dylan R.; McCue, Scott W.; McElwain, D. L. Sean
2017-08-01
We present an approach for computing extensions of velocities or other fields in level set methods by solving a biharmonic equation. The approach differs from other commonly used approaches to velocity extension because it deals with the interface fully implicitly through the level set function. No explicit properties of the interface, such as its location or the velocity on the interface, are required in computing the extension. These features lead to a particularly simple implementation using either a sparse direct solver or a matrix-free conjugate gradient solver. Furthermore, we propose a fast Poisson preconditioner that can be used to accelerate the convergence of the latter. We demonstrate the biharmonic extension on a number of test problems that serve to illustrate its effectiveness at producing smooth and accurate extensions near interfaces. A further feature of the method is the natural way in which it deals with symmetry and periodicity, ensuring through its construction that the extension field also respects these symmetries.
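As a much-simplified illustration of the idea, the 1D sketch below extends a field known only on a band of nodes by requiring the discrete biharmonic equation to hold on the remaining nodes. Note that the paper's method handles the interface implicitly through the level set function, whereas this toy version uses an explicit mask; the grid size, band location, and boundary closure are arbitrary choices.

```python
import numpy as np

# 1D toy biharmonic extension: the field is known on a band of nodes (the
# stand-in for the narrow band around the interface); elsewhere we impose
# the discrete biharmonic equation u'''' = 0 and solve for the unknowns.
n, h = 200, 1.0 / 199
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2        # u = 0 assumed outside ends
B = D2 @ D2                                        # discrete biharmonic operator

known = np.zeros(n, dtype=bool)
known[90:110] = True                               # arbitrary band of known nodes
u = np.zeros(n)
u[known] = np.sin(np.linspace(0.0, 1.0, known.sum()))

free = ~known
# Enforce (B u)_free = 0 given the known band values: a smooth extension.
u[free] = np.linalg.solve(B[np.ix_(free, free)],
                          -B[np.ix_(free, known)] @ u[known])
```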
Determination of elastic stresses in gas-turbine disks
NASA Technical Reports Server (NTRS)
Manson, S S
1947-01-01
A method is presented for the calculation of elastic stresses in symmetrical disks typical of those of a high-temperature gas turbine. The method is essentially a finite-difference solution of the equilibrium and compatibility equations for elastic stresses in a symmetrical disk. Account can be taken of point-to-point variations in disk thickness, in temperature, in elastic modulus, in coefficient of thermal expansion, in material density, and in Poisson's ratio. No numerical integration or trial-and-error procedures are involved and the computations can be performed in rapid and routine fashion by nontechnical computers with little engineering supervision. Checks on problems for which exact mathematical solutions are known indicate that the method yields results of high accuracy. Illustrative examples are presented to show the manner of treating solid disks, disks with central holes, and disks constructed either of a single material or two or more welded materials. The effect of shrink fitting is taken into account by a very simple device.
A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions
NASA Astrophysics Data System (ADS)
Liang, Yihao; Xing, Xiangjun; Li, Yaohang
2017-06-01
In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures, and adopts the sequential updating scheme of Metropolis algorithm. It makes no approximation in the computation of energy, and reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We further use this method to simulate primitive model electrolytes, and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
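A serial toy version of such a simulation is sketched below: Metropolis moves with the exact pairwise Coulomb energy of the displaced particle recomputed against all others, i.e., the O(N) sum the paper offloads to GPU SIMD lanes. Periodic boundary conditions and Ewald summation, which a real electrolyte simulation would need, are deliberately omitted; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy serial Metropolis sweep for N charges in a box with the bare Coulomb
# pair potential; no periodic images or Ewald sums. Energy differences use
# the exact pair sum for the moved particle, with no cutoff approximation.
N, box, beta, step_size = 100, 10.0, 1.0, 0.3
pos = rng.uniform(0.0, box, (N, 3))
q = rng.choice([-1.0, 1.0], N)                     # ion valences

def particle_energy(i, r_i):
    d = np.linalg.norm(pos - r_i, axis=1)
    d[i] = np.inf                                  # exclude self-interaction
    return q[i] * np.sum(q / d)

accepted = 0
n_steps = 20000
for step in range(n_steps):
    i = step % N                                   # sequential updating scheme
    trial = pos[i] + rng.normal(0.0, step_size, 3)
    if np.any(trial < 0.0) or np.any(trial >= box):
        continue                                   # hard walls keep ions inside
    dE = particle_energy(i, trial) - particle_energy(i, pos[i])
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        pos[i] = trial
        accepted += 1
print("acceptance rate:", accepted / n_steps)
```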
NASA Astrophysics Data System (ADS)
Jabbari, Ali
2018-01-01
Surface inset permanent magnet DC machines can be used as an alternative in automation systems due to their high efficiency and robustness. Magnet segmentation is a common technique for mitigating pulsating torque components in permanent magnet machines. An accurate computation of the air-gap magnetic field distribution is necessary in order to calculate machine performance. An exact analytical method for magnetic vector potential calculation in surface inset permanent magnet machines considering magnet segmentation is proposed in this paper. The analytical method is based on the resolution of the Laplace and Poisson equations, as well as Maxwell's equations, in polar coordinates using the sub-domain method. One of the main contributions of the paper is to derive an expression for the magnetic vector potential in the segmented PM region by using hyperbolic functions. The developed method is applied to the performance computation of two prototype surface inset segmented magnet motors under open-circuit and on-load conditions. The results of these models are validated against the finite element method (FEM).
NASA Astrophysics Data System (ADS)
Kang, Seokkoo; Borazjani, Iman; Sotiropoulos, Fotis
2008-11-01
Unsteady 3D simulation of flows in natural streams is a challenging task due to the complexity of the bathymetry, the shallowness of the flow, and the presence of multiple natural and man-made obstacles. This work is motivated by the need to develop a powerful numerical method for simulating such flows using coherent-structure-resolving turbulence models. We employ the curvilinear immersed boundary method of Ge and Sotiropoulos (Journal of Computational Physics, 2007) and address the critical issue of numerical efficiency in large-aspect-ratio computational domains and grids such as those encountered in long and shallow open channels. We show that a matrix-free Newton-Krylov method for solving the momentum equations, coupled with an algebraic multigrid method with incomplete LU preconditioner for solving the Poisson equation, yields a robust and efficient procedure for obtaining time-accurate solutions in such problems. We demonstrate the potential of the numerical approach by carrying out a direct numerical simulation of flow in a long and shallow meandering stream with multiple hydraulic structures.
Vectorized multigrid Poisson solver for the CDC CYBER 205
NASA Technical Reports Server (NTRS)
Barkai, D.; Brandt, M. A.
1984-01-01
The full multigrid (FMG) method is applied to the two-dimensional Poisson equation with Dirichlet boundary conditions. This has been chosen as a relatively simple test case for examining the efficiency of fully vectorizing the multigrid method. Data structure and programming considerations and techniques are discussed, accompanied by performance details.
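For readers unfamiliar with the structure being vectorized, a minimal 1D recursive multigrid V-cycle for the Poisson equation is sketched below, with weighted-Jacobi smoothing, full-weighting restriction, and linear interpolation; this is a generic textbook cycle, not the FMG CYBER 205 code.

```python
import numpy as np

def vcycle(f, u, h, nu=3, omega=2.0 / 3.0):
    """One multigrid V-cycle for -u'' = f with u = 0 at both ends.
    Grid size must be 2^k + 1 points, including the boundary nodes."""
    def smooth(u, sweeps):
        for _ in range(sweeps):                      # weighted Jacobi
            u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
                u[:-2] + u[2:] + h**2 * f[1:-1])
        return u
    if f.size <= 5:
        return smooth(u, 50)                         # coarsest grid: iterate out
    u = smooth(u, nu)                                # pre-smoothing
    r = np.zeros_like(f)                             # residual f - A u
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
    rc = np.zeros((f.size - 1) // 2 + 1)
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])   # full weighting
    ec = vcycle(rc, np.zeros_like(rc), 2 * h, nu, omega)      # coarse solve
    e = np.zeros_like(u)
    e[::2] = ec                                      # linear prolongation
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, nu)                         # post-smoothing

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(10):
    u = vcycle(f, u, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))         # ~ discretization error
```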
Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoneking, M.R.; Den Hartog, D.J.
1996-06-01
The fitting of data by chi-square minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties are distributed with a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a chi-square minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than approximately 20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
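The core of such a fit is replacing the chi-square objective with the Poisson negative log-likelihood, sum_i [f_i(p) - d_i ln f_i(p)] up to a constant. A minimal sketch with an illustrative Gaussian-line-plus-background model follows; the model form, parameter values, and optimizer choice are assumptions, not taken from the report.

```python
import numpy as np
from scipy.optimize import minimize

# Fit a Gaussian line shape plus flat background to Poisson-distributed
# counts by maximizing the Poisson likelihood rather than minimizing
# chi-square. Names and values are illustrative.
def model(x, p):
    amp, center, width, bg = p
    return amp * np.exp(-0.5 * ((x - center) / width) ** 2) + bg

def neg_log_likelihood(p, x, d):
    f = model(x, p)
    if np.any(f <= 0):
        return np.inf                       # Poisson rates must stay positive
    return np.sum(f - d * np.log(f))        # Poisson NLL up to a constant

rng = np.random.default_rng(3)
x = np.linspace(-5.0, 5.0, 60)
true = (8.0, 0.0, 1.0, 1.0)                 # low signal: < ~20 counts per bin
d = rng.poisson(model(x, true))
fit = minimize(neg_log_likelihood, x0=(5.0, 0.5, 2.0, 0.5),
               args=(x, d), method="Nelder-Mead")
print(fit.x)
```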
A mixed fluid-kinetic solver for the Vlasov-Poisson equations
NASA Astrophysics Data System (ADS)
Cheng, Yongtao
Plasmas are ionized gases that appear in a wide range of applications including astrophysics and space physics, as well as in laboratory settings such as magnetically confined fusion. There are two prevailing types of modeling strategies to describe a plasma system: kinetic models and fluid models. Kinetic models evolve particle probability density functions (PDFs) in phase space, which is accurate but computationally expensive. Fluid models evolve a small number of moments of the distribution function and reduce the dimension of the solution. However, some approximation is necessary to close the system, and finding an accurate moment closure that correctly captures the dynamics away from thermodynamic equilibrium is a difficult and still open problem. The main contributions of the present work can be divided into two parts: (1) a new class of moment closures, based on a modification of existing quadrature-based moment-closure methods, is developed using bi-B-spline and bi-bubble representations; and (2) a novel mixed solver that combines a fluid and a kinetic solver is proposed, which uses the new class of moment-closure methods described in the first part. For the newly developed quadrature-based moment closures based on bi-B-spline and bi-bubble representations, the explicit form of the flux terms and the moment-realizability conditions are given. It is shown that while the bi-delta system is weakly hyperbolic, the newly proposed fluid models are strongly hyperbolic. Using a high-order Runge-Kutta discontinuous Galerkin method together with Strang operator splitting, the resulting models are applied to the Vlasov-Poisson-Fokker-Planck system in the high-field limit. In the second part of this work, results from the kinetic solver are used to provide a corrected closure to the fluid model. This correction keeps the fluid model hyperbolic and gives fluid results that match the moments as computed from the kinetic solution. Furthermore, a prolongation operation based on the bi-bubble moment closure is used to make the first few moments of the kinetic and fluid solvers match. This results in a kinetic solver that exactly conserves mass and total energy. This mixed fluid-kinetic solver is applied to standard test problems for the Vlasov-Poisson system, including the two-stream instability problem and Landau damping.
Efficient three-dimensional Poisson solvers in open rectangular conducting pipe
NASA Astrophysics Data System (ADS)
Qiang, Ji
2016-06-01
The three-dimensional (3D) Poisson solver plays an important role in the study of space-charge effects on charged particle beam dynamics in particle accelerators. In this paper, we propose three new 3D Poisson solvers for a charged particle beam in an open rectangular conducting pipe. These three solvers include a spectral integrated Green function (IGF) solver, a 3D spectral solver, and a 3D integrated Green function solver. These solvers effectively handle the longitudinal open boundary condition using a finite computational domain that contains the beam itself. This saves the computational cost of using an extra larger longitudinal domain in order to set up an appropriate finite boundary condition. Using an integrated Green function also avoids the need to resolve rapid variation of the Green function inside the beam. The numerical operational cost of the spectral IGF solver and the 3D IGF solver scales as O(N log(N)), where N is the number of grid points. The cost of the 3D spectral solver scales as O(N_n N), where N_n is the maximum longitudinal mode number. We compare these three solvers using several numerical examples and discuss the advantageous regime of each solver in the physical application.
2013-01-01
Background Demographic bottlenecks can severely reduce the genetic variation of a population or a species. Establishing whether low genetic variation is caused by a bottleneck or a constantly low effective number of individuals is important to understand a species’ ecology and evolution, and it has implications for conservation management. Recent studies have evaluated the power of several statistical methods developed to identify bottlenecks. However, the false positive rate, i.e. the rate with which a bottleneck signal is misidentified in demographically stable populations, has received little attention. We analyse this type of error (type I) in forward computer simulations of stable populations having greater than Poisson variance in reproductive success (i.e., variance in family sizes). The assumption of Poisson variance underlies bottleneck tests, yet it is commonly violated in species with high fecundity. Results With large variance in reproductive success (Vk ≥ 40, corresponding to a ratio between effective and census size smaller than 0.1), tests based on allele frequencies, allelic sizes, and DNA sequence polymorphisms (heterozygosity excess, M-ratio, and Tajima’s D test) tend to show erroneous signals of a bottleneck. Similarly, strong evidence of population decline is erroneously detected when ancestral and current population sizes are estimated with the model based method MSVAR. Conclusions Our results suggest caution when interpreting the results of bottleneck tests in species showing high variance in reproductive success. Particularly in species with high fecundity, computer simulations are recommended to confirm the occurrence of a population bottleneck. PMID:24131797
Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners
Li, Ruipeng; Saad, Yousef
2017-08-01
This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman-Morrison-Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
Noisy cooperative intermittent processes: From blinking quantum dots to human consciousness
NASA Astrophysics Data System (ADS)
Allegrini, Paolo; Paradisi, Paolo; Menicucci, Danilo; Bedini, Remo; Gemignani, Angelo; Fronzoni, Leone
2011-07-01
We study the superposition of a non-Poisson renewal process with a superimposed Poisson noise. The non-Poisson renewals mark the passages between meta-stable states in systems with self-organization. We propose methods to measure the amount of information due to each of the two independent processes separately, and we see that a superficial study based on the survival probabilities yields stretched-exponential relaxations. Our method is in fact able to unravel the inverse-power-law relaxation of the isolated non-Poisson process, even when noise is present. We provide examples of this behavior in systems of diverse nature, from blinking nano-crystals to weak turbulence. Finally we focus our discussion on events extracted from human electroencephalograms, and we discuss their connection with emerging properties of integrated neural dynamics, i.e. consciousness.
Tensorial Basis Spline Collocation Method for Poisson's Equation
NASA Astrophysics Data System (ADS)
Plagne, Laurent; Berthou, Jean-Yves
2000-01-01
This paper aims to describe the tensorial basis spline collocation method (TBSCM) applied to Poisson's equation. In the case of a localized 3D charge distribution in vacuum, this direct method based on a tensorial decomposition of the differential operator is shown to be competitive with both iterative BSCM and FFT-based methods. We emphasize the O(h^4) and O(h^6) convergence of TBSCM for cubic and quintic splines, respectively. We describe the implementation of this method on a distributed-memory parallel machine. Performance measurements on a Cray T3E are reported. Our code exhibits high performance and good scalability: as an example, a 27 Gflops performance is obtained when solving Poisson's equation on a 256^3 non-uniform 3D Cartesian mesh using 128 T3E-750 processors. This represents 215 Mflops per processor.
Iterative methods for plasma sheath calculations: Application to spherical probe
NASA Technical Reports Server (NTRS)
Parker, L. W.; Sullivan, E. C.
1973-01-01
The computer cost of a Poisson-Vlasov iteration procedure for the numerical solution of a steady-state collisionless plasma-sheath problem depends on: (1) the nature of the chosen iterative algorithm, (2) the position of the outer boundary of the grid, and (3) the nature of the boundary condition applied to simulate a condition at infinity (as in three-dimensional probe or satellite-wake problems). Two iterative algorithms, in conjunction with three types of boundary conditions, are analyzed theoretically and applied to the computation of current-voltage characteristics of a spherical electrostatic probe. The first algorithm was commonly used by physicists, and its computer costs depend primarily on the boundary conditions and are only slightly affected by the mesh interval. The second algorithm is not commonly used, and its costs depend primarily on the mesh interval and slightly on the boundary conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, H.; Li, G., E-mail: gli@clemson.edu
2014-08-28
An accelerated Finite Element Contact Block Reduction (FECBR) approach is presented for computational analysis of ballistic transport in nanoscale electronic devices with arbitrary geometry and unstructured mesh. A finite element formulation is developed for the theoretical CBR/Poisson model. The FECBR approach is accelerated through eigen-pair reduction, lead mode space projection, and component mode synthesis techniques. The accelerated FECBR is applied to perform quantum mechanical ballistic transport analysis of a DG-MOSFET with taper-shaped extensions and a DG-MOSFET with Si/SiO2 interface roughness. The computed electrical transport properties of the devices obtained from the accelerated FECBR approach, and the associated computational cost as a function of system degrees of freedom, are compared with those obtained from the original CBR and direct inversion methods. The performance of the accelerated FECBR in both its accuracy and efficiency is demonstrated.
Computers and the design of ion beam optical systems
NASA Astrophysics Data System (ADS)
White, Nicholas R.
Advances in microcomputers have made it possible to maintain a library of advanced ion optical programs which run on inexpensive computer hardware and are suitable for the design of a variety of ion beam systems, including ion implanters, with excellent results. This paper describes in outline the steps typically involved in designing a complete ion beam system for materials modification applications. Two computer programs are described which, although based largely on algorithms that have been in use for many years, make detailed beam optical calculations possible on microcomputers, specifically the IBM PC. OPTICIAN is an interactive first-order program for tracing beam envelopes through complex optical systems. SORCERY is a versatile program for solving Laplace's and Poisson's equations by finite difference methods using successive over-relaxation. Ion and electron trajectories can be traced through these potential fields, and plots of beam emittance obtained.
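Successive over-relaxation of the 5-point stencil, as used in programs of this kind, is compact enough to sketch in full. The toy version below solves the Poisson equation with zero boundary values; pure-Python loops are kept for clarity, and the relaxation factor and grid size are illustrative.

```python
import numpy as np

def sor_poisson(f, h, omega=1.9, tol=1e-8, max_sweeps=10000):
    """Successive over-relaxation for the 5-point discretization of
    -Laplace(u) = f with u = 0 on the domain boundary."""
    u = np.zeros_like(f)
    for _ in range(max_sweeps):
        max_change = 0.0
        for i in range(1, f.shape[0] - 1):
            for j in range(1, f.shape[1] - 1):
                gs = 0.25 * (u[i - 1, j] + u[i + 1, j] + u[i, j - 1]
                             + u[i, j + 1] + h**2 * f[i, j])
                change = omega * (gs - u[i, j])    # over-relax the GS update
                u[i, j] += change
                max_change = max(max_change, abs(change))
        if max_change < tol:
            break
    return u

n = 65
h = 1.0 / (n - 1)
f = np.zeros((n, n))
f[n // 2, n // 2] = 1.0 / h**2                     # point charge mid-domain
u = sor_poisson(f, h)
```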
Nonlinear Poisson equation for heterogeneous media.
Hu, Langhua; Wei, Guo-Wei
2012-08-22
The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intensive charges and possibly nonlinear, anisotropic, and heterogeneous media. The variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation to solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by electrostatic analysis of the Kirkwood model and a set of 20 proteins, and by solvation analysis of a set of 17 small molecules whose experimental measurements are also available for comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical predictions, experimental measurements, and those obtained from other theoretical methods in the literature. Good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Itô and Stratonovich integrals on compound renewal processes: the normal/Poisson case
NASA Astrophysics Data System (ADS)
Germano, Guido; Politi, Mauro; Scalas, Enrico; Schilling, René L.
2010-06-01
Continuous-time random walks, or compound renewal processes, are pure-jump stochastic processes with several applications in insurance, finance, economics and physics. Based on heuristic considerations, a definition is given for stochastic integrals driven by continuous-time random walks, which includes the Itô and Stratonovich cases. It is then shown how the definition can be used to compute these two stochastic integrals by means of Monte Carlo simulations. Our example is based on the normal compound Poisson process, which in the diffusive limit converges to the Wiener process.
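A minimal sketch of the Monte Carlo computation for the normal compound Poisson case: simulate the jump process and evaluate the stochastic integral of X against itself with left-point (Itô) and midpoint (Stratonovich) evaluation. The rate, horizon, and jump variance are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate a normal compound Poisson process X on [0, T] and evaluate the
# stochastic integral  int X dX  with the Ito (left-point) and Stratonovich
# (midpoint) conventions; for a pure-jump process the two differ by half
# the sum of squared jumps.
T, rate, n_paths = 1.0, 50.0, 20000
ito = np.empty(n_paths)
strat = np.empty(n_paths)
for k in range(n_paths):
    n_jumps = rng.poisson(rate * T)
    jumps = rng.normal(0.0, 1.0, n_jumps)          # normal jump sizes
    x = np.concatenate([[0.0], np.cumsum(jumps)])  # path values at jump times
    ito[k] = np.sum(x[:-1] * jumps)                # left-point evaluation
    strat[k] = np.sum(0.5 * (x[:-1] + x[1:]) * jumps)  # midpoint evaluation
print(ito.mean(), strat.mean())   # ~0 (martingale) and ~rate*T/2, respectively
```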
NASA Technical Reports Server (NTRS)
Mullenmeister, Paul
1988-01-01
The quasi-geostrophic omega-equation in flux form is developed as an example of a Poisson problem over a spherical shell. Solutions of this equation are obtained by applying a two-parameter Chebyshev solver in vector layout for CDC 200 series computers. The performance of this vectorized algorithm greatly exceeds the performance of its scalar analog. The algorithm generates solutions of the omega-equation which are compared with the omega fields calculated with the aid of the mass continuity equation.
Statistical image reconstruction from correlated data with applications to PET
Alessio, Adam; Sauer, Ken; Kinahan, Paul
2008-01-01
Most statistical reconstruction methods for emission tomography are designed for data modeled as conditionally independent Poisson variates. In reality, due to scanner detectors, electronics and data processing, correlations are introduced into the data resulting in dependent variates. In general, these correlations are ignored because they are difficult to measure and lead to computationally challenging statistical reconstruction algorithms. This work addresses the second concern, seeking to simplify the reconstruction of correlated data and provide a more precise image estimate than the conventional independent methods. In general, correlated variates have a large non-diagonal covariance matrix that is computationally challenging to use as a weighting term in a reconstruction algorithm. This work proposes two methods to simplify the use of a non-diagonal covariance matrix as the weighting term by (a) limiting the number of dimensions in which the correlations are modeled and (b) adopting flexible, yet computationally tractable, models for correlation structure. We apply and test these methods with simple simulated PET data and data processed with the Fourier rebinning algorithm which include the one-dimensional correlations in the axial direction and the two-dimensional correlations in the transaxial directions. The methods are incorporated into a penalized weighted least-squares 2D reconstruction and compared with a conventional maximum a posteriori approach. PMID:17921576
Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Wei
Understanding electromagnetic phenomena is key in many scientific investigations and engineering designs, such as solar cell design, the study of biological ion channels for diseases, and the creation of clean fusion energy, among other things. The objectives of the project are to develop high order numerical methods to simulate evanescent electromagnetic waves occurring in plasmon solar cells and biological ion channels, where local field enhancement within random media in the former and long range electrostatic interactions in the latter pose major challenges for accurate and efficient numerical computations. We have accomplished these objectives by developing high order numerical methods for solving Maxwell equations, such as high order finite element bases for discontinuous Galerkin methods, a well-conditioned Nedelec edge element method, divergence free finite element bases for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmon solar cells. On the other hand, to treat long range electrostatic interactions in ion channels, we have developed an image charge based method for a hybrid model combining atomistic electrostatics and continuum Poisson-Boltzmann electrostatics. Such a hybrid model will speed up molecular dynamics simulations of transport in biological ion channels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Yang; Xiao, Jianyuan; Zhang, Ruili
Hamiltonian time integrators for the Vlasov-Maxwell equations are developed by a Hamiltonian splitting technique. The Hamiltonian functional is split into five parts, which produces five exactly solvable subsystems. Each subsystem is a Hamiltonian system equipped with the Morrison-Marsden-Weinstein Poisson bracket. Compositions of the exact solutions provide Poisson structure preserving/Hamiltonian methods of arbitrarily high order for the Vlasov-Maxwell equations. They are therefore accurate and conservative over long times because of their Poisson-preserving nature.
Okawa, S; Endo, Y; Hoshi, Y; Yamada, Y
2012-01-01
A method to reduce noise for time-domain diffuse optical tomography (DOT) is proposed. Poisson noise which contaminates time-resolved photon counting data is reduced by use of maximum a posteriori estimation. The noise-free data are modeled as a Markov random process, and the measured time-resolved data are assumed to be Poisson distributed random variables. The posterior probability of the occurrence of the noise-free data is formulated. By maximizing the probability, the noise-free data are estimated, and the Poisson noise is reduced as a result. The performance of the Poisson noise reduction is demonstrated in experiments on image reconstruction for time-domain DOT. In simulations, the proposed method reduces the relative error between the noise-free and noisy data to about one thirtieth, and the reconstructed DOT image is smoothed by the proposed noise reduction. The variance of the reconstructed absorption coefficients decreased by 22% in a phantom experiment. The quality of DOT, which can be applied to breast cancer screening and similar tasks, is improved by the proposed noise reduction.
Lin, I-Chun; Xing, Dajun; Shapley, Robert
2014-01-01
One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes. PMID:22684587
An Artificial Neural Networks Method for Solving Partial Differential Equations
NASA Astrophysics Data System (ADS)
Alharbi, Abir
2010-09-01
While there already exist many analytical and numerical techniques for solving PDEs, this paper introduces an approach using artificial neural networks. The approach consists of a technique developed by combining a standard numerical method, finite differences, with the Hopfield neural network. The method is denoted Hopfield-finite-difference (HFD). The architecture of the nets, the energy function, the updating equations, and the algorithms are developed for the method. The HFD method has been used successfully to approximate the solution of classical PDEs, such as the wave, heat, Poisson and diffusion equations, and of a system of PDEs. The software Matlab is used to obtain the results in both tabular and graphical form. The results are similar in accuracy to those obtained by standard numerical methods. In terms of speed, the parallel nature of the Hopfield net method makes it easy to implement on fast parallel computers, while some numerical methods need extra effort for parallelization.
Numerical simulation using vorticity-vector potential formulation
NASA Technical Reports Server (NTRS)
Tokunaga, Hiroshi
1993-01-01
An accurate and efficient computational method is needed for three-dimensional incompressible viscous flows in engineering applications. When solving turbulent shear flows directly or with a subgrid-scale model, it is indispensable to resolve the small-scale fluid motions as well as the large-scale motions. From this point of view, the pseudo-spectral method has been the usual choice of computational method. However, finite difference and finite element methods are widely applied to flows of practical importance, since these methods are easily adapted to complex geometric configurations. Several problems arise, however, in applying the finite difference method to direct and large eddy simulations. Accuracy is one of the most important; this point was already addressed by the present author in direct simulations of the instability of plane Poiseuille flow and of the transition to turbulence. To obtain high efficiency, a multigrid Poisson solver is combined with the higher-order accurate finite difference method. The formulation is also one of the most important problems in applying the finite difference method to incompressible turbulent flows. The three-dimensional Navier-Stokes equations have usually been solved in the primitive-variables formulation, one of whose major difficulties is the rigorous satisfaction of the equation of continuity. In general, a staggered grid is used to satisfy the solenoidal condition for the velocity field at the wall boundary. In the vorticity-vector potential formulation, by contrast, the velocity field satisfies the equation of continuity automatically. From this point of view, the vorticity-vector potential method was extended to a generalized coordinate system. In the present article, we adopt the vorticity-vector potential formulation, a generalized coordinate system, and a 4th-order accurate difference method as the computational method. We present the computational method and apply it to flows in a square cavity at large Reynolds number in order to investigate its effectiveness.
Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation
ERIC Educational Resources Information Center
Prentice, J. S. C.
2012-01-01
An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
A fictitious domain approach for the Stokes problem based on the extended finite element method
NASA Astrophysics Data System (ADS)
Court, Sébastien; Fournié, Michel; Lozinski, Alexei
2014-01-01
In the present work, we propose to extend to the Stokes problem a fictitious domain approach inspired by the eXtended Finite Element Method and studied for the Poisson problem in [Renard]. The method allows computations in domains whose boundaries do not match the mesh. A mixed finite element method is used for the fluid flow. The interface between the fluid and the structure is localized by a level-set function. Dirichlet boundary conditions are taken into account using a Lagrange multiplier. A stabilization term is introduced to improve the approximation of the normal trace of the Cauchy stress tensor at the interface and to avoid the inf-sup condition between the spaces for the velocity and the Lagrange multiplier. A convergence analysis is given and several numerical tests are performed to illustrate the capabilities of the method.
NASA Technical Reports Server (NTRS)
Bratanow, T.; Aksu, H.; Spehert, T.
1975-01-01
A method based on the Navier-Stokes equations was developed for analyzing the unsteady incompressible viscous flow around oscillating airfoils at high Reynolds numbers. The Navier-Stokes equations were integrated in their classical Helmholtz vorticity transport equation form, and the instantaneous velocity field at each time step was determined by the solution of Poisson's equation. A refined finite element was utilized to allow for a conformable solution of the stream function and its first space derivatives at the element interfaces. A corresponding set of accurate boundary conditions was applied, thus obtaining a rigorous solution for the velocity field. The details of the computational procedure and examples of computed results describing the unsteady flow characteristics around the airfoil are presented.
Uncertainty based pressure reconstruction from velocity measurement with generalized least squares
NASA Astrophysics Data System (ADS)
Zhang, Jiacheng; Scalo, Carlo; Vlachos, Pavlos
2017-11-01
A method using generalized least squares reconstruction of the instantaneous pressure field from velocity measurements and velocity uncertainty is introduced and applied to both planar and volumetric flow data. Pressure gradients are computed on a staggered grid from the flow acceleration. The variance-covariance matrix of the pressure gradients is evaluated from the velocity uncertainty by approximating the pressure gradient error as a linear combination of velocity errors. An overdetermined system of linear equations which relates the pressure to the computed pressure gradients is formulated and then solved using generalized least squares with the variance-covariance matrix of the pressure gradients. By comparing the reconstructed pressure field against other methods, such as solving the pressure Poisson equation, omni-directional integration, and ordinary least squares reconstruction, the generalized least squares method is found to be more robust to noise in the velocity measurement. The improvement in the pressure result becomes more remarkable as the velocity measurement becomes less accurate and more heteroscedastic. The uncertainty of the reconstructed pressure field is also quantified and compared across the different methods.
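A toy 1D version of the generalized least squares step is sketched below: nodal pressures are recovered from noisy, heteroscedastic gradient samples by weighting with the inverse error covariance, with one pressure value pinned to fix the integration constant. The paper works with 2D/3D fields and a covariance propagated from velocity uncertainty; everything here is an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy 1D generalized least squares: recover nodal pressures from noisy,
# heteroscedastic gradient samples on a staggered grid.
n, h = 50, 0.02
xg = np.arange(n) * h
p_true = np.sin(2.0 * np.pi * xg)

G = (np.eye(n - 1, n, k=1) - np.eye(n - 1, n)) / h     # staggered gradient
sigma = 0.5 + 2.0 * rng.random(n - 1)                  # per-sample error level
g = G @ p_true + rng.normal(0.0, sigma)                # noisy gradient "data"

# Gradients fix p only up to a constant, so append a row pinning p[0].
A = np.vstack([G, np.eye(1, n)])
rhs = np.append(g, p_true[0])
W = np.diag(np.append(1.0 / sigma**2, 1e6))            # inverse covariance
p = np.linalg.solve(A.T @ W @ A, A.T @ W @ rhs)        # GLS normal equations
print(np.max(np.abs(p - p_true)))
```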
Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.
Zhang, Jiachao; Hirakawa, Keigo
2017-04-01
This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique.
Ion-Conserving Modified Poisson-Boltzmann Theory Considering a Steric Effect in an Electrolyte
NASA Astrophysics Data System (ADS)
Sugioka, Hideyuki
2016-12-01
The modified Poisson-Nernst-Planck (MPNP) and modified Poisson-Boltzmann (MPB) equations are well known as fundamental equations that consider a steric effect, which prevents unphysical ion concentrations. However, it is unclear whether they are equivalent or not. To clarify this problem, we propose an improved free energy formulation that considers a steric limit with an ion-conserving condition and successfully derive the ion-conserving modified Poisson-Boltzmann (IC-MPB) equations that are equivalent to the MPNP equations. Furthermore, we numerically examine the equivalence by comparing the IC-MPB solutions obtained by the Newton method with the steady MPNP solutions obtained by the finite-element finite-volume method. A surprising aspect of our findings is that the MPB solutions are much different from the MPNP (IC-MPB) solutions in a confined space. We consider that our findings will significantly contribute to understanding the surface science between solids and liquids.
Deterministic multidimensional nonuniform gap sampling.
Worley, Bradley; Powers, Robert
2015-12-01
Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
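A sketch of one common 1D Poisson-gap variant is given below, loosely following the average behavior the paper's deterministic scheme is built from; the sinusoidal weighting convention and the rate-rescaling retry loop are implementation choices, not the paper's deterministic algorithm.

```python
import numpy as np

def poisson_gap_1d(n_total, n_sample, seed=0):
    """Pick n_sample of n_total grid points with Poisson-distributed gaps,
    sinusoidally weighted so the largest gaps fall mid-grid (one common
    1D convention); the rate is rescaled until the count is hit exactly."""
    rng = np.random.default_rng(seed)
    lam = n_total / n_sample - 1.0              # initial mean gap size
    for _ in range(2000):
        picks, i = [], 0
        while i < n_total:
            picks.append(i)
            weight = np.sin(np.pi * (i + 0.5) / n_total)   # in (0, 1]
            i += 1 + rng.poisson(lam * weight)
        if len(picks) == n_sample:
            return np.array(picks)
        lam *= len(picks) / n_sample            # too many picks => bigger gaps
    raise RuntimeError("failed to reach the requested number of samples")

print(poisson_gap_1d(256, 64))
```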
Stable computations with flat radial basis functions using vector-valued rational approximations
NASA Astrophysics Data System (ADS)
Wright, Grady B.; Fornberg, Bengt
2017-02-01
One commonly finds in applications of smooth radial basis functions (RBFs) that scaling the kernels so they are 'flat' leads to smaller discretization errors. However, the direct numerical approach for computing with flat RBFs (RBF-Direct) is severely ill-conditioned. We present an algorithm for bypassing this ill-conditioning that is based on a new method for rational approximation (RA) of vector-valued analytic functions with the property that all components of the vector share the same singularities. This new algorithm (RBF-RA) is more accurate, robust, and easier to implement than the Contour-Padé method, which is similarly based on vector-valued rational approximation. In contrast to the stable RBF-QR and RBF-GA algorithms, which are based on finding a better conditioned base in the same RBF-space, the new algorithm can be used with any type of smooth radial kernel, and it is also applicable to a wider range of tasks (including calculating Hermite type implicit RBF-FD stencils). We present a series of numerical experiments demonstrating the effectiveness of this new method for computing RBF interpolants in the flat regime. We also demonstrate the flexibility of the method by using it to compute implicit RBF-FD formulas in the flat regime and then using these for solving Poisson's equation in a 3-D spherical shell.
Two-Dimensional Grids About Airfoils and Other Shapes
NASA Technical Reports Server (NTRS)
Sorenson, R.
1982-01-01
The GRAPE computer program generates two-dimensional finite-difference grids about airfoils and other shapes by use of the Poisson differential equation. GRAPE can be used with any boundary shape, even one specified by tabulated points and including a limited number of sharp corners. Numerically stable and computationally fast, GRAPE provides the aerodynamic analyst with an efficient and consistent means of grid generation.
NASA Astrophysics Data System (ADS)
Li, Mingchao; Han, Shuai; Zhou, Sibao; Zhang, Ye
2018-06-01
Based on a 3D model of a discrete fracture network (DFN) in a rock mass, an improved projective method for computing the 3D mechanical connectivity rate was proposed. The Monte Carlo simulation method, a 2D Poisson process, and a 3D geological modeling technique were integrated into a polyhedral DFN modeling approach, and the simulation results were verified by numerical tests and graphical inspection. Next, the traditional projective approach for calculating the rock mass connectivity rate was improved using the 3D DFN models by (1) using the polyhedral model to replace the Baecher disk model; (2) taking the real cross section of the rock mass, rather than a part of the cross section, as the test plane; and (3) dynamically searching the joint connectivity rates using different dip directions and dip angles at different elevations to calculate the maximum, minimum and average values of the joint connectivity at each elevation. In a case study, the improved method and the traditional method were used to compute the mechanical connectivity rate of the slope of a dam abutment. The results of the two methods were further used to compute the cohesive force of the rock masses. Finally, a comparison showed that the cohesive force derived from the traditional method had a higher error, whereas the cohesive force derived from the improved method was consistent with the suggested values. According to the comparison, the effectiveness and validity of the improved method were verified indirectly.
Normal and compound poisson approximations for pattern occurrences in NGS reads.
Zhai, Zhiyuan; Reinert, Gesine; Song, Kai; Waterman, Michael S; Luan, Yihui; Sun, Fengzhu
2012-06-01
Next generation sequencing (NGS) technologies are now widely used in many biological studies. In NGS, sequence reads are randomly sampled from the genome sequence of interest. Most computational approaches for NGS data first map the reads to the genome and then analyze the data based on the mapped reads. Since many organisms have unknown genome sequences and many reads cannot be uniquely mapped to the genomes even if the genome sequences are known, alternative analytical methods are needed for the study of NGS data. Here we suggest using word patterns to analyze NGS data. Word pattern counting (the study of the probabilistic distribution of the number of occurrences of word patterns in one or multiple long sequences) has played an important role in molecular sequence analysis. However, no studies are available on the distribution of the number of occurrences of word patterns in NGS reads. In this article, we build probabilistic models for the background sequence and the sampling process of the sequence reads from the genome. Based on the models, we provide normal and compound Poisson approximations for the number of occurrences of word patterns from the sequence reads, with bounds on the approximation error. The main challenge is to consider the randomness in generating the long background sequence, as well as in the sampling of the reads using NGS. We show the accuracy of these approximations under a variety of conditions for different patterns with various characteristics. Under realistic assumptions, the compound Poisson approximation seems to outperform the normal approximation in most situations. These approximate distributions can be used to evaluate the statistical significance of the occurrence of patterns from NGS data. The theory and the computational algorithm for calculating the approximate distributions are then used to analyze ChIP-Seq data using transcription factor GABP. Software is available online (www-rcf.usc.edu/∼fsun/Programs/NGS_motif_power/NGS_motif_power.html). In addition, Supplementary Material can be found online (www.liebertonline.com/cmb).
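As a toy version of the setting studied here (assumptions: an i.i.d. uniform background sequence, uniformly sampled fixed-length reads, and an arbitrary pattern "ACGT"; none of this reproduces the paper's models), one can simulate reads, count pattern occurrences per read, and compare the empirical mean and variance as a first hint of whether a plain Poisson or an overdispersed compound Poisson approximation is more appropriate:

```python
import numpy as np

rng = np.random.default_rng(42)
genome = "".join(rng.choice(list("ACGT"), 100_000))   # i.i.d. background sequence
read_len, n_reads, pattern = 100, 5_000, "ACGT"

counts = []
for _ in range(n_reads):
    start = rng.integers(0, len(genome) - read_len)   # uniform read sampling
    read = genome[start:start + read_len]
    # overlapping occurrences of the pattern within one read
    counts.append(sum(read.startswith(pattern, i)
                      for i in range(read_len - len(pattern) + 1)))

counts = np.array(counts)
print(f"mean = {counts.mean():.3f}, var = {counts.var():.3f}")
# var noticeably above the mean suggests overdispersion relative to Poisson
```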
Automated symbolic calculations in nonequilibrium thermodynamics
NASA Astrophysics Data System (ADS)
Kröger, Martin; Hütter, Markus
2010-12-01
We cast the Jacobi identity for continuous fields into a local form which eliminates the need to perform any partial integration, at the expense of performing variational derivatives. This allows us to test the Jacobi identity definitively and efficiently and to provide equations between different components defining a potential Poisson bracket. We provide a simple Mathematica notebook which allows one to perform this task conveniently, and which offers some additional functionalities of use within the framework of nonequilibrium thermodynamics: reversible equations of change for fields, and the conservation of entropy during the reversible dynamics.
Program summary
Program title: Poissonbracket.nb
Catalogue identifier: AEGW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 227 952
No. of bytes in distributed program, including test data, etc.: 268 918
Distribution format: tar.gz
Programming language: Mathematica 7.0
Computer: Any computer running Mathematica 6.0 and later versions
Operating system: Linux, MacOS, Windows
RAM: 100 Mb
Classification: 4.2, 5, 23
Nature of problem: Testing the Jacobi identity can be a very complex task depending on the structure of the Poisson bracket. The Mathematica notebook provided here solves this problem using a novel symbolic approach based on inherent properties of the variational derivative, highly suitable for the present tasks. As a by-product, calculations performed with the Poisson bracket assume a compact form.
Solution method: The problem is first cast into a form which eliminates the need to perform partial integration for arbitrary functionals, at the expense of performing variational derivatives. The corresponding equations are conveniently obtained using the symbolic programming environment Mathematica.
Running time: For the test cases and most typical cases in the literature, the running time is of the order of seconds or minutes, respectively.
NASA Astrophysics Data System (ADS)
Friedrich, J.
1999-08-01
As lecturers, our main concern and goal is to develop more attractive and efficient ways of communicating up-to-date scientific knowledge to our students and to facilitate an in-depth understanding of physical phenomena. Computer-based instruction is very promising for helping both teachers and learners in this difficult task, which involves complex cognitive psychological processes. This complexity is reflected in high demands on the design and implementation methods used to create computer-assisted learning (CAL) programs. Due to their concepts, flexibility, maintainability and extended library resources, object-oriented modeling techniques are very suitable for producing this type of pedagogical tool. Computational fluid dynamics (CFD) not only enjoys a growing importance in today's research, but is also very powerful for teaching and learning fluid dynamics. For this purpose, an educational PC program for university level called 'CFDLab 1.1' for Windows™ was developed with an interactive graphical user interface (GUI) for multitasking and point-and-click operations. It uses the dual reciprocity boundary element method as a versatile numerical scheme; thanks to its simple pre- and postprocessing, it can handle a variety of relevant governing equations in two dimensions on personal computers, including the 2D Laplace, Poisson, diffusion, and transient convection-diffusion equations.
Fuzzy classifier based support vector regression framework for Poisson ratio determination
NASA Astrophysics Data System (ADS)
Asoodeh, Mojtaba; Bagheripour, Parisa
2013-09-01
Poisson ratio is considered one of the most important rock mechanical properties of hydrocarbon reservoirs. Determination of this parameter through laboratory measurement is time-, cost-, and labor-intensive. Furthermore, laboratory measurements do not provide continuous data along the reservoir intervals. Hence, a fast, accurate, and inexpensive way of determining Poisson ratio which produces continuous data over the whole reservoir interval is desirable. For this purpose, the support vector regression (SVR) method, based on statistical learning theory (SLT), was employed as a supervised learning algorithm to estimate Poisson ratio from conventional well log data. SVR is capable of accurately extracting the implicit knowledge contained in conventional well logs and converting the gained knowledge into Poisson ratio data. The structural risk minimization (SRM) principle, which is embedded in the SVR structure in addition to the empirical risk minimization (ERM) principle, provides a robust model for finding a quantitative formulation between conventional well log data and Poisson ratio. Although satisfying results were obtained from an individual SVR model, it had flaws of overestimation at low Poisson ratios and underestimation at high Poisson ratios. These errors were eliminated through implementation of a fuzzy classifier based SVR (FCBSVR), which significantly improved the accuracy of the final prediction. This strategy was successfully applied to data from carbonate reservoir rocks of an Iranian oil field. Results indicate that the SVR-predicted Poisson ratio values are in good agreement with measured values.
NASA Technical Reports Server (NTRS)
Kraft, Ralph P.; Burrows, David N.; Nousek, John A.
1991-01-01
Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference.
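To give a flavor of the Bayesian construction (only the background-free special case with a flat prior is sketched here; the tabulated limits in the paper also handle background counts and use the paper's own interval convention), the posterior for the Poisson mean s given n observed counts is Gamma(n + 1, 1), so credible limits are gamma quantiles:

```python
from scipy import stats

n = 3                                  # observed counts, no background (hypothetical)
# With a flat prior, the posterior for the Poisson mean s is Gamma(n + 1, scale=1).
lo = stats.gamma.ppf(0.05, n + 1)      # 90% equal-tailed credible interval
hi = stats.gamma.ppf(0.95, n + 1)
print(f"90% credible interval for s: [{lo:.2f}, {hi:.2f}]")
```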
NASA Technical Reports Server (NTRS)
Chang, S. C.
1986-01-01
An algorithm for solving a large class of two- and three-dimensional nonseparable elliptic partial differential equations (PDEs) is developed and tested. It uses a modified D'Yakonov-Gunn iterative procedure in which the relaxation factor is grid-point dependent. It is easy to implement and applicable to a variety of boundary conditions. It is also computationally efficient, as indicated by the results of numerical comparisons with other established methods. Furthermore, the current algorithm has the advantage of possessing two important properties which traditional iterative methods lack: (1) the convergence rate is relatively insensitive to grid-cell size and aspect ratio, and (2) the convergence rate can be easily estimated by using the coefficient of the PDE being solved.
NASA Astrophysics Data System (ADS)
Tóth, B.; Lillo, F.; Farmer, J. D.
2010-11-01
We introduce an algorithm for the segmentation of a class of regime switching processes. The segmentation algorithm is a nonparametric statistical method able to identify the regimes (patches) of a time series. The process is composed of consecutive patches of variable length; in each patch the process is described by a stationary compound Poisson process, i.e. a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch, and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)]. We show that the new algorithm outperforms the original one for regime switching models of compound Poisson processes. As an application, we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange, and we observe that our method finds almost three times more patches than the original one.
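For concreteness, here is a toy generator of the kind of regime-switching compound Poisson series the algorithm is designed to segment (all parameter ranges are hypothetical; this only produces test data, it is not the segmentation method itself):

```python
import numpy as np

rng = np.random.default_rng(7)
series = []
for _ in range(5):                                  # five consecutive patches
    length = rng.integers(200, 800)                 # variable patch length
    rate = rng.uniform(0.5, 5.0)                    # Poisson rate within this patch
    scale = rng.uniform(0.5, 2.0)                   # amplitude of the per-count signal
    counts = rng.poisson(rate, length)
    # compound Poisson: each time bin sums `k` fluctuating signals
    series.extend(rng.normal(0.0, scale, k).sum() for k in counts)

series = np.array(series)    # stationary within patches, non-stationary overall
```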
Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.
2015-01-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
Minois, Nathan; Savy, Stéphanie; Lauwers-Cances, Valérie; Andrieu, Sandrine; Savy, Nicolas
2017-03-01
Recruiting patients is a crucial step of a clinical trial, and estimation of the trial duration is a question of paramount interest. Most techniques are based on deterministic models and various ad hoc methods that neglect the variability in the recruitment process. To overcome this difficulty, the so-called Poisson-gamma model has been introduced, involving, for each centre, a recruitment process modelled by a Poisson process whose rate is assumed constant in time and gamma-distributed. The relevance of this model has been widely investigated. In practice, however, rates are rarely constant in time and there are breaks in recruitment (for instance, week-ends or holidays). Such information can be collected and included in a model with piecewise-constant rate functions, yielding an inhomogeneous Cox model, for which the estimation of the trial duration is much more difficult. Three strategies for computing the expected trial duration are proposed: considering all the breaks, considering only large breaks, and ignoring breaks altogether. The bias of these estimation procedures is assessed by means of simulation studies covering three break-simulation scenarios. All strategies yield estimates with a very small bias; moreover, the strategy with the best predictive performance and the smallest bias is the one that does not take breaks into account. This result is important because, in practice, collecting break data is difficult to manage.
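A minimal Monte Carlo sketch of the constant-rate Poisson-gamma model (the centre count, enrolment target, and gamma parameters are hypothetical; breaks are ignored, matching the strategy the study found performed best):

```python
import numpy as np

rng = np.random.default_rng(3)
n_centres, target, alpha, beta = 20, 300, 2.0, 1.5   # hypothetical trial set-up

def trial_duration():
    rates = rng.gamma(alpha, 1.0 / beta, n_centres)  # per-centre recruitment rates
    total = rates.sum()                              # pooled rate of the superposed process
    # superposition of Poisson processes: the target-th arrival is Gamma(target, 1/total)
    return rng.gamma(target, 1.0 / total)

durations = np.array([trial_duration() for _ in range(10_000)])
print(f"expected trial duration ≈ {durations.mean():.1f} time units")
```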
Fan, Donglei; Li, Minggang; Qiu, Jian; Xing, Haiping; Jiang, Zhiwei; Tang, Tao
2018-05-31
Auxetic materials are a class of materials possessing a negative Poisson's ratio. Here we establish a novel method for preparing auxetic foam from closed-cell polymer foam based on a steam penetration and condensation (SPC) process. Using polyethylene (PE) closed-cell foam as an example, the resultant foams treated by the SPC process present a negative Poisson's ratio during stretching and compression testing. The effect of steam-treatment temperature and time on the conversion efficiency of negative Poisson's ratio foam is investigated, and the mechanism by which the SPC method forms the re-entrant structure is discussed. The results indicate that the presence of enough steam within the cells is a critical factor for the negative Poisson's ratio conversion in the SPC process. The pressure difference caused by steam condensation is the driving force for the conversion from conventional closed-cell foam to negative Poisson's ratio foam. Furthermore, the applicability of the SPC process for fabricating auxetic foam is studied by replacing the PE foam with polyvinyl chloride (PVC) foam with a closed-cell structure, or by replacing the water steam with ethanol steam. The results verify the universality of the SPC process for fabricating auxetic foams from conventional foams with a closed-cell structure. In addition, we explore the potential application of the obtained auxetic foams in the fabrication of shape memory polymer materials.
Poisson traces, D-modules, and symplectic resolutions
NASA Astrophysics Data System (ADS)
Etingof, Pavel; Schedler, Travis
2018-03-01
We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.
A new multivariate zero-adjusted Poisson model with applications to biomedicine.
Liu, Yin; Tian, Guo-Liang; Tang, Man-Lai; Yuen, Kam Chuen
2018-05-25
Recently, although advances have been made in modeling multivariate count data, existing models still have several limitations: (i) the multivariate Poisson log-normal model (Aitchison and Ho) cannot be used to fit multivariate count data with excess zero-vectors; (ii) the multivariate zero-inflated Poisson (ZIP) distribution (Li et al., 1999) cannot be used to model zero-truncated/deflated count data and is difficult to apply to high-dimensional cases; (iii) the Type I multivariate zero-adjusted Poisson (ZAP) distribution (Tian et al., 2017) can only model multivariate count data with a special correlation structure, in which the correlations between components are all positive or all negative. In this paper, we first introduce a new multivariate ZAP distribution, based on a multivariate Poisson distribution, which allows a more flexible dependency structure between components: some of the correlation coefficients can be positive while others are negative. We then develop its important distributional properties and provide efficient statistical inference methods for the multivariate ZAP model with or without covariates. Two real data examples in biomedicine are used to illustrate the proposed methods. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Cao, Lu; Verbeek, Fons J.
2012-03-01
In computer graphics and visualization, reconstruction of a 3D surface from a point cloud is an important research area. As the surface contains information that can be measured, i.e. expressed in features, surface reconstruction is potentially important for applications in bio-imaging. Opportunities in this application area are the motivation for this study. In the past decade, a number of algorithms for surface reconstruction have been proposed. Generally speaking, these methods fall into two categories: explicit representation and implicit approximation. Most of the aforementioned methods are firmly based in theory; however, so far no analytical evaluation comparing these methods has been presented, and evaluation has typically relied on visual inspection. Through evaluation we search for a method that precisely preserves the surface characteristics and is robust in the presence of noise. The outcome will be used to improve reliability in the surface reconstruction of biological models. We therefore take an analytical approach, selecting features as surface descriptors and measuring these features under varying conditions. We selected surface distance, surface area and surface curvature as the three major features for comparing the quality of the surfaces created by the different algorithms. Our starting point has been ground-truth values obtained from analytical shapes such as the sphere and the ellipsoid. In this paper we evaluate four classical surface reconstruction methods from the two categories mentioned above, i.e. the Power Crust, the Robust Cocone, the Fourier-based method and the Poisson reconstruction method. The results obtained from our experiments indicate that the Poisson reconstruction method performs best in the presence of noise.
Kawaguchi, Minato; Mino, Hiroyuki; Durand, Dominique M
2006-01-01
This article presents an analysis of the information transmission of periodic sub-threshold spike trains in a hippocampal CA1 neuron model in the presence of homogeneous Poisson shot noise. In the computer simulation, periodic sub-threshold spike trains were presented repeatedly to the midpoint of the main apical branch, while the homogeneous Poisson shot noise was applied to the midpoint of a basal dendrite in the CA1 neuron model, consisting of a soma with one sodium, one calcium, and five potassium channels. From the spike firing times recorded at the soma, inter-spike intervals were generated, and the probability, p(T), of the inter-spike interval histogram corresponding to the period, T, of the periodic input spike trains was estimated to obtain an index of information transmission. It is shown that at a specific amplitude of the homogeneous Poisson shot noise, p(T) is maximized and the ability to encode the periodic sub-threshold spike trains becomes greater. This implies that setting the amplitude of the homogeneous Poisson shot noise to the specific values which maximize the information transmission might contribute to efficient encoding of the periodic sub-threshold spike trains by utilizing stochastic resonance.
The accurate solution of Poisson's equation by expansion in Chebyshev polynomials
NASA Technical Reports Server (NTRS)
Haidvogel, D. B.; Zang, T.
1979-01-01
A Chebyshev expansion technique is applied to Poisson's equation on a square with homogeneous Dirichlet boundary conditions. The spectral equations are solved in two ways - by alternating direction and by matrix diagonalization methods. Solutions are sought to both oscillatory and mildly singular problems. The accuracy and efficiency of the Chebyshev approach compare favorably with those of standard second- and fourth-order finite-difference methods.
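As a one-dimensional sketch of the Chebyshev approach (following the standard Chebyshev collocation construction of the differentiation matrix, e.g. Trefethen's cheb routine; the right-hand side below is a hypothetical example, and the paper itself treats the 2-D square with alternating-direction and matrix-diagonalization solvers), one can solve u'' = f with homogeneous Dirichlet conditions by stripping the boundary rows and columns:

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and points x on [-1, 1]."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                        # diagonal via row sums
    return D, x

n = 32
D, x = cheb(n)
D2 = D @ D                                  # spectral second-derivative matrix
f = np.exp(x) * np.sin(np.pi * x)           # hypothetical right-hand side
u = np.zeros(n + 1)                         # boundary values u(±1) = 0
u[1:n] = np.linalg.solve(D2[1:n, 1:n], f[1:n])
```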
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schrottke, L., E-mail: lutz@pdi-berlin.de; Lü, X.; Grahn, H. T.
We present a self-consistent model for carrier transport in periodic semiconductor heterostructures completely formulated in the Fourier domain. In addition to the Hamiltonian for the layer system, all expressions for the scattering rates, the applied electric field, and the carrier distribution are treated in reciprocal space. In particular, for slowly converging cases of the self-consistent solution of the Schrödinger and Poisson equations, numerous transformations between real and reciprocal space during the iterations can be avoided by using the presented method, which results in a significant reduction of computation time. Therefore, it is a promising tool for the simulation and efficient design of complex heterostructures such as terahertz quantum-cascade lasers.
NASA Astrophysics Data System (ADS)
Harmon, Michael; Gamba, Irene M.; Ren, Kui
2016-12-01
This work concerns the numerical solution of a coupled system of self-consistent reaction-drift-diffusion-Poisson equations that describes the macroscopic dynamics of charge transport in photoelectrochemical (PEC) solar cells with reactive semiconductor and electrolyte interfaces. We present three numerical algorithms, mainly based on a mixed finite element and a local discontinuous Galerkin method for spatial discretization, with carefully chosen numerical fluxes, and implicit-explicit time stepping techniques, for solving the time-dependent nonlinear systems of partial differential equations. We perform computational simulations under various model parameters to demonstrate the performance of the proposed numerical algorithms as well as the impact of these parameters on the solution to the model.
On the Determination of Poisson Statistics for Haystack Radar Observations of Orbital Debris
NASA Technical Reports Server (NTRS)
Stokely, Christopher L.; Benbrook, James R.; Horstman, Matt
2007-01-01
A convenient and powerful method is used to determine if radar detections of orbital debris are observed according to Poisson statistics. This is done by analyzing the time interval between detection events. For Poisson statistics, the probability distribution of the time interval between events is shown to be an exponential distribution. This distribution is a special case of the Erlang distribution that is used in estimating traffic loads on telecommunication networks. Poisson statistics form the basis of many orbital debris models but the statistical basis of these models has not been clearly demonstrated empirically until now. Interestingly, during the fiscal year 2003 observations with the Haystack radar in a fixed staring mode, there are no statistically significant deviations observed from that expected with Poisson statistics, either independent or dependent of altitude or inclination. One would potentially expect some significant clustering of events in time as a result of satellite breakups, but the presence of Poisson statistics indicates that such debris disperse rapidly with respect to Haystack's very narrow radar beam. An exception to Poisson statistics is observed in the months following the intentional breakup of the Fengyun satellite in January 2007.
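The exponential inter-arrival property described here is straightforward to test empirically; the sketch below (assuming SciPy, with a hypothetical detection rate) simulates Poisson detection times and applies a Kolmogorov-Smirnov test to the intervals:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
rate = 0.4                                               # hypothetical detections per minute
times = np.cumsum(rng.exponential(1.0 / rate, 2_000))    # Poisson-process event times
intervals = np.diff(times)

# KS test against an exponential with the empirical mean as scale.
# Strictly, estimating the scale from the same data calls for a
# Lilliefors-type correction; this is only a first-pass check.
D, p = stats.kstest(intervals, "expon", args=(0, intervals.mean()))
print(f"KS statistic = {D:.4f}, p-value = {p:.3f}")      # large p: consistent with Poisson
```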
Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George
2009-08-01
We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
Accurate, robust and reliable calculations of Poisson-Boltzmann binding energies
Nguyen, Duc D.; Wang, Bao
2017-01-01
The Poisson-Boltzmann (PB) model is one of the most popular implicit solvent models in biophysical modeling and computation. The ability to provide accurate and reliable PB estimates of the electrostatic solvation free energy, ΔGel, and the electrostatic binding free energy, ΔΔGel, is important to computational biophysics and biochemistry. In this work, we investigate the grid dependence of our PB solver (MIBPB) with solvent-excluded surfaces (SESs) for estimating both electrostatic solvation free energies and electrostatic binding free energies. It is found that the relative absolute error of ΔGel obtained at a grid spacing of 1.0 Å compared to ΔGel at 0.2 Å, averaged over 153 molecules, is less than 0.2%. Our results indicate that the use of a grid spacing of 0.6 Å ensures accuracy and reliability in ΔΔGel calculations. In fact, a grid spacing of 1.1 Å appears to deliver adequate accuracy for high-throughput screening. PMID:28211071
Modifying Poisson equation for near-solute dielectric polarization and solvation free energy
NASA Astrophysics Data System (ADS)
Yang, Pei-Kun
2016-06-01
The dielectric polarization P is important for calculating the stability of protein conformations and the binding affinity of protein-protein/ligand interactions, and for exploring the nonthermal effect of an external electric field on biomolecules. P was decomposed into the product of the electric dipole moment per molecule, p; the bulk solvent density, Nbulk; and the relative solvent molecular density, g. For a molecular solute, 4πr²p(r) oscillates with the distance r to the solute, and g(r) has a large peak in the near-solute region, as observed in molecular dynamics (MD) simulations. Herein, the Poisson equation was modified for computing p based on the modified Gauss's law of Maxwell's equations, and the potential of mean force was used for computing g. For one or two charged atoms in a water cluster, the solvation free energies of the solutes obtained by these equations were similar to those obtained from MD simulations.
Optical Signal Processing: Poisson Image Restoration and Shearing Interferometry
NASA Technical Reports Server (NTRS)
Hong, Yie-Ming
1973-01-01
Optical signal processing can be performed in either digital or analog systems. Digital computers and coherent optical systems are discussed as they are used in optical signal processing. Topics include: image restoration; phase-object visualization; image contrast reversal; optical computation; image multiplexing; and fabrication of spatial filters. Digital optical data processing deals with restoration of images degraded by signal-dependent noise. When the input data of an image restoration system are the numbers of photoelectrons received from various areas of a photosensitive surface, the data are Poisson distributed with mean values proportional to the illuminance of the incoherently radiating object and background light. Optical signal processing using coherent optical systems is also discussed. Following a brief review of the pertinent details of Ronchi's diffraction grating interferometer, moire effect, carrier-frequency photography, and achromatic holography, two new shearing interferometers based on them are presented. Both interferometers can produce variable shear.
High-order continuum kinetic method for modeling plasma dynamics in phase space
Vogman, G. V.; Colella, P.; Shumlak, U.
2014-12-15
Continuum methods offer a high-fidelity means of simulating plasma kinetics. While computationally intensive, these methods are advantageous because they can be cast in conservation-law form, are not susceptible to noise, and can be implemented using high-order numerical methods. Advances in continuum method capabilities for modeling kinetic phenomena in plasmas require the development of validation tools in higher-dimensional phase space and an ability to handle non-Cartesian geometries. To that end, a new benchmark for validating Vlasov-Poisson simulations in 3D (x, v_x, v_y) is presented. The benchmark is based on the Dory-Guest-Harris instability and is successfully used to validate a continuum finite volume algorithm. To address challenges associated with non-Cartesian geometries, unique features of cylindrical phase space coordinates are described. Preliminary results of continuum kinetic simulations in 4D (r, z, v_r, v_z) phase space are presented.
The stress analysis method for three-dimensional composite materials
NASA Astrophysics Data System (ADS)
Nagai, Kanehiro; Yokoyama, Atsushi; Maekawa, Zen'ichiro; Hamada, Hiroyuki
1994-05-01
This study proposes a stress analysis method for three-dimensionally fiber-reinforced composite materials. In this method, the rule of mixtures for composites is successfully applied to 3-D space in which material properties vary three-dimensionally. The fundamental formulas for Young's modulus, shear modulus, and Poisson's ratio are derived. We also discuss a strength estimation and an optimum material design technique for 3-D composite materials. The analysis is executed for a triaxial orthogonally woven fabric, and the results are compared to experimental data in order to verify the accuracy of the method. The present methodology can be understood with basic material mechanics and elementary mathematics, so a computer program implementing the theory can be written without difficulty. Furthermore, the method can be applied to various types of 3-D composites because of its general-purpose characteristics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, School of Mathematical Science, MOELSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Shu, Ruiwen, E-mail: rshu2@math.wisc.edu
In this paper we consider a kinetic-fluid model for disperse two-phase flows with uncertainty. We propose a stochastic asymptotic-preserving (s-AP) scheme in the generalized polynomial chaos stochastic Galerkin (gPC-sG) framework, which allows the efficient computation of the problem in both kinetic and hydrodynamic regimes. The s-AP property is proved by deriving the equilibrium of the gPC version of the Fokker–Planck operator. The coefficient matrices that arise in a Helmholtz equation and a Poisson equation, essential ingredients of the algorithms, are proved to be positive definite under reasonable and mild assumptions. The computation of the gPC version of a translation operator that arises in the inversion of the Fokker–Planck operator is accelerated by a spectrally accurate splitting method. Numerical examples illustrate the s-AP property and the efficiency of the gPC-sG method in various asymptotic regimes.
The scaling of oblique plasma double layers
NASA Technical Reports Server (NTRS)
Borovsky, J. E.
1983-01-01
Strong oblique plasma double layers are investigated using three methods, i.e., electrostatic particle-in-cell simulations, numerical solutions to the Poisson-Vlasov equations, and analytical approximations to the Poisson-Vlasov equations. The solutions to the Poisson-Vlasov equations and numerical simulations show that strong oblique double layers scale in terms of Debye lengths. For very large potential jumps, theory and numerical solutions indicate that all effects of the magnetic field vanish and the oblique double layers follow the same scaling relation as the field-aligned double layers.
Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.
Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah
2012-01-01
Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution of the number of discrete events as part of a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation (MLE) method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.
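For intuition about the dispersion flexibility described above: the COM-Poisson pmf is proportional to λ^y / (y!)^ν, with ν = 1 recovering the Poisson, ν < 1 giving overdispersion and ν > 1 underdispersion. A small sketch (truncated-series evaluation; the truncation limit is a hypothetical choice) computes the mean and variance:

```python
import numpy as np
from scipy.special import gammaln

def com_poisson_stats(lam, nu, ymax=500):
    """Mean and variance of COM-Poisson(lam, nu) via a truncated series."""
    y = np.arange(ymax + 1)
    logw = y * np.log(lam) - nu * gammaln(y + 1)   # log of the unnormalized pmf
    w = np.exp(logw - logw.max())                  # stabilized weights
    p = w / w.sum()
    mean = (y * p).sum()
    var = ((y - mean) ** 2 * p).sum()
    return mean, var

for nu in [0.5, 1.0, 2.0]:
    m, v = com_poisson_stats(3.0, nu)
    print(f"nu = {nu}: mean = {m:.2f}, var = {v:.2f}")  # var/mean crosses 1 as nu crosses 1
```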
Lee, Soohyun; Seo, Chae Hwa; Alver, Burak Han; Lee, Sanghyuk; Park, Peter J
2015-09-03
RNA-seq has been widely used for genome-wide expression profiling. RNA-seq data typically consists of tens of millions of short sequenced reads from different transcripts. However, due to sequence similarity among genes and among isoforms, the source of a given read is often ambiguous. Existing approaches for estimating expression levels from RNA-seq reads tend to compromise between accuracy and computational cost. We introduce a new approach for quantifying transcript abundance from RNA-seq data. EMSAR (Estimation by Mappability-based Segmentation And Reclustering) groups reads according to the set of transcripts to which they are mapped and finds maximum likelihood estimates using a joint Poisson model for each optimal set of segments of transcripts. The method uses nearly all mapped reads, including those mapped to multiple genes. With an efficient transcriptome indexing based on modified suffix arrays, EMSAR minimizes the use of CPU time and memory while achieving accuracy comparable to the best existing methods. EMSAR is a method for quantifying transcripts from RNA-seq data with high accuracy and low computational cost. EMSAR is available at https://github.com/parklab/emsar.
Application of Poisson random effect models for highway network screening.
Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer
2014-02-01
In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data have become popular in traffic safety research. This study employs random effect Poisson log-normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered, and the Potential for Safety Improvement (PSI) was adopted as a measure of crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson log-normal model. A series of method examination tests were conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only temporal random effects, and both are superior to the conventional Poisson log-normal model (PLN) and the EB model in fitting the crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. Copyright © 2013 Elsevier Ltd. All rights reserved.
A dictionary learning approach for Poisson image deblurring.
Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong
2013-07-01
The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, sparse representations of images have recently been shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, a pixel-based total variation regularization term, and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that, in terms of visual quality, peak signal-to-noise ratio value and the method noise, the proposed algorithm outperforms state-of-the-art methods.
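The Poisson data-fidelity term mentioned here is, up to additive constants, the negative Poisson log-likelihood of the observed counts given the blurred estimate. A minimal sketch of how such a term is typically evaluated (hypothetical variable names: Hu is the blur operator applied to the current estimate u, f the observed image):

```python
import numpy as np

def poisson_data_fidelity(Hu, f, eps=1e-12):
    """Negative Poisson log-likelihood up to constants: sum(Hu - f * log(Hu))."""
    Hu = np.maximum(Hu, eps)          # guard against log(0)
    return float(np.sum(Hu - f * np.log(Hu)))
```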
Atomistic models for free energy evaluation of drug binding to membrane proteins.
Durdagi, S; Zhao, C; Cuervo, J E; Noskov, S Y
2011-01-01
The binding of various molecules to integral membrane proteins with optimal affinity and specificity is central to the normal function of the cell. While membrane proteins represent about one third of the whole-cell proteome, they constitute the majority of common drug targets. The quest to develop computational models capable of accurate evaluation of binding affinities, decomposition of the binding into its principal components, and thus mapping of the molecular mechanisms of binding remains one of the main goals of modern computational biophysics and related drug development. The primary scope of this review is the recent extension of computational methods to the study of drug binding to membrane proteins. Several examples of such applications are provided, ranging from secondary transporters to voltage-gated channels. In this mini-review, we provide a short summary of the breadth of different methods for binding affinity evaluation. These methods include molecular docking with docking scoring functions; molecular dynamics (MD) simulations combined with post-processing analysis using Molecular Mechanics/Poisson-Boltzmann (Generalized Born) Surface Area (MM/PB(GB)SA); and direct evaluation of free energies from Free Energy Perturbation (FEP) with constraining schemes and Potential of Mean Force (PMF) computations. We compare the advantages and shortcomings of these popular techniques and discuss integrative strategies for drug development aimed at targeting membrane proteins.
NASA Astrophysics Data System (ADS)
Blommel, Thomas; Wagner, Alexander J.
2018-02-01
We examine a new kind of lattice gas that closely resembles modern lattice Boltzmann methods. This new kind of lattice gas, which we call a Monte Carlo lattice gas, has interesting properties that shed light on the origin of the multirelaxation-time collision operator, and it recovers the equilibrium distribution of an entropic lattice Boltzmann method. Furthermore, these lattice gas methods have Galilean-invariant fluctuations given by Poisson statistics, giving further insight into the properties that we should expect of fluctuating lattice Boltzmann methods.
Beyond single-stream with the Schrödinger method
NASA Astrophysics Data System (ADS)
Uhlemann, Cora; Kopp, Michael
2016-10-01
We investigate large-scale structure formation of collisionless dark matter in the phase space description based on the Vlasov-Poisson equation. We present the Schrödinger method, originally proposed by Widrow and Kaiser (1993) as a numerical technique based on the Schrödinger-Poisson equation, as an analytical tool which is superior to the common standard pressureless fluid model. Whereas the dust model fails and develops singularities at shell crossing, the Schrödinger method encompasses multi-streaming and even virialization.
Chaudhry, Jehanzeb Hameed; Comer, Jeffrey; Aksimentiev, Aleksei; Olson, Luke N.
2013-01-01
The conventional Poisson-Nernst-Planck equations do not account explicitly for the finite size of ions. This leads to solutions featuring unrealistically high ionic concentrations in regions subject to external potentials, in particular near highly charged surfaces. A modified form of the Poisson-Nernst-Planck equations accounts for steric effects and results in solutions with finite ion concentrations. Here, we evaluate numerical methods for solving the modified Poisson-Nernst-Planck equations by modeling electric field-driven transport of ions through a nanopore. We describe a novel, robust finite element solver that combines the application of Newton's method to the nonlinear Galerkin form of the equations, augmented with stabilization terms to appropriately handle the drift-diffusion processes. To make direct comparison with particle-based simulations possible, our method is specifically designed to produce solutions under periodic boundary conditions and to conserve the number of ions in the solution domain. We test our finite element solver on a set of challenging numerical experiments that include calculations of the ion distribution in a volume confined between two charged plates, calculations of the ionic current through a nanopore subject to an external electric field, and modeling the effect of a DNA molecule on the ion concentration and nanopore current. PMID:24363784
Neves-Petersen, Maria Teresa; Petersen, Steffen B
2003-01-01
The molecular understanding of the initial interaction between a protein and, e.g., its substrate, a surface or an inhibitor is essentially an understanding of the role of electrostatics in intermolecular interactions. When studying biomolecules it is becoming increasingly evident that electrostatic interactions play a role in folding, conformational stability, enzyme activity and binding energies, as well as in protein-protein interactions. In this chapter we present the key basic equations of electrostatics necessary to derive the equations used to model electrostatic interactions in biomolecules, and we address how to solve them. The chapter is divided into two major sections. In the first part we review the basic Maxwell equations of electrostatics, the laws of electrostatics, which combined result in the Poisson equation. This equation is the starting point of the Poisson-Boltzmann (PB) equation used to model electrostatic interactions in biomolecules. Concepts such as electric field lines, equipotential surfaces and electrostatic energy, and the question of when electrostatics can be applied to study interactions between charges, are addressed. In the second part we arrive at the electrostatic equations for dielectric media such as a protein. We address the theory of dielectrics and arrive at the Poisson equation for dielectric media and at the PB equation, the main equation used to model electrostatic interactions in biomolecules (e.g., proteins, DNA). It is shown how to compute forces and potentials in a dielectric medium. In order to solve the PB equation we present the continuum electrostatic models, namely the Tanford-Kirkwood and the modified Tanford-Kirkwood methods. Priority is given to finding the protonation state of proteins prior to solving the PB equation. We also present some methods that can be used to map and study the electrostatic potential distribution on the molecular surface of proteins. The combination of graphical visualisation of the electrostatic fields with knowledge about the location of key residues on the protein surface allows us to envision atomic models for enzyme function. Finally, we exemplify the use of some of these methods on the enzymes of the lipase family.
Christensen, A L; Lundbye-Christensen, S; Dethlefsen, C
2011-12-01
Several statistical methods for assessing seasonal variation are available. Brookhart and Rothman [3] proposed a second-order moment-based estimator based on the geometrical model derived by Edwards [1], and reported that this estimator is superior to Edwards' estimator in estimating the peak-to-trough ratio of seasonal variation with respect to bias and mean squared error. Alternatively, seasonal variation may be modelled using a Poisson regression model, which provides flexibility in modelling the pattern of seasonal variation and adjusting for covariates. Based on a Monte Carlo simulation study, three estimators, one based on the geometrical model and two based on log-linear Poisson regression models, were evaluated with regard to bias and standard deviation (SD). We evaluated the estimators on data simulated according to schemes varying in seasonal variation and the presence of a secular trend. All methods and analyses in this paper are available in the R package Peak2Trough [13]. Applying a Poisson regression model resulted in lower absolute bias and SD for data simulated according to the corresponding model assumptions. The Poisson regression models also had lower bias and SD than the geometrical model for data simulated to deviate from the corresponding model assumptions. This simulation study encourages the use of Poisson regression models for estimating the peak-to-trough ratio of seasonal variation, as opposed to the geometrical model. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
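One common log-linear Poisson formulation regresses counts on a single annual harmonic; the peak-to-trough ratio is then exp(2A), where A is the fitted harmonic amplitude on the log scale. A sketch under these assumptions (simulated monthly counts and a statsmodels GLM fit; this illustrates the idea, not the paper's exact estimators, which live in the cited R package):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
month = np.arange(120)                                    # ten years of monthly data
mu = np.exp(2.0 + 0.3 * np.cos(2 * np.pi * month / 12))   # true seasonal intensity
counts = rng.poisson(mu)

X = sm.add_constant(np.column_stack([np.cos(2 * np.pi * month / 12),
                                     np.sin(2 * np.pi * month / 12)]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
A = np.hypot(fit.params[1], fit.params[2])                # harmonic amplitude
print(f"estimated peak-to-trough ratio = {np.exp(2 * A):.2f}")  # truth: exp(0.6) ≈ 1.82
```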
Modelling infant mortality rate in Central Java, Indonesia use generalized poisson regression method
NASA Astrophysics Data System (ADS)
Prahutama, Alan; Sudarno
2018-05-01
The infant mortality rate is the number of deaths under one year of age occurring among the live births in a given geographical area during a given year, per 1,000 live births occurring among the population of the given geographical area during the same year. This problem needs to be addressed because it is an important element of a country's economic development: a high infant mortality rate disrupts the stability of a country, as it relates to the sustainability of the country's population. One class of regression models that can be used to analyze the relationship between a discrete dependent variable Y and independent variables X is Poisson regression; regression models used for discrete dependent variables also include, among others, negative binomial regression and generalized Poisson regression. In this research, generalized Poisson regression modelling gives a better AIC value than Poisson regression. The most significant variable is the number of health facilities (X1), while the variable with the most influence on the infant mortality rate is average breastfeeding (X9).
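A sketch of the kind of model comparison described here, using simulated overdispersed counts and the GeneralizedPoisson class from statsmodels (the data and covariate are hypothetical, not the Central Java data):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.discrete_model import Poisson, GeneralizedPoisson

rng = np.random.default_rng(9)
n = 500
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.8 * x)
# negative-binomial draws with mean mu give overdispersion relative to Poisson
y = rng.negative_binomial(n=2, p=2 / (2 + mu))

X = sm.add_constant(x)
aic_p = Poisson(y, X).fit(disp=0).aic
aic_gp = GeneralizedPoisson(y, X).fit(disp=0).aic
print(f"Poisson AIC = {aic_p:.1f}, generalized Poisson AIC = {aic_gp:.1f}")
# the lower AIC of the generalized Poisson reflects the extra dispersion parameter
```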
TDIGG - TWO-DIMENSIONAL, INTERACTIVE GRID GENERATION CODE
NASA Technical Reports Server (NTRS)
Vu, B. T.
1994-01-01
TDIGG is a fast and versatile program for generating two-dimensional computational grids for use with finite-difference flow-solvers. Both algebraic and elliptic grid generation systems are included. The method for grid generation by algebraic transformation is based on an interpolation algorithm and the elliptic grid generation is established by solving the partial differential equation (PDE). Non-uniform grid distributions are carried out using a hyperbolic tangent stretching function. For algebraic grid systems, interpolations in one direction (univariate) and two directions (bivariate) are considered. These interpolations are associated with linear or cubic Lagrangian/Hermite/Bezier polynomial functions. The algebraic grids can subsequently be smoothed using an elliptic solver. For elliptic grid systems, the PDE can be in the form of Laplace (zero forcing function) or Poisson. The forcing functions in the Poisson equation come from the boundary or the entire domain of the initial algebraic grids. A graphics interface procedure using the Silicon Graphics (GL) Library is included to allow users to visualize the grid variations at each iteration. This will allow users to interactively modify the grid to match their applications. TDIGG is written in FORTRAN 77 for Silicon Graphics IRIS series computers running IRIX. This package requires either MIT's X Window System, Version 11 Revision 4 or SGI (Motif) Window System. A sample executable is provided on the distribution medium. It requires 148K of RAM for execution. The standard distribution medium is a .25 inch streaming magnetic IRIX tape cartridge in UNIX tar format. This program was developed in 1992.
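The hyperbolic-tangent stretching mentioned above concentrates grid points near a boundary; a minimal sketch of a one-sided tanh distribution (the clustering parameter beta is a hypothetical choice, not a TDIGG default):

```python
import numpy as np

def tanh_stretch(n, beta=2.5):
    """n+1 points on [0, 1], clustered toward 0; larger beta -> stronger clustering."""
    s = np.linspace(0.0, 1.0, n + 1)                 # uniform computational coordinate
    return 1.0 + np.tanh(beta * (s - 1.0)) / np.tanh(beta)

print(np.round(tanh_stretch(10), 4))                 # spacing grows away from the clustered end
```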
Lord, Dominique; Guikema, Seth D; Geedipally, Srinivas Reddy
2008-05-01
This paper documents the application of the Conway-Maxwell-Poisson (COM-Poisson) generalized linear model (GLM) for modeling motor vehicle crashes. The COM-Poisson distribution, originally developed in 1962, has recently been re-introduced by statisticians for analyzing count data subject to over- and under-dispersion. This innovative distribution is an extension of the Poisson distribution. The objectives of this study were to evaluate the application of the COM-Poisson GLM for analyzing motor vehicle crashes and to compare the results with the traditional negative binomial (NB) model. The comparison analysis was carried out using the most common functional forms employed by transportation safety analysts, which link crashes to the entering flows at intersections or on segments. To accomplish the objectives of the study, several NB and COM-Poisson GLMs were developed and compared using two datasets. The first dataset contained crash data collected at signalized four-legged intersections in Toronto, Ont. The second dataset included data collected for rural four-lane divided and undivided highways in Texas. Several methods were used to assess the statistical fit and predictive performance of the models. The results of this study show that COM-Poisson GLMs perform as well as NB models in terms of GOF statistics and predictive performance. Given that the COM-Poisson distribution can also handle under-dispersed data (while the NB distribution cannot, or has difficulty converging), which have sometimes been observed in crash databases, the COM-Poisson GLM offers a better alternative to the NB model for modeling motor vehicle crashes, especially in light of the important limitations recently documented in the safety literature about the latter type of model.
NASA Astrophysics Data System (ADS)
AllahTavakoli, Yahya; Safari, Abdolreza; Vaníček, Petr
2016-12-01
This paper resurrects a version of Poisson's partial differential equation (PDE) associated with the gravitational field at the Earth's surface and illustrates how the PDE can be used to extract the mass density of the Earth's topography from land-based gravity data. Herein, we first propound a theorem which mathematically introduces this version of Poisson's PDE adapted to the Earth's surface, and we then use the PDE to develop a method for approximating the terrain mass density. We also carry out a real case study showing how the proposed approach can be applied to a set of land-based gravity data. In the case study, the method is summarized by an algorithm and applied to a set of gravity stations located along a part of the north coast of the Persian Gulf in the south of Iran. The results were numerically validated via rock samplings as well as a geological map, and the method was compared with two conventional methods of mass density reduction. The numerical experiments indicate that the Poisson PDE at the Earth's surface has the capability to extract the mass density from land-based gravity data and provides an alternative and somewhat more precise method of estimating the terrain mass density.
Measuring Poisson Ratios at Low Temperatures
NASA Technical Reports Server (NTRS)
Boozon, R. S.; Shepic, J. A.
1987-01-01
A simple extensometer ring measures the bulges of specimens in compression. This new method of measuring Poisson's ratio is used on brittle ceramic materials at cryogenic temperatures. The extensometer ring encircles a cylindrical specimen. Four strain gauges connected in a fully active Wheatstone bridge are self-temperature-compensating. The device is used at temperatures as low as that of liquid helium.
Haraldsson, Henrik; Kefayati, Sarah; Ahn, Sinyeob; Dyverfeldt, Petter; Lantz, Jonas; Karlsson, Matts; Laub, Gerhard; Ebbers, Tino; Saloner, David
2018-04-01
To measure the Reynolds stress tensor using 4D flow MRI, and to evaluate its contribution to computed pressure maps. A method to assess both velocity and Reynolds stress using 4D flow MRI is presented and evaluated. The Reynolds stress is compared by cross-sectional integrals of the Reynolds stress invariants. Pressure maps are computed using the pressure Poisson equation, both including and neglecting the Reynolds stress. Good agreement is seen for Reynolds stress between computational fluid dynamics, simulated MRI, and MRI experiment. The Reynolds stress can significantly influence the computed pressure loss for simulated (eg, -0.52% vs -15.34% error; P < 0.001) and experimental (eg, 306 ± 11 vs 203 ± 6 Pa; P < 0.001) data. A 54% greater pressure loss is seen at the highest experimental flow rate when accounting for Reynolds stress (P < 0.001). 4D flow MRI with extended motion-encoding enables quantification of both the velocity and the Reynolds stress tensor. The additional information provided by this method improves the assessment of pressure gradients across a stenosis in the presence of turbulence. Unlike conventional methods, which are only valid if the flow is laminar, the proposed method is valid for both laminar and disturbed flow, a common presentation in diseased vessels. Magn Reson Med 79:1962-1971, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
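The abstract does not detail the pressure computation; the sketch below shows a conventional pressure Poisson solve from a steady 2D incompressible velocity field by Jacobi iteration, with the Reynolds-stress source terms of the paper's extended formulation omitted, and with grid spacing, density and boundary treatment as illustrative assumptions.

```python
import numpy as np

def pressure_poisson(u, v, h, rho=1.0, iters=5000):
    """Solve the 2D pressure Poisson equation
    lap(p) = -rho * (u_x**2 + 2*u_y*v_x + v_y**2) by Jacobi iteration.

    u, v: velocity components on a uniform grid of spacing h. Zero
    Dirichlet values are kept on the boundary, so p is a relative map.
    Reynolds-stress contributions are omitted in this sketch.
    """
    ux, uy = np.gradient(u, h, axis=1), np.gradient(u, h, axis=0)
    vx, vy = np.gradient(v, h, axis=1), np.gradient(v, h, axis=0)
    f = -rho * (ux**2 + 2.0 * uy * vx + vy**2)   # source term
    p = np.zeros_like(u)
    for _ in range(iters):
        p[1:-1, 1:-1] = 0.25 * (p[1:-1, :-2] + p[1:-1, 2:]
                                + p[:-2, 1:-1] + p[2:, 1:-1]
                                - h**2 * f[1:-1, 1:-1])
    return p

# Toy solenoidal field (Taylor-Green-like): u = sin(x)cos(y), v = -cos(x)sin(y)
xs = np.linspace(0.0, np.pi, 65)
X, Y = np.meshgrid(xs, xs)
p = pressure_poisson(np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y),
                     h=xs[1] - xs[0])
print(p.shape, float(p.max()))
```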
Dynamics of a prey-predator system under Poisson white noise excitation
NASA Astrophysics Data System (ADS)
Pan, Shan-Shan; Zhu, Wei-Qiu
2014-10-01
The classical Lotka-Volterra (LV) model is a well-known mathematical model for prey-predator ecosystems. In the present paper, a pulse-type version of the stochastic LV model, in which the effect of a random natural environment is modeled as Poisson white noise, is investigated by using the stochastic averaging method. The averaged generalized Itô stochastic differential equation and Fokker-Planck-Kolmogorov (FPK) equation are derived for the prey-predator ecosystem driven by Poisson white noise. An approximate stationary solution of the averaged generalized FPK equation is obtained by using the perturbation method. The effect of the prey self-competition parameter ɛ2s on ecosystem behavior is evaluated. The analytical results are confirmed by corresponding Monte Carlo (MC) simulation.
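A minimal sketch of such a pulse-excited LV simulation: an Euler scheme in which the prey growth rate receives compound-Poisson pulses. All parameter values, the pulse-amplitude distribution and the placement of the noise are illustrative assumptions, not the paper's averaged equations.

```python
import numpy as np

rng = np.random.default_rng(0)

def lv_poisson(a=1.0, b=0.5, c=0.5, d=0.3, lam=5.0, sigma=0.05,
               x0=2.0, y0=1.0, dt=1e-3, T=20.0):
    """Euler simulation of a Lotka-Volterra prey-predator system whose
    prey growth rate is excited by Poisson white noise: a pulse train
    with arrival rate lam and N(0, sigma^2) amplitudes. All parameter
    values are illustrative, not taken from the paper."""
    n = int(T / dt)
    x, y = np.empty(n), np.empty(n)
    x[0], y[0] = x0, y0
    for k in range(n - 1):
        pulses = rng.poisson(lam * dt)             # jumps in this step
        dW = rng.normal(0.0, sigma, pulses).sum()  # summed amplitudes
        x[k + 1] = max(x[k] + x[k] * (a - b * y[k]) * dt + x[k] * dW, 0.0)
        y[k + 1] = max(y[k] + y[k] * (d * x[k] - c) * dt, 0.0)
    return x, y

x, y = lv_poisson()
print(f"mean prey {x.mean():.2f}, mean predator {y.mean():.2f}")
```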
Count distribution for mixture of two exponentials as renewal process duration with applications
NASA Astrophysics Data System (ADS)
Low, Yeh Ching; Ong, Seng Huat
2016-06-01
A count distribution is presented by considering a renewal process in which the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and of the renewal function (expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
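A quick simulation illustrates the overdispersion such a renewal count exhibits relative to a Poisson process; the mixture weight and rates below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def renewal_count(t, p=0.3, mu1=0.5, mu2=3.0):
    """Number of renewals in [0, t] when each duration is drawn from a
    two-component mixture of exponentials (hyperexponential)."""
    n, clock = 0, 0.0
    while True:
        rate = mu1 if rng.random() < p else mu2
        clock += rng.exponential(1.0 / rate)
        if clock > t:
            return n
        n += 1

counts = np.array([renewal_count(10.0) for _ in range(20000)])
# var/mean > 1 marks over-dispersion; a Poisson process gives 1.
print(f"mean {counts.mean():.2f}, var/mean {counts.var() / counts.mean():.2f}")
```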
Stochastic Investigation of Natural Frequency for Functionally Graded Plates
NASA Astrophysics Data System (ADS)
Karsh, P. K.; Mukhopadhyay, T.; Dey, S.
2018-03-01
This paper presents the stochastic natural frequency analysis of functionally graded plates by applying an artificial neural network (ANN) approach. Latin hypercube sampling is utilised to train the ANN model. The proposed algorithm for stochastic natural frequency analysis of FGM plates is validated and verified against the original finite element method and Monte Carlo simulation (MCS). The combined stochastic variation of input parameters such as elastic modulus, shear modulus, Poisson ratio, and mass density is considered. A power law is applied to distribute the material properties across the thickness. The present ANN model reduces the sample size and is found to be computationally efficient compared to conventional Monte Carlo simulation.
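A minimal sketch of the Latin hypercube step used to generate training inputs for such a surrogate, via scipy.stats.qmc; the four input ranges and nominal values are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.stats import qmc

# Four stochastic inputs: E, G, Poisson ratio, density (nominal values
# and +/-10% bounds are illustrative, not taken from the paper).
nominal = np.array([70e9, 26e9, 0.30, 2700.0])
lower, upper = 0.9 * nominal, 1.1 * nominal

sampler = qmc.LatinHypercube(d=4, seed=0)
unit = sampler.random(n=256)              # 256 samples in [0, 1]^4
designs = qmc.scale(unit, lower, upper)   # map to physical ranges
print(designs.shape)  # (256, 4) training inputs for the ANN surrogate
```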
NASA Astrophysics Data System (ADS)
Pandey, Preeti; Srivastava, Rakesh; Bandyopadhyay, Pradipta
2018-03-01
The relative performance of the MM-PBSA and MM-3D-RISM methods in estimating the binding free energy of protein-ligand complexes is investigated by applying them to three proteins (Dihydrofolate Reductase, Catechol-O-methyltransferase, and Stromelysin-1) differing in the number of metal ions they contain. Neither computational method could distinguish all the ligands based on their calculated binding free energies (as compared to experimental values). The difference between the two comes from both the polar and non-polar parts of solvation. For the charged-ligand case, MM-PBSA and MM-3D-RISM give qualitatively different results for the polar part of solvation.
A statistical approach for inferring the 3D structure of the genome.
Varoquaux, Nelle; Ay, Ferhat; Noble, William Stafford; Vert, Jean-Philippe
2014-06-15
Recent technological advances allow the measurement, in a single Hi-C experiment, of the frequencies of physical contacts among pairs of genomic loci at a genome-wide scale. The next challenge is to infer, from the resulting DNA-DNA contact maps, accurate 3D models of how chromosomes fold and fit into the nucleus. Many existing inference methods rely on multidimensional scaling (MDS), in which the pairwise distances of the inferred model are optimized to resemble pairwise distances derived directly from the contact counts. These approaches, however, often optimize a heuristic objective function and require strong assumptions about the biophysics of DNA to transform interaction frequencies to spatial distances, and may thereby lead to incorrect structure reconstruction. We propose a novel approach to infer a consensus 3D structure of a genome from Hi-C data. The method incorporates a statistical model of the contact counts, assuming that the counts between two loci follow a Poisson distribution whose intensity decreases with the physical distance between the loci. The method can automatically adjust the transfer function relating the spatial distance to the Poisson intensity and infer a genome structure that best explains the observed data. We compare two variants of our Poisson method, with or without optimization of the transfer function, to four different MDS-based algorithms (two metric MDS methods using different stress functions, a non-metric version of MDS, and ChromSDE, a recently described advanced MDS method) on a wide range of simulated datasets. We demonstrate that the Poisson models reconstruct better structures than all MDS-based methods, particularly at low coverage and high resolution, and we highlight the importance of optimizing the transfer function. On publicly available Hi-C data from mouse embryonic stem cells, we show that the Poisson methods lead to more reproducible structures than MDS-based methods when we use data generated using different restriction enzymes, and when we reconstruct structures at different resolutions. A Python implementation of the proposed method is available at http://cbio.ensmp.fr/pastis. © The Author 2014. Published by Oxford University Press.
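A minimal sketch of the assumed count model, c_ij ~ Poisson(beta * d_ij**alpha): here only the transfer-function parameters (alpha, beta) are fitted from known distances, whereas the full method also optimizes the 3D coordinates behind those distances. The data and starting values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def poisson_nll(params, counts, dists):
    """Negative log-likelihood of counts ~ Poisson(beta * d**alpha),
    dropping the counts-only constant terms."""
    alpha, log_beta = params
    lam = np.exp(log_beta) * dists**alpha
    return np.sum(lam - counts * np.log(lam))

rng = np.random.default_rng(2)
dists = rng.uniform(0.5, 5.0, 2000)        # toy pairwise distances
counts = rng.poisson(10.0 * dists**-3.0)   # true alpha=-3, beta=10
fit = minimize(poisson_nll, x0=[-1.0, 0.0], args=(counts, dists))
print(fit.x)  # recovered (alpha, log beta), close to (-3, log 10)
```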
Cao, Lili; Caldararu, Octav; Ryde, Ulf
2017-09-07
Nitrogenase is the only enzyme that can break the triple bond in N2 to form two molecules of ammonia. The enzyme has been thoroughly studied with both experimental and computational methods, but there is still no consensus regarding the atomic details of the reaction mechanism. In the most common form, the active site is a MoFe7S9C(homocitrate) cluster. The homocitrate ligand contains one alcohol and three carboxylate groups. In water solution, the triply deprotonated form dominates, but because the alcohol (and one of the carboxylate groups) coordinate to the Mo ion, this may change in the enzyme. We have performed a series of computational calculations with molecular dynamics (MD), quantum mechanical (QM) cluster, combined QM and molecular mechanics (QM/MM), QM/MM with Poisson-Boltzmann and surface area solvation, QM/MM thermodynamic cycle perturbation, and quantum refinement methods to settle the most probable protonation state of the homocitrate ligand in nitrogenase. The results quite conclusively point to a triply deprotonated form (net charge -3), with a proton shared between the alcohol and one of the carboxylate groups, as the most stable at pH 7. Moreover, we have studied eight ionizable protein residues close to the active site with MD simulations and determined their most likely protonation states.
NASA Astrophysics Data System (ADS)
Liu, Shun; Xu, Jinglei; Yu, Kaikai
2017-06-01
This paper proposes an improved approach for the extraction of pressure fields from velocity data, such as those obtained by particle image velocimetry (PIV), especially for steady compressible flows with strong shocks. The principle of this approach is derived from the Navier-Stokes equations, assuming adiabatic conditions and neglecting viscosity at the flow field boundaries measured by PIV. The computing method is based on MacCormack's technique in computational fluid dynamics, and the approach is therefore called the MacCormack method. Moreover, the MacCormack method is compared with several approaches proposed in previous literature, including the isentropic method, spatial integration and the Poisson method. The effects of velocity error level and PIV spatial resolution on these approaches are also quantified by using artificial velocity data containing shock waves. The results demonstrate that the MacCormack method has higher reconstruction accuracy than the other approaches, and its advantages become more remarkable as the shock strengthens. Furthermore, the performance of the MacCormack method is also validated using synthetic PIV images with an oblique shock wave, confirming the feasibility and advantage of this approach in real PIV experiments. This work is highly significant for studies in aerospace engineering, especially the outer flow fields of supersonic aircraft and the internal flow fields of ramjets.
Chen, Duan; Wei, Guo-Wei
2010-01-01
The miniaturization of nano-scale electronic devices, such as metal oxide semiconductor field effect transistors (MOSFETs), has given rise to a pressing demand for new theoretical understanding and practical tactics for dealing with quantum mechanical effects in integrated circuits. Modeling and simulation of this class of problems have emerged as an important topic in applied and computational mathematics. This work presents mathematical models and computational algorithms for the simulation of nano-scale MOSFETs. We introduce a unified two-scale energy functional to describe the electrons and the continuum electrostatic potential of the nano-electronic device. This framework enables us to put microscopic and macroscopic descriptions on an equal footing at the nano scale. By optimization of the energy functional, we derive consistently coupled Poisson-Kohn-Sham equations. Additionally, layered structures are crucial to the electrostatic and transport properties of nano transistors. A material interface model is proposed for more accurate description of the electrostatics governed by the Poisson equation. Finally, a new individual dopant model that utilizes the Dirac delta function is proposed to understand the random doping effect in nano electronic devices. Two mathematical algorithms, the matched interface and boundary (MIB) method and the Dirichlet-to-Neumann mapping (DNM) technique, are introduced to improve the computational efficiency of nano-device simulations. Electronic structures are computed via subband decomposition, and the transport properties, such as the I-V curves and electron density, are evaluated via the non-equilibrium Green's function (NEGF) formalism. Two distinct device configurations, a double-gate MOSFET and a four-gate MOSFET, are considered in our three-dimensional numerical simulations. For these devices, the current fluctuation and voltage threshold lowering effect induced by the discrete dopant model are explored. Numerical convergence and model well-posedness are also investigated in the present work. PMID:20396650
NASA Astrophysics Data System (ADS)
Lundberg, J.; Conrad, J.; Rolke, W.; Lopez, A.
2010-03-01
A C++ class was written for the calculation of frequentist confidence intervals using the profile likelihood method. Seven combinations of Binomial, Gaussian and Poissonian uncertainties are implemented. The package provides routines for the calculation of upper and lower limits, sensitivity and related properties. It also supports hypothesis tests which take uncertainties into account. It can be used in compiled C++ code, in Python or interactively via the ROOT analysis framework.
Program summary
Program title: TRolke version 2.0
Catalogue identifier: AEFT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: MIT license
No. of lines in distributed program, including test data, etc.: 3431
No. of bytes in distributed program, including test data, etc.: 21 789
Distribution format: tar.gz
Programming language: ISO C++
Computer: Unix, GNU/Linux, Mac
Operating system: Linux 2.6 (Scientific Linux 4 and 5, Ubuntu 8.10), Darwin 9.0 (Mac OS X 10.5.8)
RAM: ~20 MB
Classification: 14.13
External routines: ROOT (http://root.cern.ch/drupal/)
Nature of problem: Calculation of a frequentist confidence interval on the parameter of a Poisson process with statistical or systematic uncertainties in signal efficiency or background.
Solution method: Profile likelihood method, analytical
Running time: <10 seconds per extracted limit
Goodness-of-Fit Tests and Nonparametric Adaptive Estimation for Spike Train Analysis
2014-01-01
When dealing with classical spike train analysis, the practitioner often performs goodness-of-fit tests to test whether the observed process is a Poisson process, for instance, or if it obeys another type of probabilistic model (Yana et al. in Biophys. J. 46(3):323–330, 1984; Brown et al. in Neural Comput. 14(2):325–346, 2002; Pouzat and Chaffiol in Technical report, http://arxiv.org/abs/arXiv:0909.2785, 2009). In doing so, there is a fundamental plug-in step, where the parameters of the supposed underlying model are estimated. The aim of this article is to show that plug-in has sometimes very undesirable effects. We propose a new method based on subsampling to deal with those plug-in issues in the case of the Kolmogorov–Smirnov test of uniformity. The method relies on the plug-in of good estimates of the underlying model that have to be consistent with a controlled rate of convergence. Some nonparametric estimates satisfying those constraints in the Poisson or in the Hawkes framework are highlighted. Moreover, they share adaptive properties that are useful from a practical point of view. We show the performance of those methods on simulated data. We also provide a complete analysis with these tools on single unit activity recorded on a monkey during a sensory-motor task. The online version of this article (doi:10.1186/2190-8567-4-3) contains supplementary material. PMID:24742008
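A minimal sketch of the underlying test without the plug-in subtlety: under a homogeneous Poisson model on [0, T], spike times are i.i.d. uniform given their number, so a Kolmogorov-Smirnov test of uniformity applies directly. The rate and duration below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(3)

# Simulated spike train on [0, T] from a homogeneous Poisson process.
T, rate = 100.0, 2.0
spikes = np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))
print(kstest(spikes / T, "uniform"))

# With an unknown rate profile, the rate must first be estimated and
# the spike times rescaled through its integral (time rescaling); the
# paper's subsampling scheme addresses the bias this plug-in induces.
```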
A Poisson process approximation for generalized K-S confidence regions
NASA Technical Reports Server (NTRS)
Arsham, H.; Miller, D. R.
1982-01-01
One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.
Analog Computer Solution of the Electrodiffusion Equation for a Simple Membrane
ERIC Educational Resources Information Center
Onega, Ronald J.
1972-01-01
An analog solution was obtained for the Nernst-Planck and Poisson equations which describe the ion concentration across a simple membrane held at a potential difference. The electric field variation within the membrane was also determined. (Author/TS)
Functionally-fitted energy-preserving integrators for Poisson systems
NASA Astrophysics Data System (ADS)
Wang, Bin; Wu, Xinyuan
2018-07-01
In this paper, a new class of energy-preserving integrators is proposed and analysed for Poisson systems by using functionally-fitted technology. The integrators exactly preserve energy and have arbitrarily high order. It is shown that the proposed approach allows us to obtain the energy-preserving methods derived in [12] by Cohen and Hairer (2011) and in [1] by Brugnano et al. (2012) for Poisson systems. Furthermore, we study the sufficient conditions that ensure the existence of a unique solution and discuss the order of the new energy-preserving integrators.
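In standard notation (not necessarily the paper's), a Poisson system and the energy-preservation requirement read:

```latex
% Poisson system with skew-symmetric, possibly state-dependent B(y):
\begin{align*}
  \dot{y} &= B(y)\,\nabla H(y), \qquad B(y)^{\mathsf{T}} = -B(y),\\
  \frac{\mathrm{d}}{\mathrm{d}t}\,H(y(t))
    &= \nabla H(y)^{\mathsf{T}} B(y)\,\nabla H(y) = 0,
\end{align*}
% so a one-step method y_0 -> y_1 is energy-preserving exactly when
% H(y_1) = H(y_0), which the integrators above achieve for every step.
```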
Hybrid Method for Power Control Simulation of a Single Fluid Plasma Thruster
NASA Astrophysics Data System (ADS)
Jaisankar, S.; Sheshadri, T. S.
2018-05-01
Propulsive plasma flow through a cylindrical-conical diverging thruster is simulated by a power-controlled hybrid method to obtain the basic flow, thermodynamic and electromagnetic variables. The simulation is based on a single-fluid model, with the electromagnetics described by the potential (Poisson) equation, Maxwell's equations and Ohm's law, and the compressible fluid dynamics by the Navier-Stokes equations in cylindrical form. The proposed method solves the electromagnetics and fluid dynamics separately, both to segregate the two prominent scales for efficient computation and to deliver voltage-controlled rated power. The magnetic transport is solved for steady state while the fluid dynamics is allowed to evolve in time along with an electromagnetic source, using schemes based on generalized finite-difference discretization. The multistep methodology with power control is employed for simulating fully ionized propulsive flow of argon plasma through the thruster. The numerical solution shows convergence of every part of the solver, including grid stability, causing the multistep hybrid method to converge for a rated power delivery. Simulation results are reasonably in agreement with the reported physics of plasma flow in the thruster, indicating the potential utility of this hybrid computational framework, especially when a single-fluid approximation of the plasma is relevant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devine, K.D.; Hennigan, G.L.; Hutchinson, S.A.
1999-01-01
The theoretical background for the finite element computer program, MPSalsa Version 1.5, is presented in detail. MPSalsa is designed to solve laminar or turbulent low Mach number, two- or three-dimensional incompressible and variable density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow (with auxiliary turbulence equations), heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve coupled multiple Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element database suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso
2016-01-01
Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process. IMPORTANCE We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures because the presence of even one bacterium can cause foodborne disease. The results demonstrate that the variability in the numbers of surviving bacteria was described as a Poisson distribution by use of the model developed by use of the Poisson process. Description of the number of surviving bacteria as a probability distribution rather than as the point estimates used in a deterministic approach can provide a more realistic estimation of risk. The probability model should be useful for estimating the quantitative risk of bacterial survival during inactivation. PMID:27940547
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu
2017-02-15
Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process. We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures because the presence of even one bacterium can cause foodborne disease. The results demonstrate that the variability in the numbers of surviving bacteria was described as a Poisson distribution by use of the model developed by use of the Poisson process. Description of the number of surviving bacteria as a probability distribution rather than as the point estimates used in a deterministic approach can provide a more realistic estimation of risk. The probability model should be useful for estimating the quantitative risk of bacterial survival during inactivation. Copyright © 2017 Koyama et al.
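The Poisson claim follows from the thinning property of the Poisson distribution, which is easy to check by simulation; the survival probability and well count below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def surviving_counts(lam=2.0, p_survive=0.3, wells=96 * 1000):
    """Initial cell numbers ~ Poisson(lam); each cell independently
    survives inactivation with probability p_survive. By the thinning
    property, the survivor counts are Poisson(lam * p_survive)."""
    initial = rng.poisson(lam, wells)          # cells per simulated well
    return rng.binomial(initial, p_survive)    # independent survival

s = surviving_counts()
print(f"mean {s.mean():.3f} (theory {2.0 * 0.3:.3f}), "
      f"var/mean {s.var() / s.mean():.3f} (Poisson: 1)")
```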
Evaluation of Shiryaev-Roberts procedure for on-line environmental radiation monitoring.
Watson, Mara M; Seliman, Ayman F; Bliznyuk, Valery N; DeVol, Timothy A
2018-04-30
Water can become contaminated as a result of a leak from a nuclear facility, such as a waste facility, or from clandestine nuclear activity. Low-level on-line radiation monitoring is needed to detect these events in real time. A Bayesian control chart method, the Shiryaev-Roberts (SR) procedure, was compared with classical methods, 3-σ and cumulative sum (CUSUM), for quantifying an accumulating signal from an extractive scintillating resin flow-cell detection system. Solutions containing 0.10-5.0 Bq/L of 99Tc, as 99TcO4-, were pumped through a flow cell packed with extractive scintillating resin used in conjunction with a Beta-RAM Model 5 HPLC detector. While 99TcO4- accumulated on the resin, time series data were collected. Control chart methods were applied to the data using statistical algorithms developed in MATLAB. SR charts were constructed using Poisson (Poisson SR) and Gaussian (Gaussian SR) probability distributions of count data to estimate the likelihood ratio. In most cases, the Poisson and Gaussian SR charts required less volume of radioactive solution at a fixed concentration to exceed the control limit than the 3-σ and CUSUM control charts, particularly for solutions with lower activity. SR is thus the ideal control chart for low-level on-line radiation monitoring. Once the control limit was exceeded, activity concentrations were estimated from the SR control chart using the control chart slope on a semi-logarithmic plot. A linear regression fit was applied to averaged slope data for five activity concentration groupings for the Poisson and Gaussian SR control charts. Correlation coefficients (R2) of 0.77 for Poisson SR and 0.90 for Gaussian SR suggest this method will adequately estimate the activity concentration of an unknown solution. Copyright © 2018 Elsevier Ltd. All rights reserved.
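A minimal sketch of the Poisson SR recursion, R_n = (1 + R_{n-1}) * L_n with L_n the Poisson likelihood ratio of post-change to in-control rates; the rates, control limit and data below are illustrative assumptions, not values from the study.

```python
import numpy as np

def sr_statistic(counts, lam0, lam1):
    """Shiryaev-Roberts statistic for Poisson count data.

    lam0: in-control mean count per interval; lam1: post-change mean.
    An alarm is raised when R_n first exceeds a control limit A chosen
    for the desired false-alarm rate.
    """
    r, history = 0.0, []
    const = np.exp(lam0 - lam1)             # e^{-(lam1 - lam0)}
    for x in counts:
        lr = const * (lam1 / lam0) ** x     # Poisson likelihood ratio
        r = (1.0 + r) * lr
        history.append(r)
    return np.array(history)

rng = np.random.default_rng(5)
counts = np.concatenate([rng.poisson(5, 200), rng.poisson(8, 50)])
R = sr_statistic(counts, lam0=5.0, lam1=8.0)
print(int(np.argmax(R > 1e4)))  # first index past an example limit (~200)
```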
An inelastic analysis of a welded aluminum joint
NASA Astrophysics Data System (ADS)
Vaughan, R. E.
1994-09-01
Butt-weld joints are most commonly designed into pressure vessels, which then become as reliable as the weakest increment in the weld chain. In practice, weld material properties are determined from tensile test specimens and provided to the stress analyst in the form of a stress versus strain diagram. Variations in properties through the thickness of the weld and along the width of the weld have been suspected but not explored because of inaccessibility and cost. The purpose of this study is to investigate analytical and computational methods used for the analysis of welds. The weld specimens are analyzed using classical elastic and plastic theory to provide a basis for modeling the inelastic properties in a finite-element solution. The results of the analysis are compared to experimental data to determine the weld behavior and the accuracy of prediction methods. The weld considered in this study is a multiple-pass aluminum 2219-T87 butt weld with a thickness of 1.40 in. The weld specimen is modeled using the finite-element code ABAQUS. The finite-element model is used to produce the stress-strain behavior in the elastic and plastic regimes and to determine Poisson's ratio in the plastic region. The value of Poisson's ratio in the plastic regime is then compared to experimental data. The results of the comparisons are used to explain multipass weld behavior and to make recommendations concerning the analysis and testing of welds.
An inelastic analysis of a welded aluminum joint
NASA Technical Reports Server (NTRS)
Vaughan, R. E.
1994-01-01
Butt-weld joints are most commonly designed into pressure vessels, which then become as reliable as the weakest increment in the weld chain. In practice, weld material properties are determined from tensile test specimens and provided to the stress analyst in the form of a stress versus strain diagram. Variations in properties through the thickness of the weld and along the width of the weld have been suspected but not explored because of inaccessibility and cost. The purpose of this study is to investigate analytical and computational methods used for the analysis of welds. The weld specimens are analyzed using classical elastic and plastic theory to provide a basis for modeling the inelastic properties in a finite-element solution. The results of the analysis are compared to experimental data to determine the weld behavior and the accuracy of prediction methods. The weld considered in this study is a multiple-pass aluminum 2219-T87 butt weld with a thickness of 1.40 in. The weld specimen is modeled using the finite-element code ABAQUS. The finite-element model is used to produce the stress-strain behavior in the elastic and plastic regimes and to determine Poisson's ratio in the plastic region. The value of Poisson's ratio in the plastic regime is then compared to experimental data. The results of the comparisons are used to explain multipass weld behavior and to make recommendations concerning the analysis and testing of welds.
Piecewise exponential survival times and analysis of case-cohort data.
Li, Yan; Gail, Mitchell H; Preston, Dale L; Graubard, Barry I; Lubin, Jay H
2012-06-15
Case-cohort designs select a random sample of a cohort to be used as control with cases arising from the follow-up of the cohort. Analyses of case-cohort studies with time-varying exposures that use Cox partial likelihood methods can be computer intensive. We propose a piecewise-exponential approach where Poisson regression model parameters are estimated from a pseudolikelihood and the corresponding variances are derived by applying Taylor linearization methods that are used in survey research. The proposed approach is evaluated using Monte Carlo simulations. An illustration is provided using data from the Alpha-Tocopherol, Beta-Carotene Cancer Prevention Study of male smokers in Finland, where a case-cohort study of serum glucose level and pancreatic cancer was analyzed. Copyright © 2012 John Wiley & Sons, Ltd.
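The core piecewise-exponential trick is a Poisson GLM with log person-time as an offset, sketched below with statsmodels on a toy person-time table; the paper's pseudolikelihood weighting and Taylor-linearized variances for the case-cohort design are not shown, and all numbers are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy person-time table: events, person-years and a binary exposure
# within age bands (illustrative data, not from the ATBC study).
df = pd.DataFrame({
    "events":  [3, 7, 12, 5, 11, 19],
    "pyears":  [800, 950, 700, 780, 900, 650],
    "exposed": [0, 0, 0, 1, 1, 1],
    "ageband": [0, 1, 2, 0, 1, 2],
})

# Piecewise-exponential model: Poisson regression with log person-time
# as offset; the exposure coefficient estimates the log rate ratio.
X = pd.get_dummies(df["ageband"], prefix="age", drop_first=True)
X["exposed"] = df["exposed"]
X = sm.add_constant(X.astype(float))
fit = sm.GLM(df["events"], X, family=sm.families.Poisson(),
             offset=np.log(df["pyears"])).fit()
print(np.exp(fit.params["exposed"]))  # estimated rate ratio
```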
Newton-based optimization for Kullback-Leibler nonnegative tensor factorizations
Plantenga, Todd; Kolda, Tamara G.; Hansen, Samantha
2015-04-30
Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. Finally, we compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
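For orientation, here is the classical multiplicative-update baseline for the same Kullback-Leibler objective, shown in the simpler matrix (rather than tensor) case; the paper's contribution is to replace such updates with bound-constrained Newton-type subproblem solvers. Rank, iteration count and data are illustrative assumptions.

```python
import numpy as np

def kl_nmf(X, r, iters=500, eps=1e-10):
    """Lee-Seung multiplicative updates minimizing the KL divergence
    D(X || WH) for nonnegative X; a classical baseline for the same
    objective the paper's Newton/quasi-Newton solvers target."""
    rng = np.random.default_rng(6)
    m, n = X.shape
    W, H = rng.random((m, r)) + eps, rng.random((r, n)) + eps
    for _ in range(iters):
        WH = W @ H + eps
        W *= (X / WH) @ H.T / (H.sum(axis=1) + eps)
        WH = W @ H + eps
        H *= W.T @ (X / WH) / (W.sum(axis=0)[:, None] + eps)
    return W, H

X = np.random.default_rng(7).poisson(3.0, (40, 30)).astype(float)
W, H = kl_nmf(X, r=5)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))  # relative residual
```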
Bayesian Inference and Online Learning in Poisson Neuronal Networks.
Huang, Yanping; Rao, Rajesh P N
2016-08-01
Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
A New Model that Generates Lotka's Law.
ERIC Educational Resources Information Center
Huber, John C.
2002-01-01
Develops a new model for a process that generates Lotka's Law. Topics include measuring scientific productivity through the number of publications; rate of production; career duration; randomness; Poisson distribution; computer simulations; goodness-of-fit; theoretical support for the model; and future research. (Author/LRW)
Stochastic modeling for neural spiking events based on fractional superstatistical Poisson process
NASA Astrophysics Data System (ADS)
Konno, Hidetoshi; Tamura, Yoshiyasu
2018-01-01
In neural spike counting experiments, it is known that there are two main features: (i) the counting number has a fractional power-law growth with time and (ii) the waiting time (i.e., the inter-spike interval) distribution has a heavy tail. The method of superstatistical Poisson processes (SSPPs) is examined to determine whether these main features are properly modeled. Although various mixed/compound Poisson processes can be generated by selecting a suitable distribution of the birth rate of spiking neurons, only the second feature (ii) can be modeled by the method of SSPPs. The first one (i), associated with the effect of long memory, cannot be modeled properly. It is then shown that the two main features can be modeled successfully by a class of fractional SSPPs (FSSPP).
Fractional Relativistic Yamaleev Oscillator Model and Its Dynamical Behaviors
NASA Astrophysics Data System (ADS)
Luo, Shao-Kai; He, Jin-Man; Xu, Yan-Li; Zhang, Xiao-Tian
2016-07-01
In this paper we construct a new kind of fractional dynamical model, the fractional relativistic Yamaleev oscillator model, and explore its dynamical behaviors. We find that the fractional relativistic Yamaleev oscillator model possesses a Lie algebraic structure and satisfies a generalized Poisson conservation law, and we give the Poisson conserved quantities of the model. Further, the relation between conserved quantities and integral invariants of the model is studied, and it is proved that, by using the Poisson conserved quantities, we can construct integral invariants of the model. Finally, the stability of the manifold of equilibrium states of the fractional relativistic Yamaleev oscillator model is studied. The paper provides a general method, the fractional generalized Hamiltonian method, for constructing a family of fractional dynamical models of an actual dynamical system.
Filtrations on Springer fiber cohomology and Kostka polynomials
NASA Astrophysics Data System (ADS)
Bellamy, Gwyn; Schedler, Travis
2018-03-01
We prove a conjecture which expresses the bigraded Poisson-de Rham homology of the nilpotent cone of a semisimple Lie algebra in terms of the generalized (one-variable) Kostka polynomials, via a formula suggested by Lusztig. This allows us to construct a canonical family of filtrations on the flag variety cohomology, and hence on irreducible representations of the Weyl group, whose Hilbert series are given by the generalized Kostka polynomials. We deduce consequences for the cohomology of all Springer fibers. In particular, this computes the grading on the zeroth Poisson homology of all classical finite W-algebras, as well as the filtration on the zeroth Hochschild homology of all quantum finite W-algebras, and we generalize to all homology degrees. As a consequence, we deduce a conjecture of Proudfoot on symplectic duality, relating in type A the Poisson homology of Slodowy slices to the intersection cohomology of nilpotent orbit closures. In the last section, we give an analogue of our main theorem in the setting of mirabolic D-modules.
This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms--theory and practice.
Harmany, Zachary T; Marcia, Roummel F; Willett, Rebecca M
2012-03-01
Observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where the number of unknowns may potentially be larger than the number of observations and f* admits sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
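A minimal sketch of a SPIRAL-type iteration for the l1-penalized case: a gradient step on the negative Poisson log-likelihood followed by a nonnegative soft-threshold. The fixed step size is an illustrative assumption; the paper uses adaptive (Barzilai-Borwein-type) step choices, and the data below are synthetic.

```python
import numpy as np

def spiral_l1(y, A, tau=1.0, step=5e-4, iters=3000):
    """Proximal-gradient sketch for y ~ Poisson(A f): gradient step on
    sum(Af - y*log(Af)), then the nonnegative soft-threshold solving
    the l1-penalized separable quadratic subproblem."""
    f = np.full(A.shape[1], y.mean() / A.shape[1])
    for _ in range(iters):
        Af = A @ f + 1e-12
        grad = A.T @ (1.0 - y / Af)    # gradient of the Poisson NLL
        f = np.maximum(f - step * grad - step * tau, 0.0)
    return f

rng = np.random.default_rng(8)
A = rng.uniform(0.0, 1.0, (200, 400))
f_true = np.zeros(400)
f_true[rng.choice(400, 10, replace=False)] = 5.0
y = rng.poisson(A @ f_true).astype(float)
f_hat = spiral_l1(y, A, tau=2.0)
print(int((f_hat > 1.0).sum()), "coefficients above 1.0 (true support: 10)")
```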
Vlasov-Maxwell and Vlasov-Poisson equations as models of a one-dimensional electron plasma
NASA Technical Reports Server (NTRS)
Klimas, A. J.; Cooper, J.
1983-01-01
The Vlasov-Maxwell and Vlasov-Poisson systems of equations for a one-dimensional electron plasma are defined and discussed. A method for transforming a solution of one system which is periodic over a bounded or unbounded spatial interval to a similar solution of the other is constructed.
Dosimetric changes with computed tomography automatic tube-current modulation techniques.
Spampinato, Sofia; Gueli, Anna Maria; Milone, Pietro; Raffaele, Luigi Angelo
2018-04-06
This study is aimed at verifying dose changes for a computed tomography automatic tube-current modulation (ATCM) technique. For this purpose, an anthropomorphic phantom and Gafchromic® XR-QA2 films were used. Radiochromic films were cut according to the shape of two thorax regions. The ATCM algorithm is based on a noise index (NI), and three exam protocols with different NI were chosen, of which one served as a reference. Results were compared with dose values displayed by the console and with Poisson statistics. The information obtained with the radiochromic films was normalized with respect to the NI reference value to compare dose percentage variations. Results showed that, on average, the information reported by the CT console and the calculated values coincide with the measurements. The study allowed verification of the dose information reported by the CT console for an ATCM technique. Although this evaluation represents an estimate, the method can be a starting point for further studies.
Jiang, Honghua; Ni, Xiao; Huster, William; Heilmann, Cory
2015-01-01
Hypoglycemia has long been recognized as a major barrier to achieving normoglycemia with intensive diabetic therapies. It is a common safety concern for diabetes patients. Therefore, it is important to apply appropriate statistical methods when analyzing hypoglycemia data. Here, we carried out bootstrap simulations to investigate the performance of four commonly used statistical models (Poisson, negative binomial, analysis of covariance [ANCOVA], and rank ANCOVA) based on data from a diabetes clinical trial. The zero-inflated Poisson (ZIP) model and the zero-inflated negative binomial (ZINB) model were also evaluated. Simulation results showed that the Poisson model inflated the type I error, while the negative binomial model was overly conservative. However, after adjusting for dispersion, both the Poisson and negative binomial models yielded slightly inflated type I errors, close to the nominal level, and reasonable power. Reasonable control of the type I error was associated with the ANCOVA model. The rank ANCOVA model was associated with the greatest power and with reasonable control of the type I error. Inflated type I errors were observed with the ZIP and ZINB models.
NASA Astrophysics Data System (ADS)
Costiner, Sorin; Ta'asan, Shlomo
1995-07-01
Algorithms for nonlinear eigenvalue problems (EPs) often require solving self-consistently a large number of EPs. Convergence difficulties may occur if the solution is not sought in an appropriate region, if global constraints have to be satisfied, or if close or equal eigenvalues are present. Multigrid (MG) algorithms for nonlinear problems and for EPs obtained from discretizations of partial differential EPs have often been shown to be more efficient than single-level algorithms. This paper presents MG techniques and an MG algorithm for nonlinear Schrödinger-Poisson EPs. The algorithm overcomes the above-mentioned difficulties by combining the following techniques: an MG simultaneous treatment of the eigenvectors, the nonlinearity and the global constraints; MG stable subspace continuation techniques for the treatment of the nonlinearity; and an MG projection coupled with backrotations for separation of solutions. These techniques keep the solutions in an appropriate region, where the algorithm converges fast, and reduce the large number of self-consistent iterations to only a few or one MG simultaneous iteration. The MG projection makes it possible to efficiently overcome difficulties related to clusters of close and equal eigenvalues. Computational examples for the nonlinear Schrödinger-Poisson EP in two and three dimensions, presenting special computational difficulties due to the nonlinearity and to the equal and closely clustered eigenvalues, are demonstrated. For these cases, the algorithm requires O(qN) operations for the calculation of q eigenvectors of size N and for the corresponding eigenvalues. One MG simultaneous cycle per fine level was performed. The total computational cost is equivalent to only a few Gauss-Seidel relaxations per eigenvector. An asymptotic convergence rate of 0.15 per MG cycle is attained.
Modeling animal-vehicle collisions using diagonal inflated bivariate Poisson regression.
Lao, Yunteng; Wu, Yao-Jan; Corey, Jonathan; Wang, Yinhai
2011-01-01
Two types of animal-vehicle collision (AVC) data are commonly adopted for AVC-related risk analysis research: reported AVC data and carcass removal data. One issue with these two data sets is that they were found to have significant discrepancies by previous studies. In order to model these two types of data together and provide a better understanding of highway AVCs, this study adopts a diagonal inflated bivariate Poisson regression method, an inflated version of bivariate Poisson regression model, to fit the reported AVC and carcass removal data sets collected in Washington State during 2002-2006. The diagonal inflated bivariate Poisson model not only can model paired data with correlation, but also handle under- or over-dispersed data sets as well. Compared with three other types of models, double Poisson, bivariate Poisson, and zero-inflated double Poisson, the diagonal inflated bivariate Poisson model demonstrates its capability of fitting two data sets with remarkable overlapping portions resulting from the same stochastic process. Therefore, the diagonal inflated bivariate Poisson model provides researchers a new approach to investigating AVCs from a different perspective involving the three distribution parameters (λ(1), λ(2) and λ(3)). The modeling results show the impacts of traffic elements, geometric design and geographic characteristics on the occurrences of both reported AVC and carcass removal data. It is found that the increase of some associated factors, such as speed limit, annual average daily traffic, and shoulder width, will increase the numbers of reported AVCs and carcass removals. Conversely, the presence of some geometric factors, such as rolling and mountainous terrain, will decrease the number of reported AVCs. Published by Elsevier Ltd.
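The bivariate Poisson at the core of this model is usually built by trivariate reduction, which makes the covariance parameter λ3 explicit; the parameter values below are illustrative, not estimates from the Washington State data.

```python
import numpy as np

rng = np.random.default_rng(9)

# Trivariate reduction: X1 = Y1 + Y3, X2 = Y2 + Y3 with independent
# Yk ~ Poisson(lam_k), so Cov(X1, X2) = lam3. This is the count model
# underlying (diagonal inflated) bivariate Poisson regression of
# reported AVCs versus carcass removals.
lam1, lam2, lam3 = 4.0, 2.0, 1.5
y1, y2, y3 = (rng.poisson(l, 100000) for l in (lam1, lam2, lam3))
x1, x2 = y1 + y3, y2 + y3
print(np.cov(x1, x2)[0, 1])   # close to lam3 = 1.5
```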
Poisson image reconstruction with Hessian Schatten-norm regularization.
Lefkimmiatis, Stamatios; Unser, Michael
2013-11-01
Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an lp norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
Poisson Approximation-Based Score Test for Detecting Association of Rare Variants.
Fang, Hongyan; Zhang, Hong; Yang, Yaning
2016-07-01
Genome-wide association study (GWAS) has achieved great success in identifying genetic variants, but the nature of GWAS has determined its inherent limitations. Under the common disease rare variants (CDRV) hypothesis, the traditional association analysis methods commonly used in GWAS for common variants do not have enough power for detecting rare variants with a limited sample size. As a solution to this problem, pooling rare variants by their functions provides an efficient way of identifying susceptible genes. Rare variants typically have low minor allele frequencies, and the distribution of the total number of minor alleles of the rare variants can be approximated by a Poisson distribution. Based on this fact, we propose a new test method, the Poisson Approximation-based Score Test (PAST), for association analysis of rare variants. Two testing methods, namely ePAST and mPAST, are proposed based on different strategies of pooling rare variants. Simulation results and application to the CRESCENDO cohort data show that our methods are more powerful than the existing methods. © 2016 John Wiley & Sons Ltd/University College London.
A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data
Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence
2013-01-01
Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the nonlinear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, since it requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not become generally used. This is primarily because, in contrast to nonlinear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event counting histograms may also be explained by the use of the ubiquitous, fast Levenberg-Marquardt (L-M) fitting procedure for fitting nonlinear models using least squares fitting (simple searches obtain ~10000 references; this does not include those who use it but don't know they are using it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter.
This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence; on the other hand, the downward gradient methods have a much wider domain of convergence but converge extremely slowly nearer the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on. Only those that are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple, analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least squares estimators are used almost exclusively when fitting event counting histograms. Ways have been found to use successive nonlinear least squares fits to obtain similarly unbiased results, but this procedure is justified by simulation, must be re-tested when conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE for Poisson deviates that has convergence domains and rates comparable to nonlinear least squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure not to minimize the least squares measure, but the MLE for Poisson deviates.
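The MLE measure in question is simple to state: for model counts m_i(θ) and observed counts y_i, minimize 2 Σ_i [m_i − y_i + y_i ln(y_i/m_i)], where the y_i-only terms do not affect the minimizer. Below is a minimal sketch comparing it with least squares on a toy lifetime histogram; Nelder-Mead is used for brevity rather than the paper's L-M extension, and all data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)

# Histogram of photon counts from a single-exponential decay.
t = np.arange(0.0, 10.0, 0.25)                 # bin positions
model = lambda p, t: p[0] * np.exp(-t / p[1])  # amplitude, lifetime
counts = rng.poisson(model([50.0, 2.0], t))    # true params (50, 2)

def poisson_mle(p):
    # Equivalent (up to a constant in p) to the Poisson deviance.
    m = model(p, t)
    if np.any(m <= 0):
        return np.inf                          # guard invalid parameters
    return 2.0 * np.sum(m - counts * np.log(m))

def least_squares(p):
    return np.sum((counts - model(p, t)) ** 2)

for name, obj in [("MLE", poisson_mle), ("LSQ", least_squares)]:
    fit = minimize(obj, x0=[30.0, 1.0], method="Nelder-Mead")
    print(name, np.round(fit.x, 3))  # LSQ tends to bias at low counts
```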
Negative Binomial Process Count and Mixture Modeling.
Zhou, Mingyuan; Carin, Lawrence
2015-02-01
The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
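The gamma-Poisson mixture construction at the heart of the NB process is easy to verify numerically; the parameter values below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Marginalizing a gamma-distributed Poisson rate yields the negative
# binomial: lam ~ Gamma(shape=r, scale=p/(1-p)), x ~ Poisson(lam)
# implies x ~ NB(r, 1-p) in scipy's (n, p_success) convention.
r, p = 3.0, 0.4
lam = rng.gamma(shape=r, scale=p / (1 - p), size=200000)
x = rng.poisson(lam)
xs = np.arange(10)
empirical = np.array([(x == k).mean() for k in xs])
print(np.max(np.abs(empirical - stats.nbinom.pmf(xs, r, 1 - p))))  # ~0
```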
Poisson's ratio over two centuries: challenging hypotheses
Greaves, G. Neville
2013-01-01
This article explores Poisson's ratio, starting with the controversy concerning its magnitude and uniqueness in the context of the molecular and continuum hypotheses competing in the development of elasticity theory in the nineteenth century, moving on to its place in the development of materials science and engineering in the twentieth century, and concluding with its recent re-emergence as a universal metric for the mechanical performance of materials on any length scale. During these episodes France lost its scientific pre-eminence as paradigms switched from mathematical to observational, and accurate experiments became the prerequisite for scientific advance. The emergence of the engineering of metals followed, and subsequently the invention of composites—both somewhat separated from the discovery of quantum mechanics and crystallography, and illustrating the bifurcation of technology and science. Nowadays disciplines are reconnecting in the face of new scientific demands. During the past two centuries, though, the shape versus volume concept embedded in Poisson's ratio has remained invariant, but its application has exploded from its origins in describing the elastic response of solids and liquids, into areas such as materials with negative Poisson's ratio, brittleness, glass formation, and a re-evaluation of traditional materials. Moreover, the two contentious hypotheses have been reconciled in their complementarity within the hierarchical structure of materials and through computational modelling. PMID:24687094
NASA Astrophysics Data System (ADS)
Che Awang, Aznida; Azah Samat, Nor
2017-09-01
Leptospirosis is a disease caused by infection with pathogenic species of the genus Leptospira. Humans can be infected through direct or indirect exposure to the urine of infected animals. Urine excreted by animal hosts carrying pathogenic Leptospira contaminates soil and water, so people can become infected when they are exposed to contaminated soil or water through cuts or open wounds on the skin. The bacteria can also enter the human body through mucous membranes such as the nose, eyes and mouth, for example when contaminated water or urine splashes into the eyes or contaminated water or food is swallowed. Currently, there is no vaccine available for the prevention or treatment of leptospirosis, but the disease can be treated if it is diagnosed early enough to avoid complications. Disease risk mapping is important for the control and prevention of disease, and a good choice of statistical model produces a good disease risk map. The aim of this study is therefore to estimate the relative risk of leptospirosis, based initially on the most common statistic used in disease mapping, the Standardized Morbidity Ratio (SMR), and on the Poisson-gamma model. This paper begins with a review of the SMR method and the Poisson-gamma model, which we then apply to leptospirosis data from Kelantan, Malaysia. Both sets of results are displayed and compared using graphs, tables and maps. The results show that the Poisson-gamma model produces better relative risk estimates than the SMR method, because it overcomes the drawback of the SMR, whose relative risk estimate becomes zero in regions with no observed leptospirosis cases. However, the Poisson-gamma model has its own limitations: covariate adjustment is difficult, and it does not allow for spatial correlation between risks in neighbouring areas. These problems have motivated many researchers to introduce alternative methods for estimating the risk.
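A sketch of the two estimators compared above, under the standard conjugate Poisson-gamma formulation; the counts and the crude method-of-moments prior fit are hypothetical stand-ins, not the Kelantan data or the paper's fitting procedure.

    import numpy as np

    # Hypothetical observed (O) and expected (E) case counts per district
    O = np.array([0.0, 3.0, 7.0, 1.0, 12.0])
    E = np.array([2.1, 4.0, 5.5, 0.8, 9.3])

    smr = O / E                          # drops to zero wherever no case was observed

    # Poisson-gamma: O_i ~ Poisson(theta_i * E_i), theta_i ~ Gamma(alpha, beta);
    # crude method-of-moments fit of the prior from the raw SMRs
    m, v = smr.mean(), smr.var(ddof=1)
    alpha, beta = m ** 2 / v, m / v

    rr = (alpha + O) / (beta + E)        # posterior mean risk: shrunken, never zero
    print(np.round(smr, 2), np.round(rr, 2))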
Cao, Qingqing; Wu, Zhenqiang; Sun, Ying; Wang, Tiezhu; Han, Tengwei; Gu, Chaomei; Sun, Yehuan
2011-11-01
To explore the application of negative binomial regression and modified Poisson regression analysis in analyzing the influential factors for injury frequency and the risk factors leading to increased injury frequency, 2917 primary and secondary school students were selected from Hefei by a cluster random sampling method and surveyed by questionnaire. The count data on injury events were used to fit modified Poisson regression and negative binomial regression models. The risk factors associated with increased unintentional injury frequency among the students were explored, so as to probe the efficiency of these two models in studying the influential factors for injury frequency. The Poisson model exhibited over-dispersion (P < 0.0001) according to the Lagrange multiplier test; the over-dispersed data were therefore better fitted by the modified Poisson regression and negative binomial regression models. Both showed that male gender, younger age, a father working outside the hometown, a guardian educated above junior high school level, and smoking might be associated with higher injury frequencies. For clustered injury frequency data of this kind, both modified Poisson regression analysis and negative binomial regression analysis can be used. However, for our data the modified Poisson regression fitted better, and this model could give a more accurate interpretation of the relevant factors affecting the frequency of injury.
Poisson denoising on the sphere: application to the Fermi gamma ray space telescope
NASA Astrophysics Data System (ADS)
Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.
2010-07-01
The Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, detects high energy gamma rays with energies from 20 MeV to more than 300 GeV. The two main scientific objectives, the study of the Milky Way diffuse background and the detection of point sources, are complicated by the lack of photons. That is why we need a powerful Poisson noise removal method on the sphere which is efficient on low count Poisson data. This paper presents a new multiscale decomposition on the sphere for data with Poisson noise, called the multi-scale variance stabilizing transform on the sphere (MS-VSTS). This method is based on a variance stabilizing transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has a quasi-constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary like wavelets or curvelets, and then applying a VST on the coefficients in order to obtain almost Gaussian stabilized coefficients. In this work, we use the isotropic undecimated wavelet transform (IUWT) and the curvelet transform as spherical multi-scale transforms. Then, binary hypothesis testing is carried out to detect significant coefficients, and the denoised image is reconstructed with an iterative algorithm based on hybrid steepest descent (HSD). To detect point sources, we have to extract the Galactic diffuse background: an extension of the method to background separation is therefore proposed. Conversely, to study the Milky Way diffuse background, we remove point sources with a binary mask; the gaps then have to be interpolated, and an extension to inpainting is proposed. The method, applied to simulated Fermi LAT data, proves to be adaptive, fast and easy to implement.
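The variance stabilization step can be illustrated with the classical Anscombe transform for a single Poisson sample (MS-VSTS itself applies a VST tailored to the multiscale coefficients, which this sketch does not reproduce).

    import numpy as np

    rng = np.random.default_rng(2)
    for lam in (2.0, 5.0, 20.0):
        k = rng.poisson(lam, size=200_000)
        z = 2.0 * np.sqrt(k + 3.0 / 8.0)     # Anscombe variance-stabilizing transform
        print(lam, round(z.var(), 3))        # variance approaches 1 as lam grows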
The Euler-Poisson-Darboux equation for relativists
NASA Astrophysics Data System (ADS)
Stewart, John M.
2009-09-01
The Euler-Poisson-Darboux (EPD) equation is the simplest linear hyperbolic equation in two independent variables whose coefficients exhibit singularities, and as such must be of interest as a paradigm to relativists. Sadly it receives scant treatment in the textbooks. The first half of this review is didactic in nature. It discusses in the simplest terms possible the nature of solutions of the EPD equation for the timelike and spacelike singularity cases. Also covered is the Riemann representation of solutions of the characteristic initial value problem, which is hard to find in the literature. The second half examines a few of the possible applications, ranging from explicit computation of the leading terms in the far-field backscatter from predominantly outgoing radiation in a Schwarzschild space-time, to computing explicitly the leading terms in the matter-induced singularities in plane symmetric space-times. There are of course many other applications and the aim of this article is to encourage relativists to investigate this underrated paradigm.
NASA Technical Reports Server (NTRS)
Sohn, J. L.; Heinrich, J. C.
1990-01-01
The calculation of pressures when the penalty-function approximation is used in finite-element solutions of laminar incompressible flows is addressed. A Poisson equation for the pressure is formulated that involves third derivatives of the velocity field. The second derivatives appearing in the weak formulation of the Poisson equation are calculated from the C0 velocity approximation using a least-squares method. The present scheme is shown to be efficient, free of spurious oscillations, and accurate. Examples of applications are given and compared with results obtained using mixed formulations.
Lord, Dominique
2006-07-01
There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or Negative Binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attribute of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method. To complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter is unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations for minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
Further Improvement in 3DGRAPE
NASA Technical Reports Server (NTRS)
Alter, Stephen
2004-01-01
3DGRAPE/AL:V2 denotes version 2 of the Three-Dimensional Grids About Anything by Poisson's Equation with Upgrades from Ames and Langley computer program. The preceding version, 3DGRAPE/AL, was described in Improved 3DGRAPE (ARC-14069), NASA Tech Briefs, Vol. 21, No. 5 (May 1997), page 66. These programs are so named because they generate volume grids by iteratively solving Poisson's Equation in three dimensions. The grids generated by the various versions of 3DGRAPE have been used in computational fluid dynamics (CFD). The main novel feature of 3DGRAPE/AL:V2 is the incorporation of an optional scheme in which anisotropic Lagrange-based transfinite interpolation (ALBTFI) is coupled with exponential decay functions to compute and blend interior source terms. In the input to 3DGRAPE/AL:V2, the user can specify whether or not to invoke ALBTFI in combination with exponential-decay controls, angles, and cell size for controlling the character of grid lines. Of the known programs that solve elliptic partial differential equations for generating grids, 3DGRAPE/AL:V2 is the only code that offers a combination of speed and versatility with the most options for controlling the densities and other characteristics of grids for CFD.
Przybytek, Michal; Helgaker, Trygve
2013-08-07
We analyze the accuracy of the Coulomb energy calculated using the Gaussian-and-finite-element-Coulomb (GFC) method. In this approach, the electrostatic potential associated with the molecular electronic density is obtained by solving the Poisson equation and then used to calculate matrix elements of the Coulomb operator. The molecular electrostatic potential is expanded in a mixed Gaussian-finite-element (GF) basis set consisting of Gaussian functions of s symmetry centered on the nuclei (with exponents obtained from a full optimization of the atomic potentials generated by the atomic densities from symmetry-averaged restricted open-shell Hartree-Fock theory) and shape functions defined on uniform finite elements. The quality of the GF basis is controlled by means of a small set of parameters; for a given width of the finite elements d, the highest accuracy is achieved at the smallest computational cost when tricubic (n = 3) elements are used in combination with two (γ_H = 2) and eight (γ_1st = 8) Gaussians on hydrogen and first-row atoms, respectively, with exponents greater than a given threshold (α_min^G = 0.5). The error in the calculated Coulomb energy divided by the number of atoms in the system depends on the system type but is independent of the system size or the orbital basis set, vanishing approximately as d^4 with decreasing d. If the boundary conditions for the Poisson equation are calculated in an approximate way, the GFC method may lose its variational character when the finite elements are too small; with larger elements, it is less sensitive to inaccuracies in the boundary values. As it is possible to obtain accurate boundary conditions in linear time, the overall scaling of the GFC method for large systems is governed by another computational step, namely the generation of the three-center overlap integrals with three Gaussian orbitals. The most unfavorable (nearly quadratic) scaling is observed for compact, truly three-dimensional systems; however, this scaling can be reduced to linear by introducing more effective techniques for recognizing significant three-center overlap distributions.
Evaluation of Shiryaev-Roberts Procedure for On-line Environmental Radiation Monitoring
NASA Astrophysics Data System (ADS)
Watson, Mara Mae
An on-line radiation monitoring system that simultaneously concentrates and detects radioactivity is needed to detect an accidental leakage from a nuclear waste disposal facility or clandestine nuclear activity. Previous studies have shown that classical control chart methods can be applied to on-line radiation monitoring data to quickly detect these events as they occur; however, Bayesian control chart methods were not included in these studies. This work will evaluate the performance of a Bayesian control chart method, the Shiryaev-Roberts (SR) procedure, compared to classical control chart methods, Shewhart 3-sigma and cumulative sum (CUSUM), for use in on-line radiation monitoring of 99Tc in water using extractive scintillating resin. Measurements were collected by pumping solutions containing 0.1-5 Bq/L of 99Tc, as 99TcO4-, through a flow cell packed with extractive scintillating resin coupled to a Beta-RAM Model 5 HPLC detector. While 99TcO4- accumulated on the resin, simultaneous measurements were acquired in 10-s intervals and then re-binned to 100-s intervals. The Bayesian statistical method, the Shiryaev-Roberts procedure, and the classical control chart methods, Shewhart 3-sigma and cumulative sum (CUSUM), were applied to the data using statistical algorithms developed in MATLAB. Two SR control charts were constructed using Poisson distributions and Gaussian distributions to estimate the likelihood ratio, and are referred to as Poisson SR and Gaussian SR to indicate the distribution used to calculate the statistic. The Poisson and Gaussian SR methods required as little as 28.9 mL less solution at 5 Bq/L and as much as 170 mL less solution at 0.5 Bq/L to exceed the control limit than the Shewhart 3-sigma method. The Poisson SR method needed as little as 6.20 mL less solution at 5 Bq/L and up to 125 mL less solution at 0.5 Bq/L to exceed the control limit than the CUSUM method. The Gaussian SR and CUSUM methods required comparable solution volumes for test solutions containing at least 1.5 Bq/L of 99Tc. For activity concentrations less than 1.5 Bq/L, the Gaussian SR method required as much as 40.8 mL less solution at 0.5 Bq/L to exceed the control limit than the CUSUM method. Both SR methods were able to consistently detect test solutions containing 0.1 Bq/L, unlike the Shewhart 3-sigma and CUSUM methods. Although the Poisson SR method required as much as 178 mL less solution to exceed the control limit than the Gaussian SR method, the Gaussian SR false positive rate of 0% was much lower than the Poisson SR false positive rate of 1.14%. A lower false positive rate made it easier to differentiate between a false positive and an increase in mean count rate caused by activity accumulating on the resin. The SR procedure is thus the ideal tool for low-level on-line radiation monitoring using extractive scintillating resin, because it needed less volume in most cases to detect an upward shift in the mean count rate than the Shewhart 3-sigma and CUSUM methods and consistently detected lower activity concentrations. The desired results for the monitoring scheme, however, need to be considered prior to choosing between the Poisson and Gaussian distribution to estimate the likelihood ratio, because each was advantageous under different circumstances. Once the control limit was exceeded, activity concentrations were estimated from the SR control chart using the slope of the control chart on a semi-logarithmic plot.
Five of nine test solutions for the Poisson SR control chart produced concentration estimates within 30% of the actual value, but the worst case differed from the actual value by 263.2%. The estimates for the Gaussian SR control chart were much more precise, with six of eight solutions producing estimates within 30%. Although the activity concentration estimates were only mediocre for the Poisson SR control chart and satisfactory for the Gaussian SR control chart, these results demonstrate that a relationship exists between activity concentration and the SR control chart magnitude that can be exploited to determine the activity concentration from the SR control chart. More sophisticated methods should be investigated to improve activity concentration estimates from the SR control charts.
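A minimal sketch of the Shiryaev-Roberts recursion for a shift in a Poisson mean, with hypothetical in-control and post-change rates and an arbitrary control limit (the study's actual rates, limits, and data differ).

    import numpy as np
    from scipy.stats import poisson

    def sr_poisson(counts, lam0, lam1, limit):
        # Shiryaev-Roberts recursion: R_n = (1 + R_{n-1}) * likelihood ratio
        R = 0.0
        for n, c in enumerate(counts):
            R = (1.0 + R) * poisson.pmf(c, lam1) / poisson.pmf(c, lam0)
            if R >= limit:
                return n                 # first interval exceeding the control limit
        return None

    rng = np.random.default_rng(3)
    counts = np.concatenate([rng.poisson(3.0, 50), rng.poisson(5.0, 50)])
    print(sr_poisson(counts, lam0=3.0, lam1=5.0, limit=50.0))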
Gustafsson, Leif; Sternad, Mikael
2007-10-01
Population models concern collections of discrete entities such as atoms, cells, humans, animals, etc., where the focus is on the number of entities in a population. Because of the complexity of such models, simulation is usually needed to reproduce their complete dynamic and stochastic behaviour. Two main types of simulation models are used for different purposes, namely micro-simulation models, where each individual is described with its particular attributes and behaviour, and macro-simulation models based on stochastic differential equations, where the population is described in aggregated terms by the number of individuals in different states. Consistency between micro- and macro-models is a crucial but often neglected aspect. This paper demonstrates how the Poisson Simulation technique can be used to produce a population macro-model consistent with the corresponding micro-model. This is accomplished by defining Poisson Simulation in strictly mathematical terms as a series of Poisson processes that generate sequences of Poisson distributions with dynamically varying parameters. The method can be applied to any population model. It provides the unique stochastic and dynamic macro-model consistent with a correct micro-model. The paper also presents a general macro form for stochastic and dynamic population models. In an appendix, Poisson Simulation is compared with Markov Simulation, showing a number of advantages. In particular, aggregation into state variables and aggregation of many events per time-step make Poisson Simulation orders of magnitude faster than Markov Simulation. Furthermore, you can build and execute much larger and more complicated models with Poisson Simulation than is possible with the Markov approach.
NASA Astrophysics Data System (ADS)
Lu, Benzhuo; Cheng, Xiaolin; Huang, Jingfang; McCammon, J. Andrew
2010-06-01
A Fortran program package is introduced for rapid evaluation of the electrostatic potentials and forces in biomolecular systems modeled by the linearized Poisson-Boltzmann equation. The numerical solver utilizes a well-conditioned boundary integral equation (BIE) formulation, a node-patch discretization scheme, a Krylov subspace iterative solver package with reverse communication protocols, and an adaptive new version of the fast multipole method in which the exponential expansions are used to diagonalize the multipole-to-local translations. The program and its full description, as well as several closely related libraries and utility tools, are available at http://lsec.cc.ac.cn/~lubz/afmpb.html and a mirror site at http://mccammon.ucsd.edu/. This paper is a brief summary of the program: the algorithms, the implementation and the usage. Program summary: Program title: AFMPB: Adaptive fast multipole Poisson-Boltzmann solver. Catalogue identifier: AEGB_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGB_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GPL 2.0. No. of lines in distributed program, including test data, etc.: 453 649. No. of bytes in distributed program, including test data, etc.: 8 764 754. Distribution format: tar.gz. Programming language: Fortran. Computer: Any. Operating system: Any. RAM: Depends on the size of the discretized biomolecular system. Classification: 3. External routines: Pre- and post-processing tools are required for generating the boundary elements and for visualization. Users can use MSMS (http://www.scripps.edu/~sanner/html/msms_home.html) for pre-processing, and VMD (http://www.ks.uiuc.edu/Research/vmd/) for visualization. Sub-programs included: An iterative Krylov subspace solvers package from SPARSKIT by Yousef Saad (http://www-users.cs.umn.edu/~saad/software/SPARSKIT/sparskit.html), and the fast multipole methods subroutines from FMMSuite (http://www.fastmultipole.org/). Nature of problem: Numerical solution of the linearized Poisson-Boltzmann equation that describes electrostatic interactions of molecular systems in ionic solutions. Solution method: A novel node-patch scheme is used to discretize the well-conditioned boundary integral equation formulation of the linearized Poisson-Boltzmann equation. Various Krylov subspace solvers can be subsequently applied to solve the resulting linear system, with a bounded number of iterations independent of the number of discretized unknowns. The matrix-vector multiplication at each iteration is accelerated by the adaptive new versions of fast multipole methods. The AFMPB solver requires other stand-alone pre-processing tools for boundary mesh generation, post-processing tools for data analysis and visualization, and can be conveniently coupled with different time stepping methods for dynamics simulation. Restrictions: Only three or six significant digits options are provided in this version. Unusual features: Most of the codes are in Fortran77 style. Memory allocation functions from Fortran90 and above are used in a few subroutines. Additional comments: The current version of the codes is designed and written for single core/processor desktop machines. Check http://lsec.cc.ac.cn/~lubz/afmpb.html and http://mccammon.ucsd.edu/ for updates and changes. Running time: The running time varies with the number of discretized elements (N) in the system and their distributions. In most cases, it scales linearly as a function of N.
High order discretization techniques for real-space ab initio simulations
NASA Astrophysics Data System (ADS)
Anderson, Christopher R.
2018-03-01
In this paper, we present discretization techniques to address numerical problems that arise when constructing ab initio approximations that use real-space computational grids. We present techniques to accommodate the singular nature of idealized nuclear and idealized electronic potentials, and we demonstrate the utility of using high order accurate grid based approximations to Poisson's equation in unbounded domains. To demonstrate the accuracy of these techniques, we present results for a Full Configuration Interaction computation of the dissociation of H2 using a computed, configuration dependent, orbital basis set.
Poisson-Boltzmann versus Size-Modified Poisson-Boltzmann Electrostatics Applied to Lipid Bilayers.
Wang, Nuo; Zhou, Shenggao; Kekenes-Huskey, Peter M; Li, Bo; McCammon, J Andrew
2014-12-26
Mean-field methods, such as the Poisson-Boltzmann equation (PBE), are often used to calculate the electrostatic properties of molecular systems. In the past two decades, an enhancement of the PBE, the size-modified Poisson-Boltzmann equation (SMPBE), has been reported. Here, the PBE and the SMPBE are reevaluated for realistic molecular systems, namely, lipid bilayers, under eight different sets of input parameters. The SMPBE appears to reproduce the molecular dynamics simulation results better than the PBE only under specific parameter sets, but in general, it performs no better than the Stern layer correction of the PBE. These results emphasize the need for careful discussions of the accuracy of mean-field calculations on realistic systems with respect to the choice of parameters and call for reconsideration of the cost-efficiency and the significance of the current SMPBE formulation.
PB-AM: An open-source, fully analytical linear Poisson-Boltzmann solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felberg, Lisa E.; Brookes, David H.; Yap, Eng-Hui
2016-11-02
We present the open source distributed software package Poisson-Boltzmann Analytical Method (PB-AM), a fully analytical solution to the linearized Poisson-Boltzmann equation. The PB-AM software package includes the generation of output files appropriate for visualization using VMD, a Brownian dynamics scheme that uses periodic boundary conditions to simulate dynamics, the ability to specify docking criteria, and two different kinetics schemes to evaluate biomolecular association rate constants. Given that PB-AM defines mutual polarization completely and accurately, it can be refactored as a many-body expansion to explore 2- and 3-body polarization. Additionally, the software has been integrated into the Adaptive Poisson-Boltzmann Solver (APBS) software package to make it more accessible to a larger group of scientists, educators and students that are more familiar with the APBS framework.
Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.
Renner, Ian W; Warton, David I
2013-03-01
Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature.
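The Poisson regression side of this equivalence can be sketched by discretizing a region into cells and fitting a log-linear Poisson GLM; the covariate, coefficients, and unit cell areas below are hypothetical, and MAXENT itself differs by the scale-dependent intercept noted above.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    env = rng.normal(size=400)                    # hypothetical covariate per cell
    y = rng.poisson(np.exp(-2.0 + 0.8 * env))     # counts; unit cell area folded into intercept

    fit = sm.GLM(y, sm.add_constant(env), family=sm.families.Poisson()).fit()
    print(fit.params)                             # roughly recovers (-2.0, 0.8)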
Three-dimensional zonal grids about arbitrary shapes by Poisson's equation
NASA Technical Reports Server (NTRS)
Sorenson, Reese L.
1988-01-01
A method for generating 3-D finite difference grids about or within arbitrary shapes is presented. The 3-D Poisson equations are solved numerically, with values for the inhomogeneous terms found automatically by the algorithm. Those inhomogeneous terms have the effect near boundaries of reducing cell skewness and imposing arbitrary cell height. The method allows the region of interest to be divided into zones (blocks), making the method applicable to almost any physical domain. A FORTRAN program called 3DGRAPE has been written to implement the algorithm. Lastly, a method for redistributing grid points along lines normal to boundaries is described.
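As a toy illustration of elliptic grid smoothing (a two-dimensional, homogeneous Laplace caricature of the inhomogeneous three-dimensional Poisson system 3DGRAPE actually solves), Jacobi sweeps pull distorted interior grid lines toward smooth positions while boundaries stay fixed.

    import numpy as np

    nx, ny = 21, 21
    x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny), indexing="ij")
    x += 0.05 * np.sin(3.0 * np.pi * y)            # distort the grid lines

    for _ in range(500):                           # Jacobi sweeps, boundaries held fixed
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] + x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] + y[1:-1, 2:] + y[1:-1, :-2])

In 3DGRAPE-style codes the automatically computed inhomogeneous source terms would enter as additions to the right-hand side of each sweep, controlling spacing and skewness near boundaries.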
NASA Astrophysics Data System (ADS)
AllahTavakoli, Y.; Safari, A.; Ardalan, A.; Bahroudi, A.
2015-12-01
The current research provides a method for tracking near-surface mass-density anomalies using only land-based gravity data, based on a special version of Poisson's partial differential equation (PDE) for the gravitational field at the Earth's surface. The research demonstrates how Poisson's PDE provides the capability to extract near-surface mass-density anomalies from land-based gravity data. Herein, this version of Poisson's PDE is formulated mathematically at the Earth's surface and then used to develop the new method for approximating the mass-density via derivatives of the Earth's gravitational field (i.e. via the gradient tensor). The author believes that the PDE can give us new knowledge about the behavior of the Earth's gravitational field at the Earth's surface, which can be useful for developing new methods of mass-density determination. In a case study, the proposed method is applied to a set of gravity stations located in the south of Iran. The results were numerically validated against established knowledge of the geological structures in the study area, and the method was compared with two standard methods of mass-density determination. All the numerical experiments show that the proposed approach is well suited to tracking near-surface mass-density anomalies using only gravity data. Finally, the approach is also applied to petroleum exploration studies of salt diapirs in the south of Iran.
Multiscale hidden Markov models for photon-limited imaging
NASA Astrophysics Data System (ADS)
Nowak, Robert D.
1999-06-01
Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding than classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.
Electrostatics of electron-hole interactions in van der Waals heterostructures
NASA Astrophysics Data System (ADS)
Cavalcante, L. S. R.; Chaves, A.; Van Duppen, B.; Peeters, F. M.; Reichman, D. R.
2018-03-01
The role of dielectric screening of electron-hole interaction in van der Waals heterostructures is theoretically investigated. A comparison between models available in the literature for describing these interactions is made and the limitations of these approaches are discussed. A simple numerical solution of Poisson's equation for a stack of dielectric slabs based on a transfer matrix method is developed, enabling the calculation of the electron-hole interaction potential at very low computational cost and with reasonable accuracy. Using different potential models, direct and indirect exciton binding energies in these systems are calculated within Wannier-Mott theory, and a comparison of theoretical results with recent experiments on excitons in two-dimensional materials is discussed.
Wheeled Pro(p)file of Batalin-Vilkovisky Formalism
NASA Astrophysics Data System (ADS)
Merkulov, S. A.
2010-05-01
Using a technique of wheeled props we establish a correspondence between the homotopy theory of unimodular Lie 1-bialgebras and the famous Batalin-Vilkovisky formalism. Solutions of the so-called quantum master equation satisfying certain boundary conditions are proven to be in 1-1 correspondence with representations of a wheeled dg prop which, on the one hand, is isomorphic to the cobar construction of the prop of unimodular Lie 1-bialgebras and, on the other hand, is quasi-isomorphic to the dg wheeled prop of unimodular Poisson structures. These results allow us to apply properadic methods for computing formulae for a homotopy transfer of a unimodular Lie 1-bialgebra structure on an arbitrary complex to the associated quantum master function on its cohomology. It is proven that in the category of quantum BV manifolds associated with the homotopy theory of unimodular Lie 1-bialgebras quasi-isomorphisms are equivalence relations. It is shown that Losev-Mnev’s BF theory for unimodular Lie algebras can be naturally extended to the case of unimodular Lie 1-bialgebras (and, eventually, to the case of unimodular Poisson structures). Using a finite-dimensional version of the Batalin-Vilkovisky quantization formalism it is rigorously proven that the Feynman integrals computing the effective action of this new BF theory describe precisely homotopy transfer formulae obtained within the wheeled properadic approach to the quantum master equation. Quantum corrections (which are present in our BF model to all orders of the Planck constant) correspond precisely to what are often called “higher Massey products” in the homological algebra.
Lattice dynamic properties of Rh2XAl (X=Fe and Y) alloys
NASA Astrophysics Data System (ADS)
Al, Selgin; Arikan, Nihat; Demir, Süleyman; Iyigör, Ahmet
2018-02-01
The electronic band structure, elastic and vibrational spectra of Rh2FeAl and Rh2YAl alloys were computed in detail by employing an ab initio pseudopotential method and a linear-response technique based on the density-functional theory (DFT) scheme within a generalized gradient approximation (GGA). Computed lattice constants, bulk moduli and elastic constants were compared; Rh2YAl exhibited a higher ability to resist volume change than Rh2FeAl. The elastic constants, shear modulus, Young's modulus, Poisson's ratio, B/G ratio, electronic band structure, total and partial density of states, and total magnetic moment of the alloys were also presented. Rh2FeAl showed distinct spin-up and spin-down states whereas Rh2YAl showed none, being non-magnetic. The calculated total densities of states for both materials suggest that both alloys are metallic in nature. Full phonon spectra of Rh2FeAl and Rh2YAl alloys in the L21 phase were computed using the ab initio linear-response method. The obtained phonon frequencies were all positive, indicating that both alloys are dynamically stable.
DETECTING UNSPECIFIED STRUCTURE IN LOW-COUNT IMAGES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Nathan M.; Dyk, David A. van; Kashyap, Vinay L.
Unexpected structure in images of astronomical sources often presents itself upon visual inspection of the image, but such apparent structure may either correspond to true features in the source or be due to noise in the data. This paper presents a method for testing whether inferred structure in an image with Poisson noise represents a significant departure from a baseline (null) model of the image. To infer image structure, we conduct a Bayesian analysis of a full model that uses a multiscale component to allow flexible departures from the posited null model. As a test statistic, we use a tail probability of the posterior distribution under the full model. This choice of test statistic allows us to estimate a computationally efficient upper bound on a p-value that enables us to draw strong conclusions even when there are limited computational resources that can be devoted to simulations under the null model. We demonstrate the statistical performance of our method on simulated images. Applying our method to an X-ray image of the quasar 0730+257, we find significant evidence against the null model of a single point source and uniform background, lending support to the claim of an X-ray jet.
The use of wavelet filters for reducing noise in posterior fossa Computed Tomography images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pita-Machado, Reinado; Perez-Diaz, Marlen, E-mail: mperez@uclv.edu.cu; Lorenzo-Ginori, Juan V.
Wavelet transform based de-noising, like wavelet shrinkage, gives good results in CT and affects the spatial resolution very little. Some applications are reconstruction methods, while others are a posteriori de-noising methods. De-noising after reconstruction is very difficult because the noise is non-stationary and has an unknown distribution. Methods which work in sinogram space do not have this problem, because there they always work with a known noise distribution. On the other hand, the posterior fossa in a head CT is a very complex region for physicians, because it is commonly affected by artifacts and noise which are not eliminated during the reconstruction procedure; this can lead to false positive evaluations. The purpose of our present work is to compare different wavelet shrinkage de-noising filters for reducing noise in sinogram space, particularly for images of the posterior fossa within CT scans. This work describes an experimental search for the best wavelets to reduce Poisson noise in Computed Tomography (CT) scans. Results showed that de-noising with wavelet filters improved the quality of the posterior fossa region in terms of an increased CNR, without noticeable structural distortions.
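A minimal one-dimensional wavelet-shrinkage sketch, assuming the PyWavelets package and a toy Poisson-corrupted signal; an actual sinogram workflow would add a variance-stabilizing step and operate row by row.

    import numpy as np
    import pywt

    rng = np.random.default_rng(5)
    clean = np.sin(np.linspace(0.0, 4.0 * np.pi, 1024))
    noisy = rng.poisson(50.0 * (clean + 2.0)) / 50.0 - 2.0   # toy Poisson corruption

    coeffs = pywt.wavedec(noisy, "db4", level=4)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # MAD noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(noisy.size))          # universal threshold
    denoised = pywt.waverec(
        [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]], "db4"
    )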
NASA Astrophysics Data System (ADS)
Liu, Hailiang; Wang, Zhongming
2017-01-01
We design an arbitrary-order free energy satisfying discontinuous Galerkin (DG) method for solving time-dependent Poisson-Nernst-Planck systems. Both the semi-discrete and fully discrete DG methods are shown to satisfy the corresponding discrete free energy dissipation law for positive numerical solutions. Positivity of numerical solutions is enforced by an accuracy-preserving limiter in reference to positive cell averages. Numerical examples are presented to demonstrate the high resolution of the numerical algorithm and to illustrate the proven properties of mass conservation, free energy dissipation, as well as the preservation of steady states.
Direct Coupling Method for Time-Accurate Solution of Incompressible Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Soh, Woo Y.
1992-01-01
A noniterative finite difference numerical method is presented for the solution of the incompressible Navier-Stokes equations with second order accuracy in time and space. Explicit treatment of convection and diffusion terms and implicit treatment of the pressure gradient give a single pressure Poisson equation when the discretized momentum and continuity equations are combined. A pressure boundary condition is not needed on solid boundaries in the staggered mesh system. The solution of the pressure Poisson equation is obtained directly by Gaussian elimination. This method is tested on flow problems in a driven cavity and a curved duct.
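A sketch of the direct-solution idea on a one-dimensional model pressure Poisson problem, using dense Gaussian elimination and, for brevity, homogeneous Dirichlet ends rather than the staggered-mesh boundary treatment of the paper.

    import numpy as np

    n = 50
    h = 1.0 / (n + 1)
    # Standard second-difference matrix for p'' = f on a uniform grid
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h ** 2
    f = np.ones(n)
    p = np.linalg.solve(A, f)      # dense LU factorization, i.e. Gaussian elimination
    print(p.min(), p.max())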
Sparse Poisson noisy image deblurring.
Carlavan, Mikael; Blanc-Féraud, Laure
2012-04-01
Deblurring noisy Poisson images has recently been the subject of an increasing number of works in many areas such as astronomy and biological imaging. In this paper, we focus on confocal microscopy, which is a very popular technique for 3-D imaging of biological living specimens that gives images with a very good resolution (several hundreds of nanometers), although degraded by both blur and Poisson noise. Deconvolution methods have been proposed to reduce these degradations, and in this paper we focus on techniques that promote the introduction of an explicit prior on the solution. One difficulty of these techniques is setting the value of the parameter which weights the tradeoff between the data term and the regularizing term. Only a few works have been devoted to the automatic selection of this regularizing parameter when considering Poisson noise; therefore, it is often set manually such that it gives the best visual results. We present here two recent methods to estimate this regularizing parameter, and we first propose an improvement of these estimators which takes advantage of confocal images. Following these estimators, we secondly propose to express the problem of the deconvolution of Poisson noisy images as the minimization of a new constrained problem. The proposed constrained formulation is well suited to this application domain since it is directly expressed using the antilog likelihood of the Poisson distribution and therefore does not require any approximation. We show how to solve the unconstrained and constrained problems using the recent alternating-direction technique, and we present results on synthetic and real data using well-known priors, such as total variation and wavelet transforms. Among these wavelet transforms, we especially focus on the dual-tree complex wavelet transform and on the dictionary composed of curvelets and an undecimated wavelet transform.
Normalized stiffness ratios for mechanical characterization of isotropic acoustic foams.
Sahraoui, Sohbi; Brouard, Bruno; Benyahia, Lazhar; Parmentier, Damien; Geslain, Alan
2013-12-01
This paper presents a method for the mechanical characterization of isotropic foams at low frequency. The objective of this study is to determine the Young's modulus, the Poisson's ratio, and the loss factor of commercially available foam plates. The method is applied on porous samples having square and circular sections. The main idea of this work is to perform quasi-static compression tests of a single foam sample followed by two juxtaposed samples having the same dimensions. The load and displacement measurements lead to a direct extraction of the elastic constants by means of normalized stiffness and normalized stiffness ratio which depend on Poisson's ratio and shape factor. The normalized stiffness is calculated by the finite element method for different Poisson ratios. The no-slip boundary conditions imposed by the loading rigid plates create interfaces with a complex strain distribution. Beforehand, compression tests were performed by means of a standard tensile machine in order to determine the appropriate pre-compression rate for quasi-static tests.
Martins, Silvia A; Sousa, Sergio F
2013-06-05
The determination of differences in solvation free energies between related drug molecules remains an important challenge in computational drug optimization, when fast and accurate calculation of differences in binding free energy is required. In this study, we have evaluated the performance of five commonly used polarized continuum model (PCM) methodologies in the determination of solvation free energies for 53 typical alcohol and alkane small molecules. In addition, the performance of these PCM methods, of a thermodynamic integration (TI) protocol, and of the Poisson-Boltzmann (PB) and generalized Born (GB) methods was tested in the determination of solvation free energy changes for 28 common alkane-alcohol transformations, i.e. the substitution of a hydrogen atom by a hydroxyl substituent. The results show that the solvation model D (SMD) performs best among the PCM-based approaches in estimating solvation free energies for alcohol molecules, and solvation free energy changes for alkane-alcohol transformations, with an average error below 1 kcal/mol for both quantities. However, for the determination of solvation free energy changes on alkane-alcohol transformation, PB and TI yielded better results. TI was particularly accurate in the treatment of hydroxyl group additions to aromatic rings (0.53 kcal/mol), a common transformation when optimizing drug binding in computer-aided drug design.
2012-01-01
Background: The Poisson-Boltzmann (PB) equation and its linear approximation have been widely used to describe biomolecular electrostatics. Generalized Born (GB) models offer a convenient computational approximation for the more fundamental approach based on the Poisson-Boltzmann equation, and allow estimation of pairwise contributions to electrostatic effects in the molecular context. Results: We have implemented in a single program the most common analyses of the electrostatic properties of proteins. The program first computes generalized Born radii via a surface integral, and then uses these radii (with a finite-radius test particle) to perform electrostatic analyses. In particular, the output of the program entails, depending on the user's requirements: 1) the generalized Born radius of each atom; 2) the electrostatic solvation free energy; 3) the electrostatic forces on each atom (currently in a developmental stage); 4) the pH-dependent properties (total charge and pH-dependent free energy of folding in the pH range -2 to 18); 5) the pKa of all ionizable groups; 6) the electrostatic potential at the surface of the molecule; 7) the electrostatic potential in a volume surrounding the molecule. Conclusions: Although at the expense of limited flexibility, the program provides the most common analyses with the requirement of a single input file in PQR format. The results obtained are comparable to those obtained using state-of-the-art Poisson-Boltzmann solvers. A Linux executable with example input and output files is provided as supplementary material. PMID:22536964
Species abundance in a forest community in South China: A case of Poisson lognormal distribution
Yin, Z.-Y.; Ren, H.; Zhang, Q.-M.; Peng, S.-L.; Guo, Q.-F.; Zhou, G.-Y.
2005-01-01
Case studies on Poisson lognormal distribution of species abundance have been rare, especially in forest communities. We propose a numerical method to fit the Poisson lognormal to the species abundance data at an evergreen mixed forest in the Dinghushan Biosphere Reserve, South China. Plants in the tree, shrub and herb layers in 25 quadrats of 20 m × 20 m, 5 m × 5 m, and 1 m × 1 m were surveyed. Results indicated that: (i) for each layer, the observed species abundance with a similarly small median, mode, and a variance larger than the mean was reverse J-shaped and followed well the zero-truncated Poisson lognormal; (ii) the coefficient of variation, skewness and kurtosis of abundance, and two Poisson lognormal parameters (μ and σ) for the shrub layer were closer to those for the herb layer than those for the tree layer; and (iii) from the tree to the shrub to the herb layer, σ and the coefficient of variation decreased, whereas diversity increased. We suggest that: (i) the species abundance distributions in the three layers reflect the overall community characteristics; (ii) the Poisson lognormal can describe the species abundance distribution in diverse communities with a few abundant species but many rare species; and (iii) 1/σ should be an alternative measure of diversity.
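A sketch of fitting the zero-truncated Poisson lognormal by maximum likelihood, evaluating the mixture integral with Gauss-Hermite quadrature; the abundance vector and starting values are toy inputs, not the Dinghushan data, and μ, σ denote the lognormal parameters mentioned above.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    # Gauss-Hermite nodes/weights for expectations over a standard normal
    nodes, weights = np.polynomial.hermite_e.hermegauss(60)

    def pln_pmf(k, mu, sigma):
        lam = np.exp(mu + sigma * nodes)                     # lognormal Poisson rates
        logp = k[:, None] * np.log(lam) - lam - gammaln(k[:, None] + 1.0)
        return (np.exp(logp) * weights).sum(axis=1) / np.sqrt(2.0 * np.pi)

    def nll(params, k):
        mu, sigma = params
        p = pln_pmf(k, mu, sigma)
        p0 = pln_pmf(np.array([0]), mu, sigma)[0]            # mass at zero, truncated away
        return -np.sum(np.log(p / (1.0 - p0)))

    abund = np.array([1, 1, 1, 2, 2, 3, 5, 8, 21, 55])       # toy abundance vector
    print(minimize(nll, x0=[0.0, 1.0], args=(abund,), method="Nelder-Mead").x)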
Elliptic generation of composite three-dimensional grids about realistic aircraft
NASA Technical Reports Server (NTRS)
Sorenson, R. L.
1986-01-01
An elliptic method for generating composite grids about realistic aircraft is presented. A body-conforming grid is first generated about the entire aircraft by the solution of Poisson's differential equation. This grid has relatively coarse spacing, and it covers the entire physical domain. At boundary surfaces, cell size is controlled and cell skewness is nearly eliminated by inhomogeneous terms, which are found automatically by the program. Certain regions of the grid in which high gradients are expected, and which map into rectangular solids in the computational domain, are then designated for zonal refinement. Spacing in the zonal grids is reduced by adding points with a simple, algebraic scheme. Details of the grid generation method are presented along with results of the present application, a wing-body configuration based on the F-16 fighter aircraft.
Effective implementation of wavelet Galerkin method
NASA Astrophysics Data System (ADS)
Finěk, Václav; Šimunková, Martina
2012-11-01
It was proved by W. Dahmen et al. that an adaptive wavelet scheme is asymptotically optimal for a wide class of elliptic equations. This scheme approximates the solution u by a linear combination of N wavelets, and a benchmark for its performance is the best N-term approximation, which is obtained by retaining the N largest wavelet coefficients of the unknown solution. Moreover, the number of arithmetic operations needed to compute the approximate solution is proportional to N. The most time consuming part of this scheme is the approximate matrix-vector multiplication. In this contribution, we introduce our implementation of the wavelet Galerkin method for the Poisson equation -Δu = f on a hypercube with homogeneous Dirichlet boundary conditions. In our implementation, we identified the nonzero elements of the stiffness matrix corresponding to the above problem and we perform matrix-vector multiplication only with these nonzero elements.
Initial evaluation of discrete orthogonal basis reconstruction of ECT images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, E.B.; Donohue, K.D.
1996-12-31
Discrete orthogonal basis restoration (DOBR) is a linear, non-iterative, and robust method for solving inverse problems for systems characterized by shift-variant transfer functions. This simulation study evaluates the feasibility of using DOBR for reconstructing emission computed tomographic (ECT) images. The imaging system model uses typical SPECT parameters and incorporates the effects of attenuation, spatially-variant PSF, and Poisson noise in the projection process. Sample reconstructions and statistical error analyses for a class of digital phantoms compare the DOBR performance for Hartley and Walsh basis functions. Test results confirm that DOBR with either basis set produces images with good statistical properties. No problems were encountered with reconstruction instability. The flexibility of the DOBR method and its consistent performance warrant further investigation of DOBR as a means of ECT image reconstruction.
Schrödinger-Poisson-Vlasov-Poisson correspondence
NASA Astrophysics Data System (ADS)
Mocz, Philip; Lancaster, Lachlan; Fialkov, Anastasia; Becerra, Fernando; Chavanis, Pierre-Henri
2018-04-01
The Schrödinger-Poisson equations describe the behavior of a superfluid Bose-Einstein condensate under self-gravity with a 3D wave function. As ℏ/m →0 , m being the boson mass, the equations have been postulated to approximate the collisionless Vlasov-Poisson equations also known as the collisionless Boltzmann-Poisson equations. The latter describe collisionless matter with a 6D classical distribution function. We investigate the nature of this correspondence with a suite of numerical test problems in 1D, 2D, and 3D along with analytic treatments when possible. We demonstrate that, while the density field of the superfluid always shows order unity oscillations as ℏ/m →0 due to interference and the uncertainty principle, the potential field converges to the classical answer as (ℏ/m )2. Thus, any dynamics coupled to the superfluid potential is expected to recover the classical collisionless limit as ℏ/m →0 . The quantum superfluid is able to capture rich phenomena such as multiple phase-sheets, shell-crossings, and warm distributions. Additionally, the quantum pressure tensor acts as a regularizer of caustics and singularities in classical solutions. This suggests the exciting prospect of using the Schrödinger-Poisson equations as a low-memory method for approximating the high-dimensional evolution of the Vlasov-Poisson equations. As a particular example we consider dark matter composed of ultralight axions, which in the classical limit (ℏ/m →0 ) is expected to manifest itself as collisionless cold dark matter.
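A minimal one-dimensional split-step sketch of the Schrödinger-Poisson system in arbitrary code units (ℏ/m = 1, 4πG = 1, periodic box), illustrating the kick-drift-kick structure rather than the paper's production solver.

    import numpy as np

    n, L, dt = 256, 50.0, 0.01
    x = np.linspace(0.0, L, n, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    ksq = k ** 2
    ksq_safe = ksq.copy()
    ksq_safe[0] = 1.0                       # dodge 0/0 at the mean (k = 0) mode

    psi = (1.0 + 0.1 * np.cos(2.0 * np.pi * x / L)).astype(complex)

    for _ in range(1000):
        rho = np.abs(psi) ** 2
        Vhat = -np.fft.fft(rho - rho.mean()) / ksq_safe   # solve V'' = rho - <rho>
        Vhat[0] = 0.0
        V = np.fft.ifft(Vhat).real
        psi *= np.exp(-0.5j * dt * V)                     # half kick by the potential
        psi = np.fft.ifft(np.exp(-0.5j * dt * ksq) * np.fft.fft(psi))  # free drift
        psi *= np.exp(-0.5j * dt * V)                     # half kick

The density |psi|^2 produced by such a run exhibits the order-unity interference oscillations described above, while the potential V approaches the classical field.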
A Kolmogorov-Smirnov test for the molecular clock based on Bayesian ensembles of phylogenies
Antoneli, Fernando; Passos, Fernando M.; Lopes, Luciano R.
2018-01-01
Divergence date estimates are central to understanding evolutionary processes and depend, in the case of molecular phylogenies, on tests of molecular clocks. Here we propose two non-parametric tests of strict and relaxed molecular clocks built upon a framework that uses the empirical cumulative distribution (ECD) of branch lengths obtained from an ensemble of Bayesian trees and the well-known non-parametric (one-sample and two-sample) Kolmogorov-Smirnov (KS) goodness-of-fit tests. In the strict clock case, the method consists in using the one-sample KS test to directly test whether the phylogeny is clock-like, in other words, whether it follows a Poisson law. The ECD is computed from the discretized branch lengths, and the parameter λ of the expected Poisson distribution is calculated as the average branch length over the ensemble of trees. To compensate for the auto-correlation in the ensemble of trees and pseudo-replication, we take advantage of thinning and effective sample size, two features provided by Bayesian inference MCMC samplers. Finally, it is observed that tree topologies with very long or very short branches lead to Poisson mixtures, and in this case we propose the use of the two-sample KS test with samples from two continuous branch length distributions, one obtained from an ensemble of clock-constrained trees and the other from an ensemble of unconstrained trees. Moreover, in this second form the test can also be applied to relaxed clock models. The use of a statistically equivalent ensemble of phylogenies to obtain the branch length ECD, instead of one consensus tree, yields a considerable reduction of the effects of small sample size and provides a gain of power. PMID:29300759
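A sketch of the strict-clock test's core step, comparing discretized branch lengths against a Poisson law with λ set to their mean; the branch lengths here are simulated stand-ins, and the textbook KS p-value is only approximate for discrete data with an estimated parameter, which is part of what the ensemble framework is designed to handle.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    branches = rng.poisson(4.0, size=500)       # stand-in for discretized branch lengths
    lam = branches.mean()                       # average branch length over the ensemble

    # One-sample KS comparison of the branch-length ECD with a Poisson CDF
    res = stats.kstest(branches, lambda q: stats.poisson.cdf(q, lam))
    print(res.statistic, res.pvalue)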
A quantile count model of water depth constraints on Cape Sable seaside sparrows
Cade, B.S.; Dong, Q.
2008-01-01
1. A quantile regression model for counts of breeding Cape Sable seaside sparrows Ammodramus maritimus mirabilis (L.) as a function of water depth and previous year abundance was developed based on extensive surveys, 1992-2005, in the Florida Everglades. The quantile count model extends linear quantile regression methods to discrete response variables, providing a flexible alternative to discrete parametric distributional models, e.g. Poisson, negative binomial and their zero-inflated counterparts. 2. Estimates from our multiplicative model demonstrated that negative effects of increasing water depth in breeding habitat on sparrow numbers were dependent on recent occupation history. Upper 10th percentiles of counts (one to three sparrows) decreased with increasing water depth from 0 to 30 cm when sites were not occupied in previous years. However, upper 40th percentiles of counts (one to six sparrows) decreased with increasing water depth for sites occupied in previous years. 3. Greatest decreases (-50% to -83%) in upper quantiles of sparrow counts occurred as water depths increased from 0 to 15 cm when previous year counts were ≥ 1, but a small proportion of sites (5-10%) held at least one sparrow even as water depths increased to 20 or 30 cm. 4. A zero-inflated Poisson regression model provided estimates of conditional means that also decreased with increasing water depth, but rates of change were lower and decreased with increasing previous year counts compared to the quantile count model. Quantiles computed for the zero-inflated Poisson model enhanced interpretation of this model but had greater lack-of-fit for water depths > 0 cm and previous year counts ≥ 1, conditions where the negative effect of water depths was readily apparent and fitted better with the quantile count model.
Masterlark, Timothy
2003-01-01
Dislocation models can simulate static deformation caused by slip along a fault. These models usually take the form of a dislocation embedded in a homogeneous, isotropic, Poisson-solid half-space (HIPSHS). However, the widely accepted HIPSHS assumptions poorly approximate subduction zone systems of converging oceanic and continental crust. This study uses three-dimensional finite element models (FEMs) that allow for any combination (including none) of the HIPSHS assumptions to compute synthetic Green's functions for displacement. Using the 1995 Mw = 8.0 Jalisco-Colima, Mexico, subduction zone earthquake and associated measurements from a nearby GPS array as an example, FEM-generated synthetic Green's functions are combined with standard linear inverse methods to estimate dislocation distributions along the subduction interface. Loading a forward HIPSHS model with dislocation distributions, estimated from FEMs that sequentially relax the HIPSHS assumptions, yields the sensitivity of predicted displacements to each of the HIPSHS assumptions. For the subduction zone models tested and the specific field situation considered, sensitivities to the individual Poisson-solid, isotropy, and homogeneity assumptions can be substantially greater than GPS. measurement uncertainties. Forward modeling quantifies stress coupling between the Mw = 8.0 earthquake and a nearby Mw = 6.3 earthquake that occurred 63 days later. Coulomb stress changes predicted from static HIPSHS models cannot account for the 63-day lag time between events. Alternatively, an FEM that includes a poroelastic oceanic crust, which allows for postseismic pore fluid pressure recovery, can account for the lag time. The pore fluid pressure recovery rate puts an upper limit of 10-17 m2 on the bulk permeability of the oceanic crust. Copyright 2003 by the American Geophysical Union.
Exploring the Local Milky Way: M Dwarfs as Tracers of Galactic Populations
2007-12-01
dark matter cosmology have sought to recreate the infant Galaxy, tracing the formation and collapse of baryons within the dark matter halo (Brook...with dark matter halo and interstellar material contributions, and the potential is computed using the Poisson equation. Stars are then evolved
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perkins, F.W.; Sun, Y.C.
1980-11-01
The steady-state solution of the nonlinear Vlasov-Poisson equations is reduced to a nonlinear eigenvalue problem for the case of double-layer (potential drop) boundary conditions. Solutions with no relative electron-ion drifts are found. The kinetic stability is discussed. Suggestions for creating these states in experiments and computer simulations are offered.
Validation of the Poisson Stochastic Radiative Transfer Model
NASA Technical Reports Server (NTRS)
Zhuravleva, Tatiana; Marshak, Alexander
2004-01-01
A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure - the cloud aspect ratio - is determined entirely by matching measurements and calculations of the direct solar radiation. If measurements of the direct solar radiation are unavailable, it is shown that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud-top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.
Measurement of Poisson's ratio of nonmetallic materials by laser holographic interferometry
NASA Astrophysics Data System (ADS)
Zhu, Jian T.
1991-12-01
By means of an off-axis collimated plane wave coherent light arrangement and a pure-bending loading device, Poisson's ratio values of CFRP (carbon fiber-reinforced plastics plates, lay-up 0 degree(s), 90 degree(s)), GFRP (glass fiber-reinforced plastics plates, radial direction) and PMMA (polymethyl methacrylate, x, y directions) have been measured. On the basis of this study, the ministry standard of the Ministry of Aeronautical Industry (Testing method for the measurement of Poisson's ratio of non-metallic materials by laser holographic interferometry) has been published. The measurement process is fast and simple, and the results are reliable and accurate.
Prediction of forest fires occurrences with area-level Poisson mixed models.
Boubeta, Miguel; Lombardía, María José; Marey-Pérez, Manuel Francisco; Morales, Domingo
2015-05-01
The number of fires in forest areas of Galicia (north-west Spain) during the summer period is quite high. Local authorities are interested in analyzing the factors that explain this phenomenon. Poisson regression models are good tools for describing and predicting the number of fires per forest area. This work employs area-level Poisson mixed models to treat real data on fires in forest areas. A parametric bootstrap method is applied to estimate the mean squared errors of the fire predictors. The developed methodology and software are applied to a real data set of fires in forest areas of Galicia. Copyright © 2015 Elsevier Ltd. All rights reserved.
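A hedged sketch of the parametric bootstrap step on synthetic data (the paper's area-level model also includes random effects, which are omitted here for brevity; all names are illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_areas = 50
x = sm.add_constant(rng.uniform(size=n_areas))     # one area-level covariate
beta_true = np.array([0.5, 1.0])
y = rng.poisson(np.exp(x @ beta_true))             # fire counts per area

fit = sm.GLM(y, x, family=sm.families.Poisson()).fit()
mu_hat = fit.mu                                    # fitted mean fires per area

B = 200
boot_err = np.zeros((B, n_areas))
for b in range(B):
    y_b = rng.poisson(mu_hat)                      # simulate from the fitted model
    mu_b = sm.GLM(y_b, x, family=sm.families.Poisson()).fit().mu
    boot_err[b] = (mu_b - mu_hat) ** 2

mse_hat = boot_err.mean(axis=0)                    # bootstrap MSE per area predictor
print(mse_hat[:5])
```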
A spectral Poisson solver for kinetic plasma simulation
NASA Astrophysics Data System (ADS)
Szeremley, Daniel; Obberath, Jens; Brinkmann, Ralf
2011-10-01
Plasma resonance spectroscopy is a well established plasma diagnostic method, realized in several designs. One of these designs is the multipole resonance probe (MRP). In its idealized - geometrically simplified - version it consists of two dielectrically shielded, hemispherical electrodes to which an RF signal is applied. A numerical tool is under development which is capable of simulating the dynamics of the plasma surrounding the MRP in the electrostatic approximation. In this contribution we concentrate on the specialized Poisson solver for that tool. The plasma is represented by an ensemble of point charges. By expanding both the charge density and the potential into spherical harmonics, a largely analytical solution of the Poisson problem can be employed. For a practical implementation, the expansion must be appropriately truncated. With this spectral solver we are able to efficiently solve the Poisson equation in a kinetic plasma simulation without the need to introduce a spatial discretization.
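The solver above expands in spherical harmonics; as a simpler illustration of the same spectral principle, the sketch below solves the Poisson equation on a 2D periodic box with FFTs (all names and the geometry are my own, not the MRP tool's):

```python
import numpy as np

n = 128
L = 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

rho = np.sin(3 * X) * np.cos(2 * Y)            # zero-mean charge density

k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi     # angular wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                                 # avoid division by zero for the mean mode

phi_hat = np.fft.fft2(rho) / k2                # solve -laplacian(phi) = rho spectrally
phi_hat[0, 0] = 0.0                            # fix the free constant (zero-mean potential)
phi = np.real(np.fft.ifft2(phi_hat))

# Check: -laplacian(phi) should reproduce rho to machine precision.
lap = np.real(np.fft.ifft2(-k2 * np.fft.fft2(phi)))
print(f"max residual = {np.max(np.abs(-lap - rho)):.2e}")
```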
in Mapping of Gastric Cancer Incidence in Iran
Asmarian, Naeimehossadat; Jafari-Koshki, Tohid; Soleimani, Ali; Taghi Ayatollahi, Seyyed Mohammad
2016-10-01
Background: In many countries gastric cancer has the highest incidence among the gastrointestinal cancers, and it is the second most common cancer in Iran. The aim of this study was to identify and map high-risk gastric cancer regions at the county level in Iran. Methods: In this study we analyzed gastric cancer data for Iran for the years 2003-2010. Area-to-area Poisson kriging and Besag, York and Mollié (BYM) spatial models were applied to smooth the standardized incidence ratios of gastric cancer for the 373 counties surveyed in this study. The two methods were compared in terms of accuracy and precision in identifying high-risk regions. Results: The highest smoothed standardized incidence rate (SIR) according to area-to-area Poisson kriging was in Meshkinshahr county in Ardabil province in north-western Iran (2.4, SD = 0.05), while the highest smoothed SIR according to the BYM model was in Ardabil, the capital of that province (2.9, SD = 0.09). Conclusion: Both mapping methods, area-to-area Poisson kriging and BYM, showed the gastric cancer incidence rate to be highest in north and north-west Iran. However, area-to-area Poisson kriging was more precise than the BYM model and required less smoothing. According to the results obtained, preventive measures and treatment programs should be focused on particular counties of Iran.
New method for blowup of the Euler-Poisson system
NASA Astrophysics Data System (ADS)
Kwong, Man Kam; Yuen, Manwai
2016-08-01
In this paper, we provide a new method for establishing the blowup of C² solutions for the pressureless Euler-Poisson system with attractive forces in R^N (N ≥ 2) with ρ(0, x₀) > 0 and Ω⁰ᵢⱼ(x₀) = (1/2)[∂ᵢuⱼ(0, x₀) − ∂ⱼuᵢ(0, x₀)] = 0 at some point x₀ ∈ R^N. By applying the generalized Hubble transformation div u(t, x₀(t)) = N ȧ(t)/a(t) to a reduced Riccati differential inequality derived from the system, we simplify the inequality into the Emden equation ä(t) = −λ/a(t)^(N−1), a(0) = 1, ȧ(0) = div u(0, x₀)/N. Known results on its blowup set allow us to easily obtain the blowup conditions of the Euler-Poisson system.
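A numerical illustration (not from the paper): integrating the Emden equation above and watching for a(t) → 0, which signals finite-time blowup; the values of λ, N, and the initial divergence are assumed for the demo:

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, N = 1.0, 3
div_u0 = -0.5                                   # assumed initial divergence at x0

def emden(t, y):
    a, adot = y
    return [adot, -lam / a ** (N - 1)]          # Emden equation a'' = -lam/a^(N-1)

# Stop when a(t) crosses a small threshold (collapse of the scale factor).
def hit_zero(t, y):
    return y[0] - 1e-3
hit_zero.terminal = True

sol = solve_ivp(emden, [0, 50], [1.0, div_u0 / N], events=hit_zero, max_step=0.01)
if sol.t_events[0].size:
    print(f"a(t) vanishes near t = {sol.t_events[0][0]:.3f}: finite-time blowup")
else:
    print("no blowup detected on this interval")
```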
Statistical shape analysis using 3D Poisson equation--A quantitatively validated approach.
Gao, Yi; Bouix, Sylvain
2016-05-01
Statistical shape analysis has been an important area of research with applications in biology, anatomy, neuroscience, agriculture, paleontology, etc. Unfortunately, the proposed methods are rarely quantitatively evaluated, and as shown in recent studies, when they are evaluated, significant discrepancies exist in their outputs. In this work, we concentrate on the problem of finding the consistent location of deformation between two populations of shapes. We propose a new shape analysis algorithm along with a framework to perform a quantitative evaluation of its performance. Specifically, the algorithm constructs a Signed Poisson Map (SPoM) by solving two Poisson equations on volumetric shapes of arbitrary topology, and statistical analysis is then carried out on the SPoMs. The method is quantitatively evaluated on synthetic shapes and applied to real shape data sets of brain structures. Copyright © 2016 Elsevier B.V. All rights reserved.
Lot quality assurance sampling (LQAS) for monitoring a leprosy elimination program.
Gupte, M D; Narasimhamurthy, B
1999-06-01
In a statistical sense, prevalences of leprosy in different geographical areas can be called very low or rare. Conventional survey methods to monitor leprosy control programs, therefore, need large sample sizes, are expensive, and are time-consuming. Further, with the lowering of prevalence to the near-desired target level, 1 case per 10,000 population at national or subnational levels, the program administrator's concern will shift to smaller areas, e.g., districts, for assessment and, if needed, for necessary interventions. In this paper, Lot Quality Assurance Sampling (LQAS), a quality control tool in industry, is proposed to identify districts/regions having a prevalence of leprosy at or above a certain target level, e.g., 1 in 10,000. This technique can also be considered for identifying districts/regions at or below the target level of 1 per 10,000, i.e., areas where the elimination level has been attained. For simulating various situations and strategies, a hypothetical computerized population of 10 million persons was created. This population mimics the actual population in terms of the empirical information on rural/urban distributions and the distribution of households by size for the state of Tamil Nadu, India. Various levels with respect to leprosy prevalence are created using this population. The distribution of the number of cases in the population was expected to follow a Poisson process, and this was also confirmed by examination. Sample sizes and corresponding critical values were computed using the Poisson approximation. Initially, villages/towns are selected from the population, and from each selected village/town households are selected using systematic sampling. Households instead of individuals are used as sampling units. This sampling procedure was simulated 1000 times on the computer using the base population. The results in four different prevalence situations meet the required limits of 5% Type I error and 90% power. It is concluded that after validation under field conditions, this method can be considered for a rapid assessment of the leprosy situation.
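A hedged sketch of an LQAS design computed with the Poisson approximation: find the smallest sample size n and decision value d such that the Type I error is at most 5% and the power at least 90%. The two prevalence levels below are illustrative assumptions, not the paper's values:

```python
from scipy.stats import poisson

p_low = 0.5 / 10_000    # acceptable ("elimination") prevalence, assumed
p_high = 2.0 / 10_000   # unacceptable prevalence to be detected, assumed

def lqas_design(alpha=0.05, power=0.90, n_max=500_000, n_step=1_000):
    for n in range(n_step, n_max + 1, n_step):
        for d in range(1, 50):
            type1 = poisson.sf(d - 1, n * p_low)    # P(X >= d | low prevalence)
            detect = poisson.sf(d - 1, n * p_high)  # P(X >= d | high prevalence)
            if type1 <= alpha and detect >= power:
                return n, d, type1, detect
    return None

n, d, a, pw = lqas_design()   # assumes a feasible design exists below n_max
print(f"sample size n = {n}, decision value d = {d}, "
      f"Type I error = {a:.3f}, power = {pw:.3f}")
```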
Variational multiscale models for charge transport.
Wei, Guo-Wei; Zheng, Qiong; Chen, Zhan; Xia, Kelin
2012-01-01
This work presents a few variational multiscale models for charge transport in complex physical, chemical and biological systems and engineering devices, such as fuel cells, solar cells, battery cells, nanofluidics, transistors and ion channels. An essential ingredient of the present models, introduced in an earlier paper (Bulletin of Mathematical Biology, 72, 1562-1622, 2010), is the use of the differential geometry theory of surfaces as a natural means to geometrically separate the macroscopic domain from the microscopic domain while dynamically coupling discrete and continuum descriptions. Our main strategy is to construct the total energy functional of a charge transport system to encompass the polar and nonpolar free energies of solvation, and chemical potential related energy. By using the Euler-Lagrange variation, coupled Laplace-Beltrami and Poisson-Nernst-Planck (LB-PNP) equations are derived. The solution of the LB-PNP equations leads to the minimization of the total free energy, and explicit profiles of electrostatic potential and densities of charge species. To further reduce the computational complexity, the Boltzmann distribution obtained from the Poisson-Boltzmann (PB) equation is utilized to represent the densities of certain charge species so as to avoid the computationally expensive solution of some Nernst-Planck (NP) equations. Consequently, the coupled Laplace-Beltrami and Poisson-Boltzmann-Nernst-Planck (LB-PBNP) equations are proposed for charge transport in heterogeneous systems. A major emphasis of the present formulation is the consistency between equilibrium LB-PB theory and non-equilibrium LB-PNP theory at equilibrium. Another major emphasis is the capability of the reduced LB-PBNP model to fully recover the prediction of the LB-PNP model at non-equilibrium settings. To account for the fluid impact on the charge transport, we derive coupled Laplace-Beltrami, Poisson-Nernst-Planck and Navier-Stokes equations from the variational principle for chemo-electro-fluid systems. A number of computational algorithms are developed to implement the proposed new variational multiscale models in an efficient manner. A set of ten protein molecules and a realistic ion channel, Gramicidin A, are employed to confirm the consistency and verify the capability. Extensive numerical experiments are designed to validate the proposed variational multiscale models. A good quantitative agreement between our model prediction and the experimental measurement of current-voltage curves is observed for the Gramicidin A channel transport. This paper also provides a brief review of the field. PMID:23172978
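The paper's LB-PNP solver is far beyond a short sketch, but the nonlinear iteration at the core of PB-type models can be shown on a 1D dimensionless Poisson-Boltzmann toy problem, φ'' = sinh(φ), solved by Newton's method on a finite-difference grid (geometry and boundary values are assumptions of this sketch):

```python
import numpy as np

n, L = 201, 10.0                  # grid points, domain length in Debye lengths
h = L / (n - 1)
phi = np.zeros(n)
phi[0] = 2.0                      # assumed dimensionless surface potential; phi -> 0 in bulk

for it in range(50):
    # Residual F = D2*phi - sinh(phi) at interior nodes (boundaries fixed).
    d2 = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / h**2
    F = d2 - np.sinh(phi[1:-1])
    # Newton linearization: (D2 - diag(cosh(phi))) dphi = -F
    J = (np.diag(-2 / h**2 - np.cosh(phi[1:-1]))
         + np.diag(np.full(n - 3, 1 / h**2), 1)
         + np.diag(np.full(n - 3, 1 / h**2), -1))
    dphi = np.linalg.solve(J, -F)
    phi[1:-1] += dphi
    if np.max(np.abs(dphi)) < 1e-12:
        break

print(f"converged in {it + 1} Newton steps; phi at mid-domain = {phi[n // 2]:.6f}")
```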
Analyzing hospitalization data: potential limitations of Poisson regression.
Weaver, Colin G; Ravani, Pietro; Oliver, Matthew J; Austin, Peter C; Quinn, Robert R
2015-08-01
Poisson regression is commonly used to analyze hospitalization data when outcomes are expressed as counts (e.g. number of days in hospital). However, data often violate the assumptions on which Poisson regression is based. More appropriate extensions of this model, while available, are rarely used. We compared hospitalization data between 206 patients treated with hemodialysis (HD) and 107 treated with peritoneal dialysis (PD) using Poisson regression and compared results from standard Poisson regression with those obtained using three other approaches for modeling count data: negative binomial (NB) regression, zero-inflated Poisson (ZIP) regression and zero-inflated negative binomial (ZINB) regression. We examined the appropriateness of each model and compared the results obtained with each approach. During a mean 1.9 years of follow-up, 183 of 313 patients (58%) were never hospitalized (indicating an excess of 'zeros'). The data also displayed overdispersion (variance greater than mean), violating another assumption of the Poisson model. Using four criteria, we determined that the NB and ZINB models performed best. According to these two models, patients treated with HD experienced similar hospitalization rates as those receiving PD {NB rate ratio (RR): 1.04 [bootstrapped 95% confidence interval (CI): 0.49-2.20]; ZINB summary RR: 1.21 (bootstrapped 95% CI 0.60-2.46)}. Poisson and ZIP models fit the data poorly and had much larger point estimates than the NB and ZINB models [Poisson RR: 1.93 (bootstrapped 95% CI 0.88-4.23); ZIP summary RR: 1.84 (bootstrapped 95% CI 0.88-3.84)]. We found substantially different results when modeling hospitalization data, depending on the approach used. Our results argue strongly for a sound model selection process and improved reporting around statistical methods used for modeling count data. © The Author 2015. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
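A hedged sketch comparing the four count models discussed above on synthetic zero-inflated, overdispersed data (statsmodels is assumed; the data-generating parameters are illustrative, and in practice the zero-inflated fits may need start_params to converge):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import (ZeroInflatedPoisson,
                                              ZeroInflatedNegativeBinomialP)

rng = np.random.default_rng(3)
n = 313
hd = rng.binomial(1, 206 / 313, n).astype(float)   # modality indicator (illustrative)
x = sm.add_constant(hd)

# Simulate hospitalization-like counts: excess zeros plus gamma-Poisson
# overdispersion, mimicking the features reported above.
excess_zero = rng.uniform(size=n) < 0.5
mu = np.exp(1.0 + 0.05 * hd) * rng.gamma(0.6, 1 / 0.6, n)
y = np.where(excess_zero, 0, rng.poisson(mu))

models = {
    "Poisson": sm.Poisson(y, x),
    "NegBin":  sm.NegativeBinomial(y, x),
    "ZIP":     ZeroInflatedPoisson(y, x),
    "ZINB":    ZeroInflatedNegativeBinomialP(y, x),
}
for name, model in models.items():
    res = model.fit(disp=False)
    print(f"{name:8s} AIC = {res.aic:8.1f}")   # lower AIC -> better trade-off
```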
Multiscale analysis of neural spike trains.
Ramezan, Reza; Marriott, Paul; Chenouri, Shojaeddin
2014-01-30
This paper studies the multiscale analysis of neural spike trains, through both graphical and Poisson process approaches. We introduce the interspike interval plot, which simultaneously visualizes characteristics of neural spiking activity at different time scales. Using an inhomogeneous Poisson process framework, we discuss multiscale estimates of the intensity functions of spike trains. We also introduce the windowing effect for two multiscale methods. Using quasi-likelihood, we develop bootstrap confidence intervals for the multiscale intensity function. We provide a cross-validation scheme, to choose the tuning parameters, and study its unbiasedness. Studying the relationship between the spike rate and the stimulus signal, we observe that adjusting for the first spike latency is important in cross-validation. We show, through examples, that the correlation between spike trains and spike count variability can be multiscale phenomena. Furthermore, we address the modeling of the periodicity of the spike trains caused by a stimulus signal or by brain rhythms. Within the multiscale framework, we introduce intensity functions for spike trains with multiplicative and additive periodic components. Analyzing a dataset from the retinogeniculate synapse, we compare the fit of these models with the Bayesian adaptive regression splines method and discuss the limitations of the methodology. Computational efficiency, which is usually a challenge in the analysis of spike trains, is one of the highlights of these new models. In an example, we show that the reconstruction quality of a complex intensity function demonstrates the ability of the multiscale methodology to crack the neural code. Copyright © 2013 John Wiley & Sons, Ltd.
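As a minimal companion to the inhomogeneous Poisson framework above, the sketch below simulates a spike train by thinning (Lewis & Shedler); the sinusoidal intensity function is an assumption of this example, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(4)

def intensity(t, base=20.0, depth=15.0, freq=2.0):
    """Assumed firing rate in Hz with a periodic stimulus component."""
    return base + depth * np.sin(2 * np.pi * freq * t)

T = 10.0
lam_max = 35.0                               # upper bound on intensity(t)
n_cand = rng.poisson(lam_max * T)            # candidates from a homogeneous process
cand = np.sort(rng.uniform(0, T, n_cand))
keep = rng.uniform(size=n_cand) < intensity(cand) / lam_max
spikes = cand[keep]                          # accepted points follow intensity(t)
print(f"{spikes.size} spikes; mean rate ~ {spikes.size / T:.1f} Hz")
```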
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.
2015-11-19
Our study investigates the Multivariate Poisson-lognormal (MVPLN) model that jointly models crash frequency and severity accounting for correlations. Ordinary univariate count models analyze crashes of different severity levels separately, ignoring the correlations among severity levels. The MVPLN model is capable of incorporating a general correlation structure and accounts for the overdispersion in the data, which leads to a superior data fit. However, the traditional estimation approach for the MVPLN model is computationally expensive, which often limits its use in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using pedestrian-vehicle crash data collected in New York City from 2002 to 2006 and highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate the model fitting. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity accounting for correlations.
Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji
2017-01-01
We propose a novel method for multiple clustering, which is useful for the analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, screening out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions, in each cluster block, which widens the areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data. PMID:29049392
Generating clustered scale-free networks using Poisson based localization of edges
NASA Astrophysics Data System (ADS)
Türker, İlker
2018-05-01
We introduce a variety of network models using a Poisson-based edge localization strategy, which results in clustered scale-free topologies. We first verify the success of our localization strategy by realizing a variant of the well-known Watts-Strogatz model with an inverse approach, implying a small-world regime of rewiring from a random network towards a regular one. We then apply the rewiring strategy to a pure Barabasi-Albert model and successfully achieve a small-world regime, with a limited degree of the scale-free property. To imitate the high clustering property of scale-free networks with higher accuracy, we adapt the Poisson-based wiring strategy to a growing network with the ingredients of both preferential attachment and local connectivity. To achieve the collocation of these properties, we use a routine of flattening the edges array, sorting it, and applying a mixing procedure to assemble both global connections with preferential attachment and local clusters. As a result, we achieve clustered scale-free networks in a computational fashion, diverging from recent studies by following a simple but efficient approach.
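A heavily hedged sketch of the general idea, not the paper's exact algorithm: rewiring a ring lattice so that new edge lengths along the ring are Poisson distributed keeps connections local and thus preserves clustering (all parameters below are assumptions):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
n, k, lam = 1000, 6, 8                     # nodes, ring degree, Poisson localization

G = nx.watts_strogatz_graph(n, k, p=0.0)   # regular ring lattice, no rewiring yet
for u, v in list(G.edges()):
    if rng.uniform() < 0.5:                # rewire roughly half of the edges
        d = 1 + rng.poisson(lam)           # Poisson-distributed ring distance
        w = (u + d) % n
        if w != u and not G.has_edge(u, w):
            G.remove_edge(u, v)
            G.add_edge(u, w)

print(f"clustering = {nx.average_clustering(G):.3f}")
if nx.is_connected(G):
    print(f"path length = {nx.average_shortest_path_length(G):.2f}")
```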
Localization of intense electromagnetic waves in a relativistically hot plasma.
Shukla, P K; Eliasson, B
2005-02-18
We consider nonlinear interactions between intense short electromagnetic waves (EMWs) and a relativistically hot electron plasma that supports relativistic electron holes (REHs). It is shown that such EMW-REH interactions are governed by a coupled nonlinear system of equations composed of a nonlinear Schrödinger equation describing the dynamics of the EMWs and the Poisson-relativistic Vlasov system describing the dynamics of driven REHs. The present nonlinear system of equations admits both a discrete number of linearly trapped eigenmodes of the EMWs in a quasistationary REH and a modification of the REH by large-amplitude trapped EMWs. Computer simulations of the relativistic Vlasov and Maxwell-Poisson system of equations show complex interactions between REHs loaded with localized EMWs.
Oscillatory Reduction in Option Pricing Formula Using Shifted Poisson and Linear Approximation
NASA Astrophysics Data System (ADS)
Nur Rachmawati, Ro'fah; Irene; Budiharto, Widodo
2014-03-01
Options are derivative instruments that can help investors improve their expected return and minimize risk. However, the Black-Scholes formula generally used to determine option prices does not involve a skewness factor, and it is difficult to apply in computation because it produces oscillations for skewness values close to zero. In this paper, we construct an option pricing formula that involves skewness by modifying the Black-Scholes formula using a shifted Poisson model, and transform it into the form of a linear approximation in the complete market to reduce the oscillation. The results show that the linear approximation formula predicts option prices accurately and successfully reduces the oscillations in the calculation process.
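For reference, a minimal implementation of the standard Black-Scholes call price that the paper modifies; the shifted-Poisson/linear-approximation correction itself is not reproduced here, and the input values are purely illustrative:

```python
from math import exp, log, sqrt
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """European call price under Black-Scholes (no skewness adjustment)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

print(bs_call(S=100, K=105, T=0.5, r=0.03, sigma=0.2))  # ~ 4.2
```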
An analytical method for the inverse Cauchy problem of Lame equation in a rectangle
NASA Astrophysics Data System (ADS)
Grigor’ev, Yu
2018-04-01
In this paper, we present an analytical computational method for the inverse Cauchy problem of the Lame equation in elasticity theory. A rectangular domain is frequently used in engineering structures, and we only consider the analytical solution in a two-dimensional rectangle, wherein a missing boundary condition is recovered from the full measurement of stresses and displacements on an accessible boundary. The essence of the method consists of solving three independent Cauchy problems for the Laplace and Poisson equations. For each of them, the Fourier series is used to formulate a first-kind Fredholm integral equation for the unknown function of data. Then, we use a Lavrentiev regularization method, and the termwise separable property of the kernel function allows us to obtain a closed-form regularized solution. As a result, for the displacement components, we obtain solutions in the form of a sum of series with three regularization parameters. The uniform convergence and error estimation of the regularized solutions are proved.
Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method
Zhang, Tingting; Kou, S. C.
2010-01-01
Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure. PMID:21258615
Guidelines for Use of the Approximate Beta-Poisson Dose-Response Model.
Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie
2017-07-01
For dose-response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose-response model with parameters α > 0 and β > 0, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting P_I(d) as the probability of infection at a given mean dose d, the widely used dose-response model P_I(d) = 1 - (1 + d/β)^(-α) is an approximate formula for the exact beta-Poisson model. Notwithstanding the required conditions α ≪ β and β ≫ 1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (r is a random variable that follows a gamma distribution; α̂ and β̂ are the maximum likelihood estimates of α and β in the approximate model), and the constraint condition β̂ > (22α̂)^0.50 for 0.02 < α̂ < 2 as a rule of thumb to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation. The results further showed that, among the total 85 models examined, 68 models were identified as valid approximate model applications, which all had a near-perfect match to the corresponding exact beta-Poisson model dose-response curve. © 2016 Society for Risk Analysis.
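A sketch of the approximate beta-Poisson curve and the proposed checks; the parameter values are illustrative rather than fitted, and the Gamma(α̂, scale = 1/β̂) parameterization of r is my assumption from the approximation's derivation, not a detail stated in the abstract:

```python
import numpy as np
from scipy.stats import gamma

def p_infection(d, alpha, beta):
    """Approximate beta-Poisson: P_I(d) = 1 - (1 + d/beta)**(-alpha)."""
    return 1.0 - (1.0 + d / beta) ** (-alpha)

alpha_hat, beta_hat = 0.145, 7.59          # illustrative MLEs, not from a real fit
doses = np.logspace(-1, 4, 6)
print(p_infection(doses, alpha_hat, beta_hat))

# Validity measure Pr(0 < r < 1 | alpha, beta): the closer to 1, the
# better the approximation to the exact beta-Poisson model.
validity = gamma.cdf(1.0, a=alpha_hat, scale=1.0 / beta_hat)
print(f"Pr(0 < r < 1) = {validity:.4f}")

# Rule-of-thumb check from the abstract: beta_hat > (22 * alpha_hat)**0.50
print(beta_hat > (22 * alpha_hat) ** 0.50)
```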
Intermediate quantum maps for quantum computation
NASA Astrophysics Data System (ADS)
Giraud, O.; Georgeot, B.
2005-10-01
We study quantum maps displaying spectral statistics intermediate between Poisson and Wigner-Dyson. It is shown that they can be simulated on a quantum computer with a small number of gates, and efficiently yield information about fidelity decay or spectral statistics. We study their matrix elements and entanglement production and show that they converge with time to distributions which differ from random matrix predictions. A randomized version of these maps can be implemented even more economically and yields pseudorandom operators with original properties, enabling, for example, one to produce fractal random vectors. These algorithms are within reach of present-day quantum computers.
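As a quick numerical illustration of the two limiting statistics named above (not of the intermediate maps themselves), the sketch below contrasts eigenphase spacings of uncorrelated (Poisson) spectra with those of random unitaries (CUE, Wigner-Dyson-like); sizes and trial counts are arbitrary choices:

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(6)
n, trials = 200, 50

def spacings(phases):
    s = np.diff(np.sort(phases))
    return s / s.mean()                    # normalize mean spacing to 1

poisson_s, cue_s = [], []
for i in range(trials):
    poisson_s.append(spacings(rng.uniform(0, 2 * np.pi, n)))
    U = unitary_group.rvs(n, random_state=i)
    cue_s.append(spacings(np.angle(np.linalg.eigvals(U))))

poisson_s, cue_s = np.concatenate(poisson_s), np.concatenate(cue_s)
# Poisson statistics show no level repulsion: many near-zero spacings,
# whereas CUE spectra strongly suppress them.
print(f"P(s < 0.1): Poisson ~ {np.mean(poisson_s < 0.1):.3f}, "
      f"CUE ~ {np.mean(cue_s < 0.1):.3f}")
```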
The Ionic Atmosphere around A-RNA: Poisson-Boltzmann and Molecular Dynamics Simulations
Kirmizialtin, Serdal; Silalahi, Alexander R.J.; Elber, Ron; Fenley, Marcia O.
2012-01-01
The distributions of different cations around A-RNA are computed by the Poisson-Boltzmann (PB) equation and replica exchange molecular dynamics (MD). Both the nonlinear PB and size-modified PB theories are considered. The number of ions bound to A-RNA, which can be measured experimentally, is well reproduced by all methods. On the other hand, the radial ion distribution profiles show differences between MD and PB. We show that PB results are sensitive to ion size and to the functional form of the solvent dielectric region, but not to the solvent dielectric boundary definition. Size-modified PB agrees with replica exchange molecular dynamics much better than nonlinear PB when the ion sizes are chosen from atomistic simulations. The distribution of ions 14 Å away from the RNA central axis is reasonably well reproduced by size-modified PB for all ion types with a uniform solvent dielectric model and a sharp dielectric boundary between solvent and RNA. However, this model does not agree with MD at shorter distances from the A-RNA. A distance-dependent solvent dielectric function proposed by another research group improves the agreement for sodium and strontium ions, even at shorter distances from the A-RNA. However, Mg2+ distributions are still at significant variance at shorter distances. PMID:22385854
NASA Astrophysics Data System (ADS)
Tzoupis, Haralambos; Leonis, Georgios; Durdagi, Serdar; Mouchlis, Varnavas; Mavromoustakos, Thomas; Papadopoulos, Manthos G.
2011-10-01
The objectives of this study include the design of a series of novel fullerene-based inhibitors for HIV-1 protease (HIV-1 PR), by employing two strategies that can also be applied to the design of inhibitors for any other target. Additionally, the interactions which contribute to the observed exceptionally high binding free energies were analyzed. In particular, we investigated: (1) hydrogen bonding (H-bond) interactions between specific fullerene derivatives and the protease, (2) the regions of HIV-1 PR that play a significant role in binding, (3) protease changes upon binding and (4) various contributions to the binding free energy, in order to identify the most significant of them. This study has been performed by employing a docking technique, two 3D-QSAR models, molecular dynamics (MD) simulations and the molecular mechanics Poisson-Boltzmann surface area (MM-PBSA) method. Our computed binding free energies are in satisfactory agreement with the experimental results. The suitability of specific fullerene derivatives as drug candidates was further enhanced, after ADMET (absorption, distribution, metabolism, excretion and toxicity) properties have been estimated to be promising. The outcomes of this study revealed important protein-ligand interaction patterns that may lead towards the development of novel, potent HIV-1 PR inhibitors.
Martina, R; Kay, R; van Maanen, R; Ridder, A
2015-01-01
Clinical studies in overactive bladder have traditionally used analysis of covariance or nonparametric methods to analyse the number of incontinence episodes and other count data. It is known that if the underlying distributional assumptions of a particular parametric method do not hold, an alternative parametric method may be more efficient than a nonparametric one, which makes no assumptions regarding the underlying distribution of the data. Therefore, there are advantages in using methods based on the Poisson distribution or extensions of that method, which incorporate specific features that provide a modelling framework for count data. One challenge with count data is overdispersion, but methods are available that can account for this through the introduction of random effect terms in the modelling, and it is this modelling framework that leads to the negative binomial distribution. These models can also provide clinicians with a clearer and more appropriate interpretation of treatment effects in terms of rate ratios. In this paper, the previously used parametric and non-parametric approaches are contrasted with those based on Poisson regression and various extensions in trials evaluating solifenacin and mirabegron in patients with overactive bladder. In these applications, negative binomial models are seen to fit the data well. Copyright © 2014 John Wiley & Sons, Ltd.
Estimating relative risks for common outcome using PROC NLP.
Yu, Binbing; Wang, Zhuoqiao
2008-05-01
In cross-sectional or cohort studies with binary outcomes, it is biologically interpretable and of interest to estimate the relative risk or prevalence ratio, especially when the response rates are not rare. Several methods have been used to estimate the relative risk, among which the log-binomial models yield the maximum likelihood estimate (MLE) of the parameters. Because of restrictions on the parameter space, the log-binomial models often run into convergence problems. Some remedies, e.g., the Poisson and Cox regressions, have been proposed. However, these methods may give out-of-bound predicted response probabilities. In this paper, a new computation method using the SAS Nonlinear Programming (NLP) procedure is proposed to find the MLEs. The proposed NLP method was compared to the COPY method, a modified method to fit the log-binomial model. Issues in the implementation are discussed. For illustration, both methods were applied to data on the prevalence of microalbuminuria (micro-protein leakage into urine) for kidney disease patients from the Diabetes Control and Complications Trial. The sample SAS macro for calculating relative risk is provided in the appendix.
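A hedged Python analogue of the constrained-MLE idea (the paper itself uses SAS PROC NLP, and its macro is not reproduced here): maximize the log-binomial likelihood subject to the log-link probabilities staying in (0, 1], so the relative risk is exp of the exposure coefficient:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n = 500
x = np.column_stack([np.ones(n), rng.binomial(1, 0.5, n)])   # intercept + exposure
y = rng.binomial(1, np.exp(-1.2 + 0.5 * x[:, 1]))            # true RR = exp(0.5)

def negloglik(beta):
    eta = np.clip(x @ beta, -30, -1e-9)   # keep p = exp(eta) strictly below 1
    p = np.exp(eta)
    return -(y * eta + (1 - y) * np.log1p(-p)).sum()

# Inequality constraints: x @ beta <= 0 for every row, so all fitted p <= 1.
cons = {"type": "ineq", "fun": lambda b: -(x @ b)}
res = minimize(negloglik, x0=np.array([-1.0, 0.0]), method="SLSQP",
               constraints=[cons])
print(f"estimated RR = {np.exp(res.x[1]):.3f} (true {np.exp(0.5):.3f})")
```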
NASA Astrophysics Data System (ADS)
Katsaounis, T. D.
2005-02-01
The scope of this book is to present well known simple and advanced numerical methods for solving partial differential equations (PDEs) and how to implement these methods using the programming environment of the software package Diffpack. A basic background in PDEs and numerical methods is required by the potential reader. Further, a basic knowledge of the finite element method and its implementation in one and two space dimensions is required. The authors claim that no prior knowledge of the package Diffpack is required, which is true, but the reader should be at least familiar with an object oriented programming language like C++ in order to better comprehend the programming environment of Diffpack. Certainly, a prior knowledge or usage of Diffpack would be a great advantage to the reader. The book consists of 15 chapters, each one written by one or more authors. Each chapter is basically divided into two parts: the first part is about mathematical models described by PDEs and numerical methods to solve these models and the second part describes how to implement the numerical methods using the programming environment of Diffpack. Each chapter closes with a list of references on its subject. The first nine chapters cover well known numerical methods for solving the basic types of PDEs. Further, programming techniques on the serial as well as on the parallel implementation of numerical methods are also included in these chapters. The last five chapters are dedicated to applications, modelled by PDEs, in a variety of fields. The first chapter is an introduction to parallel processing. It covers fundamentals of parallel processing in a simple and concrete way and no prior knowledge of the subject is required. Examples of parallel implementation of basic linear algebra operations are presented using the Message Passing Interface (MPI) programming environment. Here, some knowledge of MPI routines is required by the reader. Examples solving in parallel simple PDEs using Diffpack and MPI are also presented. Chapter 2 presents the overlapping domain decomposition method for solving PDEs. It is well known that these methods are suitable for parallel processing. The first part of the chapter covers the mathematical formulation of the method as well as algorithmic and implementational issues. The second part presents a serial and a parallel implementational framework within the programming environment of Diffpack. The chapter closes by showing how to solve two application examples with the overlapping domain decomposition method using Diffpack. Chapter 3 is a tutorial about how to incorporate the multigrid solver in Diffpack. The method is illustrated by examples such as a Poisson solver, a general elliptic problem with various types of boundary conditions and a nonlinear Poisson type problem. In chapter 4 the mixed finite element is introduced. Technical issues concerning the practical implementation of the method are also presented. The main difficulties of the efficient implementation of the method, especially in two and three space dimensions on unstructured grids, are presented and addressed in the framework of Diffpack. The implementational process is illustrated by two examples, namely the system formulation of the Poisson problem and the Stokes problem. Chapter 5 is closely related to chapter 4 and addresses the problem of how to solve efficiently the linear systems arising by the application of the mixed finite element method. The proposed method is block preconditioning. 
Efficient techniques for implementing the method within Diffpack are presented. Optimal block preconditioners are used to solve the system formulation of the Poisson problem, the Stokes problem and the bidomain model for the electrical activity in the heart. The subject of chapter 6 is systems of PDEs. Linear and nonlinear systems are discussed. Fully implicit and operator splitting methods are presented. Special attention is paid to how existing solvers for scalar equations in Diffpack can be used to derive fully implicit solvers for systems. The proposed techniques are illustrated in terms of two applications, namely a system of PDEs modelling pipeflow and a two-phase porous media flow. Stochastic PDEs is the topic of chapter 7. The first part of the chapter is a simple introduction to stochastic PDEs; basic analytical properties are presented for simple models like transport phenomena and viscous drag forces. The second part considers the numerical solution of stochastic PDEs. Two basic techniques are presented, namely Monte Carlo and perturbation methods. The last part explains how to implement and incorporate these solvers into Diffpack. Chapter 8 describes how to operate Diffpack from Python scripts. The main goal here is to provide all the programming and technical details in order to glue the programming environment of Diffpack with visualization packages through Python and in general take advantage of the Python interfaces. Chapter 9 attempts to show how to use numerical experiments to measure the performance of various PDE solvers. The authors gathered a rather impressive list, a total of 14 PDE solvers. Solvers for problems like Poisson, Navier--Stokes, elasticity, two-phase flows and methods such as finite difference, finite element, multigrid, and gradient type methods are presented. The authors provide a series of numerical results combining various solvers with various methods in order to gain insight into their computational performance and efficiency. In Chapter 10 the authors consider a computationally challenging problem, namely the computation of the electrical activity of the human heart. After a brief introduction on the biology of the problem the authors present the mathematical models involved and a numerical method for solving them within the framework of Diffpack. Chapters 11 and 12 are closely related; in fact, they could have been combined into a single chapter. Chapter 11 introduces several mathematical models used in finance, based on the Black--Scholes equation. Chapter 12 considers several numerical methods like Monte Carlo, lattice methods, finite difference and finite element methods. Implementation of these methods within Diffpack is presented in the last part of the chapter. Chapter 13 presents how the finite element method is used for the modelling and analysis of elastic structures. The authors describe the structural elements of Diffpack, which include popular elements such as beams and plates, and examples are presented on how to use them to simulate elastic structures. Chapter 14 describes an application problem, namely the extrusion of aluminum. This is a rather complicated process which involves non-Newtonian flow, heat transfer and elasticity. The authors describe the systems of PDEs modelling the underlying process and use a finite element method to obtain a numerical solution. The implementation of the numerical method in Diffpack is presented along with some applications.
The last chapter, chapter 15, focuses on mathematical and numerical models of systems of PDEs governing geological processes in sedimentary basins. The underlying mathematical model is solved using the finite element method within a fully implicit scheme. The authors discuss the implementational issues involved within Diffpack and they present results from several examples. In summary, the book focuses on the computational and implementational issues involved in solving partial differential equations. The potential reader should have a basic knowledge of PDEs and the finite difference and finite element methods. The examples presented are solved within the programming framework of Diffpack and the reader should have prior experience with the particular software in order to take full advantage of the book. Overall the book is well written, the subject of each chapter is well presented and can serve as a reference for graduate students, researchers and engineers who are interested in the numerical solution of partial differential equations modelling various applications.
A Legendre–Fourier spectral method with exact conservation laws for the Vlasov–Poisson system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manzini, Gianmarco; Delzanno, Gian Luca; Vencels, Juris
2016-04-22
In this study, we present the design and implementation of an L2-stable spectral method for the discretization of the Vlasov–Poisson model of a collisionless plasma in one space and velocity dimension. The velocity and space dependence of the Vlasov equation are resolved through a truncated spectral expansion based on Legendre and Fourier basis functions, respectively. The Poisson equation, which is coupled to the Vlasov equation, is also resolved through a Fourier expansion. The resulting system of ordinary differential equations is discretized by the implicit second-order accurate Crank–Nicolson time discretization. The non-linear dependence between the Vlasov and Poisson equations is iteratively solved at any time cycle by a Jacobian-free Newton–Krylov method. In this work we analyze the structure of the main conservation laws of the resulting Legendre–Fourier model, e.g., mass, momentum, and energy, and prove that they are exactly satisfied in the semi-discrete and discrete settings. The L2-stability of the method is ensured by discretizing the boundary conditions of the distribution function at the boundaries of the velocity domain by a suitable penalty term. The impact of the penalty term on the conservation properties is investigated theoretically and numerically. An implementation of the penalty term that does not affect the conservation of mass, momentum and energy is also proposed and studied. A collisional term is introduced in the discrete model to control the filamentation effect, but does not affect the conservation properties of the system. Numerical results on a set of standard test problems illustrate the performance of the method.
The impact of short term synaptic depression and stochastic vesicle dynamics on neuronal variability
Reich, Steven
2014-01-01
Neuronal variability plays a central role in neural coding and impacts the dynamics of neuronal networks. Unreliability of synaptic transmission is a major source of neural variability: synaptic neurotransmitter vesicles are released probabilistically in response to presynaptic action potentials and are recovered stochastically in time. The dynamics of this process of vesicle release and recovery interacts with variability in the arrival times of presynaptic spikes to shape the variability of the postsynaptic response. We use continuous-time Markov chain methods to analyze a model of short-term synaptic depression with stochastic vesicle dynamics coupled with three different models of presynaptic spiking: one model in which the timing of presynaptic action potentials is modeled as a Poisson process, one in which action potentials occur more regularly than a Poisson process (sub-Poisson), and one in which action potentials occur more irregularly (super-Poisson). We use this analysis to investigate how variability in a presynaptic spike train is transformed by short-term depression and stochastic vesicle dynamics to determine the variability of the postsynaptic response. We find that sub-Poisson presynaptic spiking increases the average rate at which vesicles are released, that the number of vesicles released over a time window is more variable for smaller time windows than larger time windows, and that fast presynaptic spiking gives rise to Poisson-like variability of the postsynaptic response even when presynaptic spike times are non-Poisson. Our results complement and extend previously reported theoretical results and provide possible explanations for some trends observed in recorded data. PMID:23354693
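A minimal simulation in the spirit of the model described above, for the Poisson-input case only (all parameter values are assumptions of this sketch): a pool of vesicle sites releases probabilistically on each presynaptic spike and recovers exponentially between spikes:

```python
import numpy as np

rng = np.random.default_rng(8)
n_v, p_r, tau_r = 5, 0.5, 0.5      # vesicle sites, release prob, recovery time (s)
rate, T = 20.0, 200.0              # presynaptic Poisson rate (Hz), duration (s)

spikes = np.cumsum(rng.exponential(1 / rate, int(rate * T * 1.5)))
occupied, last, releases = n_v, 0.0, []
for s in spikes[spikes < T]:
    # Each empty site recovers independently with prob 1 - exp(-dt/tau_r).
    empty = n_v - occupied
    occupied += rng.binomial(empty, 1 - np.exp(-(s - last) / tau_r))
    k = rng.binomial(occupied, p_r)  # vesicles released by this spike
    occupied -= k
    releases.append(k)
    last = s

releases = np.array(releases)
print(f"mean release/spike = {releases.mean():.2f}, "
      f"Fano factor = {releases.var() / releases.mean():.2f}")
```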
NEWTPOIS- NEWTON POISSON DISTRIBUTION PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative Poisson distribution program, NEWTPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714), can be used independently of one another. NEWTPOIS determines percentiles for gamma distributions with integer shape parameters and calculates percentiles for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. NEWTPOIS determines the Poisson parameter (lambda), that is, the mean (or expected) number of events occurring in a given unit of time, area, or space. Given that the user already knows the cumulative probability for a specific number of occurrences (n), it is usually a simple matter of substitution into the Poisson distribution summation to arrive at lambda. However, direct calculation of the Poisson parameter becomes difficult for small positive values of n and unmanageable for large values. NEWTPOIS uses Newton's iteration method to extract lambda from the initial value condition of the Poisson distribution where n=0, taking successive estimations until some user-specified error term (epsilon) is reached. The NEWTPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting epsilon, n, and the cumulative probability of the occurrence of n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 30K. NEWTPOIS was developed in 1988.
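A hedged sketch of the same computation in Python (NEWTPOIS itself is in C and is not reproduced here): recover lambda from a target cumulative probability by Newton's method, using the identity d/dλ P(X ≤ n; λ) = -pmf(n; λ):

```python
from math import log
from scipy.stats import poisson

def newtpois(n, p, eps=1e-12, max_iter=100):
    """Solve P(X <= n; lam) = p for lam by Newton's method.

    NEWTPOIS seeds the iteration from the n = 0 closed form lam = -ln(p);
    the max() below is a robustness tweak of this sketch, not the program's.
    """
    lam = max(n, -log(p))                 # heuristic starting point
    for _ in range(max_iter):
        f = poisson.cdf(n, lam) - p
        if abs(f) < eps:
            break
        # Newton step: the derivative of the CDF in lam is -pmf(n; lam).
        lam = max(lam + f / poisson.pmf(n, lam), 1e-12)
    return lam

lam = newtpois(n=5, p=0.95)
print(lam, poisson.cdf(5, lam))           # second value reproduces p = 0.95
```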
Multitasking domain decomposition fast Poisson solvers on the Cray Y-MP
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Fatoohi, Rod A.
1990-01-01
The results of multitasking implementation of a domain decomposition fast Poisson solver on eight processors of the Cray Y-MP are presented. The object of this research is to study the performance of domain decomposition methods on a Cray supercomputer and to analyze the performance of different multitasking techniques using highly parallel algorithms. Two implementations of multitasking are considered: macrotasking (parallelism at the subroutine level) and microtasking (parallelism at the do-loop level). A conventional FFT-based fast Poisson solver is also multitasked. The results of different implementations are compared and analyzed. A speedup of over 7.4 on the Cray Y-MP running in a dedicated environment is achieved for all cases.
NASA Astrophysics Data System (ADS)
Ibrahim, R. S.; El-Kalaawy, O. H.
2006-10-01
The relativistic nonlinear self-consistent equations for a collisionless cold plasma with stationary ions [R. S. Ibrahim, IMA J. Appl. Math. 68, 523 (2003)] are extended to 3 and 3+1 dimensions. The resulting system of equations is reduced to the sine-Poisson equation. The truncated Painlevé expansion and reduction of the partial differential equation to a quadrature problem (RQ method) are described and applied to obtain the traveling wave solutions of the sine-Poisson equation for stationary and nonstationary equations in 3 and 3+1 dimensions describing the charge-density equilibrium configuration model.
NASA Astrophysics Data System (ADS)
Beach, Shaun E.; Semkow, Thomas M.; Remling, David J.; Bradt, Clayton J.
2017-07-01
We have developed accessible methods to demonstrate fundamental statistics in several phenomena, in the context of teaching electronic signal processing in a physics-based college-level curriculum. A relationship between the exponential time-interval distribution and Poisson counting distribution for a Markov process with constant rate is derived in a novel way and demonstrated using nuclear counting. Negative binomial statistics is demonstrated as a model for overdispersion and justified by the effect of electronic noise in nuclear counting. The statistics of digital packets on a computer network are shown to be compatible with the fractal-point stochastic process leading to a power-law as well as generalized inverse Gaussian density distributions of time intervals between packets.
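The exponential/Poisson correspondence for a constant-rate Markov process is easy to demonstrate numerically; the rate and window length below are arbitrary choices, not the nuclear-counting setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
rate, T = 5.0, 10000.0   # events/s and total observation time (illustrative)

# A constant-rate Markov (Poisson) process: exponential waiting times...
arrivals = np.cumsum(rng.exponential(1.0 / rate, size=int(2 * rate * T)))
arrivals = arrivals[arrivals < T]

# ...implies Poisson-distributed counts in fixed windows.
counts = np.histogram(arrivals, bins=np.arange(0.0, T + 1.0))[0]
print("count mean:", counts.mean(), "count variance:", counts.var())
# For Poisson counts mean == variance; overdispersion (variance > mean),
# e.g. from electronic noise, points instead to negative binomial counts.
```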
Stability of Poisson Equilibria and Hamiltonian Relative Equilibria by Energy Methods
NASA Astrophysics Data System (ADS)
Patrick, George W.; Roberts, Mark; Wulff, Claudia
2004-12-01
We develop a general stability theory for equilibrium points of Poisson dynamical systems and relative equilibria of Hamiltonian systems with symmetries, including several generalisations of the Energy-Casimir and Energy-Momentum Methods. Using a topological generalisation of Lyapunov’s result that an extremal critical point of a conserved quantity is stable, we show that a Poisson equilibrium is stable if it is an isolated point in the intersection of a level set of a conserved function with a subset of the phase space that is related to the topology of the symplectic leaf space at that point. This criterion is applied to generalise the energy-momentum method to Hamiltonian systems which are invariant under non-compact symmetry groups for which the coadjoint orbit space is not Hausdorff. We also show that a G-stable relative equilibrium satisfies the stronger condition of being A-stable, where A is a specific group-theoretically defined subset of G which contains the momentum isotropy subgroup of the relative equilibrium. The results are illustrated by an application to the stability of a rigid body in an ideal irrotational fluid.
Beta-Poisson model for single-cell RNA-seq data analyses.
Vu, Trung Nghia; Wills, Quin F; Kalari, Krishna R; Niu, Nifang; Wang, Liewei; Rantalainen, Mattias; Pawitan, Yudi
2016-07-15
Single-cell RNA-sequencing technology allows detection of gene expression at the single-cell level. One typical feature of the data is a bimodality in the cellular distribution even for highly expressed genes, primarily caused by a proportion of non-expressing cells. The standard and the over-dispersed gamma-Poisson models that are commonly used in bulk-cell RNA-sequencing are not able to capture this property. We introduce a beta-Poisson mixture model that can capture the bimodality of the single-cell gene expression distribution. We further integrate the model into the generalized linear model framework in order to perform differential expression analyses. The whole analytical procedure is called BPSC. The results from several real single-cell RNA-seq datasets indicate that ∼90% of the transcripts are well characterized by the beta-Poisson model; the model-fit from BPSC is better than the fit of the standard gamma-Poisson model in > 80% of the transcripts. Moreover, in differential expression analyses of simulated and real datasets, BPSC performs well against edgeR, a conventional method widely used in bulk-cell RNA-sequencing data, and against scde and MAST, two recent methods specifically designed for single-cell RNA-seq data. An R package BPSC for model fitting and differential expression analyses of single-cell RNA-seq data is available under the GPL-3 license at https://github.com/nghiavtr/BPSC. Contact: yudi.pawitan@ki.se or mattias.rantalainen@ki.se. Supplementary data are available at Bioinformatics online.
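One common form of the beta-Poisson idea is simple to simulate, which also shows where the bimodality comes from; the parameter values here are illustrative, not BPSC estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

def beta_poisson(alpha, beta, lam, size):
    # Beta-Poisson counts: the Poisson rate is lam * p with p ~ Beta(alpha, beta)
    p = rng.beta(alpha, beta, size=size)
    return rng.poisson(lam * p)

x = beta_poisson(alpha=0.3, beta=2.0, lam=50.0, size=10000)
# Small alpha piles Beta mass near zero, producing the excess of
# non-expressing cells (bimodality) that a plain gamma-Poisson misses.
print("fraction of zeros:", (x == 0).mean())
```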
NASA Technical Reports Server (NTRS)
Chen, T.; Raju, I. S.
2002-01-01
A coupled finite element (FE) method and meshless local Petrov-Galerkin (MLPG) method for analyzing two-dimensional potential problems is presented in this paper. The analysis domain is subdivided into two regions, a finite element (FE) region and a meshless (MM) region. A single weighted residual form is written for the entire domain. Independent trial and test functions are assumed in the FE and MM regions. A transition region is created between the two regions. The transition region blends the trial and test functions of the FE and MM regions. The trial function blending is achieved using a technique similar to the 'Coons patch' method that is widely used in computer-aided geometric design. The test function blending is achieved by using either FE or MM test functions on the nodes in the transition element. The technique was evaluated by applying the coupled method to two potential problems governed by the Poisson equation. The coupled method passed all the patch test problems and gave accurate solutions for the problems studied.
Poisson noise removal with pyramidal multi-scale transforms
NASA Astrophysics Data System (ADS)
Woiselle, Arnaud; Starck, Jean-Luc; Fadili, Jalal M.
2013-09-01
In this paper, we introduce a method to stabilize the variance of decimated transforms using one or two variance stabilizing transforms (VSTs). These VSTs are applied to the 3-D Meyer wavelet pyramidal transform, which is the core of the first-generation 3D curvelets. This allows us to extend these 3-D curvelets to handle Poisson noise, which we apply to the denoising of a simulated cosmological volume.
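For orientation, the classical single-scale Anscombe transform illustrates what a VST does; the paper designs VSTs adapted to the decimated multi-scale coefficients, which is a harder problem.

```python
import numpy as np

def anscombe(x):
    # Classical Anscombe VST: Poisson(x) -> approximately unit-variance Gaussian
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    # Simple algebraic inverse (biased at low counts; exact unbiased
    # inverses are preferred in practice)
    return (y / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(3)
noisy = rng.poisson(lam=20.0, size=100000)
print("stabilized std:", anscombe(noisy).std())  # close to 1.0
```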
Accelerated gradient methods for the x-ray imaging of solar flares
NASA Astrophysics Data System (ADS)
Bonettini, S.; Prato, M.
2014-05-01
In this paper we present new optimization strategies for the reconstruction of x-ray images of solar flares by means of the data collected by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). The imaging concept of the satellite is based on rotating modulation collimator instruments, which allow the use of both Fourier imaging approaches and reconstruction techniques based on the straightforward inversion of the modulated count profiles. Although in the last decade greater attention has been devoted to the former strategies due to their very limited computational cost, here we consider the latter model and investigate the effectiveness of different accelerated gradient methods for the solution of the corresponding constrained minimization problem. Moreover, regularization is introduced through either an early stopping of the iterative procedure, or a Tikhonov term added to the discrepancy function by means of a discrepancy principle accounting for the Poisson nature of the noise affecting the data.
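As a hedged sketch of the ingredients involved (not the specific accelerated schemes studied in the paper), a projected gradient iteration on the Poisson negative log-likelihood with early stopping looks like this, where A is a hypothetical discretized forward operator.

```python
import numpy as np

def poisson_nll_grad(A, x, y, eps=1e-12):
    # Gradient of the Poisson negative log-likelihood sum(Ax - y * log(Ax))
    Ax = A @ x
    return A.T @ (1.0 - y / (Ax + eps))

def projected_gradient(A, y, steps, lr):
    # Illustrative nonnegativity-constrained reconstruction where the
    # iteration count itself acts as the regularizer (early stopping)
    x = np.full(A.shape[1], y.mean())
    for _ in range(steps):
        x = np.maximum(x - lr * poisson_nll_grad(A, x, y), 0.0)  # project onto x >= 0
    return x
```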
Spatiotemporal reconstruction of list-mode PET data.
Nichols, Thomas E; Qi, Jinyi; Asma, Evren; Leahy, Richard M
2002-04-01
We describe a method for computing a continuous time estimate of tracer density using list-mode positron emission tomography data. The rate function in each voxel is modeled as an inhomogeneous Poisson process whose rate function can be represented using a cubic B-spline basis. The rate functions are estimated by maximizing the likelihood of the arrival times of detected photon pairs over the control vertices of the spline, modified by quadratic spatial and temporal smoothness penalties and a penalty term to enforce nonnegativity. Randoms rate functions are estimated by assuming independence between the spatial and temporal randoms distributions. Similarly, scatter rate functions are estimated by assuming spatiotemporal independence and that the temporal distribution of the scatter is proportional to the temporal distribution of the trues. A quantitative evaluation was performed using simulated data and the method is also demonstrated in a human study using 11C-raclopride.
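A compact sketch of the likelihood at the heart of this approach, with an illustrative knot layout and random control vertices standing in for the estimated ones (and without the spatial/temporal penalties).

```python
import numpy as np
from scipy.interpolate import BSpline

# Illustrative cubic B-spline rate function on [0, T]
T = 60.0
knots = np.concatenate(([0.0] * 3, np.linspace(0.0, T, 8), [T] * 3))
coefs = np.random.default_rng(4).uniform(1.0, 5.0, size=len(knots) - 4)
rate = BSpline(knots, coefs, k=3)

def loglik(arrival_times, rate, T, n_quad=2000):
    # Inhomogeneous-Poisson log-likelihood of the arrival times:
    # sum_i log lambda(t_i) - integral_0^T lambda(t) dt (trapezoid rule)
    t = np.linspace(0.0, T, n_quad)
    vals = rate(t)
    integral = 0.5 * np.sum((vals[:-1] + vals[1:]) * np.diff(t))
    return np.sum(np.log(rate(np.asarray(arrival_times)))) - integral
```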
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.
2000-01-01
A research program is in progress to develop strain rate dependent deformation and failure models for the analysis of polymer matrix composites subject to impact loads. Previously, strain rate dependent inelastic constitutive equations developed to model the polymer matrix were implemented into a mechanics of materials based micromechanics method. In the current work, the computation of the effective inelastic strain in the micromechanics model was modified to fully incorporate the Poisson effect. The micromechanics equations were also combined with classical laminate theory to enable the analysis of symmetric multilayered laminates subject to in-plane loading. A quasi-incremental trapezoidal integration method was implemented to integrate the constitutive equations within the laminate theory. Verification studies were conducted using an AS4/PEEK composite using a variety of laminate configurations and strain rates. The predicted results compared well with experimentally obtained values.
The Precambrian crustal structure of East Africa
NASA Astrophysics Data System (ADS)
Young, A. J.; Tugume, F.; Nyblade, A.; Julia, J.; Mulibo, G.
2011-12-01
We present new results on crustal structure in East Africa from analyzing P wave receiver functions. The data for this study come from temporary AfricaArray broadband seismic stations deployed between 2007 and 2011 in Uganda, Tanzania and Zambia. Receiver functions have been computed using an iterative deconvolution method. Crustal structure has been imaged using the H-k stacking method and by jointly inverting the receiver functions and surface wave phase and group velocities. The results show a remarkably uniform crust throughout the Archean and Proterozoic terrains that comprise the Precambrian tectonic framework of the region. Crustal thickness for most terrains is between 37 and 40 km, and Poisson's ratio is between 0.25 and 0.27. Results from the joint inversion yield average crustal Vs values of 3.6 to 3.7 km/s. For most terrains, a thin (1-5 km thick) high-velocity layer (Vs > 4.0 km/s) is found at the base of the crust.
QCAD simulation and optimization of semiconductor double quantum dots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Erik; Gao, Xujiao; Kalashnikova, Irina
2013-12-01
We present the Quantum Computer Aided Design (QCAD) simulator that targets modeling quantum devices, particularly silicon double quantum dots (DQDs) developed for quantum qubits. The simulator has three differentiating features: (i) its core contains nonlinear Poisson, effective mass Schrödinger, and Configuration Interaction solvers that have massively parallel capability for high simulation throughput, and can be run individually or combined self-consistently for 1D/2D/3D quantum devices; (ii) the core solvers show superior convergence even at near-zero-Kelvin temperatures, which is critical for modeling quantum computing devices; (iii) it couples with the optimization engine Dakota, which enables optimization of gate voltages in DQDs for multiple desired targets. The Poisson solver includes Maxwell-Boltzmann and Fermi-Dirac statistics, supports Dirichlet, Neumann, interface charge, and Robin boundary conditions, and includes the effect of dopant incomplete ionization. The solver has shown robust nonlinear convergence even in the milli-Kelvin temperature range, and has been extensively used to quickly obtain the semiclassical electrostatic potential in DQD devices. The self-consistent Schrödinger-Poisson solver has achieved robust and monotonic convergence behavior for 1D/2D/3D quantum devices at very low temperatures by using a predictor-corrector iteration scheme. The QCAD simulator enables the calculation of dot-to-gate capacitances, and comparison with experiment and between solvers. It is observed that computed capacitances are in the right ballpark when compared to experiment, and that quantum confinement increases capacitance when the number of electrons is fixed in a quantum dot. In addition, the coupling of QCAD with Dakota allows rapid identification of which device layouts are more likely to lead to few-electron quantum dots. Very efficient QCAD simulations on a large number of fabricated and proposed Si DQDs have made it possible to provide fast feedback for design comparison and optimization.
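To make the Schrödinger-Poisson self-consistency concrete, here is a deliberately tiny 1D toy loop in dimensionless units with a damped fixed-point update; it stands in for, and is far simpler than, QCAD's predictor-corrector scheme.

```python
import numpy as np

# Minimal 1D self-consistent Schrodinger-Poisson loop (dimensionless units,
# illustrative parameters); hard-wall (Dirichlet) boundaries confine the state.
n_pts, L, n_el = 200, 10.0, 2.0
h = L / (n_pts + 1)
lap = (np.diag(np.ones(n_pts - 1), -1) - 2.0 * np.eye(n_pts)
       + np.diag(np.ones(n_pts - 1), 1)) / h**2   # FD Laplacian, Dirichlet BCs

V = np.zeros(n_pts)                      # Hartree-like mean-field potential
for it in range(200):
    # Schrodinger step: ground state in the current potential
    E, psi = np.linalg.eigh(-0.5 * lap + np.diag(V))
    rho = n_el * psi[:, 0] ** 2 / h      # electron density (integrates to n_el)
    # Poisson step: lap V = -rho gives a repulsive mean-field potential
    V_new = np.linalg.solve(lap, -rho)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    # Damped (under-relaxed) update: a plain fixed-point iteration tends to
    # oscillate, which is why robust predictor-corrector schemes matter
    V = 0.8 * V + 0.2 * V_new

print("iterations:", it + 1, "ground-state energy:", E[0])
```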
A Ruby in the Rubbish: Beneficial Mutations, Deleterious Mutations and the Evolution of Sex
Peck, J. R.
1994-01-01
This study presents a mathematical model in which a single beneficial mutation arises in a very large population that is subject to frequent deleterious mutations. The results suggest that, if the population is sexual, then the deleterious mutations will have little effect on the ultimate fate of the beneficial mutation. However, if most offspring are produced asexually, then the probability that the beneficial mutation will be lost from the population may be greatly enhanced by the deleterious mutations. Thus, sexual populations may adapt much more quickly than populations where most reproduction is asexual. Some of the results were produced using computer simulation methods, and a technique was developed that allows treatment of arbitrarily large numbers of individuals in a reasonable amount of computer time. This technique may prove useful for the analysis of a wide variety of models, though there are some constraints on its applicability. For example, the technique requires that reproduction can be described by Poisson processes. PMID:8070669
AESOP: A Python Library for Investigating Electrostatics in Protein Interactions.
Harrison, Reed E S; Mohan, Rohith R; Gorham, Ronald D; Kieslich, Chris A; Morikis, Dimitrios
2017-05-09
Electric fields often play a role in guiding the association of protein complexes. Such interactions can be further engineered to accelerate complex association, resulting in protein systems with increased productivity. This is especially true for enzymes, where reaction rates are typically diffusion limited. To facilitate quantitative comparisons of electrostatics in protein families and to describe electrostatic contributions of individual amino acids, we previously developed a computational framework called AESOP. We now implement this computational tool in Python with increased usability and the capability of performing calculations in parallel. AESOP utilizes PDB2PQR and the Adaptive Poisson-Boltzmann Solver to generate grid-based electrostatic potential files for protein structures provided by the end user. There are methods within AESOP for quantitatively comparing sets of grid-based electrostatic potentials in terms of similarity or generating ensembles of electrostatic potential files for a library of mutants to quantify the effects of perturbations in protein structure and protein-protein association.
Faster PET reconstruction with a stochastic primal-dual hybrid gradient method
NASA Astrophysics Data System (ADS)
Ehrhardt, Matthias J.; Markiewicz, Pawel; Chambolle, Antonin; Richtárik, Peter; Schott, Jonathan; Schönlieb, Carola-Bibiane
2017-08-01
Image reconstruction in positron emission tomography (PET) is computationally challenging due to Poisson noise, constraints and potentially non-smooth priors, let alone the sheer size of the problem. An algorithm that can cope well with the first three of the aforementioned challenges is the primal-dual hybrid gradient algorithm (PDHG) studied by Chambolle and Pock in 2011. However, PDHG updates all variables in parallel and is therefore computationally demanding on the large problem sizes encountered with modern PET scanners, where the number of dual variables easily exceeds 100 million. In this work, we numerically study SPDHG, a stochastic extension of PDHG that is still guaranteed to converge to a solution of the deterministic optimization problem with rates similar to PDHG. Numerical results on a clinical data set show that by introducing randomization into PDHG, results similar to the deterministic algorithm can be achieved using only around 10% of the operator evaluations, making significant progress towards the feasibility of sophisticated mathematical models in a clinical setting.
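For context, PDHG addresses problems of the form $\min_x g(x) + f(Kx)$ (here $f$ would encode the Poisson log-likelihood and $K$ the PET forward projector) by iterating

$$
\begin{aligned}
y^{k+1} &= \operatorname{prox}_{\sigma f^*}\!\big(y^k + \sigma K \bar{x}^k\big),\\
x^{k+1} &= \operatorname{prox}_{\tau g}\!\big(x^k - \tau K^{*} y^{k+1}\big),\\
\bar{x}^{k+1} &= x^{k+1} + \theta\,\big(x^{k+1} - x^k\big).
\end{aligned}
$$

SPDHG replaces the full dual update with an update of a randomly selected block of dual variables per iteration (with appropriately rescaled step sizes), which is where the roughly tenfold saving in operator evaluations comes from.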
NASA Astrophysics Data System (ADS)
Ivanova, Violeta M.; Sousa, Rita; Murrihy, Brian; Einstein, Herbert H.
2014-06-01
This paper presents results from research conducted at MIT during 2010-2012 on modeling of natural rock fracture systems with the GEOFRAC three-dimensional stochastic model. Following a background summary of discrete fracture network models and a brief introduction of GEOFRAC, the paper provides a thorough description of the newly developed mathematical and computer algorithms for fracture intensity, aperture, and intersection representation, which have been implemented in MATLAB. The new methods optimize, in particular, the representation of fracture intensity in terms of cumulative fracture area per unit volume, P32, via the Poisson-Voronoi Tessellation of planes into polygonal fracture shapes. In addition, fracture apertures now can be represented probabilistically or deterministically whereas the newly implemented intersection algorithms allow for computing discrete pathways of interconnected fractures. In conclusion, results from a statistical parametric study, which was conducted with the enhanced GEOFRAC model and the new MATLAB-based Monte Carlo simulation program FRACSIM, demonstrate how fracture intensity, size, and orientations influence fracture connectivity.
Two-dimensional computer simulation of EMVJ and grating solar cells under AMO illumination
NASA Technical Reports Server (NTRS)
Gray, J. L.; Schwartz, R. J.
1984-01-01
A computer program, SCAP2D (Solar Cell Analysis Program in 2-Dimensions), is used to evaluate the Etched Multiple Vertical Junction (EMVJ) and grating solar cells. The aim is to demonstrate how SCAP2D can be used to evaluate cell designs. The cell designs studied are by no means optimal designs. The SCAP2D program solves the three coupled, nonlinear partial differential equations, Poisson's equation and the hole and electron continuity equations, simultaneously in two dimensions, using finite differences to discretize the equations and Newton's method to linearize them. The variables solved for are the electrostatic potential and the hole and electron concentrations. Each linear system of equations is solved directly by Gaussian elimination. Convergence of the Newton iteration is assumed when the largest correction to the electrostatic potential or hole or electron quasi-potential is less than some predetermined error. A typical problem involves 2000 nodes with a Jacobian matrix of order 6000 and a bandwidth of 243.
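As a much-reduced illustration of the discretize-and-solve-directly approach (the linear Poisson model problem only, not SCAP2D's coupled nonlinear system), a 5-point finite-difference Poisson problem can be assembled and handed to a dense Gaussian-elimination solver.

```python
import numpy as np

# -lap(u) = 1 on the unit square, u = 0 on the boundary, 5-point stencil
n = 20                       # interior nodes per side
h = 1.0 / (n + 1)
N = n * n

A = np.zeros((N, N))
b = np.full(N, -h * h)       # stencil form: sum(neighbors) - 4u = -h^2
for i in range(n):
    for j in range(n):
        k = i * n + j
        A[k, k] = -4.0
        if i > 0:     A[k, k - n] = 1.0
        if i < n - 1: A[k, k + n] = 1.0
        if j > 0:     A[k, k - 1] = 1.0
        if j < n - 1: A[k, k + 1] = 1.0

u = np.linalg.solve(A, b)    # LAPACK dense solve (Gaussian elimination)
print("max potential:", u.max())
```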
Neoclassical simulation of tokamak plasmas using the continuum gyrokinetic code TEMPEST
NASA Astrophysics Data System (ADS)
Xu, X. Q.
2008-07-01
We present gyrokinetic neoclassical simulations of tokamak plasmas with a self-consistent electric field using a fully nonlinear (full- f ) continuum code TEMPEST in a circular geometry. A set of gyrokinetic equations are discretized on a five-dimensional computational grid in phase space. The present implementation is a method of lines approach where the phase-space derivatives are discretized with finite differences, and implicit backward differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With a four-dimensional (ψ,θ,γ,μ) version of the TEMPEST code, we compute the radial particle and heat fluxes, the geodesic-acoustic mode, and the development of the neoclassical electric field, which we compare with neoclassical theory using a Lorentz collision model. The present work provides a numerical scheme for self-consistently studying important dynamical aspects of neoclassical transport and electric field in toroidal magnetic fusion devices.
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Ma, Jun; Yang, Gang; Du, Bo; Zhang, Liangpei
2017-06-01
A new Bayesian method named Poisson Nonnegative Matrix Factorization with Parameter Subspace Clustering Constraint (PNMF-PSCC) is presented to extract endmembers from Hyperspectral Imagery (HSI). First, the method integrates the linear spectral mixture model with the Bayesian framework and formulates endmember extraction as a Bayesian inference problem. Second, the Parameter Subspace Clustering Constraint (PSCC) is incorporated into the statistical program to consider the clustering of all pixels in the parameter subspace. The PSCC enlarges differences among ground objects and helps find endmembers with smaller spectrum divergences. Meanwhile, the PNMF-PSCC method utilizes the Poisson distribution as the prior knowledge of spectral signals to better explain the quantum nature of light in the imaging spectrometer. Third, the optimization problem of PNMF-PSCC is formulated as maximizing the joint density via the Maximum A Posteriori (MAP) estimator. The program is finally solved by iteratively optimizing two sub-problems via the Alternating Direction Method of Multipliers (ADMM) framework and the FURTHESTSUM initialization scheme. Five state-of-the-art methods are implemented for comparison with the performance of PNMF-PSCC on both synthetic and real HSI datasets. Experimental results show that PNMF-PSCC outperforms all five methods in Spectral Angle Distance (SAD) and Root-Mean-Square Error (RMSE), and in particular it identifies good endmembers for ground objects with smaller spectrum divergences.
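For orientation, the classical multiplicative-update NMF under the Poisson/Kullback-Leibler objective (Lee-Seung) is sketched below; it is only the unconstrained baseline that PNMF-PSCC builds on, with the Bayesian prior, PSCC term, ADMM splitting and FURTHESTSUM initialization all omitted.

```python
import numpy as np

def poisson_nmf(V, r, n_iter=200, eps=1e-12):
    # Baseline KL/Poisson NMF: V (pixels x bands, nonnegative) ~ W @ H
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.uniform(0.1, 1.0, (n, r))   # endmember-like factors
    H = rng.uniform(0.1, 1.0, (r, m))   # abundance-like factors
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H
```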
NASA Astrophysics Data System (ADS)
Lu, Tiao; Cai, Wei
2008-10-01
In this paper, we propose a high order Fourier spectral-discontinuous Galerkin method for time-dependent Schrödinger-Poisson equations in 3-D spaces. The Fourier spectral Galerkin method is used for the two periodic transverse directions and a high order discontinuous Galerkin method for the longitudinal propagation direction. Such a combination results in a diagonal form for the differential operators along the transverse directions and a flexible method to handle the discontinuous potentials present in quantum heterojunction and superlattice structures. As the derivative matrices are required for various time integration schemes such as the exponential time differencing and Crank-Nicolson methods, explicit derivative matrices of the discontinuous Galerkin method of various orders are derived. Numerical results, using the proposed method with various time integration schemes, are provided to validate the method.
Normal forms for Poisson maps and symplectic groupoids around Poisson transversals
NASA Astrophysics Data System (ADS)
Frejlich, Pedro; Mărcuț, Ioan
2018-03-01
Poisson transversals are submanifolds in a Poisson manifold which intersect all symplectic leaves transversally and symplectically. In this communication, we prove a normal form theorem for Poisson maps around Poisson transversals. A Poisson map pulls a Poisson transversal back to a Poisson transversal, and our first main result states that simultaneous normal forms exist around such transversals, for which the Poisson map becomes transversally linear, and intertwines the normal form data of the transversals. Our second result concerns symplectic integrations. We prove that a neighborhood of a Poisson transversal is integrable exactly when the Poisson transversal itself is integrable, and in that case we prove a normal form theorem for the symplectic groupoid around its restriction to the Poisson transversal, which puts all structure maps in normal form. We conclude by illustrating our results with examples arising from Lie algebras.
Horno, J; González-Caballero, F; González-Fernández, C F
1990-01-01
Simple techniques of network thermodynamics are used to obtain the numerical solution of the Nernst-Planck and Poisson equation system. A network model for a particular physical situation, namely ionic transport through a thin membrane with simultaneous diffusion, convection and electric current, is proposed. Concentration and electric field profiles across the membrane, as well as diffusion potential, have been simulated using the electric circuit simulation program, SPICE. The method is quite general and extremely efficient, permitting treatments of multi-ion systems whatever the boundary and experimental conditions may be.
Evaluation of lattice sums by the Poisson sum formula
NASA Technical Reports Server (NTRS)
Ray, R. D.
1975-01-01
The Poisson sum formula was applied to the problem of summing pairwise interactions between an observer molecule and a semi-infinite regular array of solid state molecules. The transformed sum is often much more rapidly convergent than the original sum, and forms a Fourier series in the solid surface coordinates. The method is applicable to a variety of solid state structures and functional forms of the pairwise potential. As an illustration of the method, the electric field above the (100) face of the CsCl structure is calculated and compared to earlier results obtained by direct summation.
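For reference, the identity being exploited is the classical one-dimensional Poisson summation formula,

$$\sum_{n=-\infty}^{\infty} f(n) \;=\; \sum_{k=-\infty}^{\infty} \hat{f}(k), \qquad \hat{f}(k) \;=\; \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i k x}\, dx,$$

applied dimension-by-dimension over the lattice. When $f$ is smooth and slowly decaying, as for typical pairwise potentials, $\hat{f}$ decays rapidly, which is exactly the acceleration described above; summing over the surface lattice directions is what turns the transformed sum into a Fourier series in the surface coordinates.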
CFD-ACE+: a CAD system for simulation and modeling of MEMS
NASA Astrophysics Data System (ADS)
Stout, Phillip J.; Yang, H. Q.; Dionne, Paul; Leonard, Andy; Tan, Zhiqiang; Przekwas, Andrzej J.; Krishnan, Anantha
1999-03-01
Computer aided design (CAD) systems are a key to designing and manufacturing MEMS with higher performance/reliability, reduced costs, shorter prototyping cycles and improved time-to-market. One such system is CFD-ACE+MEMS, a modeling and simulation environment for MEMS which includes grid generation, data visualization, graphical problem setup, and coupled fluidic, thermal, mechanical, electrostatic, and magnetic physical models. The fluid model is a 3D multi-block, structured/unstructured/hybrid, pressure-based, implicit Navier-Stokes code with capabilities for multi-component diffusion, multi-species transport, multi-step gas phase chemical reactions, surface reactions, and multi-media conjugate heat transfer. The thermal model solves the total enthalpy form of the energy equation. The energy equation includes unsteady, convective, conductive, species energy, viscous dissipation, work, and radiation terms. The electrostatic model solves Poisson's equation. Both the finite volume method and the boundary element method (BEM) are available for solving Poisson's equation. The BEM method is useful for unbounded problems. The magnetic model solves for the vector magnetic potential from Maxwell's equations, including eddy currents but neglecting displacement currents. The mechanical model is a finite element stress/deformation solver which has been coupled to the flow, heat, electrostatic, and magnetic calculations to study thermally, electrostatically, and magnetically induced deformations of structures. The mechanical or structural model can accommodate elastic and plastic materials, can handle large non-linear displacements, and can model isotropic and anisotropic materials. The thermal-mechanical coupling involves the solution of the steady state Navier equation with thermoelastic deformation. The electrostatic-mechanical coupling is a calculation of the pressure force due to surface charge on the mechanical structure. Results of CFD-ACE+MEMS modeling of MEMS such as cantilever beams, accelerometers, and comb drives are discussed.
MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations
NASA Astrophysics Data System (ADS)
Vergara-Perez, Sandra; Marucho, Marcelo
2016-01-01
One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually take several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and setup the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users pre- and post-analysis of structural and electrical properties of biomolecules.
Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data
Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.
2013-01-01
Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, as comparing diseased to non-diseased subjects while minimizing bias is of importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment of the night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. Such re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
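For a single transition type with constant hazard $\lambda$, subject $i$ observed for time $t_i$ with $d_i$ transitions, the equivalence the abstract re-derives is the textbook identity

$$\prod_i \lambda^{d_i} e^{-\lambda t_i} \;=\; \prod_i \frac{(\lambda t_i)^{d_i}\, e^{-\lambda t_i}}{d_i!} \;\cdot\; \prod_i \frac{d_i!}{t_i^{d_i}},$$

where the second factor does not depend on $\lambda$. Maximizing the exponential-survival likelihood on the left is therefore equivalent to fitting $d_i \sim \mathrm{Poisson}(\lambda t_i)$, i.e., a Poisson log-linear model with offset $\log t_i$, which is what licenses the synthesis of the two existing analysis approaches.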
Grimby-Ekman, Anna; Andersson, Eva M; Hagberg, Mats
2009-01-01
Background: In the literature there are discussions on the choice of outcome and the need for more longitudinal studies of musculoskeletal disorders. The general aim of this longitudinal study was to analyze musculoskeletal neck pain in a group of young adults. Specific aims were to determine whether psychosocial factors, computer use, high work/study demands, and lifestyle are long-term or short-term factors for musculoskeletal neck pain, and whether these factors are important for developing or ongoing musculoskeletal neck pain. Methods: Three regression models were used to analyze the different outcomes. Pain at present was analyzed with a marginal logistic model, number of years with pain with a Poisson regression model, and developing and ongoing pain with a logistic model. Presented results are odds ratios and proportion ratios (logistic models) and rate ratios (Poisson model). The material consisted of web-based questionnaires answered by 1204 Swedish university students from a prospective cohort recruited in 2002. Results: Perceived stress was a risk factor for pain at present (PR = 1.6), for developing pain (PR = 1.7), and for number of years with pain (RR = 1.3). High work/study demands were associated with pain at present (PR = 1.6), and with number of years with pain when the demands negatively affect home life (RR = 1.3). Computer use pattern (number of times/week with a computer session ≥ 4 h without break) was a risk factor for developing pain (PR = 1.7), and was also associated with pain at present (PR = 1.4) and number of years with pain (RR = 1.2). Among lifestyle factors, smoking (PR = 1.8) was found to be associated with pain at present. The difference between men and women in prevalence of musculoskeletal pain was confirmed in this study; it was smallest for the outcome ongoing pain (PR = 1.4) compared to pain at present (PR = 2.4) and developing pain (PR = 2.5). Conclusion: By using different regression models, different aspects of the neck pain pattern could be addressed, and the impact of each risk factor on the pain pattern was identified. Short-term risk factors were perceived stress, high work/study demands and computer use pattern (break pattern); these were also long-term risk factors. For developing pain, perceived stress and computer use pattern were risk factors. PMID:19545386
Determining X-ray source intensity and confidence bounds in crowded fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Primini, F. A.; Kashyap, V. L., E-mail: fap@head.cfa.harvard.edu
We present a rigorous description of the general problem of aperture photometry in high-energy astrophysics photon-count images, in which the statistical noise model is Poisson, not Gaussian. We compute the full posterior probability density function for the expected source intensity for various cases of interest, including the important cases in which both source and background apertures contain contributions from the source, and when multiple source apertures partially overlap. A Bayesian approach offers the advantages of allowing one to (1) include explicit prior information on source intensities, (2) propagate posterior distributions as priors for future observations, and (3) use Poisson likelihoods, making the treatment valid in the low-counts regime. Elements of this approach have been implemented in the Chandra Source Catalog.
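In the simplest configuration (one source aperture containing source plus background, one background-only aperture, flat priors), the posterior can be sketched with a few lines of Monte Carlo; the counts and aperture areas below are invented for illustration, and the full treatment in the paper handles overlapping apertures and informative priors.

```python
import numpy as np

rng = np.random.default_rng(5)

n_bkg, area_bkg = 40, 10.0   # counts and relative area, background-only aperture
n_src, area_src = 25, 1.0    # source aperture sees source s plus background

# Flat prior: the background-rate posterior is Gamma(n_bkg + 1, rate=area_bkg)
b = rng.gamma(n_bkg + 1, 1.0 / area_bkg, size=2000)

# Marginal posterior of s on a grid: average the Poisson likelihood of the
# source-aperture counts over the background draws (flat prior on s >= 0)
s = np.linspace(0.0, 80.0, 1001)
mu = s[None, :] + area_src * b[:, None]        # expected counts, (draws, grid)
post = np.exp(n_src * np.log(mu) - mu).mean(axis=0)
post /= post.sum() * (s[1] - s[0])             # normalize the density

print("posterior mean source intensity:", np.sum(s * post) * (s[1] - s[0]))
```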
3D streamers simulation in a pin to plane configuration using massively parallel computing
NASA Astrophysics Data System (ADS)
Plewa, J.-M.; Eichwald, O.; Ducasse, O.; Dessante, P.; Jacobs, C.; Renon, N.; Yousfi, M.
2018-03-01
This paper concerns the 3D simulation of corona discharge using high performance computing (HPC) managed with the message passing interface (MPI) library. In the field of finite volume methods applied on non-adaptive mesh grids, and in the case of a specific 3D dynamic benchmark test devoted to streamer studies, the great efficiency of the iterative R&B SOR and BiCGSTAB methods versus the direct MUMPS method was clearly demonstrated in solving the Poisson equation using HPC resources. The optimization of the parallelization and the resulting scalability was undertaken as a function of the HPC architecture for a number of mesh cells ranging from 8 to 512 million and a number of cores ranging from 20 to 1600. The R&B SOR method remains at least about four times faster than the BiCGSTAB method and requires significantly less memory for all tested situations. The R&B SOR method was then implemented in a 3D MPI-parallelized code that solves the classical first-order model of an atmospheric pressure corona discharge in air. The 3D code capabilities were tested by following the development of one, two and four coplanar streamers generated by initial plasma spots for 6 ns. The preliminary results obtained allowed us to follow in detail the formation of the tree structure of a corona discharge and the effects of the mutual interactions between the streamers in terms of streamer velocity, trajectory and diameter. The computing time for 64 million mesh cells distributed over 1000 cores using the MPI procedures is about 30 min per simulated nanosecond, regardless of the number of streamers.
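A serial 2D sketch of the red & black (R&B) SOR idea is given below; the two-color ordering updates mutually independent sets of nodes, which is what makes the sweep parallelize cleanly across MPI ranks (the 3D, domain-decomposed production version is of course far more involved).

```python
import numpy as np

def red_black_sor(f, h, omega=1.8, tol=1e-8, max_sweeps=10_000):
    # Solve lap(u) = f on a 2D grid with u = 0 on the boundary.
    u = np.zeros_like(f)
    for _ in range(max_sweeps):
        diff = 0.0
        for color in (0, 1):  # all "red" nodes, then all "black" nodes
            for i in range(1, f.shape[0] - 1):
                for j in range(1 + (i + color) % 2, f.shape[1] - 1, 2):
                    new = (1 - omega) * u[i, j] + omega * 0.25 * (
                        u[i - 1, j] + u[i + 1, j] + u[i, j - 1] + u[i, j + 1]
                        - h * h * f[i, j])
                    diff = max(diff, abs(new - u[i, j]))
                    u[i, j] = new
        if diff < tol:
            break
    return u
```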
Huang, Qiang; Herrmann, Andreas
2012-03-01
Protein folding, stability, and function are usually influenced by pH, and free energy plays a fundamental role in the analysis of such pH-dependent properties. An electrostatics-based theoretical framework using a dielectric solvent continuum model and solving the Poisson-Boltzmann equation numerically has been shown to be very successful in understanding pH-dependent properties. However, in this approach the exact computation of pH-dependent free energy becomes impractical for proteins possessing more than several tens of ionizable sites (e.g., > 30), because exact evaluation of the partition function requires a summation over a vast number of possible protonation microstates. Here we present a method which computes the free energy using the average energy and the protonation probabilities of ionizable sites obtained by the well-established Monte Carlo sampling procedure. The key feature is to calculate the entropy by using the protonation probabilities. We used this method to examine a well-studied protein (lysozyme) and produced results which agree very well with the exact calculations. Applications to the optimum pH of maximal stability of proteins and protein-DNA interactions have also resulted in good agreement with experimental data. These examples recommend our method for application to the elucidation of the pH-dependent properties of proteins.
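If the entropy is computed from the per-site protonation probabilities $p_i$ in an independent-site (mean-field) approximation, which is one plausible reading of the abstract (the paper's exact estimator may differ), it would take the form

$$S \;\approx\; -k_B \sum_{i=1}^{N} \big[\, p_i \ln p_i + (1 - p_i)\ln(1 - p_i) \,\big], \qquad G \;=\; \langle E \rangle - TS,$$

so that both $\langle E \rangle$ and $S$ come directly from Monte Carlo averages, sidestepping the intractable sum over all $2^N$ protonation microstates.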
NASA Astrophysics Data System (ADS)
Pathak, Ashish; Raessi, Mehdi
2016-04-01
We present a three-dimensional (3D) and fully Eulerian approach to capturing the interaction between two fluids and moving rigid structures by using the fictitious domain and volume-of-fluid (VOF) methods. The solid bodies can have arbitrarily complex geometry and can pierce the fluid-fluid interface, forming contact lines. The three-phase interfaces are resolved and reconstructed by using a VOF-based methodology. Then, a consistent scheme is employed for transporting mass and momentum, allowing for simulations of three-phase flows of large density ratios. The Eulerian approach significantly simplifies numerical resolution of the kinematics of rigid bodies of complex geometry and with six degrees of freedom. The fluid-structure interaction (FSI) is computed using the fictitious domain method. The methodology was developed in a message passing interface (MPI) parallel framework accelerated with graphics processing units (GPUs). The computationally intensive solution of the pressure Poisson equation is ported to GPUs, while the remaining calculations are performed on CPUs. The performance and accuracy of the methodology are assessed using an array of test cases, focusing individually on the flow solver and the FSI in surface-piercing configurations. Finally, an application of the proposed methodology in simulations of the ocean wave energy converters is presented.
HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE
NASA Technical Reports Server (NTRS)
De, Salvo L. J.
1994-01-01
HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
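The worked example in the abstract is easy to reproduce; the sketch below (zero acceptance number, as in the demonstration program, with function names invented here) searches for the smallest sample size whose probability of containing zero nonconforming units falls below the consumer's risk.

```python
from scipy.stats import hypergeom

def min_sample_size(lot_size, fraction_defective, confidence):
    # Smallest n such that a sample showing zero defects rejects, at the
    # given confidence, a lot containing lot_size * fraction_defective
    # nonconforming units (acceptance number c = 0).
    defects = round(lot_size * fraction_defective)
    beta = 1.0 - confidence   # allowed consumer's risk
    for n in range(1, lot_size + 1):
        # P(sample of n contains 0 of the `defects` nonconforming units)
        if hypergeom.pmf(0, lot_size, defects, n) <= beta:
            return n
    return lot_size

# Reproduces the example above: 273, versus 400 for the binomial plan
print(min_sample_size(400, 0.01, 0.99))
```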
Kerry, Ruth; Goovaerts, Pierre; Smit, Izak P.J.; Ingram, Ben R.
2015-01-01
Kruger National Park (KNP), South Africa, provides protected habitats for the unique animals of the African savannah. For the past 40 years, annual aerial surveys of herbivores have been conducted to aid management decisions based on (1) the spatial distribution of species throughout the park and (2) total species populations in a year. The surveys are extremely time consuming and costly. For many years, the whole park was surveyed, but in 1998 a transect survey approach was adopted. This is cheaper and less time consuming but leaves gaps in the data spatially. Also the distance method currently employed by the park only gives estimates of total species populations but not their spatial distribution. We compare the ability of multiple indicator kriging and area-to-point Poisson kriging to accurately map species distribution in the park. A leave-one-out cross-validation approach indicates that multiple indicator kriging makes poor estimates of the number of animals, particularly the few large counts, as the indicator variograms for such high thresholds are pure nugget. Poisson kriging was applied to the prediction of two types of abundance data: spatial density and proportion of a given species. Both Poisson approaches had standardized mean absolute errors (St. MAEs) of animal counts at least an order of magnitude lower than multiple indicator kriging. The spatial density, Poisson approach (1), gave the lowest St. MAEs for the most abundant species and the proportion, Poisson approach (2), did for the least abundant species. Incorporating environmental data into Poisson approach (2) further reduced St. MAEs. PMID:25729318
Sepúlveda, Nuno; Campino, Susana G; Assefa, Samuel A; Sutherland, Colin J; Pain, Arnab; Clark, Taane G
2013-02-26
The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data.
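As an illustration of why the Poisson assumption fails here, the sketch below checks per-window coverage for overdispersion and flags outlying windows under a moment-matched negative binomial (the Poisson-Gamma marginal); the paper's hierarchical fits are considerably more sophisticated.

```python
import numpy as np
from scipy import stats

def dispersion_index(coverage):
    # Under Poisson, variance/mean is ~1; values well above 1 favor
    # Poisson-Gamma (negative binomial) style models instead.
    coverage = np.asarray(coverage)
    return coverage.var(ddof=1) / coverage.mean()

def nb_tail_pvalues(coverage):
    # Two-sided tail probabilities under a moment-matched negative binomial;
    # small values mark candidate deletions/amplifications. Valid when
    # the sample variance exceeds the mean (overdispersion).
    coverage = np.asarray(coverage)
    m, v = coverage.mean(), coverage.var(ddof=1)
    p = m / v               # NB success probability from moment matching
    r = m * p / (1.0 - p)   # NB size parameter
    lower = stats.nbinom.cdf(coverage, r, p)
    upper = stats.nbinom.sf(coverage - 1, r, p)
    return 2.0 * np.minimum(lower, upper)
```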
A class of renormalised meshless Laplacians for boundary value problems
NASA Astrophysics Data System (ADS)
Basic, Josip; Degiuli, Nastia; Ban, Dario
2018-02-01
A meshless approach to approximating spatial derivatives on scattered point arrangements is presented in this paper. Three different derivations of approximate discrete Laplace operator formulations are produced using the Taylor series expansion and renormalised least-squares correction of the first spatial derivatives. Numerical analyses are performed for the introduced Laplacian formulations, and their convergence rate and computational efficiency are examined. The tests are conducted on regular and highly irregular scattered point arrangements. The results are compared to those obtained by the smoothed particle hydrodynamics method and the finite difference method on a regular grid. Finally, the strong form of various Poisson and diffusion equations with Dirichlet or Robin boundary conditions is solved in two and three dimensions by making use of the introduced operators, in order to examine their stability and accuracy for boundary value problems. The introduced Laplacian operators perform well for highly irregular point distributions and offer adequate accuracy for mesh and mesh-free numerical methods that require frequent movement of the grid or point cloud.
Parallel Computing of Upwelling in a Rotating Stratified Flow
NASA Astrophysics Data System (ADS)
Cui, A.; Street, R. L.
1997-11-01
A code for three-dimensional, unsteady, incompressible, turbulent flow has been implemented on the IBM SP2, using message passing. The effects of rotation and variable density are included. A finite volume method is used to discretize the Navier-Stokes equations in general curvilinear coordinates on a non-staggered grid. All the spatial derivatives are approximated using second-order central differences, with the exception of the convection terms, which are handled with special upwind-difference schemes. The semi-implicit, second-order accurate time-advancement scheme employs the Adams-Bashforth method for the explicit terms and Crank-Nicolson for the implicit terms. A multigrid method, with four-color ZEBRA as smoother, is used to solve the Poisson equation for pressure, while the momentum equations are solved with an approximate factorization technique. The code was successfully validated for a variety of test cases. Simulations of a laboratory model of coastal upwelling in a rotating annulus are in progress and will be presented.
Poissonian renormalizations, exponentials, and power laws.
Eliazar, Iddo
2013-05-01
This paper presents a comprehensive "renormalization study" of Poisson processes governed by exponential and power-law intensities. These Poisson processes are of fundamental importance, as they constitute the very bedrock of the universal extreme-value laws of Gumbel, Fréchet, and Weibull. Applying the method of Poissonian renormalization we analyze the emergence of these Poisson processes, unveil their intrinsic dynamical structures, determine their domains of attraction, and characterize their structural phase transitions. These structural phase transitions are shown to be governed by uniform and harmonic intensities, to have universal domains of attraction, to uniquely display intrinsic invariance, and to be intimately connected to "white noise" and to "1/f noise." Thus, we establish a Poissonian explanation to the omnipresence of white and 1/f noises.
Poisson sigma models, reduction and nonlinear gauge theories
NASA Astrophysics Data System (ADS)
Signori, Daniele
This dissertation comprises two main lines of research. Firstly, we study non-linear gauge theories for principal bundles, where the structure group is replaced by a Lie groupoid. We follow the approach of Moerdijk-Mrcun and establish its relation with the existing physics literature. In particular, we derive a new formula for the gauge transformation which closely resembles and generalizes the classical formulas found in Yang-Mills gauge theories. Secondly, we give a field theoretic interpretation of the BRST (Becchi-Rouet-Stora-Tyutin) and BFV (Batalin-Fradkin-Vilkovisky) methods for the reduction of coisotropic submanifolds of Poisson manifolds. The generalized Poisson sigma models that we define are related to the deformation quantization problems of coisotropic submanifolds using homotopical algebras.
NASA Astrophysics Data System (ADS)
Cooper, Christopher D.; Barba, Lorena A.
2016-05-01
Interactions between surfaces and proteins occur in many vital processes and are crucial in biotechnology: the ability to control specific interactions is essential in fields like biomaterials, biomedical implants and biosensors. In the latter case, biosensor sensitivity hinges on ligand proteins adsorbing on bioactive surfaces with a favorable orientation, exposing reaction sites to target molecules. Protein adsorption, being a free-energy-driven process, is difficult to study experimentally. This paper develops and evaluates a computational model to study electrostatic interactions of proteins and charged nanosurfaces, via the Poisson-Boltzmann equation. We extended the implicit-solvent model used in the open-source code PyGBe to include surfaces of imposed charge or potential. This code solves the boundary integral formulation of the Poisson-Boltzmann equation, discretized with surface elements. PyGBe has at its core a treecode-accelerated Krylov iterative solver, resulting in O(N log N) scaling, with further acceleration on hardware via multi-threaded execution on GPUs. It computes solvation and surface free energies, providing a framework for studying the effect of electrostatics on adsorption. We derived an analytical solution for a spherical charged surface interacting with a spherical dielectric cavity, and used it in a grid-convergence study to build evidence on the correctness of our approach. The study showed the error decaying with the average area of the boundary elements, i.e., the method is O(1/N), which is consistent with our previous verification studies using PyGBe. We also studied grid-convergence using a real molecular geometry (protein G B1 D4‧), in this case using Richardson extrapolation (in the absence of an analytical solution) and confirmed the O(1/N) scaling. With this work, we can now access a completely new family of problems, which no other major bioelectrostatics solver, e.g. APBS, is capable of dealing with. PyGBe is open-source under an MIT license and is hosted under version control at https://github.com/barbagroup/pygbe. To supplement this paper, we prepared "reproducibility packages" consisting of running and post-processing scripts in Python for replicating the grid-convergence studies, all the way to generating the final plots, with a single command.
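A minimal sketch of the Richardson-extrapolation step of such a grid-convergence study, with placeholder energies rather than the paper's data:

```python
import numpy as np

# Three energies computed on successively refined surface meshes, each
# with r times more elements; values below are hypothetical placeholders.
E = np.array([-12.10, -12.55, -12.70])   # coarse, medium, fine
r = 4.0                                   # refinement ratio in element count

# observed order p from three grids: (E0 - E1) / (E1 - E2) = r^p
p = np.log((E[0] - E[1]) / (E[1] - E[2])) / np.log(r)
E_exact = E[2] + (E[2] - E[1]) / (r**p - 1.0)
print(f"observed order p = {p:.2f}, extrapolated value = {E_exact:.3f}")
# p ~ 1 in the element count would confirm the O(1/N) behavior reported
```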
Fast and Accurate Approximation to Significance Tests in Genome-Wide Association Studies
Zhang, Yu; Liu, Jun S.
2011-01-01
Genome-wide association studies commonly involve simultaneous tests of millions of single nucleotide polymorphisms (SNP) for disease association. The SNPs in nearby genomic regions, however, are often highly correlated due to linkage disequilibrium (LD, a genetic term for correlation). Simple Bonferroni correction for multiple comparisons is therefore too conservative. Permutation tests, which are often employed in practice, are both computationally expensive for genome-wide studies and limited in their scopes. We present an accurate and computationally efficient method, based on Poisson de-clumping heuristics, for approximating genome-wide significance of SNP associations. Compared with permutation tests and other multiple comparison adjustment approaches, our method computes the most accurate and robust p-value adjustments for millions of correlated comparisons within seconds. We demonstrate analytically that the accuracy and the efficiency of our method are nearly independent of the sample size, the number of SNPs, and the scale of p-values to be adjusted. In addition, our method can be easily adapted to estimate the false discovery rate. When applied to genome-wide SNP datasets, we observed highly variable p-value adjustment results evaluated from different genomic regions. The variation in adjustments along the genome, however, is well conserved between the European and the African populations. The p-value adjustments are significantly correlated with LD among SNPs, recombination rates, and SNP densities. Given the large variability of sequence features in the genome, we further discuss a novel approach of using SNP-specific (local) thresholds to detect genome-wide significant associations. This article has supplementary material online. PMID:22140288
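The idea behind such de-clumping approximations, in symbols (our gloss, not the authors' exact formulas), is the Poisson clumping heuristic: exceedances of a small threshold $\alpha$ by correlated test statistics arrive in approximately Poisson-distributed clumps, so

$$P\Big(\min_i p_i \le \alpha\Big) \approx 1 - e^{-\mu(\alpha)},$$

where $\mu(\alpha)$ is the expected number of clumps at level $\alpha$. For independent tests $\mu(\alpha) = m\alpha$, recovering the usual Šidák-type correction; LD shrinks $\mu(\alpha)$ below $m\alpha$, making the adjustment less conservative than Bonferroni.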
Le Bihan, Nicolas; Margerin, Ludovic
2009-07-01
In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.
Automated integration of the Lagrange equations in orbital motion.
NASA Astrophysics Data System (ADS)
Abad, A.; San Juan, J. F.
New algebraic manipulation techniques, especially Poisson series processors, permit the analytical integration of increasingly complex problems in celestial mechanics. The authors are developing a new Poisson Series Processor, PSPC, and use it to solve the Lagrange equations of orbital motion. They integrate the Lagrange equations using the stroboscopic method and apply it to the main problem of artificial satellite theory.
The Electric Potential of a Macromolecule in a Solvent: A Fundamental Approach
NASA Astrophysics Data System (ADS)
Juffer, André H.; Botta, Eugen F. F.; van Keulen, Bert A. M.; van der Ploeg, Auke; Berendsen, Herman J. C.
1991-11-01
A general numerical method is presented to compute the electric potential for a macromolecule of arbitrary shape in a solvent with nonzero ionic strength. The model is based on a continuum description of the dielectric and screening properties of the system, which consists of a bounded internal region with discrete charges and an infinite external region. The potential obeys the Poisson equation in the internal region and the linearized Poisson-Boltzmann equation in the external region, coupled through appropriate boundary conditions. It is shown how this three-dimensional problem can be presented as a pair of coupled integral equations for the potential and the normal component of the electric field at the dielectric interface. These equations can be solved by a straightforward application of boundary element techniques. The solution involves the decomposition of a matrix that depends only on the geometry of the surface and not on the positions of the charges. With this approach the number of unknowns is reduced by an order of magnitude with respect to the usual finite difference methods. Special attention is given to the numerical inaccuracies resulting from charges which are located close to the interface; an adapted formulation is given for that case. The method is tested both for a spherical geometry, for which an exact solution is available, and for a realistic problem, for which a finite difference solution and experimental verification are available. The latter concerns the shift in acid strength (pK-values) of histidines in the copper-containing protein azurin on oxidation of the copper, for various values of the ionic strength. A general method is given to triangulate a macromolecular surface. We also discuss the possibility of using the method presented here for a correct treatment of long-range electrostatic interactions in simulations of solvated macromolecules; such interactions form an essential part of correct potentials of mean force.
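In symbols (our paraphrase of the model described above), the potential satisfies

$$\nabla^2 \phi_{\mathrm{in}}(\mathbf r) = -\frac{1}{\epsilon_{\mathrm{in}}}\sum_k q_k\,\delta(\mathbf r-\mathbf r_k)\ \text{ inside},\qquad \nabla^2 \phi_{\mathrm{out}}(\mathbf r) = \kappa^2\,\phi_{\mathrm{out}}(\mathbf r)\ \text{ outside},$$

with continuity conditions on the molecular surface

$$\phi_{\mathrm{in}}=\phi_{\mathrm{out}},\qquad \epsilon_{\mathrm{in}}\,\partial_n\phi_{\mathrm{in}}=\epsilon_{\mathrm{out}}\,\partial_n\phi_{\mathrm{out}},$$

where $\kappa$ is the inverse Debye screening length set by the ionic strength.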
2017-01-01
Introduction Our study looked at out-of-hospital sudden cardiac arrest events in the City of Toronto. These are relatively rare events, yet present a serious global clinical and public health problem. We report on the application of spatial methods and tools that, although relatively well known to geographers and natural resource scientists, need to become better known and used more frequently by health care researchers. Materials and methods Our data came from the population-based Rescu Epistry cardiac arrest database. We limited it to the residents of the City of Toronto who experienced sudden arrest in 2010. The data was aggregated at the Dissemination Area level, and population rates were calculated. Poisson kriging was carried out on one year of data using three different spatial weights. Kriging estimates were then compared in Hot Spot analyses. Results Spatial analysis revealed that Poisson kriging can yield reliable rates using limited data of high quality. We observed the highest rates of sudden arrests in the north and central parts of Etobicoke, western parts of North York as well as the central and southwestern parts of Scarborough while the lowest rates were found in north and eastern parts of Scarborough, downtown Toronto, and East York as well as east central parts of North York. The influence of spatial neighbours on the results did not extend past two rings of adjacent units. Conclusions Poisson kriging has the potential to be applied to a wide range of healthcare research, particularly on rare events. This approach can be successfully combined with other spatial methods. More applied research is needed to establish a wider acceptance for this method, especially among healthcare researchers and epidemiologists. PMID:28672029
Variational Gaussian approximation for Poisson data
NASA Astrophysics Data System (ADS)
Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen
2018-02-01
The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that also penalizes the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
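As a concrete simplified instance, assume a log-link Poisson likelihood y_i ~ Poisson(exp(a_i^T x)) with Gaussian prior x ~ N(0, I/delta); the evidence lower bound for a Gaussian q = N(m, C) is then available in closed form. This is a hedged sketch under those assumptions, not the paper's exact model or algorithm:

```python
import numpy as np

def poisson_vga_elbo(y, A, m, C, delta):
    """Lower bound for y_i ~ Poisson(exp(a_i^T x)), prior x ~ N(0, I/delta),
    variational posterior q = N(m, C).
    Uses E_q[exp(a^T x)] = exp(a^T m + 0.5 a^T C a)."""
    eta = A @ m                                    # E_q[a_i^T x]
    s2 = np.einsum('ij,jk,ik->i', A, C, A)         # Var_q[a_i^T x]
    loglik = np.sum(y * eta - np.exp(eta + 0.5 * s2))   # up to -log(y!) const
    n = len(m)
    _, logdetC = np.linalg.slogdet(C)
    kl = 0.5 * (delta * (m @ m + np.trace(C)) - n
                - logdetC - n * np.log(delta))     # KL(q || prior)
    return loglik - kl

# tiny synthetic example (all names and values hypothetical)
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3))
y = rng.poisson(np.exp(A @ np.array([0.5, -0.2, 0.1])))
print(poisson_vga_elbo(y, A, np.zeros(3), np.eye(3), delta=1.0))
```

Maximizing this functional alternately over m and C is the spirit of the alternating direction scheme the abstract describes.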
Lyashevska, Olga; Brus, Dick J; van der Meer, Jaap
2016-01-01
The objective of the study was to provide a general procedure for mapping species abundance when data are zero-inflated and spatially correlated counts. The bivalve species Macoma balthica was observed on a 500×500 m grid in the Dutch part of the Wadden Sea. In total, 66% of the 3451 counts were zeros. A zero-inflated Poisson mixture model was used to relate counts to environmental covariates. Two models were considered, one with relatively fewer covariates (model "small") than the other (model "large"). The models contained two processes: a Bernoulli (species prevalence) and a Poisson (species intensity, when the Bernoulli process predicts presence). The model was used to make predictions for sites where only environmental data are available. Predicted prevalences and intensities show that the model "small" predicts lower mean prevalence and higher mean intensity than the model "large". Yet, the product of prevalence and intensity, which might be called the unconditional intensity, is very similar. Cross-validation showed that the model "small" performed slightly better, but the difference was small. The proposed methodology might be generally applicable, but is computer intensive.
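A minimal sketch of the zero-inflated Poisson likelihood at the heart of such a model, with the covariate links and spatial structure omitted (names and values hypothetical):

```python
import numpy as np
from scipy.special import gammaln

def zip_loglik(y, pi, lam):
    """Zero-inflated Poisson: with prob 1 - pi a structural zero (absence),
    with prob pi a Poisson(lam) count (the intensity process)."""
    y = np.asarray(y)
    logpois = -lam + y * np.log(lam) - gammaln(y + 1)
    ll = np.where(y == 0,
                  np.log((1 - pi) + pi * np.exp(-lam)),   # two ways to see 0
                  np.log(pi) + logpois)
    return ll.sum()

# the unconditional mean is pi * lam, echoing the prevalence-times-intensity
# product discussed in the abstract
y = np.array([0, 0, 3, 0, 1, 0, 7, 0])
print(zip_loglik(y, pi=0.4, lam=2.5))
```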
Neelon, Brian; Chang, Howard H; Ling, Qiang; Hastings, Nicole S
2016-12-01
Motivated by a study exploring spatiotemporal trends in emergency department use, we develop a class of two-part hurdle models for the analysis of zero-inflated areal count data. The models consist of two components: one for the probability of any emergency department use and one for the number of emergency department visits given use. Through a hierarchical structure, the models incorporate both patient- and region-level predictors, as well as spatially and temporally correlated random effects for each model component. The random effects are assigned multivariate conditionally autoregressive priors, which induce dependence between the components and provide spatial and temporal smoothing across adjacent spatial units and time periods, resulting in improved inferences. To accommodate potential overdispersion, we consider a range of parametric specifications for the positive counts, including truncated negative binomial and generalized Poisson distributions. We adopt a Bayesian inferential approach, and posterior computation is handled conveniently within standard Bayesian software. Our results indicate that the negative binomial and generalized Poisson hurdle models vastly outperform the Poisson hurdle model, demonstrating that overdispersed hurdle models provide a useful approach to analyzing zero-inflated spatiotemporal data.
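For contrast with the zero-inflated mixture above, the hurdle likelihood (generic form; our gloss, not the paper's full hierarchical specification) separates zeros completely from the positive counts:

$$P(Y=0)=1-\pi,\qquad P(Y=y)=\pi\,\frac{f(y)}{1-f(0)},\quad y=1,2,\dots$$

where $f$ is a count distribution such as the truncated negative binomial or generalized Poisson mentioned above; unlike the mixture, every zero comes from the hurdle component.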
Yu, Pei; Li, Zi-Yuan; Xu, Hong-Ya; Huang, Liang; Dietz, Barbara; Grebogi, Celso; Lai, Ying-Cheng
2016-12-01
A crucial result in quantum chaos, which has been established for a long time, is that the spectral properties of classically integrable systems generically are described by Poisson statistics, whereas those of time-reversal symmetric, classically chaotic systems coincide with those of random matrices from the Gaussian orthogonal ensemble (GOE). Does this result hold for two-dimensional Dirac material systems? To address this fundamental question, we investigate the spectral properties in a representative class of graphene billiards with shapes of classically integrable circular-sector billiards. Naively one may expect to observe Poisson statistics, which is indeed true for energies close to the band edges where the quasiparticle obeys the Schrödinger equation. However, for energies near the Dirac point, where the quasiparticles behave like massless Dirac fermions, Poisson statistics is extremely rare in the sense that it emerges only under quite strict symmetry constraints on the straight boundary parts of the sector. An arbitrarily small amount of imperfection of the boundary results in GOE statistics. This implies that, for circular-sector confinements with arbitrary angle, the spectral properties will generically be GOE. These results are corroborated by extensive numerical computation. Furthermore, we provide a physical understanding for our results.
NASA Astrophysics Data System (ADS)
Salary, Mohammad Mahdi; Mosallaei, Hossein
2015-06-01
Interactions between the plasmons of noble metal nanoparticles and non-absorbing biomolecules form the basis of plasmonic sensors, which have received much attention. Studying these interactions can help to exploit the full potential of plasmonic sensors in the quantification and analysis of biomolecules. Here, a quasi-static continuum model is adopted for this purpose. We present a boundary-element method for computing the optical response of plasmonic particles to molecular binding events by solving the Poisson equation. The model represents biomolecules with their molecular surfaces, thus accurately accounting for the influence of exact binding conformations as well as structural differences between different proteins on the response of plasmonic nanoparticles. The linear systems arising in the method are solved iteratively with a Krylov generalized minimum residual (GMRES) algorithm, and acceleration is achieved by applying a precorrected fast Fourier transform technique. We apply the developed method to investigate interactions of biotinylated gold nanoparticles (nanosphere and nanorod) with four different types of biotin-binding proteins. The interactions are studied at both the ensemble and single-molecule level. Computational results demonstrate the ability of the presented model to analyze realistic nanoparticle-biomolecule configurations. The method can support a wide variety of applications, including protein structure studies, monitoring structural and conformational transitions, and quantification of protein concentrations. In addition, it is suitable for the design and optimization of nano-plasmonic sensors.
Elastic thickness determination based on Vening Meinesz-Moritz and flexural theories of isostasy
NASA Astrophysics Data System (ADS)
Eshagh, Mehdi
2018-06-01
Elastic thickness (Te) is one of the mechanical properties of the Earth's lithosphere. The lithosphere is assumed to be a thin elastic shell, which bends under the topographic, bathymetric and sediment loads upon it. The flexure of this elastic shell depends on its thickness, or Te; shells with larger Te flex less. In this paper, a forward computational method is presented based on the Vening Meinesz-Moritz (VMM) and flexural theories of isostasy. Two Moho flexure models are determined using these theories, considering effects of surface and subsurface loads. Different values are selected for Te in the flexural method to determine which one yields the Moho flexure closest to that of the VMM model. The effects of topographic/bathymetric, sediment and crustal crystalline masses, and of laterally variable upper mantle density, Young's modulus and Poisson's ratio, are considered in the whole computational process. Our mathematical derivations are based on spherical harmonics, which can be used to estimate Te at any single point, meaning that there is no edge effect in the method. However, the Te map needs to be filtered to remove noise at some points. A median filter with a window size of 5° × 5° and an overlap of 4° works well for this purpose. The method is applied to estimate Te over South America using the data of CRUST1.0 and a global gravity model.
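The role of Te in such inversions enters through the flexural rigidity of the thin elastic shell (standard flexure theory, not a formula specific to this paper):

$$D=\frac{E\,T_e^{3}}{12\,(1-\nu^{2})},$$

where $E$ is Young's modulus and $\nu$ is Poisson's ratio; the cubic dependence on $T_e$ explains why shells with larger elastic thickness flex markedly less under the same load.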
2013-01-01
Background High-throughput RNA sequencing (RNA-seq) offers unprecedented power to capture the real dynamics of gene expression. Experimental designs with extensive biological replication present a unique opportunity to exploit this feature and distinguish expression profiles with higher resolution. RNA-seq data analysis methods so far have been mostly applied to data sets with few replicates and their default settings try to provide the best performance under this constraint. These methods are based on two well-known count data distributions: the Poisson and the negative binomial. The way to properly calibrate them with large RNA-seq data sets is not trivial for the non-expert bioinformatics user. Results Here we show that expression profiles produced by extensively-replicated RNA-seq experiments lead to a rich diversity of count data distributions beyond the Poisson and the negative binomial, such as Poisson-Inverse Gaussian or Pólya-Aeppli, which can be captured by a more general family of count data distributions called the Poisson-Tweedie. The flexibility of the Poisson-Tweedie family enables a direct fitting of emerging features of large expression profiles, such as heavy-tails or zero-inflation, without the need to alter a single configuration parameter. We provide a software package for R called tweeDEseq implementing a new test for differential expression based on the Poisson-Tweedie family. Using simulations on synthetic and real RNA-seq data we show that tweeDEseq yields P-values that are equally or more accurate than competing methods under different configuration parameters. By surveying the tiny fraction of sex-specific gene expression changes in human lymphoblastoid cell lines, we also show that tweeDEseq accurately detects differentially expressed genes in a real large RNA-seq data set with improved performance and reproducibility over the previously compared methodologies. Finally, we compared the results with those obtained from microarrays in order to check for reproducibility. Conclusions RNA-seq data with many replicates leads to a handful of count data distributions which can be accurately estimated with the statistical model illustrated in this paper. This method provides a better fit to the underlying biological variability; this may be critical when comparing groups of RNA-seq samples with markedly different count data distributions. The tweeDEseq package forms part of the Bioconductor project and it is available for download at http://www.bioconductor.org. PMID:23965047
3DGRAPE - THREE DIMENSIONAL GRIDS ABOUT ANYTHING BY POISSON'S EQUATION
NASA Technical Reports Server (NTRS)
Sorenson, R. L.
1994-01-01
The ability to treat arbitrary boundary shapes is one of the most desirable characteristics of a method for generating grids. 3DGRAPE is designed to make computational grids in or about almost any shape. These grids are generated by the solution of Poisson's differential equations in three dimensions. The program automatically finds its own values for inhomogeneous terms which give near-orthogonality and controlled grid cell height at boundaries. Grids generated by 3DGRAPE have been applied to both viscous and inviscid aerodynamic problems, and to problems in other fluid-dynamic areas. 3DGRAPE uses zones to solve the problem of warping one cube into the physical domain in real-world computational fluid dynamics problems. In a zonal approach, a physical domain is divided into regions, each of which maps into its own computational cube. It is believed that even the most complicated physical region can be divided into zones, and since it is possible to warp a cube into each zone, a grid generator which is oriented to zones and allows communication across zonal boundaries (where appropriate) solves the problem of topological complexity. 3DGRAPE expects to read in already-distributed x,y,z coordinates on the bodies of interest, coordinates which will remain fixed during the entire grid-generation process. The 3DGRAPE code makes no attempt to fit given body shapes and redistribute points thereon. Body-fitting is a formidable problem in itself. The user must either be working with some simple analytical body shape, upon which a simple analytical distribution can be easily effected, or must have available some sophisticated stand-alone body-fitting software. 3DGRAPE does not require the user to supply the block-to-block boundaries or the shapes of the distribution of points. 3DGRAPE will typically supply those block-to-block boundaries simply as surfaces in the elliptic grid. Thus at block-to-block boundaries the following conditions are obtained: (1) grid lines will match up as they approach the block-to-block boundary from either side, (2) grid lines will cross the boundary with no slope discontinuity, (3) the spacing of points along the line piercing the boundary will be continuous, (4) the shape of the boundary will be consistent with the surrounding grid, and (5) the distribution of points on the boundary will be reasonable in view of the surrounding grid. 3DGRAPE offers a powerful building-block approach to complex 3-D grid generation, but is a low-level tool. Users may build each face of each block as they wish, from a wide variety of resources. 3DGRAPE uses point-successive-over-relaxation (point-SOR) to solve the Poisson equations. This method is slow, although it does vectorize nicely. Any number of sophisticated graphics programs may be used on the stored output file of 3DGRAPE, though it lacks interactive graphics. Versatility was a prominent consideration in developing the code. The block structure allows a great latitude in the problems it can treat. As the acronym implies, this program should be able to handle just about any physical region into which a computational cube or cubes can be warped. 3DGRAPE was written in FORTRAN 77 and should be machine independent. It was originally developed on a Cray under COS and tested on a MicroVAX 3200 under VMS 5.1.
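A minimal illustration of point-SOR elliptic grid generation in 2D (coordinate Laplacian smoothing, a simplified stand-in for 3DGRAPE's full Poisson system with its automatically chosen inhomogeneous terms; the boundary shape is hypothetical):

```python
import numpy as np

def smooth_grid(x, y, n_iter=2000, omega=1.5):
    """Point-SOR smoothing of interior grid coordinates; boundary points
    stay fixed, as 3DGRAPE keeps the supplied body coordinates fixed."""
    for _ in range(n_iter):
        for i in range(1, x.shape[0] - 1):
            for j in range(1, x.shape[1] - 1):
                for g in (x, y):
                    avg = 0.25 * (g[i+1, j] + g[i-1, j]
                                  + g[i, j+1] + g[i, j-1])
                    g[i, j] += omega * (avg - g[i, j])   # SOR update
    return x, y

# example: unit square with a bumped lower boundary
n = 17
xi, eta = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n),
                      indexing='ij')
x, y = xi.copy(), eta.copy()
y[:, 0] = 0.2 * np.sin(np.pi * x[:, 0])   # deform the bottom edge
x, y = smooth_grid(x, y)                  # interior relaxes smoothly
```

The full elliptic system adds source terms to control cell height and near-orthogonality at boundaries, which plain Laplacian smoothing cannot do.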
Generic Schemes for Single-Molecule Kinetics. 2: Information Content of the Poisson Indicator.
Avila, Thomas R; Piephoff, D Evan; Cao, Jianshu
2017-08-24
Recently, we described a pathway analysis technique (paper 1) for analyzing generic schemes for single-molecule kinetics based upon the first-passage time distribution. Here, we employ this method to derive expressions for the Poisson indicator, a normalized measure of stochastic variation (essentially equivalent to the Fano factor and Mandel's Q parameter), for various renewal (i.e., memoryless) enzymatic reactions. We examine its dependence on substrate concentration, without assuming all steps follow Poissonian kinetics. Based upon fitting to the functional forms of the first two waiting time moments, we show that, to second order, the non-Poissonian kinetics are generally underdetermined but can be specified in certain scenarios. For an enzymatic reaction with an arbitrary intermediate topology, we identify a generic minimum of the Poisson indicator as a function of substrate concentration, which can be used to tune substrate concentration to the stochastic fluctuations and to estimate the largest number of underlying consecutive links in a turnover cycle. We identify a local maximum of the Poisson indicator (with respect to substrate concentration) for a renewal process as a signature of competitive binding, either between a substrate and an inhibitor or between multiple substrates. Our analysis explores the rich connections between Poisson indicator measurements and microscopic kinetic mechanisms.
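For a renewal process with waiting-time moments $\langle\tau\rangle$ and $\langle\tau^{2}\rangle$, the Poisson indicator can be written (standard renewal-theory result; our addition, not a formula quoted from the paper) as

$$P=\frac{\langle\tau^{2}\rangle-2\langle\tau\rangle^{2}}{\langle\tau\rangle^{2}},$$

which vanishes for Poissonian (exponential) waiting times, is negative for narrower distributions such as a chain of consecutive links, and is positive for broader, dispersed kinetics.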
NASA Astrophysics Data System (ADS)
Angelidis, Dionysios; Chawdhary, Saurabh; Sotiropoulos, Fotis
2016-11-01
A novel numerical method is developed for solving the 3D, unsteady, incompressible Navier-Stokes equations on locally refined fully unstructured Cartesian grids in domains with arbitrarily complex immersed boundaries. Owing to the utilization of the fractional step method on an unstructured Cartesian hybrid staggered/non-staggered grid layout, flux mismatch and pressure discontinuity issues are avoided and the divergence-free constraint is inherently satisfied to machine zero. Auxiliary/hanging nodes are used to facilitate the discretization of the governing equations. The second-order accuracy of the solver is ensured by using multi-dimension Lagrange interpolation operators and appropriate differencing schemes at the interface of regions with different levels of refinement. The sharp interface immersed boundary method is augmented with local near-boundary refinement to handle arbitrarily complex boundaries. The discrete momentum equation is solved with a matrix-free Newton-Krylov method, and a Krylov-subspace method is employed to solve the Poisson equation. The second-order accuracy of the proposed method on unstructured Cartesian grids is demonstrated by solving the Poisson equation with a known analytical solution. A number of three-dimensional laminar flow simulations of increasing complexity illustrate the ability of the method to handle flows across a range of Reynolds numbers and flow regimes. Laminar steady and unsteady flows past a sphere and the oblique vortex shedding from a circular cylinder mounted between two end walls demonstrate the accuracy, the efficiency and the smooth transition of scales and coherent structures across refinement levels. Large-eddy simulation (LES) past a miniature wind turbine rotor, parameterized using the actuator line approach, indicates the ability of the fully unstructured solver to simulate complex turbulent flows. Finally, a geometry-resolving LES of turbulent flow past a complete hydrokinetic turbine illustrates the potential of the method to simulate turbulent flows past geometrically complex bodies on locally refined meshes. In all the cases, the results are found to be in very good agreement with published data and savings in computational resources are achieved.
Design of high-perveance confined-flow guns for periodic-permanent-magnet-focused tubes
NASA Technical Reports Server (NTRS)
Stankiewicz, N.
1979-01-01
An approach to the design of high-perveance, low-compression guns is described in which confinement is used to stabilize the beam for subsequent periodic-permanent-magnet focusing. The computed results for two cases are presented. A magnetic boundary value problem was solved for the scalar potential, from which the axial magnetic field was computed. A solution was found by iterating between Poisson's equation and the electron trajectory calculations. Magnetic field values were varied in magnitude until a laminar beam with minimum scalloping was produced.
Constructing Precisely Computing Networks with Biophysical Spiking Neurons.
Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T
2015-07-15
While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks including irregular, Poisson-like spike times, and a tight balance between excitation and inhibition. These results significantly increase the biological plausibility of the spike-based approach to network computation, and uncover how several components of biological networks may work together to efficiently carry out computation.
Akay, Erdem; Yilmaz, Cagatay; Kocaman, Esat S; Turkmen, Halit S; Yildiz, Mehmet
2016-09-19
The significance of strain measurement is obvious for the analysis of Fiber-Reinforced Polymer (FRP) composites. Conventional strain measurement methods are sufficient for static testing in general. Nevertheless, if the requirements exceed the capabilities of these conventional methods, more sophisticated techniques are necessary to obtain strain data. Fiber Bragg Grating (FBG) sensors have many advantages for strain measurement over conventional ones. Thus, the present paper suggests a novel method for biaxial strain measurement using embedded FBG sensors during the fatigue testing of FRP composites. Poisson's ratio and its reduction were monitored on each loading cycle using the embedded FBG sensors and correlated with the fatigue stages, which were determined from variations in the applied fatigue loading and in temperature due to autogenous heating, in order to predict an oncoming failure of the continuous fiber-reinforced epoxy matrix composite specimens under fatigue loading. The results show that FBG sensor technology has a remarkable potential for monitoring the evolution of Poisson's ratio on a cycle-by-cycle basis, which can reliably be used towards tracking the fatigue stages of composite for structural health monitoring purposes.
Multiparameter linear least-squares fitting to Poisson data one count at a time
NASA Technical Reports Server (NTRS)
Wheaton, Wm. A.; Dunklee, Alfred L.; Jacobsen, Allan S.; Ling, James C.; Mahoney, William A.; Radocinski, Robert G.
1995-01-01
A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multicomponent linear model, with underlying physical count rates or fluxes which are to be estimated from the data. Despite its conceptual simplicity, the linear least-squares (LLSQ) method for solving this problem has generally been limited to situations in which the number n_i of counts in each bin i is not too small, conventionally more than 5-30. It seems to be widely believed that the failure of the LLSQ method for small counts is due to the failure of the Poisson distribution to be even approximately normal for small numbers. The cause is more accurately the strong anticorrelation between the data and the weights w_i in the weighted LLSQ method when √n_i instead of √n̄_i is used to approximate the uncertainties σ_i in the data, where n̄_i = E(n_i) is the expected value of n_i. We show in an appendix that, avoiding this approximation, the correct equations for the Poisson LLSQ (PLLSQ) problem are actually identical to those for the maximum likelihood estimate using the exact Poisson distribution. We apply the method to solve a problem in high-resolution gamma-ray spectroscopy for the JPL High-Resolution Gamma-Ray Spectrometer flown on HEAO 3. Systematic error in subtracting the strong, highly variable background encountered in the low-energy gamma-ray region can be significantly reduced by closely pairing source and background data in short segments. Significant results can be built up by weighted averaging of the net fluxes obtained from the subtraction of many individual source/background pairs. Extension of the approach to complex situations, with multiple cosmic sources and realistic background parameterizations, requires a means of efficiently fitting to data from single scans in the narrow (≈1.2 keV for HEAO 3) energy channels of a Ge spectrometer, where the expected number of counts obtained per scan may be very low. Such an analysis system is discussed and compared to the method previously used.
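A hedged sketch of the equivalence the appendix refers to: maximizing the exact Poisson likelihood for a multicomponent linear model n_i ~ Poisson((Ax)_i), on synthetic data rather than the HEAO 3 pipeline (all names and values hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

# counts n_i ~ Poisson((A x)_i), where x holds the physical component fluxes
rng = np.random.default_rng(2)
A = rng.uniform(0.1, 1.0, size=(40, 2))   # component response matrix
x_true = np.array([3.0, 1.5])
n = rng.poisson(A @ x_true)

def neg_loglik(x):
    mu = A @ x
    return np.sum(mu - n * np.log(mu))     # exact Poisson, up to log(n!) const

res = minimize(neg_loglik, x0=np.ones(2), bounds=[(1e-9, None)] * 2)
print(res.x)   # ~ x_true, even when many bins hold only 0 or 1 counts
```

Unlike weighted LLSQ with √n_i weights, this estimator has no data-weight anticorrelation and remains well behaved in the low-count regime.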
NASA Technical Reports Server (NTRS)
Hepner, T. E.; Meyers, J. F. (Inventor)
1985-01-01
A laser velocimeter covariance processor which calculates the autocovariance and cross-covariance functions for a turbulent flow field based on Poisson-sampled measurements in time from a laser velocimeter is described. The device will process a block of data that is up to 4096 data points in length and return a 512-point covariance function with 48-bit resolution, along with a 512-point histogram of the interarrival times which is used to normalize the covariance function. The device is designed to interface with and be controlled by a minicomputer, from which the data are received and to which the results are returned. A typical 4096-point computation takes approximately 1.5 seconds to receive the data, compute the covariance function, and return the results to the computer.
Solution of elliptic PDEs by fast Poisson solvers using a local relaxation factor
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
1986-01-01
A large class of two- and three-dimensional, nonseparable elliptic partial differential equations (PDEs) is presently solved by means of novel one-step (D'Yakonov-Gunn) and two-step (accelerated one-step) iterative procedures based on a local, discrete Fourier analysis. In addition to being easily implemented and applicable to a variety of boundary conditions, these procedures are found to be computationally efficient in numerical comparisons with other established methods, which lack two advantages of the present one: (1) insensitivity to grid cell size and aspect ratio, and (2) a convergence rate that can be estimated directly from the coefficients of the PDE being solved. The two-step procedure is numerically demonstrated to outperform the one-step procedure in the case of PDEs with variable coefficients.
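A sketch of the one-step idea under simplifying assumptions (2D, zero Dirichlet data, a constant-coefficient Laplacian as the fast-solvable operator, and the relaxation factor fixed at 1 rather than chosen locally); the coefficient field and sizes are hypothetical:

```python
import numpy as np
from scipy.fft import dstn, idstn

# One-step iteration for the variable-coefficient problem -div(c grad u) = f,
# preconditioned by a fast DST-based constant-coefficient Poisson solver.
N = 63
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)
X, Y = np.meshgrid(x, x, indexing='ij')
c = 1.0 + 0.5 * np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)  # c(x,y) > 0
f = np.ones((N, N))

k = np.arange(1, N + 1)
lam = (2 - 2 * np.cos(np.pi * k * h)) / h**2   # Dirichlet Laplacian eigenvalues
LAM = lam[:, None] + lam[None, :]

def poisson_solve(r):
    """Fast solver for -Laplace(u) = r with zero Dirichlet BCs (type-1 DST)."""
    return idstn(dstn(r, type=1) / LAM, type=1)

def apply_A(u):
    """-div(c grad u) with second-order conservative differences."""
    up = np.pad(u, 1)                 # zero Dirichlet boundary values
    cp = np.pad(c, 1, mode='edge')
    ce = 0.5 * (cp[1:-1, 1:-1] + cp[2:, 1:-1])    # face coefficients
    cw = 0.5 * (cp[1:-1, 1:-1] + cp[:-2, 1:-1])
    cn = 0.5 * (cp[1:-1, 1:-1] + cp[1:-1, 2:])
    cs = 0.5 * (cp[1:-1, 1:-1] + cp[1:-1, :-2])
    return ((ce + cw + cn + cs) * up[1:-1, 1:-1]
            - ce * up[2:, 1:-1] - cw * up[:-2, 1:-1]
            - cn * up[1:-1, 2:] - cs * up[1:-1, :-2]) / h**2

u = np.zeros((N, N))
for _ in range(50):
    u += poisson_solve(f - apply_A(u))   # converges since 0.5 <= c <= 1.5
print(h * np.linalg.norm(f - apply_A(u)))
```

The local relaxation factor of the paper sharpens this basic iteration by scaling the correction point by point according to the PDE coefficients.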
Confidence limits for Neyman type A-distributed events.
Morand, Josselin; Deperas-Standylo, Joanna; Urbanik, Witold; Moss, Raymond; Hachem, Sabet; Sauerwein, Wolfgang; Wojcik, Andrzej
2008-01-01
The Neyman type A distribution, a generalised, 'contagious' Poisson distribution, finds application in a number of disciplines such as biology, physics and economics. In radiation biology, it best describes the distribution of chromosomal aberrations in cells that were exposed to neutrons, alpha radiation or heavy ions. Intriguingly, no method has been developed for the calculation of confidence limits (CLs) of Neyman type A-distributed events. Here, an algorithm to calculate the 95% CL of Neyman type A-distributed events is presented. Although it has been developed in response to the requirements of radiation biology, it can find application in other fields of research. The algorithm has been implemented in a PC-based computer program that can be downloaded, free of charge, from www.pu.kielce.pl/ibiol/neta.
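The distribution itself is easy to simulate, since it is a Poisson number of clusters each contributing an independent Poisson number of events. The Monte Carlo sketch below, with assumed parameters and a simple percentile interval of the count distribution, illustrates the ingredients (it is not the paper's CL algorithm):

```python
import numpy as np

def neyman_type_a_sample(lam, phi, size, rng):
    """N ~ Poisson(lam) clusters; each cluster adds Poisson(phi) events.
    The sum of n iid Poisson(phi) variables is Poisson(n * phi)."""
    n_clusters = rng.poisson(lam, size)
    return rng.poisson(n_clusters * phi)

rng = np.random.default_rng(3)
counts = neyman_type_a_sample(lam=2.0, phi=1.5, size=200_000, rng=rng)
lo, hi = np.percentile(counts, [2.5, 97.5])
print(f"central 95% interval of the count: [{lo:.0f}, {hi:.0f}]")
```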
Influence of nonelectrostatic ion-ion interactions on double-layer capacitance
NASA Astrophysics Data System (ADS)
Zhao, Hui
2012-11-01
Recently a Poisson-Helmholtz-Boltzmann (PHB) model [Bohinc et al., Phys. Rev. E 85, 031130 (2012)] was developed by accounting for solvent-mediated nonelectrostatic ion-ion interactions. Nonelectrostatic interactions are described by a Yukawa-like pair potential. In the present work, we modify the PHB model by adding steric effects (finite ion size) into the free energy to derive governing equations. The modified PHB model is capable of capturing both ion specificity and ion crowding. This modified model is then employed to study the capacitance of the double layer. More specifically, we focus on the influence of nonelectrostatic ion-ion interactions on charging a double layer near a flat surface in the presence of steric effects. We numerically compute the differential capacitance as a function of the voltage under various conditions. At small voltages and low salt concentrations (dilute solution), we find that the predictions from the modified PHB model are the same as those from the classical Poisson-Boltzmann theory, indicating that nonelectrostatic ion-ion interactions and steric effects are negligible. At moderate voltages, nonelectrostatic ion-ion interactions play an important role in determining the differential capacitance. Generally speaking, nonelectrostatic interactions decrease the capacitance because of additional nonelectrostatic repulsion among excess counterions inside the double layer. However, increasing the voltage gradually favors steric effects, which induce a condensed layer with crowding of counterions near the electrode. Accordingly, the predictions from the modified PHB model collapse onto those computed by the modified Poisson-Boltzmann theory considering steric effects alone. Finally, theoretical predictions are compared and favorably agree with experimental data, in particular, in concentrated solutions, leading one to conclude that the modified PHB model adequately predicts the diffuse-charge dynamics of the double layer with ion specificity and steric effects.
Xu, Jingjie; Xie, Yan; Lu, Benzhuo; Zhang, Linbo
2016-08-25
The Debye-Hückel limiting law is used to study the binding kinetics of a substrate-enzyme system as well as to estimate the reaction rate of an electrostatically steered diffusion-controlled reaction process. It is based on a linearized Poisson-Boltzmann model and is known for its accurate predictions in dilute solutions. However, the substrate and product particles are in nonequilibrium states and are possibly charged, and their contributions to the total electrostatic field cannot be explicitly studied in the Poisson-Boltzmann model. Hence the influences of substrate and product on the reaction rate coefficient were not known. In this work, we consider all the charged species, including the charged substrate, product, and mobile salt ions, in a Poisson-Nernst-Planck model, and then compare the results with previous work. The results indicate that both the charged substrate and product can significantly influence the reaction rate coefficient with different behaviors under different setups of computational conditions. It is interesting to find that when substrate and product are both considered, under an overall neutral boundary condition for all the bulk charged species, the computed reaction rate kinetics recovers a similar Debye-Hückel limiting law again. This phenomenon implies that the charged product counteracts the influence of charged substrate on the reaction rate coefficient. Our analysis discloses the fact that the total charge concentration of substrate and product, though in a nonequilibrium state individually, obeys an equilibrium Boltzmann distribution, and therefore contributes as a normal charged ion species to ionic strength. This explains why the Debye-Hückel limiting law still works in a considerable range of conditions even though the effects of charged substrate and product particles are not specifically and explicitly considered in the theory.
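The limiting-law behavior referred to here is the classical kinetic salt effect (a standard Debye-Hückel result, stated by us for context): for reactant charges $z_A$ and $z_B$ in a dilute solution of ionic strength $I$,

$$\log_{10} k = \log_{10} k_0 + 2 A\, z_A z_B \sqrt{I},$$

with $A \approx 0.51\ (\mathrm{mol/L})^{-1/2}$ for water at 25 °C. Like-charged reactants thus speed up with added salt, while oppositely charged reactants slow down as screening weakens their attraction.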
Wan, Wai-Yin; Chan, Jennifer S K
2009-08-01
For time series of count data, correlated measurements, clustering as well as excessive zeros occur simultaneously in biomedical applications. Ignoring such effects might contribute to misleading treatment outcomes. A generalized mixture Poisson geometric process (GMPGP) model and a zero-altered mixture Poisson geometric process (ZMPGP) model are developed from the geometric process model, which was originally developed for modelling positive continuous data and was extended to handle count data. These models are motivated by evaluating the trend development of new tumour counts for bladder cancer patients as well as by identifying useful covariates which affect the count level. The models are implemented using a Bayesian method with Markov chain Monte Carlo (MCMC) algorithms and are assessed using the deviance information criterion (DIC).
Poisson statistics of PageRank probabilities of Twitter and Wikipedia networks
NASA Astrophysics Data System (ADS)
Frahm, Klaus M.; Shepelyansky, Dima L.
2014-04-01
We use the methods of quantum chaos and Random Matrix Theory for analysis of statistical fluctuations of PageRank probabilities in directed networks. In this approach the effective energy levels are given by a logarithm of PageRank probability at a given node. After the standard energy level unfolding procedure we establish that the nearest spacing distribution of PageRank probabilities is described by the Poisson law typical for integrable quantum systems. Our studies are done for the Twitter network and three networks of Wikipedia editions in English, French and German. We argue that due to the absence of level repulsion the PageRank order of nearby nodes can be easily interchanged. The obtained Poisson law implies that the nearby PageRank probabilities fluctuate as random independent variables.
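A sketch of the unfolding-plus-spacings check described above, run on synthetic heavy-tailed probabilities as a stand-in for real PageRank vectors (the study itself used Twitter and Wikipedia link data):

```python
import numpy as np

rng = np.random.default_rng(4)
# synthetic "energy levels": logs of heavy-tailed PageRank-like probabilities
E = np.sort(np.log(rng.pareto(1.5, size=5000) + 1e-9))

# unfolding: fit a smooth (polynomial) curve to the cumulative level count,
# so the transformed sequence has unit mean spacing
counts = np.arange(1, len(E) + 1)
coef = np.polyfit(E, counts, deg=7)
s = np.diff(np.polyval(coef, E))
s = s[s > 0]
s /= s.mean()

# compare the spacing histogram with the Poisson law P(s) = exp(-s)
hist, edges = np.histogram(s, bins=40, range=(0, 4), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - np.exp(-mid))))   # small when statistics are Poisson
```

Independent levels, like the synthetic draw here, show the exponential spacing law; level repulsion (as in GOE statistics) would suppress small spacings instead.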
Study of photon correlation techniques for processing of laser velocimeter signals
NASA Technical Reports Server (NTRS)
Mayo, W. T., Jr.
1977-01-01
The objective was to provide the theory and a system design for a new type of photon counting processor for low-level dual-scatter laser velocimeter (LV) signals which would be capable of both the first-order measurements of mean flow and turbulence intensity and also the second-order time statistics: cross-correlation, autocorrelation, and related spectra. A general Poisson process model for low-level LV signals and noise which is valid from the photon-resolved regime all the way to the limiting case of nonstationary Gaussian noise was used. Computer simulation algorithms and higher-order statistical moment analysis of Poisson processes were derived and applied to the analysis of photon correlation techniques. A system design using a unique dual correlate-and-subtract frequency discriminator technique is postulated and analyzed. Expectation analysis indicates that the objective measurements are feasible.
Numerical calibration of the stable poisson loaded specimen
NASA Technical Reports Server (NTRS)
Ghosn, Louis J.; Calomino, Anthony M.; Brewer, Dave N.
1992-01-01
An analytical calibration of the Stable Poisson Loaded (SPL) specimen is presented. The specimen configuration is similar to the ASTM E-561 compact-tension specimen with displacement controlled wedge loading used for R-Curve determination. The crack mouth opening displacements (CMODs) are produced by the diametral expansion of an axially compressed cylindrical pin located in the wake of a machined notch. Due to the unusual loading configuration, a three-dimensional finite element analysis was performed with gap elements simulating the contact between the pin and specimen. In this report, stress intensity factors, CMODs, and crack displacement profiles are reported for different crack lengths and different contacting conditions. It was concluded that the computed stress intensity factor decreases sharply with increasing crack length, thus making the SPL specimen configuration attractive for fracture testing of brittle, high modulus materials.
Efficient statistical mapping of avian count data
Royle, J. Andrew; Wikle, C.K.
2005-01-01
We develop a spatial modeling framework for count data that is efficient to implement in high-dimensional prediction problems. We consider spectral parameterizations for the spatially varying mean of a Poisson model. The spectral parameterization of the spatial process is very computationally efficient, enabling effective estimation and prediction in large problems using Markov chain Monte Carlo techniques. We apply this model to creating avian relative abundance maps from North American Breeding Bird Survey (BBS) data. Variation in the ability of observers to count birds is modeled as spatially independent noise, resulting in over-dispersion relative to the Poisson assumption. This approach represents an improvement over existing approaches used for spatial modeling of BBS data, which are either inefficient for continental scale modeling and prediction or fail to accommodate important distributional features of count data, thus leading to inaccurate accounting of prediction uncertainty.
Analytical stress intensity solution for the Stable Poisson Loaded specimen
NASA Technical Reports Server (NTRS)
Ghosn, Louis J.; Calomino, Anthony M.; Brewer, David N.
1993-01-01
An analytical calibration of the Stable Poisson Loaded (SPL) specimen is presented. The specimen configuration is similar to the ASTM E-561 compact-tension specimen with displacement controlled wedge loading used for R-curve determination. The crack mouth opening displacements (CMODs) are produced by the diametral expansion of an axially compressed cylindrical pin located in the wake of a machined notch. Due to the unusual loading configuration, a three-dimensional finite element analysis was performed with gap elements simulating the contact between the pin and specimen. In this report, stress intensity factors, CMODs, and crack displacement profiles are reported for different crack lengths and different contacting conditions. It was concluded that the computed stress intensity factor decreases sharply with increasing crack length, thus making the SPL specimen configuration attractive for fracture testing of brittle, high modulus materials.
DISCRETE COMPOUND POISSON PROCESSES AND TABLES OF THE GEOMETRIC POISSON DISTRIBUTION.
A concise summary of the salient properties of discrete Poisson processes, with emphasis on comparing the geometric and logarithmic Poisson processes. The...the geometric Poisson process are given for 176 sets of parameter values. New discrete compound Poisson processes are also introduced. These...processes have properties that are particularly relevant when the summation of several different Poisson processes is to be analyzed. This study provides the
Ionic channels: natural nanotubes described by the drift diffusion equations
NASA Astrophysics Data System (ADS)
Eisenberg, Bob
2000-05-01
Ionic channels are a large class of proteins with holes down their middle that control a wide range of cellular functions important in health and disease. Ionic channels can be analysed using a combination of the Poisson and drift diffusion equations familiar from computational electronics because their behavior is dominated by the electrical properties of their simple structure.
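The drift-diffusion (Poisson-Nernst-Planck) system referred to here couples the electrostatic potential $\phi$ to the concentration $c_i$ of each ionic species (standard steady-state form; our addition):

$$-\nabla\cdot(\epsilon\nabla\phi)=\sum_i z_i e\,c_i + \rho_{\mathrm{fixed}},\qquad \nabla\cdot\Big[D_i\Big(\nabla c_i+\frac{z_i e}{k_B T}\,c_i\nabla\phi\Big)\Big]=0,$$

the same equations solved for electrons and holes in semiconductor device simulation, with the fixed protein charges playing the role of doping.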
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooper, R. C.; Bruno, Giovanni; Onel, Yener
Microstructural changes in porous cordierite caused by machining were characterized using microtensile testing, X-ray computed tomography and scanning electron microscopy. Young's moduli and Poisson's ratios were determined on ~215-380 μm thick machined samples by combining digital image correlation and microtensile loading. The results provide evidence for an increase in microcrack density due to machining of the thin samples extracted from diesel particulate filter honeycombs.
Multimedia Network Design Study
1989-09-30
manipulation and analysis of the equations involved, thereby providing the application of the great range of powerful mathematical optimization...be treated by this analysis. First, all arrivals to the network have the Poisson distribution, and separate traffic classes may have separate arrival...different for open and closed networks, so these two situations will be treated separately in the following subsections. 2.3.1 The Computational Process in