Sample records for Poisson error structure

  1. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    PubMed

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
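The error-propagation question studied above can be illustrated with a minimal sketch (not the paper's method, which treats 2D PIV domains and various boundary conditions): a 1D finite-difference Poisson solve with Dirichlet boundary conditions, where noise is added to the source term to mimic measurement error, showing that the inverse Laplacian bounds and attenuates the input error.

```python
import numpy as np

# Illustrative sketch only: solve -u'' = f on (0,1) with u(0)=u(1)=0,
# once with the exact source and once with added noise, to see how
# input error propagates through a Poisson solve.
rng = np.random.default_rng(0)
n = 99                         # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Standard second-order finite-difference Laplacian (tridiagonal).
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

f = np.pi**2 * np.sin(np.pi * x)            # exact source; u = sin(pi x)
u_exact = np.linalg.solve(A, f)

eps = 0.05
f_noisy = f + eps * rng.standard_normal(n)  # simulated measurement error
u_noisy = np.linalg.solve(A, f_noisy)

# The inverse Laplacian smooths the noise: output error << input error.
in_err = np.max(np.abs(f_noisy - f))
out_err = np.max(np.abs(u_noisy - u_exact))
print(in_err, out_err)
```

For this operator the maximum norm of the discrete inverse Laplacian is bounded (by 1/8 on the unit interval), so the solution error is provably smaller than the source error, which is the flavor of bound the paper derives for the pressure field.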

  2. Fuzzy classifier based support vector regression framework for Poisson ratio determination

    NASA Astrophysics Data System (ADS)

    Asoodeh, Mojtaba; Bagheripour, Parisa

    2013-09-01

Poisson ratio is considered one of the most important rock mechanical properties of hydrocarbon reservoirs. Determination of this parameter through laboratory measurement is time-, cost-, and labor-intensive. Furthermore, laboratory measurements do not provide continuous data along the reservoir intervals. Hence, a fast, accurate, and inexpensive way of determining Poisson ratio which produces continuous data over the whole reservoir interval is desirable. For this purpose, the support vector regression (SVR) method based on statistical learning theory (SLT) was employed as a supervised learning algorithm to estimate Poisson ratio from conventional well log data. SVR is capable of accurately extracting the implicit knowledge contained in conventional well logs and converting the gained knowledge into Poisson ratio data. The structural risk minimization (SRM) principle, which is embedded in the SVR structure in addition to the empirical risk minimization (ERM) principle, provides a robust model for finding a quantitative formulation between conventional well log data and Poisson ratio. Although satisfying results were obtained from an individual SVR model, it had flaws of overestimation at low Poisson ratios and underestimation at high Poisson ratios. These errors were eliminated through implementation of fuzzy classifier based SVR (FCBSVR). The FCBSVR significantly improved accuracy of the final prediction. This strategy was successfully applied to data from carbonate reservoir rocks of an Iranian oil field. Results indicated that SVR-predicted Poisson ratio values are in good agreement with measured values.

  3. A multiscale filter for noise reduction of low-dose cone beam projections.

    PubMed

    Yao, Weiguang; Farr, Jonathan B

    2015-08-21

The Poisson or compound Poisson process governs the randomness of photon fluence in cone beam computed tomography (CBCT) imaging systems. The probability density function depends on the mean (noiseless) of the fluence at a certain detector. This dependence indicates the natural requirement of multiscale filters to smooth noise while preserving structures of the imaged object on the low-dose cone beam projection. In this work, we used a Gaussian filter, exp(-x²/(2σ_f²)), as the multiscale filter to de-noise the low-dose cone beam projections. We analytically obtained the expression of σ_f, which represents the scale of the filter, by minimizing the local noise-to-signal ratio. We analytically derived the variance of residual noise from the Poisson or compound Poisson processes after Gaussian filtering. From the derived analytical form of the variance of residual noise, the optimal σ_f² is proved to be proportional to the noiseless fluence and modulated by local structure strength expressed as the linear fitting error of the structure. A strategy was used to obtain a reliable linear fitting error: smoothing the projection along the longitudinal direction to calculate the linear fitting error along the lateral direction and vice versa. The performance of our multiscale filter was examined on low-dose cone beam projections of a Catphan phantom and a head-and-neck patient. After applying the filter to the Catphan phantom projections scanned with a pulse time of 4 ms, the number of visible line pairs was similar to that scanned with 16 ms, and the contrast-to-noise ratio of the inserts was about 64% higher on average than that scanned with 16 ms. For the simulated head-and-neck patient projections with a pulse time of 4 ms, the visibility of soft tissue structures in the patient was comparable to that scanned with 20 ms. The image processing took less than 0.5 s per projection with 1024 × 768 pixels.
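A fixed-scale sketch of the underlying trade-off (the paper's filter adapts σ_f per pixel from the local structure, which is not reproduced here): Gaussian smoothing of Poisson-distributed counts reduces the residual noise variance roughly in proportion to 1/σ_f, at the cost of spatial blur.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Illustration on a flat (structure-free) signal: for white Poisson noise,
# the residual variance after Gaussian filtering is about
# fluence / (2 * sqrt(pi) * sigma_f), so it shrinks as sigma_f grows.
rng = np.random.default_rng(1)
fluence = 100.0                              # flat noiseless fluence
counts = rng.poisson(fluence, 10000).astype(float)

variances = []
for sigma_f in (1.0, 2.0, 4.0):
    smoothed = gaussian_filter1d(counts, sigma_f)
    variances.append(smoothed.var())
print(variances)
```

On real projections the choice of σ_f must also respect structure, which is exactly why the paper modulates σ_f² by the local linear fitting error.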

  4. A multiscale filter for noise reduction of low-dose cone beam projections

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Farr, Jonathan B.

    2015-08-01

The Poisson or compound Poisson process governs the randomness of photon fluence in cone beam computed tomography (CBCT) imaging systems. The probability density function depends on the mean (noiseless) of the fluence at a certain detector. This dependence indicates the natural requirement of multiscale filters to smooth noise while preserving structures of the imaged object on the low-dose cone beam projection. In this work, we used a Gaussian filter, exp(-x²/(2σ_f²)), as the multiscale filter to de-noise the low-dose cone beam projections. We analytically obtained the expression of σ_f, which represents the scale of the filter, by minimizing the local noise-to-signal ratio. We analytically derived the variance of residual noise from the Poisson or compound Poisson processes after Gaussian filtering. From the derived analytical form of the variance of residual noise, the optimal σ_f² is proved to be proportional to the noiseless fluence and modulated by local structure strength expressed as the linear fitting error of the structure. A strategy was used to obtain a reliable linear fitting error: smoothing the projection along the longitudinal direction to calculate the linear fitting error along the lateral direction and vice versa. The performance of our multiscale filter was examined on low-dose cone beam projections of a Catphan phantom and a head-and-neck patient. After applying the filter to the Catphan phantom projections scanned with a pulse time of 4 ms, the number of visible line pairs was similar to that scanned with 16 ms, and the contrast-to-noise ratio of the inserts was about 64% higher on average than that scanned with 16 ms. For the simulated head-and-neck patient projections with a pulse time of 4 ms, the visibility of soft tissue structures in the patient was comparable to that scanned with 20 ms. The image processing took less than 0.5 s per projection with 1024 × 768 pixels.

  5. A test of inflated zeros for Poisson regression models.

    PubMed

    He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan

    2017-01-01

Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
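A crude version of the idea of testing for inflated zeros under the Poisson model can be sketched as follows (this is an illustration, not the paper's exact test statistic): compare the observed zero fraction with exp(-λ̂) implied by a fitted Poisson model, normalised by a rough standard error.

```python
import numpy as np

# Sketch only: z-type statistic for excess zeros under a Poisson fit.
# The standard error below ignores the estimation of lambda, so the
# statistic is illustrative rather than exactly standard normal.
rng = np.random.default_rng(2)

def zero_inflation_z(y):
    lam = y.mean()
    p0_hat = np.mean(y == 0)          # observed zero fraction
    p0_exp = np.exp(-lam)             # Poisson-implied zero probability
    se = np.sqrt(p0_exp * (1 - p0_exp) / len(y))
    return (p0_hat - p0_exp) / se

y_pois = rng.poisson(2.0, 5000)                      # no inflation
y_zip = np.where(rng.random(5000) < 0.3, 0,          # 30% structural zeros
                 rng.poisson(2.0, 5000))
print(zero_inflation_z(y_pois), zero_inflation_z(y_zip))
```

Even this crude statistic separates the two cases clearly; the paper's contribution is a properly calibrated test with controlled type I error.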

  6. A comparison of different statistical methods analyzing hypoglycemia data using bootstrap simulations.

    PubMed

    Jiang, Honghua; Ni, Xiao; Huster, William; Heilmann, Cory

    2015-01-01

Hypoglycemia has long been recognized as a major barrier to achieving normoglycemia with intensive diabetic therapies. It is a common safety concern for diabetes patients. Therefore, it is important to apply appropriate statistical methods when analyzing hypoglycemia data. Here, we carried out bootstrap simulations to investigate the performance of four commonly used statistical models (Poisson, negative binomial, analysis of covariance [ANCOVA], and rank ANCOVA) based on the data from a diabetes clinical trial. The zero-inflated Poisson (ZIP) model and the zero-inflated negative binomial (ZINB) model were also evaluated. Simulation results showed that the Poisson model inflated type I error, while the negative binomial model was overly conservative. However, after adjusting for dispersion, both the Poisson and negative binomial models yielded type I errors that were slightly inflated but close to the nominal level, together with reasonable power. The ANCOVA model provided reasonable control of type I error. The rank ANCOVA model was associated with the greatest power and with reasonable control of type I error. Inflated type I error was observed with the ZIP and ZINB models.
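The mechanism behind the Poisson model's inflated type I error can be shown with a minimal simulation (a sketch, not the paper's trial-based bootstrap): a two-group test whose standard error assumes Poisson variance (var = mean) rejects far too often when the counts are actually over-dispersed (negative binomial).

```python
import numpy as np

# Minimal sketch: type I error of a Poisson-variance z-test under the null
# of equal group means, for Poisson data vs over-dispersed (NB) data.
rng = np.random.default_rng(8)
n, reps, mean, disp = 50, 4000, 4.0, 1.0   # disp controls over-dispersion

def reject_rate(draw):
    x, y = draw((reps, n)), draw((reps, n))
    mx, my = x.mean(axis=1), y.mean(axis=1)
    z = (mx - my) / np.sqrt(mx / n + my / n)  # SE assumes var = mean
    return np.mean(np.abs(z) > 1.96)

pois = lambda shape: rng.poisson(mean, shape)
# Negative binomial via a gamma-mixed Poisson: var = mean + disp * mean^2.
negbin = lambda shape: rng.poisson(rng.gamma(1.0 / disp, mean * disp, shape))

type1_pois = reject_rate(pois)
type1_negbin = reject_rate(negbin)
print(type1_pois, type1_negbin)   # near 0.05 vs far above 0.05
```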

  7. Statistical error in simulations of Poisson processes: Example of diffusion in solids

    NASA Astrophysics Data System (ADS)

    Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.

    2016-08-01

Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method, but is valid in the context of simulation of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
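The general point, independent of the simulation method, can be checked numerically (a sketch, not the paper's conductivity expression): for a Poisson process observed long enough to collect N events on average, the relative statistical error of the estimated rate scales as 1/sqrt(N).

```python
import numpy as np

# Sketch: estimate a Poisson rate from event counts over time T and
# compare the empirical relative error with the 1/sqrt(N) prediction.
rng = np.random.default_rng(3)
rate, T = 5.0, 200.0                      # true rate, observation time
counts = rng.poisson(rate * T, size=20000)
rate_hat = counts / T

rel_err = rate_hat.std() / rate           # empirical relative error
expected = 1.0 / np.sqrt(rate * T)        # 1/sqrt(N), N = mean event count
print(rel_err, expected)
```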

  8. Poisson-Based Inference for Perturbation Models in Adaptive Spelling Training

    ERIC Educational Resources Information Center

    Baschera, Gian-Marco; Gross, Markus

    2010-01-01

    We present an inference algorithm for perturbation models based on Poisson regression. The algorithm is designed to handle unclassified input with multiple errors described by independent mal-rules. This knowledge representation provides an intelligent tutoring system with local and global information about a student, such as error classification…

  9. Partial-Interval Estimation of Count: Uncorrected and Poisson-Corrected Error Levels

    ERIC Educational Resources Information Center

    Yoder, Paul J.; Ledford, Jennifer R.; Harbison, Amy L.; Tapp, Jon T.

    2018-01-01

    A simulation study that used 3,000 computer-generated event streams with known behavior rates, interval durations, and session durations was conducted to test whether the main and interaction effects of true rate and interval duration affect the error level of uncorrected and Poisson-transformed (i.e., "corrected") count as estimated by…

  10. A Bayesian approach to parameter and reliability estimation in the Poisson distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
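The gamma-prior case admits a closed-form posterior, and the mean-squared error comparison the paper performs via Monte Carlo can be sketched directly (an illustration under an assumed gamma(a, b) prior, not a reproduction of the paper's study):

```python
import numpy as np

# Gamma(a, b) prior on the Poisson mean is conjugate: after observing
# x_1..x_n, the posterior is gamma(a + sum x, b + n), so the Bayes
# estimator (posterior mean) is (a + sum x) / (b + n). Compare its MSE
# with the ML (and MVU) estimator, the sample mean, over the prior.
rng = np.random.default_rng(4)
a, b = 4.0, 2.0                      # prior shape and rate (assumed)
n, reps = 5, 20000                   # small sample, many replications

lam = rng.gamma(a, 1.0 / b, reps)             # true means drawn from prior
x = rng.poisson(lam[:, None], (reps, n))

ml = x.mean(axis=1)                            # ML / MVU estimator
bayes = (a + x.sum(axis=1)) / (b + n)          # posterior mean

mse_ml = np.mean((ml - lam) ** 2)
mse_bayes = np.mean((bayes - lam) ** 2)
print(mse_ml, mse_bayes)
```

As the abstract reports, the Bayes estimator's mean-squared error is appreciably smaller when the prior is informative and the sample is small.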

  11. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…

  12. Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.

    PubMed

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the solutions which help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, standard errors of estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the minimum standard errors. The results showed that the mixed zero-inflated Poisson model provided the best fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.

  13. A strategy for reducing gross errors in the generalized Born models of implicit solvation

    PubMed Central

    Onufriev, Alexey V.; Sigalov, Grigori

    2011-01-01

The “canonical” generalized Born (GB) formula [W. C. Still, A. Tempczyk, R. C. Hawley, and T. Hendrickson, J. Am. Chem. Soc. 112, 6127 (1990)] is known to provide accurate estimates for total electrostatic solvation energies ΔGel of biomolecules if the corresponding effective Born radii are accurate. Here we show that even if the effective Born radii are perfectly accurate, the canonical formula still exhibits a significant number of gross errors (errors larger than 2 kBT relative to a numerical Poisson equation reference) in pairwise interactions between individual atomic charges. Analysis of exact analytical solutions of the Poisson equation (PE) for several idealized nonspherical geometries reveals two distinct spatial modes of the PE solution; these modes are also found in realistic biomolecular shapes. The canonical GB Green function misses one of two modes seen in the exact PE solution, which explains the observed gross errors. To address the problem and reduce gross errors of the GB formalism, we have used exact PE solutions for idealized nonspherical geometries to suggest an alternative analytical Green function to replace the canonical GB formula. The proposed functional form is mathematically nearly as simple as the original, but depends not only on the effective Born radii but also on their gradients, which allows for better representation of details of nonspherical molecular shapes. In particular, the proposed functional form captures both modes of the PE solution seen in nonspherical geometries. Tests on realistic biomolecular structures ranging from small peptides to medium size proteins show that the proposed functional form reduces gross pairwise errors in all cases, with the amount of reduction varying from more than an order of magnitude for small structures to a factor of 2 for the largest ones. PMID:21528947

  14. Error-Rate Bounds for Coded PPM on a Poisson Channel

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.

  15. A comparison between Poisson and zero-inflated Poisson regression models with an application to number of black spots in Corriedale sheep

    PubMed Central

    Naya, Hugo; Urioste, Jorge I; Chang, Yu-Mei; Rodrigues-Motta, Mariana; Kremer, Roberto; Gianola, Daniel

    2008-01-01

    Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP) models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age related processes seem to drive the number of dark spots in this breed of sheep. PMID:18558072

  16. On-the-fly Numerical Surface Integration for Finite-Difference Poisson-Boltzmann Methods.

    PubMed

    Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2011-11-01

Most implicit solvation models require the definition of a molecular surface as the interface that separates the solute in atomic detail from the solvent approximated as a continuous medium. Commonly used surface definitions include the solvent accessible surface (SAS), the solvent excluded surface (SES), and the van der Waals surface. In this study, we present an efficient numerical algorithm to compute the SES and SAS areas to facilitate the applications of finite-difference Poisson-Boltzmann methods in biomolecular simulations. Unlike previous numerical approaches, our algorithm is physics-inspired and intimately coupled to the finite-difference Poisson-Boltzmann methods to take full advantage of their existing data structures. Our analysis shows that the algorithm can achieve very good agreement with the analytical method in the calculation of the SES and SAS areas. Specifically, in our comprehensive test of 1,555 molecules, the average unsigned relative error is 0.27% in the SES area calculations and 1.05% in the SAS area calculations at a grid spacing of 0.5 Å. In addition, a systematic correction analysis can be used to improve the accuracy for the coarse-grid SES area calculations, with the average unsigned relative error in the SES areas reduced to 0.13%. These validation studies indicate that the proposed algorithm can be applied to biomolecules over a broad range of sizes and structures. Finally, the numerical algorithm can also be adapted to evaluate the surface integral of either a vector field or a scalar field defined on the molecular surface for additional solvation energetics and force calculations.

  17. Minimum risk wavelet shrinkage operator for Poisson image denoising.

    PubMed

    Cheng, Wu; Hirakawa, Keigo

    2015-05-01

The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients; the modeling of coefficients is enabled by the Skellam distribution analysis. We extend these results by solving for shrinkage operators for Skellam that minimize the risk functional in the multiscale Poisson image denoising setting. The minimum risk shrinkage operator of this kind effectively produces denoised wavelet coefficients with minimum attainable L2 error.

  18. NEWTPOIS - NEWTON POISSON DISTRIBUTION PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

The cumulative Poisson distribution program, NEWTPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714), can be used independently of one another. NEWTPOIS determines percentiles for gamma distributions with integer shape parameters and calculates percentiles for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. NEWTPOIS determines the Poisson parameter (lambda), that is, the mean (or expected) number of events occurring in a given unit of time, area, or space. Given that the user already knows the cumulative probability for a specific number of occurrences (n), it is usually a simple matter of substitution into the Poisson distribution summation to arrive at lambda. However, direct calculation of the Poisson parameter becomes difficult for small positive values of n and unmanageable for large values. NEWTPOIS uses Newton's iteration method to extract lambda from the initial value condition of the Poisson distribution where n=0, taking successive estimations until some user-specified error term (epsilon) is reached. The NEWTPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting epsilon, n, and the cumulative probability of the occurrence of n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 30K. NEWTPOIS was developed in 1988.
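The Newton iteration the program implements can be sketched in a few lines (a Python sketch of the idea; the original NEWTPOIS is a C program, and the starting guess below is an assumption): given n and the cumulative probability p = P(X ≤ n), solve for lambda using the closed-form derivative of the Poisson CDF with respect to lambda.

```python
import math

# Poisson CDF P(X <= n; lam), summed stably term by term.
def poisson_cdf(n, lam):
    term = total = math.exp(-lam)
    for k in range(1, n + 1):
        term *= lam / k
        total += term
    return total

# Newton's method: d/dlam P(X <= n; lam) = -exp(-lam) * lam^n / n!
def newtpois(n, p, eps=1e-10):
    lam = n + 1.0                     # simple starting guess (assumption)
    for _ in range(100):
        f = poisson_cdf(n, lam) - p
        fprime = -math.exp(-lam) * lam**n / math.factorial(n)
        step = f / fprime
        lam -= step
        if abs(step) < eps:
            break
    return lam

lam = newtpois(5, 0.5)                # lambda whose CDF at n=5 equals 0.5
print(lam, poisson_cdf(5, lam))
```

For n = 0 this reduces to solving exp(-lambda) = p, the "initial value condition" the abstract mentions.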

  19. A discontinuous Poisson-Boltzmann equation with interfacial jump: homogenisation and residual error estimate.

    PubMed

    Fellner, Klemens; Kovtunenko, Victor A

    2016-01-01

    A nonlinear Poisson-Boltzmann equation with inhomogeneous Robin type boundary conditions at the interface between two materials is investigated. The model describes the electrostatic potential generated by a vector of ion concentrations in a periodic multiphase medium with dilute solid particles. The key issue stems from interfacial jumps, which necessitate discontinuous solutions to the problem. Based on variational techniques, we derive the homogenisation of the discontinuous problem and establish a rigorous residual error estimate up to the first-order correction.

  20. On a Poisson homogeneous space of bilinear forms with a Poisson-Lie action

    NASA Astrophysics Data System (ADS)

    Chekhov, L. O.; Mazzocco, M.

    2017-12-01

Let 𝒜 be the space of bilinear forms on C^N with defining matrices A, endowed with a quadratic Poisson structure of reflection equation type. The paper begins with a short description of previous studies of the structure, and then this structure is extended to systems of bilinear forms whose dynamics is governed by the natural action A ↦ B A B^T of the Poisson-Lie group GL_N on 𝒜. A classification is given of all possible quadratic brackets on (B, A) ∈ GL_N × 𝒜 preserving the Poisson property of the action, thus endowing 𝒜 with the structure of a Poisson homogeneous space. Besides the product Poisson structure on GL_N × 𝒜, there are two other (mutually dual) structures, which (unlike the product Poisson structure) admit reductions by the Dirac procedure to a space of bilinear forms with block upper triangular defining matrices. Further generalisations of this construction are considered, to triples (B, C, A) ∈ GL_N × GL_N × 𝒜 with the Poisson action A ↦ B A C^T, and it is shown that 𝒜 then acquires the structure of a Poisson symmetric space. Generalisations to chains of transformations and to the quantum and quantum affine algebras are investigated, as well as the relations between constructions of Poisson symmetric spaces and the Poisson groupoid. Bibliography: 30 titles.

  1. Differential expression analysis for RNAseq using Poisson mixed models

    PubMed Central

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny

    2017-01-01

Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. PMID:28369632
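The over-dispersion that motivates the random-effects terms can be demonstrated with a minimal simulation (a sketch of the generic phenomenon, not of MACAU's model, which adds a second random effect for sample non-independence): a Poisson rate with a log-normal random effect yields counts whose variance exceeds the mean, unlike a plain Poisson model.

```python
import numpy as np

# Poisson-lognormal counts: y ~ Poisson(exp(mu + sigma * e)), e ~ N(0, 1).
# The marginal variance is mean + (exp(sigma2) - 1) * mean^2 > mean.
rng = np.random.default_rng(5)
n = 50000
mu, sigma2 = 1.0, 0.5                 # log-rate mean, random-effect variance

lam = np.exp(mu + np.sqrt(sigma2) * rng.standard_normal(n))
y_overdispersed = rng.poisson(lam)
y_poisson = rng.poisson(np.exp(mu + sigma2 / 2), n)  # matched marginal mean

print(y_poisson.mean(), y_poisson.var())             # var close to mean
print(y_overdispersed.mean(), y_overdispersed.var()) # var well above mean
```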

  2. Modeling laser velocimeter signals as triply stochastic Poisson processes

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.

    1976-01-01

    Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.

  3. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
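The ML estimation problem described above can be sketched numerically (an illustration under assumed pulse and background parameters, not the paper's analytical model): photon arrivals are drawn from an inhomogeneous Poisson process whose intensity is a Gaussian pulse on a uniform background, and the arrival time is estimated by maximizing the point-process log-likelihood over a grid.

```python
import numpy as np

# Sketch: ML time-of-arrival for a Gaussian optical pulse observed with an
# ideal photon-counting detector (Poisson point process). All parameter
# values below are illustrative assumptions.
rng = np.random.default_rng(7)
tau_true, width, n_signal, bg_rate, T = 5.0, 0.3, 200, 2.0, 10.0

# Photon arrival times: signal photons around tau_true plus background.
t_sig = rng.normal(tau_true, width, n_signal)
t_bg = rng.uniform(0, T, rng.poisson(bg_rate * T))
times = np.concatenate([t_sig, t_bg])

def log_lik(tau):
    # Intensity b + s * N(tau, width); the integral of the intensity over
    # the window is tau-independent here, so it is dropped.
    lam = bg_rate + n_signal / (width * np.sqrt(2 * np.pi)) * np.exp(
        -((times - tau) ** 2) / (2 * width**2))
    return np.log(lam).sum()

taus = np.linspace(0, T, 2001)
tau_hat = taus[np.argmax([log_lik(t) for t in taus])]
print(tau_hat)
```

At low signal power the likelihood surface develops spurious background-driven peaks, which is the threshold behavior the paper characterizes.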

  4. Estimating random errors due to shot noise in backscatter lidar observations.

    PubMed

    Liu, Zhaoyan; Hunt, William; Vaughan, Mark; Hostetler, Chris; McGill, Matthew; Powell, Kathleen; Winker, David; Hu, Yongxiang

    2006-06-20

We discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root mean square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF, uncertainties can be reliably calculated from or for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations lidar and tested using data from the Lidar In-space Technology Experiment.
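The core NSF relationship can be sketched with a toy detector model (an illustration, not the CALIPSO algorithm; the fixed gain G below is an assumption standing in for the detector's multiplication statistics): when the variance is proportional to the mean, the ratio of the RMS noise to the square root of the mean signal is a constant at every signal level.

```python
import numpy as np

# Toy multiplied photocurrent: signal = G * N with N ~ Poisson(m), so
# mean = G*m, var = G^2*m = G*mean, and NSF = std/sqrt(mean) = sqrt(G),
# independent of the signal level m.
rng = np.random.default_rng(6)
G = 4.0                                        # hypothetical detector gain

nsf_estimates = []
for m in (50.0, 200.0, 800.0):
    signal = G * rng.poisson(m, 100000)        # multiplied photocurrent
    nsf_estimates.append(signal.std() / np.sqrt(signal.mean()))
print(nsf_estimates)   # ~ sqrt(G) = 2 at every signal level
```

This constancy across signal levels is what lets the NSF turn a single data sample into a shot-noise error estimate.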

  5. Estimating Random Errors Due to Shot Noise in Backscatter Lidar Observations

    NASA Technical Reports Server (NTRS)

    Liu, Zhaoyan; Hunt, William; Vaughan, Mark A.; Hostetler, Chris A.; McGill, Matthew J.; Powell, Kathy; Winker, David M.; Hu, Yongxiang

    2006-01-01

    In this paper, we discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm made with a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root-mean-square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF uncertainties can be reliably calculated from or for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar and tested using data from the Lidar In-space Technology Experiment (LITE).

  6. Modeling number of claims and prediction of total claim amount

    NASA Astrophysics Data System (ADS)

    Acar, Aslıhan Şentürk; Karabey, Uǧur

    2017-07-01

    In this study we focus on the annual number of claims in a private health insurance data set belonging to a local insurance company in Turkey. In addition to the Poisson and negative binomial models, the zero-inflated Poisson and zero-inflated negative binomial models are used to model the number of claims in order to account for excess zeros. To investigate the impact of different distributional assumptions for the number of claims on the prediction of total claim amount, the predictive performances of the candidate models are compared using root mean square error (RMSE) and mean absolute error (MAE) criteria.
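
    The effect of excess zeros can be illustrated with a minimal sketch. The method-of-moments ZIP fit below is a simple stand-in for the maximum-likelihood fits used in the study, and all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate zero-inflated Poisson claim counts: with probability pi a
# policyholder files no claims at all; otherwise counts are Poisson(lam).
pi_true, lam_true, n = 0.4, 2.5, 50000
is_zero = rng.random(n) < pi_true
y = np.where(is_zero, 0, rng.poisson(lam_true, n))

# Plain Poisson fit: the MLE of the rate is just the sample mean.
lam_pois = y.mean()

# ZIP fit by method of moments:
#   E[Y]   = (1 - pi) * lam
#   E[Y^2] = (1 - pi) * (lam + lam^2)  =>  E[Y^2]/E[Y] = 1 + lam
lam_zip = np.mean(y ** 2) / np.mean(y) - 1.0
pi_zip = 1.0 - y.mean() / lam_zip

# Compare each model's predicted probability of zero claims with the
# empirical proportion of zeros.
p0_emp = np.mean(y == 0)
p0_pois = np.exp(-lam_pois)
p0_zip = pi_zip + (1 - pi_zip) * np.exp(-lam_zip)
print(p0_emp, p0_pois, p0_zip)
```

    The plain Poisson fit badly underpredicts the zero-claim probability, which is the kind of misfit that degrades total-claim-amount predictions in the comparison above.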

  7. Multilevel Sequential Monte Carlo Samplers for Normalizing Constants

    DOE PAGES

    Moral, Pierre Del; Jasra, Ajay; Law, Kody J. H.; ...

    2017-08-24

    This article considers the sequential Monte Carlo (SMC) approximation of ratios of normalizing constants associated with posterior distributions which in principle rely on continuum models. Therefore, the Monte Carlo estimation error and the discrete approximation error must be balanced. A multilevel strategy is utilized to substantially reduce the cost to obtain a given error level in the approximation as compared to standard estimators. Two estimators are considered and relative variance bounds are given. The theoretical results are numerically illustrated for two Bayesian inverse problems arising from elliptic partial differential equations (PDEs). The examples involve the inversion of observations of the solution of (i) a 1-dimensional Poisson equation to infer the diffusion coefficient, and (ii) a 2-dimensional Poisson equation to infer the external forcing.

  8. Poly-symplectic Groupoids and Poly-Poisson Structures

    NASA Astrophysics Data System (ADS)

    Martinez, Nicolas

    2015-05-01

    We introduce poly-symplectic groupoids, which are natural extensions of symplectic groupoids to the context of poly-symplectic geometry, and define poly-Poisson structures as their infinitesimal counterparts. We present equivalent descriptions of poly-Poisson structures, including one related to AV-Dirac structures. We also discuss symmetries and reduction in the setting of poly-symplectic groupoids and poly-Poisson structures, and use our viewpoint to revisit results and develop new aspects of the theory initiated in Iglesias et al. (Lett Math Phys 103:1103-1133, 2013).

  9. H∞ filtering for stochastic systems driven by Poisson processes

    NASA Astrophysics Data System (ADS)

    Song, Bo; Wu, Zheng-Guang; Park, Ju H.; Shi, Guodong; Zhang, Ya

    2015-01-01

    This paper investigates the H∞ filtering problem for stochastic systems driven by Poisson processes. By utilising the martingale theory such as the predictable projection operator and the dual predictable projection operator, this paper transforms the expectation of the stochastic integral with respect to the Poisson process into the expectation of a Lebesgue integral. Then, based on this, this paper designs an H∞ filter such that the filtering error system is mean-square asymptotically stable and satisfies a prescribed H∞ performance level. Finally, a simulation example is given to illustrate the effectiveness of the proposed filtering scheme.

  10. Determining the Uncertainty of X-Ray Absorption Measurements

    PubMed Central

    Wojcik, Gary S.

    2004-01-01

    X-ray absorption (or more properly, x-ray attenuation) techniques have been applied to study the moisture movement in and moisture content of materials like cement paste, mortar, and wood. An increase in the number of x-ray counts with time at a location in a specimen may indicate a decrease in moisture content. The uncertainty of measurements from an x-ray absorption system, which must be known to properly interpret the data, is often assumed to be the square root of the number of counts, as in a Poisson process. No detailed studies have heretofore been conducted to determine the uncertainty of x-ray absorption measurements or the effect of averaging data on the uncertainty. In this study, the Poisson estimate was found to adequately approximate normalized root mean square errors (a measure of uncertainty) of counts for point measurements and profile measurements of water specimens. The Poisson estimate, however, was not reliable in approximating the magnitude of the uncertainty when averaging data from paste and mortar specimens. Changes in uncertainty from differing averaging procedures were well-approximated by a Poisson process. The normalized root mean square errors decreased when the x-ray source intensity, integration time, collimator size, and number of scanning repetitions increased. Uncertainties in mean paste and mortar count profiles were kept below 2 % by averaging vertical profiles at horizontal spacings of 1 mm or larger with counts per point above 4000. Maximum normalized root mean square errors did not exceed 10 % in any of the tests conducted. PMID:27366627
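
    The Poisson estimate tested above, namely that the relative uncertainty of N counts is about 1/sqrt(N), can be verified directly; the count levels below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# For Poisson counts the standard deviation is sqrt(mean), so the
# normalized RMS error (relative uncertainty) of N counts is ~1/sqrt(N).
for mean_counts in [100, 1000, 4000, 16000]:
    counts = rng.poisson(mean_counts, size=100000)
    nrmse = counts.std() / counts.mean()
    print(mean_counts, nrmse, 1.0 / np.sqrt(mean_counts))
```

    At 4000 counts per point the relative uncertainty is about 1.6 %, consistent with the study's observation that profiles averaged with counts above 4000 stayed below 2 % uncertainty.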

  11. Differential expression analysis for RNAseq using Poisson mixed models.

    PubMed

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang

    2017-06-20

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. A stochastic-dynamic model for global atmospheric mass field statistics

    NASA Technical Reports Server (NTRS)

    Ghil, M.; Balgovind, R.; Kalnay-Rivas, E.

    1981-01-01

    A model that yields the spatial correlation structure of atmospheric mass field forecast errors was developed. The model is governed by the potential vorticity equation forced by random noise. In the first method, the equation was expanded in spherical harmonics and the correlation function was computed analytically from the expansion coefficients. In the second, the finite-difference equivalent was solved using a fast Poisson solver and the correlation function was computed by stratified sampling of the individual realizations of F(omega), and hence of phi(omega). In the third, a higher-order equation for gamma was derived and solved directly in finite differences by two successive applications of the fast Poisson solver. The methods were compared for accuracy and efficiency, and the third method was chosen as clearly superior. The results agree well with the latitude dependence of observed atmospheric correlation data. The value of the parameter c sub o which gives the best fit to the data is close to the value expected from dynamical considerations.

  13. An empirical Bayes approach for the Poisson life distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1973-01-01

    A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
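
    The Monte Carlo comparison described above can be sketched with a parametric empirical Bayes shrinkage estimator under a moment-matched gamma prior. This is a simple stand-in for the smooth empirical Bayes estimator derived in the paper, with invented prior parameters:

```python
import numpy as np

rng = np.random.default_rng(4)

# Failure intensities drawn from a gamma prior; one Poisson count each.
alpha, beta, n = 3.0, 1.5, 2000          # assumed prior shape / rate
lam = rng.gamma(alpha, 1.0 / beta, n)
x = rng.poisson(lam)

# The maximum likelihood estimate of each intensity is the raw count.
mle = x.astype(float)

# Parametric empirical Bayes: estimate the gamma prior by moments
# (marginal mean m = a/b, marginal var v = m + m/b), then shrink each
# count toward the prior mean via the posterior mean (x + a)/(1 + b).
m, v = x.mean(), x.var()
b_hat = m / (v - m)
a_hat = m * b_hat
eb = (x + a_hat) / (1.0 + b_hat)

mse_mle = np.mean((mle - lam) ** 2)
mse_eb = np.mean((eb - lam) ** 2)
print(mse_mle, mse_eb)   # EB should have the smaller mean-squared error
```

    The shrinkage toward the estimated prior mean is what produces the mean-squared-error reduction over the conventional estimators reported in the study.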

  14. A Negative Binomial Regression Model for Accuracy Tests

    ERIC Educational Resources Information Center

    Hung, Lai-Fa

    2012-01-01

    Rasch used a Poisson model to analyze errors and speed in reading tests. An important property of the Poisson distribution is that the mean and variance are equal. However, in social science research, it is very common for the variance to be greater than the mean (i.e., the data are overdispersed). This study embeds the Rasch model within an…
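
    The overdispersion motivating the negative binomial model can be demonstrated numerically: mixing the Poisson rate over a gamma distribution yields negative binomial counts whose variance exceeds their mean. Parameter values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200000

# Pure Poisson: mean and variance coincide.
y_pois = rng.poisson(4.0, n)

# Gamma-mixed Poisson (negative binomial): heterogeneity in the rate
# inflates the variance above the mean (overdispersion).
rates = rng.gamma(2.0, 2.0, n)        # gamma mean 4, same mean as above
y_nb = rng.poisson(rates)

print(y_pois.mean(), y_pois.var())    # roughly equal
print(y_nb.mean(), y_nb.var())        # variance clearly exceeds the mean
```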

  15. Self consistent solution of Schrödinger Poisson equations and some electronic properties of ZnMgO/ZnO hetero structures

    NASA Astrophysics Data System (ADS)

    Uslu, Salih; Yarar, Zeki

    2017-02-01

    The epitaxial growth of high-quality quantum wells allows the production of new low-dimensional structures and their application in devices. The potential profile at the junction is determined by the free carriers and by the doping level, so the shape of the potential depends on the electron density. Each energy level determines the number of electrons it can hold; therefore the energy levels and the electron density of each level must be calculated self-consistently. Starting from a trial potential V(z), the wave functions and electron densities for each energy level are obtained by solving the Schrödinger equation. Solving Poisson's equation with the calculated electron density then yields the electrostatic potential, from which a new V(z) is constructed. The procedure is iterated self-consistently until a prescribed error criterion is met. In this study, the energy levels formed in the interfacial potential, the electron density in each level, and the dependence of the wave functions on the material parameters were investigated self-consistently.
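
    The self-consistent loop described above can be sketched in one dimension with dimensionless units. This toy model (parabolic confinement, one occupied level, invented coupling and mixing constants) only illustrates the Schrödinger-Poisson iteration; it does not use ZnMgO/ZnO material parameters:

```python
import numpy as np

# Toy 1-D self-consistent Schrodinger-Poisson loop (dimensionless).
n, L = 200, 10.0
z = np.linspace(0.0, L, n)
h = z[1] - z[0]

# External confining potential: a parabolic well.
v_ext = 0.5 * (z - L / 2) ** 2

# Second-derivative matrix with hard-wall (Dirichlet) boundaries.
D2 = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
      + np.diag(np.full(n - 1, 1.0), 1)) / h ** 2

n_el, coupling, mix = 1.0, 1.0, 0.5   # electrons, interaction, damping
v = v_ext.copy()
delta = np.inf
for it in range(300):
    # Schrodinger step: lowest eigenstate of H = -d^2/dz^2 + v(z).
    H = -D2 + np.diag(v)
    w, vecs = np.linalg.eigh(H)
    psi = vecs[:, 0]
    psi /= np.sqrt(np.sum(psi ** 2) * h)      # normalize the density
    rho = n_el * psi ** 2

    # Poisson step: phi'' = -rho with phi = 0 at both walls.
    phi = np.linalg.solve(D2, -rho)

    # New potential including the electrostatic (Hartree) term; linear
    # mixing damps the update so the iteration converges.
    v_new = v_ext + coupling * phi
    delta = np.max(np.abs(v_new - v))
    if delta < 1e-8:
        break
    v = v + mix * (v_new - v)

print(it, delta, w[0])
```

    The electron repulsion raises the ground-level energy above that of the bare well, and the damped fixed-point iteration stops once successive potentials agree to the chosen tolerance.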

  16. Unimodularity criteria for Poisson structures on foliated manifolds

    NASA Astrophysics Data System (ADS)

    Pedroza, Andrés; Velasco-Barreras, Eduardo; Vorobiev, Yury

    2018-03-01

    We study the behavior of the modular class of an orientable Poisson manifold and formulate some unimodularity criteria in the semilocal context, around a (singular) symplectic leaf. Our results generalize some known unimodularity criteria for regular Poisson manifolds related to the notion of the Reeb class. In particular, we show that the unimodularity of the transverse Poisson structure of the leaf is a necessary condition for the semilocal unimodular property. Our main tool is an explicit formula for a bigraded decomposition of modular vector fields of a coupling Poisson structure on a foliated manifold. Moreover, we also exploit the notion of the modular class of a Poisson foliation and its relationship with the Reeb class.

  17. Number-counts slope estimation in the presence of Poisson noise

    NASA Technical Reports Server (NTRS)

    Schmitt, Juergen H. M. M.; Maccacaro, Tommaso

    1986-01-01

    We consider the slope determination of a power-law number-flux relationship in the case of photon-limited sampling. This case is important for high-sensitivity X-ray surveys with imaging telescopes, where the error in an individual source measurement depends on integrated flux and is Poisson, rather than Gaussian, distributed. A bias-free method of slope estimation is developed that takes into account the exact error distribution, the influence of background noise, and the effects of varying limiting sensitivities. It is shown that the resulting bias corrections are quite insensitive to the bias correction procedures applied, as long as only sources with signal-to-noise ratio five or greater are considered. However, if sources with signal-to-noise ratio five or less are included, the derived bias corrections depend sensitively on the shape of the error distribution.

  18. Performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data.

    PubMed

    Yelland, Lisa N; Salter, Amy B; Ryan, Philip

    2011-10-15

    Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
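
    The key ingredients of modified Poisson regression, a log-link Poisson fit to a binary outcome combined with a robust (sandwich) variance, can be sketched from scratch with numpy for the independent-data case. In practice one would use a GLM or GEE package; the data-generating values below are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

# Binary outcome generated with a true relative risk of 2 for x = 1.
n = 20000
x = rng.integers(0, 2, n)
p = np.where(x == 1, 0.2, 0.1)           # risk doubles when x = 1
y = (rng.random(n) < p).astype(float)
X = np.column_stack([np.ones(n), x])

# Fit a log-link Poisson model to the binary outcome by Newton scoring.
beta = np.zeros(2)
for _ in range(50):
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu)
    hess = X.T @ (X * mu[:, None])
    step = np.linalg.solve(hess, grad)
    beta += step
    if np.max(np.abs(step)) < 1e-10:
        break

# Robust (sandwich) variance A^{-1} B A^{-1}, which corrects the naive
# Poisson variance for the outcome actually being binary.
mu = np.exp(X @ beta)
A = X.T @ (X * mu[:, None])
B = X.T @ (X * ((y - mu) ** 2)[:, None])
cov = np.linalg.inv(A) @ B @ np.linalg.inv(A)

rr = np.exp(beta[1])                      # estimated relative risk
se = np.sqrt(cov[1, 1])                   # robust SE of log relative risk
print(rr, se)
```

    For clustered data, as in the article, the sandwich middle term would be assembled per cluster within a GEE framework rather than per observation.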

  19. Modified Regression Correlation Coefficient for Poisson Regression Model

    NASA Astrophysics Data System (ADS)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study concerns measures of the predictive power of the generalized linear model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model, where the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient outperforms the traditional one in terms of bias and root mean square error (RMSE).

  20. Deformation mechanisms in negative Poisson's ratio materials - Structural aspects

    NASA Technical Reports Server (NTRS)

    Lakes, R.

    1991-01-01

    Poisson's ratio in materials is governed by the following aspects of the microstructure: the presence of rotational degrees of freedom, non-affine deformation kinematics, or anisotropic structure. Several structural models are examined. The non-affine kinematics are seen to be essential for the production of negative Poisson's ratios for isotropic materials containing central force linkages of positive stiffness. Non-central forces combined with pre-load can also give rise to a negative Poisson's ratio in isotropic materials. A chiral microstructure with non-central force interaction or non-affine deformation can also exhibit a negative Poisson's ratio. Toughness and damage resistance in these materials may be affected by the Poisson's ratio itself, as well as by generalized continuum aspects associated with the microstructure.

  1. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient.

    PubMed

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-06-10

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample's high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use.

  2. Error analysis of finite element method for Poisson–Nernst–Planck equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yuzhou; Sun, Pengtao; Zheng, Bin

    A priori error estimates of the finite element method for time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain the optimal error estimates in L∞(H1) and L2(H1) norms, and suboptimal error estimates in L∞(L2) norm, with linear elements, and optimal error estimates in L∞(L2) norm with quadratic or higher-order elements, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.

  3. A finite-difference method for the variable coefficient Poisson equation on hierarchical Cartesian meshes

    NASA Astrophysics Data System (ADS)

    Raeli, Alice; Bergmann, Michel; Iollo, Angelo

    2018-02-01

    We consider problems governed by a linear elliptic equation with varying coefficients across internal interfaces. The solution and its normal derivative can undergo significant variations through these internal boundaries. We present a compact finite-difference scheme on a tree-based adaptive grid that can be efficiently solved using a natively parallel data structure. The main idea is to optimize the truncation error of the discretization scheme as a function of the local grid configuration to achieve second-order accuracy. Numerical illustrations are presented in two and three-dimensional configurations.
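
    A flux-form (conservative) discretization of the variable-coefficient operator is the standard building block for such schemes. The 1-D uniform-grid sketch below is only an analogue of the paper's method, which is adaptive and multi-dimensional; the manufactured solution and coefficient are invented to verify second-order accuracy:

```python
import numpy as np

def solve(n):
    # Flux-form second-order scheme for -(k(x) u')' = f on [0, 1]
    # with u(0) = u(1) = 0, sampling k at cell midpoints.
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    k = lambda s: 1.0 + s
    # f chosen so that u(x) = sin(pi x) is the exact solution.
    f = (1.0 + x) * np.pi ** 2 * np.sin(np.pi * x) - np.pi * np.cos(np.pi * x)

    kp = k(x[1:-1] + h / 2)     # k at i + 1/2
    km = k(x[1:-1] - h / 2)     # k at i - 1/2
    A = (np.diag(kp + km) - np.diag(kp[:-1], 1) - np.diag(km[1:], -1)) / h ** 2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f[1:-1])
    return np.max(np.abs(u - np.sin(np.pi * x)))

e1, e2 = solve(40), solve(80)
print(e1 / e2)   # close to 4: second-order accuracy
```

    On a hierarchical Cartesian mesh the same truncation-error analysis is redone per local grid configuration, which is the optimization the paper describes.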

  4. POSTPROCESSING MIXED FINITE ELEMENT METHODS FOR SOLVING CAHN-HILLIARD EQUATION: METHODS AND ERROR ANALYSIS

    PubMed Central

    Wang, Wansheng; Chen, Long; Zhou, Jie

    2015-01-01

    A postprocessing technique for mixed finite element methods for the Cahn-Hilliard equation is developed and analyzed. Once the mixed finite element approximations have been computed at a fixed time on the coarser mesh, the approximations are postprocessed by solving two decoupled Poisson equations in an enriched finite element space (either on a finer grid or a higher-order space) for which many fast Poisson solvers can be applied. The nonlinear iteration is only applied to a much smaller size problem and the computational cost using Newton and direct solvers is negligible compared with the cost of the linear problem. The analysis presented here shows that this technique preserves the optimal rate of convergence for both the concentration and the chemical potential approximations. The corresponding error estimates obtained in our paper, especially the negative-norm error estimates, are nontrivial and differ from existing results in the literature. PMID:27110063

  5. A GPU accelerated and error-controlled solver for the unbounded Poisson equation in three dimensions

    NASA Astrophysics Data System (ADS)

    Exl, Lukas

    2017-12-01

    An efficient solver for the three dimensional free-space Poisson equation is presented. The underlying numerical method is based on finite Fourier series approximation. While the error of all involved approximations can be fully controlled, the overall computation error is driven by the convergence of the finite Fourier series of the density. For smooth and fast-decaying densities the proposed method will be spectrally accurate. The method scales with O(N log N) operations, where N is the total number of discretization points in the Cartesian grid. The majority of the computational costs come from fast Fourier transforms (FFT), which makes it ideal for GPU computation. Several numerical computations on CPU and GPU validate the method and show efficiency and convergence behavior. Tests are performed using the Vienna Scientific Cluster 3 (VSC3). A free MATLAB implementation for CPU and GPU is provided to the interested community.
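
    The Fourier-series core of such a solver can be sketched in a few lines. The paper treats free-space boundary conditions; for brevity the sketch below solves the simpler periodic case on a 2-D box, where the method is exact for a single Fourier mode:

```python
import numpy as np

# Spectral Poisson solve on a periodic box: Laplacian(u) = f.
n = 64
x = np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")

u_exact = np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)
f = -8.0 * np.pi ** 2 * u_exact           # f = Laplacian of u_exact

k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX ** 2 + KY ** 2
k2[0, 0] = 1.0                            # avoid dividing by the zero mode

u_hat = np.fft.fft2(f) / (-k2)            # divide by the symbol -|k|^2
u_hat[0, 0] = 0.0                         # fix the arbitrary mean to zero
u = np.real(np.fft.ifft2(u_hat))

print(np.max(np.abs(u - u_exact)))        # near machine precision
```

    The free-space extension in the paper replaces the periodic Green's function with the free-space one while keeping the FFT work pattern, which is why the method maps well onto GPUs.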

  6. Tenth NASTRAN User's Colloquium

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The development of the NASTRAN computer program, a general purpose finite element computer code for structural analysis, was discussed. The application and development of NASTRAN is presented in the following topics: improvements and enhancements; developments of pre- and postprocessors; interactive review system; the use of harmonic expansions in magnetic field problems; improving a dynamic model with test data using Linwood; solution of axisymmetric fluid structure interaction problems; large displacements and stability analysis of nonlinear propeller structures; prediction of bead area contact load at the tire wheel interface; elastic plastic analysis of an overloaded breech ring; finite element solution of torsion and other 2-D Poisson equations; new capability for elastic aircraft airloads; usage of substructuring analysis in the Get Away Special program; solving symmetric structures with nonsymmetric loads; evaluation and reduction of errors induced by Guyan transformation.

  7. Prediction of forest fires occurrences with area-level Poisson mixed models.

    PubMed

    Boubeta, Miguel; Lombardía, María José; Marey-Pérez, Manuel Francisco; Morales, Domingo

    2015-05-01

    The number of fires in forest areas of Galicia (north-west of Spain) during the summer period is quite high. Local authorities are interested in analyzing the factors that explain this phenomenon. Poisson regression models are good tools for describing and predicting the number of fires per forest areas. This work employs area-level Poisson mixed models for treating real data about fires in forest areas. A parametric bootstrap method is applied for estimating the mean squared errors of fires predictors. The developed methodology and software are applied to a real data set of fires in forest areas of Galicia. Copyright © 2015 Elsevier Ltd. All rights reserved.
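
    The parametric bootstrap for the mean squared error of a predictor can be sketched on a deliberately simplified model. The common-rate Poisson model and all values below are illustrative, not the area-level mixed model of the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy area-level model: fires in area i are Poisson with mean theta*x_i,
# where x_i is a known exposure (e.g., forest hectares).
x = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
theta_true = 1.5
y = rng.poisson(theta_true * x)

theta_hat = y.sum() / x.sum()             # MLE of the common rate
mu_hat = theta_hat * x                    # predicted fire counts

# Parametric bootstrap of the predictor's mean squared error: simulate
# from the fitted model, refit, and average the squared prediction error.
B = 2000
mse = np.zeros_like(x)
for _ in range(B):
    y_b = rng.poisson(theta_hat * x)
    theta_b = y_b.sum() / x.sum()
    mse += (theta_b * x - theta_hat * x) ** 2
mse /= B

print(mu_hat, mse)   # MSE grows with exposure, matching x_i^2 * Var(theta_hat)
```

    In the paper the same resampling idea is applied to predictors from a Poisson mixed model, where the bootstrap also has to regenerate the area-level random effects.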

  8. An unbiased risk estimator for image denoising in the presence of mixed poisson-gaussian noise.

    PubMed

    Le Montagner, Yoann; Angelini, Elsa D; Olivo-Marin, Jean-Christophe

    2014-03-01

    The behavior and performance of denoising algorithms are governed by one or several parameters, whose optimal settings depend on the content of the processed image and the characteristics of the noise, and are generally designed to minimize the mean squared error (MSE) between the denoised image returned by the algorithm and a virtual ground truth. In this paper, we introduce a new Poisson-Gaussian unbiased risk estimator (PG-URE) of the MSE applicable to a mixed Poisson-Gaussian noise model that unifies the widely used Gaussian and Poisson noise models in fluorescence bioimaging applications. We propose a stochastic methodology to evaluate this estimator in the case when little is known about the internal machinery of the considered denoising algorithm, and we analyze both theoretically and empirically the characteristics of the PG-URE estimator. Finally, we evaluate the PG-URE-driven parametrization for three standard denoising algorithms, with and without variance stabilizing transforms, and different characteristics of the Poisson-Gaussian noise mixture.
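
    The mixed Poisson-Gaussian model underlying PG-URE has a variance that splits into a signal-dependent Poisson part and a constant Gaussian part. This can be checked with a short simulation; the gain and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(8)

# Mixed Poisson-Gaussian measurement model common in fluorescence
# imaging: z = alpha * Poisson(lam) + Gaussian(0, sigma^2).
alpha, lam, sigma, n = 2.0, 30.0, 3.0, 500000
z = alpha * rng.poisson(lam, n) + rng.normal(0.0, sigma, n)

# Var(z) = alpha^2 * lam + sigma^2: a Poisson term that scales with the
# signal plus a fixed Gaussian floor.
print(z.mean(), alpha * lam)                   # ~60
print(z.var(), alpha ** 2 * lam + sigma ** 2)  # ~129
```

    It is this two-part variance structure that lets the PG-URE construction provide an unbiased MSE estimate without access to the ground-truth image.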

  9. DL_MG: A Parallel Multigrid Poisson and Poisson-Boltzmann Solver for Electronic Structure Calculations in Vacuum and Solution.

    PubMed

    Womack, James C; Anton, Lucian; Dziedzic, Jacek; Hasnip, Phil J; Probert, Matt I J; Skylaris, Chris-Kriton

    2018-03-13

    The solution of the Poisson equation is a crucial step in electronic structure calculations, yielding the electrostatic potential-a key component of the quantum mechanical Hamiltonian. In recent decades, theoretical advances and increases in computer performance have made it possible to simulate the electronic structure of extended systems in complex environments. This requires the solution of more complicated variants of the Poisson equation, featuring nonhomogeneous dielectric permittivities, ionic concentrations with nonlinear dependencies, and diverse boundary conditions. The analytic solutions generally used to solve the Poisson equation in vacuum (or with homogeneous permittivity) are not applicable in these circumstances, and numerical methods must be used. In this work, we present DL_MG, a flexible, scalable, and accurate solver library, developed specifically to tackle the challenges of solving the Poisson equation in modern large-scale electronic structure calculations on parallel computers. Our solver is based on the multigrid approach and uses an iterative high-order defect correction method to improve the accuracy of solutions. Using two chemically relevant model systems, we tested the accuracy and computational performance of DL_MG when solving the generalized Poisson and Poisson-Boltzmann equations, demonstrating excellent agreement with analytic solutions and efficient scaling to ∼10⁹ unknowns and hundreds of CPU cores. We also applied DL_MG in actual large-scale electronic structure calculations, using the ONETEP linear-scaling electronic structure package to study a 2615 atom protein-ligand complex with routinely available computational resources. In these calculations, the overall execution time with DL_MG was not significantly greater than the time required for calculations using a conventional FFT-based solver.

  10. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.

  11. Normal forms for Poisson maps and symplectic groupoids around Poisson transversals

    NASA Astrophysics Data System (ADS)

    Frejlich, Pedro; Mărcuț, Ioan

    2018-03-01

    Poisson transversals are submanifolds in a Poisson manifold which intersect all symplectic leaves transversally and symplectically. In this communication, we prove a normal form theorem for Poisson maps around Poisson transversals. A Poisson map pulls a Poisson transversal back to a Poisson transversal, and our first main result states that simultaneous normal forms exist around such transversals, for which the Poisson map becomes transversally linear, and intertwines the normal form data of the transversals. Our second result concerns symplectic integrations. We prove that a neighborhood of a Poisson transversal is integrable exactly when the Poisson transversal itself is integrable, and in that case we prove a normal form theorem for the symplectic groupoid around its restriction to the Poisson transversal, which puts all structure maps in normal form. We conclude by illustrating our results with examples arising from Lie algebras.

  12. Normal forms for Poisson maps and symplectic groupoids around Poisson transversals.

    PubMed

    Frejlich, Pedro; Mărcuț, Ioan

    2018-01-01

    Poisson transversals are submanifolds in a Poisson manifold which intersect all symplectic leaves transversally and symplectically. In this communication, we prove a normal form theorem for Poisson maps around Poisson transversals. A Poisson map pulls a Poisson transversal back to a Poisson transversal, and our first main result states that simultaneous normal forms exist around such transversals, for which the Poisson map becomes transversally linear, and intertwines the normal form data of the transversals. Our second result concerns symplectic integrations. We prove that a neighborhood of a Poisson transversal is integrable exactly when the Poisson transversal itself is integrable, and in that case we prove a normal form theorem for the symplectic groupoid around its restriction to the Poisson transversal, which puts all structure maps in normal form. We conclude by illustrating our results with examples arising from Lie algebras.

  13. Nambu-Poisson gauge theory

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav; Schupp, Peter; Vysoký, Jan

    2014-06-01

    We generalize noncommutative gauge theory using Nambu-Poisson structures to obtain a new type of gauge theory with higher brackets and gauge fields. The approach is based on covariant coordinates and higher versions of the Seiberg-Witten map. We construct a covariant Nambu-Poisson gauge theory action, give its first order expansion in the Nambu-Poisson tensor and relate it to a Nambu-Poisson matrix model.

  14. Poisson structure of dynamical systems with three degrees of freedom

    NASA Astrophysics Data System (ADS)

    Gümral, Hasan; Nutku, Yavuz

    1993-12-01

    It is shown that the Poisson structure of dynamical systems with three degrees of freedom can be defined in terms of an integrable one-form in three dimensions. Advantage is taken of this fact and the theory of foliations is used in discussing the geometrical structure underlying complete and partial integrability. Techniques for finding Poisson structures are presented and applied to various examples such as the Halphen system which has been studied as the two-monopole problem by Atiyah and Hitchin. It is shown that the Halphen system can be formulated in terms of a flat SL(2,R)-valued connection and belongs to a nontrivial Godbillon-Vey class. On the other hand, for the Euler top and a special case of three-species Lotka-Volterra equations which are contained in the Halphen system as limiting cases, this structure degenerates into the form of globally integrable bi-Hamiltonian structures. The globally integrable bi-Hamiltonian case is a linear and the SL(2,R) structure is a quadratic unfolding of an integrable one-form in 3+1 dimensions. It is shown that the existence of a vector field compatible with the flow is a powerful tool in the investigation of Poisson structure and some new techniques for incorporating arbitrary constants into the Poisson one-form are presented herein. This leads to some extensions, analogous to q extensions, of Poisson structure. The Kermack-McKendrick model and some of its generalizations describing the spread of epidemics, as well as the integrable cases of the Lorenz, Lotka-Volterra, May-Leonard, and Maxwell-Bloch systems admit globally integrable bi-Hamiltonian structure.

  15. Reduction of Poisson noise in measured time-resolved data for time-domain diffuse optical tomography.

    PubMed

    Okawa, S; Endo, Y; Hoshi, Y; Yamada, Y

    2012-01-01

    A method to reduce noise for time-domain diffuse optical tomography (DOT) is proposed. Poisson noise which contaminates time-resolved photon counting data is reduced by use of maximum a posteriori estimation. The noise-free data are modeled as a Markov random process, and the measured time-resolved data are assumed to be Poisson-distributed random variables. The posterior probability of the occurrence of the noise-free data is formulated. By maximizing the probability, the noise-free data are estimated, and the Poisson noise is reduced as a result. The performance of the Poisson noise reduction is demonstrated in experiments on image reconstruction in time-domain DOT. In simulations, the proposed method reduces the relative error between the noise-free and noisy data to about one thirtieth, and the reconstructed DOT image is smoothed by the proposed noise reduction. The variance of the reconstructed absorption coefficients decreased by 22% in a phantom experiment. The quality of DOT, which can be applied to breast cancer screening and other uses, is improved by the proposed noise reduction.
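The premise of the denoising above is that photon-counting data are Poisson distributed, so each time bin's variance equals its mean count. A minimal numpy sketch of that premise (not the authors' MAP algorithm; the decay curve and bin counts are illustrative):

```python
import numpy as np

# Simulated time-resolved photon counting: each bin's count is a
# Poisson draw around a noise-free decay curve.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 64)                  # time bins (arbitrary units)
true_signal = 1000.0 * np.exp(-t)              # noise-free decay curve
noisy = rng.poisson(true_signal, size=(2000, t.size))  # repeated measurements

bin_mean = noisy.mean(axis=0)
bin_var = noisy.var(axis=0)
# For Poisson data the variance tracks the mean, so the relative noise
# grows as counts fall: std/mean = 1/sqrt(mean).
rel_err = bin_mean ** -0.5
```

Late bins with few counts carry the largest relative noise, which is why a prior on the noise-free signal (the Markov random process above) helps most there.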

  16. Analysis of overdispersed count data: application to the Human Papillomavirus Infection in Men (HIM) Study.

    PubMed

    Lee, J-H; Han, G; Fulp, W J; Giuliano, A R

    2012-06-01

    The Poisson model can be applied to the count of events occurring within a specific time period. The main feature of the Poisson model is the assumption that the mean and variance of the count data are equal. However, this equal mean-variance relationship rarely occurs in observational data. In most cases, the observed variance is larger than the assumed variance, which is called overdispersion. Further, when the observed data involve excessive zero counts, the problem of overdispersion results in underestimating the variance of the estimated parameter, and thus produces a misleading conclusion. We illustrated the use of four models for overdispersed count data that may be attributed to excessive zeros. These are Poisson, negative binomial, zero-inflated Poisson and zero-inflated negative binomial models. The example data in this article deal with the number of incidents involving human papillomavirus infection. The four models resulted in differing statistical inferences. The Poisson model, which is widely used in epidemiology research, underestimated the standard errors and overstated the significance of some covariates.
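The overdispersion described above is easy to see numerically. A hedged sketch (simulated zero-inflated Poisson counts, not the HIM data): excess zeros push the observed variance well above the mean, so the equal mean-variance Poisson assumption fails.

```python
import numpy as np

# Zero-inflated Poisson: with probability pi_zero the count is a
# structural zero, otherwise it is Poisson(lam).
rng = np.random.default_rng(1)
n, lam, pi_zero = 50_000, 4.0, 0.3
counts = rng.poisson(lam, n) * (rng.random(n) >= pi_zero)

m, v = counts.mean(), counts.var()
dispersion = v / m   # ~1 for pure Poisson; > 1 indicates overdispersion
# ZIP theory: mean = (1 - pi)*lam, variance = mean * (1 + pi*lam),
# so here dispersion should be about 1 + 0.3*4 = 2.2.
```

Fitting a plain Poisson model to such data understates the variance by roughly this dispersion factor, which is exactly the standard-error underestimation the abstract reports.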

  17. Protein-ligand binding free energy estimation using molecular mechanics and continuum electrostatics. Application to HIV-1 protease inhibitors

    NASA Astrophysics Data System (ADS)

    Zoete, V.; Michielin, O.; Karplus, M.

    2003-12-01

    A method is proposed for the estimation of absolute binding free energy of interaction between proteins and ligands. Conformational sampling of the protein-ligand complex is performed by molecular dynamics (MD) in vacuo and the solvent effect is calculated a posteriori by solving the Poisson or the Poisson-Boltzmann equation for selected frames of the trajectory. The binding free energy is written as a linear combination of the buried surface upon complexation, SAS bur, the electrostatic interaction energy between the ligand and the protein, Eelec, and the difference of the solvation free energies of the complex and the isolated ligand and protein, ΔGsolv. The method uses the buried surface upon complexation to account for the non-polar contribution to the binding free energy because it is less sensitive to the details of the structure than the van der Waals interaction energy. The parameters of the method are developed for a training set of 16 HIV-1 protease-inhibitor complexes of known 3D structure. A correlation coefficient of 0.91 was obtained with an unsigned mean error of 0.8 kcal/mol. When applied to a set of 25 HIV-1 protease-inhibitor complexes of unknown 3D structures, the method provides a satisfactory correlation between the calculated binding free energy and the experimental pIC 50 without reparametrization.

  18. Comparison of INAR(1)-Poisson model and Markov prediction model in forecasting the number of DHF patients in west java Indonesia

    NASA Astrophysics Data System (ADS)

    Ahdika, Atina; Lusiyana, Novyan

    2017-02-01

    The World Health Organization (WHO) noted Indonesia as the country with the highest number of dengue hemorrhagic fever (DHF) cases in Southeast Asia. There is no vaccine or specific treatment for DHF, so one of the efforts that can be made by both the government and residents is prevention. In statistics, there are several methods for predicting the number of DHF cases that can serve as a reference for prevention. In this paper, a discrete time series model, specifically the INAR(1)-Poisson model, and a Markov prediction model (MPM) are used to predict the number of DHF patients in West Java, Indonesia. The result shows that MPM is the better model since it has the smaller values of MAE (mean absolute error) and MAPE (mean absolute percentage error).
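The two scores used for model selection above are simple to compute. A sketch with made-up numbers (not the West Java data; `forecast_a` and `forecast_b` are hypothetical model outputs):

```python
import numpy as np

def mae(actual, forecast):
    """Mean absolute error."""
    return np.mean(np.abs(actual - forecast))

def mape(actual, forecast):
    """Mean absolute percentage error, in percent; needs nonzero actuals."""
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

actual = np.array([120.0, 95.0, 140.0, 110.0])      # observed case counts
forecast_a = np.array([130.0, 90.0, 150.0, 100.0])  # hypothetical model A
forecast_b = np.array([118.0, 97.0, 138.0, 112.0])  # hypothetical model B

# The model with the smaller MAE/MAPE is preferred, as in the paper.
```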

  19. Fractional poisson--a simple dose-response model for human norovirus.

    PubMed

    Messner, Michael J; Berger, Philip; Nappier, Sharon P

    2014-10-01

    This study utilizes old and new Norovirus (NoV) human challenge data to model the dose-response relationship for human NoV infection. The combined data set is used to update estimates from a previously published beta-Poisson dose-response model that includes parameters for virus aggregation and for a beta-distribution that describes variable susceptibility among hosts. The quality of the beta-Poisson model is examined and a simpler model is proposed. The new model (fractional Poisson) characterizes hosts as either perfectly susceptible or perfectly immune, requiring a single parameter (the fraction of perfectly susceptible hosts) in place of the two-parameter beta-distribution. A second parameter is included to account for virus aggregation in the same fashion as it is added to the beta-Poisson model. Infection probability is simply the product of the probability of nonzero exposure (at least one virus or aggregate is ingested) and the fraction of susceptible hosts. The model is computationally simple and appears to be well suited to the data from the NoV human challenge studies. The model's deviance is similar to that of the beta-Poisson, but with one parameter, rather than two. As a result, the Akaike information criterion favors the fractional Poisson over the beta-Poisson model. At low, environmentally relevant exposure levels (<100), estimation error is small for the fractional Poisson model; however, caution is advised because no subjects were challenged at such a low dose. New low-dose data would be of great value to further clarify the NoV dose-response relationship and to support improved risk assessment for environmentally relevant exposures. © 2014 Society for Risk Analysis Published 2014. This article is a U.S. Government work and is in the public domain for the U.S.A.
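The fractional Poisson form described above can be sketched directly: infection probability is the product of the probability of nonzero exposure and the susceptible fraction. This is a hedged reading of the abstract, with illustrative parameter values rather than the fitted NoV estimates, and with ingested aggregates assumed Poisson distributed with mean dose/mu:

```python
import math

def fractional_poisson(dose, f_susceptible, mu_aggregate=1.0):
    """P(infection) = (fraction of perfectly susceptible hosts)
    * P(at least one virus aggregate is ingested).
    Ingested aggregates are taken as Poisson with mean dose/mu_aggregate."""
    p_nonzero = 1.0 - math.exp(-dose / mu_aggregate)
    return f_susceptible * p_nonzero

# The curve plateaus at f_susceptible for large doses, because the
# model treats the remaining hosts as perfectly immune.
p_low = fractional_poisson(1.0, f_susceptible=0.7)
p_high = fractional_poisson(1e6, f_susceptible=0.7)
```

The single susceptibility parameter replacing the beta distribution is what makes the model's deviance comparable to the beta-Poisson with one fewer parameter.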

  20. Finite element solution of torsion and other 2-D Poisson equations

    NASA Technical Reports Server (NTRS)

    Everstine, G. C.

    1982-01-01

    The NASTRAN structural analysis computer program may be used, without modification, to solve two dimensional Poisson equations such as arise in the classical Saint Venant torsion problem. The nonhomogeneous term (the right-hand side) in the Poisson equation can be handled conveniently by specifying a gravitational load in a "structural" analysis. The use of an analogy between the equations of elasticity and those of classical mathematical physics is summarized in detail.
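The Saint-Venant torsion problem the abstract maps onto NASTRAN is a plain 2-D Poisson equation, which a few lines of finite differences can illustrate (a minimal sketch, not the NASTRAN analogy itself; grid size and iteration count are arbitrary choices):

```python
import numpy as np

# Prandtl stress function for torsion of a unit-square section:
# solve  laplacian(phi) = -2  (G*theta normalized to 1), phi = 0 on the boundary.
n = 33
h = 1.0 / (n - 1)
rhs = -2.0 * np.ones((n, n))
phi = np.zeros((n, n))

# Jacobi iteration; boundary rows/columns stay at zero.
for _ in range(4000):
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                              + phi[1:-1, :-2] + phi[1:-1, 2:]
                              - h * h * rhs[1:-1, 1:-1])

# Twice the integral of phi over the section gives the torsion constant;
# for a unit square the classical value is about 0.1406.
torsion_constant = 2.0 * phi.sum() * h * h
```

The "gravitational load" trick in the abstract supplies exactly this constant right-hand side inside a structural solver.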

  1. Particle trapping: A key requisite of structure formation and stability of Vlasov–Poisson plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schamel, Hans, E-mail: hans.schamel@uni-bayreuth.de

    2015-04-15

    Particle trapping is shown to control the existence of undamped coherent structures in Vlasov–Poisson plasmas and thereby affects the onset of plasma instability beyond the realm of linear Landau theory.

  2. Observer error structure in bull trout redd counts in Montana streams: Implications for inference on true redd numbers

    USGS Publications Warehouse

    Muhlfeld, Clint C.; Taper, Mark L.; Staples, David F.; Shepard, Bradley B.

    2006-01-01

    Despite the widespread use of redd counts to monitor trends in salmonid populations, few studies have evaluated the uncertainties in observed counts. We assessed the variability in redd counts for migratory bull trout Salvelinus confluentus among experienced observers in Lion and Goat creeks, which are tributaries to the Swan River, Montana. We documented substantially lower observer variability in bull trout redd counts than did previous studies. Observer counts ranged from 78% to 107% of our best estimates of true redd numbers in Lion Creek and from 90% to 130% of our best estimates in Goat Creek. Observers made both errors of omission and errors of false identification, and we modeled this combination by use of a binomial probability of detection and a Poisson count distribution of false identifications. Redd detection probabilities were high (mean = 83%) and exhibited no significant variation among observers (SD = 8%). We applied this error structure to annual redd counts in the Swan River basin (1982–2004) to correct for observer error and thus derived more accurate estimates of redd numbers and associated confidence intervals. Our results indicate that bias in redd counts can be reduced if experienced observers are used to conduct annual redd counts. Future studies should assess both sources of observer error to increase the validity of using redd counts for inferring true redd numbers in different basins. This information will help fisheries biologists to more precisely monitor population trends, identify recovery and extinction thresholds for conservation and recovery programs, ascertain and predict how management actions influence distribution and abundance, and examine effects of recovery and restoration activities.
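The error structure described above, binomial detection plus Poisson false identifications, can be simulated in a few lines. The detection probability echoes the 83% mean reported; the false-identification rate and true redd count are hypothetical:

```python
import numpy as np

# Observed count = detected true redds + falsely identified redds.
rng = np.random.default_rng(2)
true_redds = 100
p_detect = 0.83        # binomial detection probability (mean reported ~83%)
lam_false = 4.0        # hypothetical mean number of false identifications

n_surveys = 100_000
observed = (rng.binomial(true_redds, p_detect, n_surveys)
            + rng.poisson(lam_false, n_surveys))

# Under this model, E[observed] = p_detect * true_redds + lam_false,
# so raw counts are biased estimates of true redd numbers.
expected_mean = p_detect * true_redds + lam_false
```

Inverting this relationship is what lets the authors correct annual counts and attach confidence intervals to true redd numbers.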

  3. Hamiltonian structure and Darboux theorem for families of generalized Lotka-Volterra systems

    NASA Astrophysics Data System (ADS)

    Hernández-Bermejo, Benito; Fairén, Víctor

    1998-11-01

    This work is devoted to the establishment of a Poisson structure for a format of equations known as generalized Lotka-Volterra systems. These equations, which include the classical Lotka-Volterra systems as a particular case, have been deeply studied in the literature. They have been shown to constitute a whole hierarchy of systems, the characterization of which is made in the context of simple algebra. Our main result is to show that this algebraic structure is completely translatable into the Poisson domain. Important Poisson structure features, such as the symplectic foliation and the Darboux canonical representation, arise as a result of rather simple matrix manipulations.

  4. Concurrent topological design of composite structures and materials containing multiple phases of distinct Poisson's ratios

    NASA Astrophysics Data System (ADS)

    Long, Kai; Yuan, Philip F.; Xu, Shanqing; Xie, Yi Min

    2018-04-01

    Most studies on composites assume that the constituent phases have different values of stiffness. Little attention has been paid to the effect of constituent phases having distinct Poisson's ratios. This research focuses on a concurrent optimization method for simultaneously designing composite structures and materials with distinct Poisson's ratios. The proposed method aims to minimize the mean compliance of the macrostructure with a given mass of base materials. In contrast to the traditional interpolation of the stiffness matrix through numerical results, an interpolation scheme of the Young's modulus and Poisson's ratio using different parameters is adopted. The numerical results demonstrate that the Poisson effect plays a key role in reducing the mean compliance of the final design. An important contribution of the present study is that the proposed concurrent optimization method can automatically distribute base materials with distinct Poisson's ratios between the macrostructural and microstructural levels under a single constraint of the total mass.

  5. Symplectic discretization for spectral element solution of Maxwell's equations

    NASA Astrophysics Data System (ADS)

    Zhao, Yanmin; Dai, Guidong; Tang, Yifa; Liu, Qinghuo

    2009-08-01

    Applying the spectral element method (SEM) based on the Gauss-Lobatto-Legendre (GLL) polynomial to discretize Maxwell's equations, we obtain a Poisson system or a Poisson system with at most a perturbation. For the system, we prove that any symplectic partitioned Runge-Kutta (PRK) method preserves the Poisson structure and its implied symplectic structure. Numerical examples show the high accuracy of SEM and the benefit of conserving energy due to the use of symplectic methods.
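The energy-conservation benefit claimed above can be illustrated on the simplest possible Poisson (Hamiltonian) system. This is a minimal sketch, not the SEM/Maxwell discretization: the implicit midpoint rule, the simplest symplectic Runge-Kutta method, applied to a harmonic oscillator, where it conserves the quadratic energy to roundoff.

```python
import numpy as np

# Harmonic oscillator  q' = p, p' = -q,  i.e.  z' = A z  with A skew-symmetric.
# For a linear system the implicit midpoint step has the closed form
#   z_{n+1} = (I - h/2 A)^{-1} (I + h/2 A) z_n   (a Cayley transform).
h = 0.1
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
I2 = np.eye(2)
step = np.linalg.solve(I2 - 0.5 * h * A, I2 + 0.5 * h * A)

z = np.array([1.0, 0.0])            # (q, p) initial condition, H = 1/2
energies = []
for _ in range(10_000):
    z = step @ z
    energies.append(0.5 * (z[0] ** 2 + z[1] ** 2))

energy_drift = max(energies) - min(energies)   # stays at roundoff level
```

A non-symplectic method such as explicit Euler would instead show energy growing steadily over the same 10,000 steps.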

  6. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    PubMed

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination was low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
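A hedged sketch of why a Poisson model estimates a risk ratio for a binary outcome (illustrative simulated data, not the paper's simulation design): with a single binary exposure, the Poisson-regression MLE of exp(beta) reduces to the ratio of outcome proportions in the two groups.

```python
import numpy as np

# Binary outcome with a true risk ratio of 2.0 between exposure groups.
rng = np.random.default_rng(3)
n = 200_000
exposed = rng.random(n) < 0.5
risk = np.where(exposed, 0.30, 0.15)
y = (rng.random(n) < risk).astype(int)

# With one binary covariate, the Poisson-regression slope is the log of
# the ratio of group means, i.e. the log risk ratio.
rr_hat = y[exposed].mean() / y[~exposed].mean()
log_rr = np.log(rr_hat)
# In practice the "robust Poisson" approach pairs this estimate with
# sandwich (robust) standard errors, because y is binary, not Poisson.
```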

  7. Parameter Estimation in Astronomy with Poisson-Distributed Data. I. The χ²_γ Statistic

    NASA Technical Reports Server (NTRS)

    Mighell, Kenneth J.

    1999-01-01

    Applying the standard weighted mean formula, [Σ_i n_i σ_i^(-2)] / [Σ_i σ_i^(-2)], to determine the weighted mean of data, n_i, drawn from a Poisson distribution, will, on average, underestimate the true mean by about 1 for all true mean values larger than about 3 when the common assumption is made that the error of the i-th observation is σ_i = max(√n_i, 1). This small but statistically significant offset explains the long-known observation that chi-square minimization techniques which use the modified Neyman's χ² statistic, χ²_N ≡ Σ_i (n_i − y_i)² / max(n_i, 1), to compare Poisson-distributed data with model values, y_i, will typically predict a total number of counts that underestimates the true total by about 1 count per bin. Based on my finding that the weighted mean of data drawn from a Poisson distribution can be determined using the formula [Σ_i (n_i + min(n_i, 1))(n_i + 1)^(-1)] / [Σ_i (n_i + 1)^(-1)], I propose that a new χ² statistic, χ²_γ ≡ Σ_i (n_i + min(n_i, 1) − y_i)² / (n_i + 1), should always be used to analyze Poisson-distributed data in preference to the modified Neyman's χ² statistic. I demonstrate the power and usefulness of χ²_γ minimization by using two statistical fitting techniques and five χ² statistics to analyze simulated X-ray power-law 15-channel spectra with large and small counts per bin. I show that χ²_γ minimization with the Levenberg-Marquardt or Powell's method can produce excellent results (mean slope errors of less than about 3%) with spectra having as few as 25 total counts.
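The roughly one-count bias of the standard weighted mean, and its removal by the alternative formula, can be checked numerically (a sketch of the two estimators discussed above, applied to simulated Poisson draws):

```python
import numpy as np

rng = np.random.default_rng(4)
mu = 10.0
n = rng.poisson(mu, 100_000).astype(float)

# Standard weighted mean with the common assumption sigma_i = max(sqrt(n_i), 1),
# so sigma_i^2 = max(n_i, 1). This underestimates mu by about 1.
sigma2 = np.maximum(n, 1.0)
wm_standard = np.sum(n / sigma2) / np.sum(1.0 / sigma2)

# The alternative weighting, which removes the bias.
wm_gamma = (np.sum((n + np.minimum(n, 1.0)) / (n + 1.0))
            / np.sum(1.0 / (n + 1.0)))
```

The bias arises because weighting by 1/n_i systematically favors downward-fluctuating bins; the (n_i + 1) weights avoid that trap.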

  8. Long-term statistics of extreme tsunami height at Crescent City

    NASA Astrophysics Data System (ADS)

    Dong, Sheng; Zhai, Jinjin; Tao, Shanshan

    2017-06-01

    Historically, Crescent City is one of the most vulnerable communities impacted by tsunamis along the west coast of the United States, largely owing to its offshore geography. Trans-ocean tsunamis usually produce large wave runup at Crescent Harbor, resulting in catastrophic damage, property loss, and human deaths. Determining the return values of tsunami height from relatively short-term observation data is of great significance for assessing tsunami hazards and improving engineering design along the coast of Crescent City. In the present study, the extreme tsunami heights observed along the coast of Crescent City from 1938 to 2015 are fitted using six different probabilistic distributions, namely, the Gumbel distribution, the Weibull distribution, the maximum entropy distribution, the lognormal distribution, the generalized extreme value distribution and the generalized Pareto distribution. The maximum likelihood method is applied to estimate the parameters of all above distributions. Both the Kolmogorov-Smirnov test and the root mean square error method are utilized for goodness-of-fit testing, and the better fitting distribution is selected. Assuming that the occurrence frequency of tsunamis in each year follows the Poisson distribution, the Poisson compound extreme value distribution can be used to fit the annual maximum tsunami amplitude, and then the point and interval estimations of return tsunami heights are calculated for structural design. The results show that the Poisson compound extreme value distribution fits tsunami heights very well and is suitable for determining the return tsunami heights for coastal disaster prevention.
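The compound construction above has a clean closed form: if the yearly event count is Poisson(λ) and individual heights have CDF F, the annual maximum has CDF G(x) = exp(−λ(1 − F(x))). A Monte Carlo sketch with illustrative parameters (exponential heights as a stand-in base distribution, not the Crescent City fit):

```python
import numpy as np

rng = np.random.default_rng(5)
lam = 2.0            # mean number of events per year (illustrative)
n_years = 50_000

counts = rng.poisson(lam, n_years)
annual_max = np.zeros(n_years)   # years with zero events keep max 0
for i, k in enumerate(counts):
    if k > 0:
        annual_max[i] = rng.exponential(1.0, k).max()

# Compare the empirical CDF of the annual maximum at x with the
# compound formula G(x) = exp(-lam * (1 - F(x))), F(x) = 1 - exp(-x).
x = 2.0
empirical = np.mean(annual_max <= x)
theoretical = np.exp(-lam * np.exp(-x))
```

Return levels follow by inverting G: the T-year height solves G(x) = 1 − 1/T.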

  9. Convergence of Spectral Discretizations of the Vlasov--Poisson System

    DOE PAGES

    Manzini, G.; Funaro, D.; Delzanno, G. L.

    2017-09-26

    Here we prove the convergence of a spectral discretization of the Vlasov-Poisson system. The velocity term of the Vlasov equation is discretized using either Hermite functions on the infinite domain or Legendre polynomials on a bounded domain. The spatial term of the Vlasov and Poisson equations is discretized using periodic Fourier expansions. Boundary conditions are treated in weak form through a penalty-type term that can be applied also in the Hermite case. As a matter of fact, stability properties of the approximated scheme descend from this added term. The convergence analysis is carried out in detail for the 1D-1V case, but the results can be generalized to multidimensional domains, obtained as Cartesian products, in both space and velocity. The error estimates show the spectral convergence under suitable regularity assumptions on the exact solution.

  10. Effect of Poisson's loss factor of rubbery material on underwater sound absorption of anechoic coatings

    NASA Astrophysics Data System (ADS)

    Zhong, Jie; Zhao, Honggang; Yang, Haibin; Yin, Jianfei; Wen, Jihong

    2018-06-01

    Rubbery coatings embedded with air cavities are commonly used on underwater structures to reduce reflection of incoming sound waves. In this paper, the relationships between Poisson's and modulus loss factors of rubbery materials are theoretically derived, the different effects of the tiny Poisson's loss factor on characterizing the loss factors of shear and longitudinal moduli are revealed. Given complex Young's modulus and dynamic Poisson's ratio, it is found that the shear loss factor has almost invisible variation with the Poisson's loss factor and is very close to the loss factor of Young's modulus, while the longitudinal loss factor almost linearly decreases with the increase of Poisson's loss factor. Then, a finite element (FE) model is used to investigate the effect of the tiny Poisson's loss factor, which is generally neglected in some FE models, on the underwater sound absorption of rubbery coatings. Results show that the tiny Poisson's loss factor has a significant effect on the sound absorption of homogeneous coatings within the concerned frequency range, while it has both frequency- and structure-dependent influence on the sound absorption of inhomogeneous coatings with embedded air cavities. Given the material parameters and cavity dimensions, more obvious effect can be observed for the rubbery coating with a larger lattice constant and/or a thicker cover layer.
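The modulus relationships discussed above follow from standard isotropic elasticity with complex moduli. A hedged sketch (the sign convention for the complex Poisson's ratio and all numerical values are assumptions for illustration; rubber-like ν ≈ 0.49):

```python
# Complex-modulus bookkeeping: eta_E and eta_nu are the loss factors of
# Young's modulus and Poisson's ratio.
E0, eta_E = 1.0e7, 0.3
nu0, eta_nu = 0.49, 0.002

E = E0 * (1 + 1j * eta_E)        # complex Young's modulus
nu = nu0 * (1 - 1j * eta_nu)     # complex Poisson's ratio (sign is a convention)

G = E / (2 * (1 + nu))                            # shear modulus
M = E * (1 - nu) / ((1 + nu) * (1 - 2 * nu))      # longitudinal modulus

eta_G = G.imag / G.real   # stays close to eta_E, as the paper reports
eta_M = M.imag / M.real   # drops sharply with eta_nu: the (1 - 2*nu)
                          # factor is tiny for rubber, amplifying eta_nu
```

The near-incompressibility of rubber (ν close to 0.5) is what makes the longitudinal loss factor so sensitive to even a tiny Poisson's loss factor.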

  11. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.

    1987-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.

  12. Super-stable Poissonian structures

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2012-10-01

    In this paper we characterize classes of Poisson processes whose statistical structures are super-stable. We consider a flow generated by a one-dimensional ordinary differential equation, and an ensemble of particles ‘surfing’ the flow. The particles start from random initial positions, and are propagated along the flow by stochastic ‘wave processes’ with general statistics and general cross correlations. Setting the initial positions to be Poisson processes, we characterize the classes of Poisson processes that render the particles’ positions—at all times, and invariantly with respect to the wave processes—statistically identical to their initial positions. These Poisson processes are termed ‘super-stable’ and facilitate the generalization of the notion of stationary distributions far beyond the realm of Markov dynamics.

  13. Super-integrable Calogero-type systems admit maximal number of Poisson structures

    NASA Astrophysics Data System (ADS)

    Gonera, C.; Nutku, Y.

    2001-07-01

    We present a general scheme for constructing the Poisson structure of super-integrable dynamical systems of which the rational Calogero-Moser system is the most interesting one. This dynamical system is 2N-dimensional with 2N-1 first integrals and our construction yields 2N-1 degenerate Poisson tensors that each admit 2(N-1) Casimirs. Our results are quite generally applicable to all super-integrable systems and form an alternative to the traditional bi-Hamiltonian approach.

  14. An exterior Poisson solver using fast direct methods and boundary integral equations with applications to nonlinear potential flow

    NASA Technical Reports Server (NTRS)

    Young, D. P.; Woo, A. C.; Bussoletti, J. E.; Johnson, F. T.

    1986-01-01

    A general method is developed combining fast direct methods and boundary integral equation methods to solve Poisson's equation on irregular exterior regions. The method requires O(N log N) operations where N is the number of grid points. Error estimates are given that hold for regions with corners and other boundary irregularities. Computational results are given in the context of computational aerodynamics for a two-dimensional lifting airfoil. Solutions of boundary integral equations for lifting and nonlifting aerodynamic configurations using preconditioned conjugate gradient are examined for varying degrees of thinness.
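The "fast direct method" ingredient above can be illustrated in its simplest setting: a spectral Poisson solve in O(N log N) via the FFT, here for u'' = f with periodic boundary conditions (a toy 1-D periodic case, much simpler than the irregular exterior regions treated in the paper):

```python
import numpy as np

# Solve u'' = f on [0, 2*pi) with periodic BCs: in Fourier space,
# -k^2 u_hat = f_hat, solved mode by mode (zero mode fixed to zero mean).
N = 256
x = 2 * np.pi * np.arange(N) / N
f = -np.sin(3 * x)                   # manufactured RHS; exact u = sin(3x)/9

k = np.fft.fftfreq(N, d=1.0 / N)     # integer wavenumbers
f_hat = np.fft.fft(f)
u_hat = np.zeros_like(f_hat)
nonzero = k != 0
u_hat[nonzero] = f_hat[nonzero] / (-k[nonzero] ** 2)
u = np.fft.ifft(u_hat).real          # zero-mean solution

max_err = np.max(np.abs(u - np.sin(3 * x) / 9.0))
```

The boundary integral equations in the paper are what extend this kind of fast rectangular-domain solve to irregular exterior regions.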

  15. Inverse Jacobi multiplier as a link between conservative systems and Poisson structures

    NASA Astrophysics Data System (ADS)

    García, Isaac A.; Hernández-Bermejo, Benito

    2017-08-01

    Some aspects of the relationship between conservativeness of a dynamical system (namely the preservation of a finite measure) and the existence of a Poisson structure for that system are analyzed. From the local point of view, due to the flow-box theorem we restrict ourselves to neighborhoods of singularities. In this sense, we characterize Poisson structures around the typical zero-Hopf singularity in dimension 3 under the assumption of having a local analytic first integral with non-vanishing first jet by connecting with the classical Poincaré center problem. From the global point of view, we connect the property of being strictly conservative (the invariant measure must be positive) with the existence of a Poisson structure depending on the phase space dimension. Finally, weak conservativeness in dimension two is introduced by the extension of inverse Jacobi multipliers as weak solutions of its defining partial differential equation and some of its applications are developed. Examples including Lotka-Volterra systems, quadratic isochronous centers, and non-smooth oscillators are provided.

  16. The Constitutive Modeling of Thin Films with Random Material Wrinkles

    NASA Technical Reports Server (NTRS)

    Murphey, Thomas W.; Mikulas, Martin M.

    2001-01-01

    Material wrinkles drastically alter the structural constitutive properties of thin films. Normally linear elastic materials, when wrinkled, become highly nonlinear and initially inelastic. Stiffnesses reduced by 99% and negative Poisson's ratios are typically observed. This paper presents an effective continuum constitutive model for the elastic effects of material wrinkles in thin films. The model considers general two-dimensional stress and strain states (simultaneous bi-axial and shear stress/strain) and neglects out-of-plane bending. The constitutive model is derived from a traditional mechanics analysis of an idealized physical model of random material wrinkles. Model parameters are the directly measurable wrinkle characteristics of amplitude and wavelength. For these reasons, the equations are mechanistic and deterministic. The model is compared with bi-axial tensile test data for wrinkled Kapton® HN and is shown to deterministically predict strain as a function of stress with an average RMS error of 22%. On average, fitting the model to test data yields an RMS error of 1.2%.

  17. Rotational motions for teleseismic surface waves

    NASA Astrophysics Data System (ADS)

    Lin, Chin-Jen; Huang, Han-Pang; Pham, Nguyen Dinh; Liu, Chun-Chi; Chi, Wu-Cheng; Lee, William H. K.

    2011-08-01

    We report the findings for the first teleseismic six degree-of-freedom (6-DOF) measurements including three components of rotational motions recorded by a sensitive rotation-rate sensor (model R-1, made by eentec) and three components of translational motions recorded by a traditional seismometer (STS-2) at the NACB station in Taiwan. The consistent observations in waveforms of rotational motions and translational motions in sections of Rayleigh and Love waves are presented in reference to the analytical solution for these waves in a half space of Poisson solid. We show that additional information (e.g., Rayleigh wave phase velocity, shear wave velocity of the surface layer) might be exploited from six degree-of-freedom recordings of teleseismic events at only one station. We also find significant errors in the translational records of these teleseismic surface waves due to the sensitivity of inertial translation sensors (seismometers) to rotational motions. The result suggests that the effects of such errors need to be counted in surface wave inversions commonly used to derive earthquake source parameters and Earth structure.

  18. Effects of adaptive refinement on the inverse EEG solution

    NASA Astrophysics Data System (ADS)

    Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.

    1995-10-01

    One of the fundamental problems in electroencephalography can be characterized by an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution; furthermore, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of electric and potential fields within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.

  19. AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation

    DOE PAGES

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; ...

    2016-04-19

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Here, simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  20. AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov–Poisson equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xingyu; Samulyak, Roman, E-mail: roman.samulyak@stonybrook.edu; Computational Science Initiative, Brookhaven National Laboratory, Upton, NY 11973

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  1. AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Here, simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  2. Poisson regression models outperform the geometrical model in estimating the peak-to-trough ratio of seasonal variation: a simulation study.

    PubMed

    Christensen, A L; Lundbye-Christensen, S; Dethlefsen, C

    2011-12-01

    Several statistical methods of assessing seasonal variation are available. Brookhart and Rothman [3] proposed a second-order moment-based estimator based on the geometrical model derived by Edwards [1], and reported that this estimator is superior in estimating the peak-to-trough ratio of seasonal variation compared with Edwards' estimator with respect to bias and mean squared error. Alternatively, seasonal variation may be modelled using a Poisson regression model, which provides flexibility in modelling the pattern of seasonal variation and adjustments for covariates. Based on a Monte Carlo simulation study, three estimators, one based on the geometrical model and two based on log-linear Poisson regression models, were evaluated with regard to bias and standard deviation (SD). We evaluated the estimators on data simulated according to schemes varying in seasonal variation and presence of a secular trend. All methods and analyses in this paper are available in the R package Peak2Trough [13]. Applying a Poisson regression model resulted in lower absolute bias and SD for data simulated according to the corresponding model assumptions. For data simulated to deviate from the corresponding model assumptions, the Poisson regression models still had lower bias and SD than the geometrical model. This simulation study encourages the use of Poisson regression models in estimating the peak-to-trough ratio of seasonal variation as opposed to the geometrical model. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
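In a log-linear Poisson regression with a single annual harmonic, the peak-to-trough ratio follows directly from the fitted sine and cosine coefficients. A minimal sketch of that conversion (the model form below is assumed; this is not the Peak2Trough implementation):

```python
import math

def peak_to_trough_ratio(beta_cos, beta_sin):
    """Peak-to-trough ratio implied by a log-linear Poisson model
    log mu(t) = beta0 + beta_cos*cos(2*pi*t/12) + beta_sin*sin(2*pi*t/12).
    The harmonic has amplitude sqrt(beta_cos**2 + beta_sin**2), so the peak
    rate exceeds the trough rate by a factor exp(2*amplitude)."""
    amplitude = math.sqrt(beta_cos ** 2 + beta_sin ** 2)
    return math.exp(2.0 * amplitude)
```

With no seasonal effect (both coefficients zero) the ratio is exactly 1.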

  3. Structural interactions in ionic liquids linked to higher-order Poisson-Boltzmann equations

    NASA Astrophysics Data System (ADS)

    Blossey, R.; Maggs, A. C.; Podgornik, R.

    2017-06-01

    We present a derivation of generalized Poisson-Boltzmann equations starting from classical theories of binary fluid mixtures, employing an approach based on the Legendre transform as recently applied to the case of local descriptions of the fluid free energy. Under specific symmetry assumptions, and in the linearized regime, the Poisson-Boltzmann equation reduces to a phenomenological equation introduced by Bazant et al. [Phys. Rev. Lett. 106, 046102 (2011)], 10.1103/PhysRevLett.106.046102, whereby the structuring near the surface is determined by bulk coefficients.

  4. Explicitly Representing the Solvation Shell in Continuum Solvent Calculations

    PubMed Central

    Svendsen, Hallvard F.; Merz, Kenneth M.

    2009-01-01

    A method is presented to explicitly represent the first solvation shell in continuum solvation calculations. Initial solvation shell geometries were generated with classical molecular dynamics simulations. Clusters consisting of the solute and 5 solvent molecules were fully relaxed in quantum mechanical calculations. The free energy of solvation of the solute was calculated from the free energy of formation of the cluster and the solvation free energy of the cluster calculated with continuum solvation models. The method has been implemented with two continuum solvation models, a Poisson-Boltzmann model and the IEF-PCM model. Calculations were carried out for a set of 60 ionic species. Implemented with the Poisson-Boltzmann model, the method gave an unsigned average error of 2.1 kcal/mol and a RMSD of 2.6 kcal/mol for anions; for cations, the unsigned average error was 2.8 kcal/mol and the RMSD 3.9 kcal/mol. Similar results were obtained with the IEF-PCM model. PMID:19425558

  5. A comparison of image restoration approaches applied to three-dimensional confocal and wide-field fluorescence microscopy.

    PubMed

    Verveer, P. J; Gemkow, M. J; Jovin, T. M

    1999-01-01

    We have compared different image restoration approaches for fluorescence microscopy. The most widely used algorithms were classified with a Bayesian theory according to the assumed noise model and the type of regularization imposed. We considered both Gaussian and Poisson models for the noise in combination with Tikhonov regularization, entropy regularization, Good's roughness and without regularization (maximum likelihood estimation). Simulations of fluorescence confocal imaging were used to examine the different noise models and regularization approaches using the mean squared error criterion. The assumption of a Gaussian noise model yielded only slightly higher errors than the Poisson model. Good's roughness was the best choice for the regularization. Furthermore, we compared simulated confocal and wide-field data. In general, restored confocal data are superior to restored wide-field data, but given a sufficiently higher signal level for the wide-field data, the restoration result may rival confocal data in quality. Finally, a visual comparison of experimental confocal and wide-field data is presented.
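Maximum-likelihood estimation under a Poisson noise model corresponds to the Richardson-Lucy iteration. A minimal 1-D sketch with circular boundary handling (illustrative only; the paper's restorations are 3-D and regularized):

```python
def convolve(signal, psf):
    """Circular convolution of a 1-D signal with a point-spread function."""
    n = len(signal)
    out = [0.0] * n
    for i in range(n):
        for j in range(len(psf)):
            out[i] += psf[j] * signal[(i - j) % n]
    return out

def correlate(signal, psf):
    """Adjoint of convolve: circular correlation with the same psf."""
    n = len(signal)
    return [sum(psf[j] * signal[(i + j) % n] for j in range(len(psf)))
            for i in range(n)]

def richardson_lucy(data, psf, iters=50):
    """Richardson-Lucy deconvolution, the maximum-likelihood estimate under
    Poisson noise.  Starts from a flat estimate carrying the observed flux."""
    n = len(data)
    est = [sum(data) / n] * n
    for _ in range(iters):
        blurred = convolve(est, psf)
        # guard 0/0 cells where both data and model are empty
        ratio = [d / b if b > 1e-12 else 0.0 for d, b in zip(data, blurred)]
        est = [e * c for e, c in zip(est, correlate(ratio, psf))]
    return est
```

With a normalized psf each iteration conserves total flux, which is a useful sanity check on any implementation.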

  6. Estimating False Positive Contamination in Crater Annotations from Citizen Science Data

    NASA Astrophysics Data System (ADS)

    Tar, P. D.; Bugiolacchi, R.; Thacker, N. A.; Gilmour, J. D.

    2017-01-01

    Web-based citizen science often involves the classification of image features by large numbers of minimally trained volunteers, such as the identification of lunar impact craters under the Moon Zoo project. Whilst such approaches facilitate the analysis of large image data sets, the inexperience of users and ambiguity in image content can lead to contamination from false positive identifications. We present an approach, using Linear Poisson Models and image template matching, that can quantify levels of false positive contamination in citizen science Moon Zoo crater annotations. Linear Poisson Models are a form of machine learning which supports predictive error modelling and goodness-of-fit measures, unlike most alternative machine learning methods. The proposed supervised learning system can reduce the variability in crater counts whilst providing predictive error assessments of the estimated quantities of remaining true versus false annotations. In an area of research influenced by human subjectivity, the proposed method provides a level of objectivity through the utilisation of image evidence, guided by candidate crater identifications.

  7. Complex wet-environments in electronic-structure calculations

    NASA Astrophysics Data System (ADS)

    Fisicaro, Giuseppe; Genovese, Luigi; Andreussi, Oliviero; Marzari, Nicola; Goedecker, Stefan

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of an applied electrochemical potential, including the complex electrostatic screening coming from the solvent. In the present work we present a solver to handle both the Generalized Poisson and the Poisson-Boltzmann equation. A preconditioned conjugate gradient (PCG) method has been implemented for the Generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively in some ten iterations. A self-consistent procedure enables us to solve the full Poisson-Boltzmann problem. The algorithms take advantage of a preconditioning procedure based on the BigDFT Poisson solver for the standard Poisson equation. They exhibit very high accuracy and parallel efficiency, and allow different boundary conditions, including surfaces. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes. We present test calculations for large proteins to demonstrate efficiency and performance. This work was done within the PASC and NCCR MARVEL projects. Computer resources were provided by the Swiss National Supercomputing Centre (CSCS) under Project ID s499. LG acknowledges also support from the EXTMOS EU project.
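The preconditioned conjugate gradient scheme described above can be sketched on a toy 1-D Poisson system (a generic PCG with a Jacobi preconditioner on the standard Laplacian; this is not the BigDFT solver, whose preconditioner and operators are far more elaborate):

```python
def laplacian_1d(v):
    """Matvec for the standard 1-D Laplacian with zero Dirichlet ends."""
    n = len(v)
    out = [0.0] * n
    for i in range(n):
        left = v[i - 1] if i > 0 else 0.0
        right = v[i + 1] if i < n - 1 else 0.0
        out[i] = 2.0 * v[i] - left - right
    return out

def pcg(matvec, b, precond, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive-definite
    operator given as a matvec; precond applies an approximate inverse."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                         # residual of the zero initial guess
    z = precond(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:
            break
        z = precond(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x
```

For the 1-D Laplacian the Jacobi preconditioner simply divides each residual entry by the diagonal value 2.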

  8. Design of materials with prescribed nonlinear properties

    NASA Astrophysics Data System (ADS)

    Wang, F.; Sigmund, O.; Jensen, J. S.

    2014-09-01

    We systematically design materials using topology optimization to achieve prescribed nonlinear properties under finite deformation. Instead of a formal homogenization procedure, a numerical experiment is proposed to evaluate the material performance in longitudinal and transverse tensile tests under finite deformation, i.e. stress-strain relations and Poisson's ratio. By minimizing errors between actual and prescribed properties, materials are tailored to achieve the target. Both two dimensional (2D) truss-based and continuum materials are designed with various prescribed nonlinear properties. The numerical examples illustrate optimized materials with rubber-like behavior and also optimized materials with extreme strain-independent Poisson's ratio for axial strain intervals of εi∈[0.00, 0.30].
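The quantity being prescribed in the tensile tests is simply the negative transverse-to-axial strain ratio; a trivial helper makes the definition concrete (illustrative only, not part of the topology-optimization code):

```python
def poisson_ratio(axial_strain, transverse_strain):
    """Poisson's ratio nu = -eps_transverse / eps_axial, as measured in a
    longitudinal tensile test.  Negative values indicate auxetic behavior."""
    return -transverse_strain / axial_strain
```

A conventional material contracting transversely gives a positive ratio; a material expanding transversely under tension gives a negative (auxetic) one.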

  9. A Poisson process approximation for generalized K-5 confidence regions

    NASA Technical Reports Server (NTRS)

    Arsham, H.; Miller, D. R.

    1982-01-01

    One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.

  10. Weather variability and the incidence of cryptosporidiosis: comparison of time series poisson regression and SARIMA models.

    PubMed

    Hu, Wenbiao; Tong, Shilu; Mengersen, Kerrie; Connell, Des

    2007-09-01

    Few studies have examined the relationship between weather variables and cryptosporidiosis in Australia. This paper examines the potential impact of weather variability on the transmission of cryptosporidiosis and explores the possibility of developing an empirical forecast system. Data on weather variables, notified cryptosporidiosis cases, and population size in Brisbane were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics for the period of January 1, 1996-December 31, 2004, respectively. Time series Poisson regression and seasonal auto-regressive integrated moving average (SARIMA) models were performed to examine the potential impact of weather variability on the transmission of cryptosporidiosis. Both the time series Poisson regression and SARIMA models show that seasonal and monthly maximum temperature at a prior moving average of 1 and 3 months were significantly associated with cryptosporidiosis disease. This suggests that there may be 50 more cases a year for a 1 °C increase in average maximum temperature in Brisbane. Model assessments indicated that the SARIMA model had better predictive ability than the Poisson regression model (SARIMA: root mean square error (RMSE): 0.40, Akaike information criterion (AIC): -12.53; Poisson regression: RMSE: 0.54, AIC: -2.84). Furthermore, the analysis of residuals shows that the time series Poisson regression appeared to violate a modeling assumption, in that residual autocorrelation persisted. The results of this study suggest that weather variability (particularly maximum temperature) may have played a significant role in the transmission of cryptosporidiosis. A SARIMA model may be a better predictive model than a Poisson regression model in the assessment of the relationship between weather variability and the incidence of cryptosporidiosis.
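In a log-linear Poisson regression, a fitted temperature coefficient converts to a rate ratio of exp(beta) per 1 °C, from which the implied extra annual cases follow. A sketch of that arithmetic (the coefficient and baseline count in the test are hypothetical, not the paper's estimates):

```python
import math

def extra_cases_per_degree(beta_per_degC, baseline_cases):
    """Extra annual cases implied by a +1 degC shift under a log-linear
    Poisson model: the rate is multiplied by exp(beta), so the increment is
    baseline * (exp(beta) - 1)."""
    rate_ratio = math.exp(beta_per_degC)
    return baseline_cases * (rate_ratio - 1.0)
```

For example, a rate ratio of 1.10 applied to a baseline of 500 cases would imply roughly 50 additional cases per year.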

  11. A comparison of multiple indicator kriging and area-to-point Poisson kriging for mapping patterns of herbivore species abundance in Kruger National Park, South Africa

    PubMed Central

    Kerry, Ruth; Goovaerts, Pierre; Smit, Izak P.J.; Ingram, Ben R.

    2015-01-01

    Kruger National Park (KNP), South Africa, provides protected habitats for the unique animals of the African savannah. For the past 40 years, annual aerial surveys of herbivores have been conducted to aid management decisions based on (1) the spatial distribution of species throughout the park and (2) total species populations in a year. The surveys are extremely time consuming and costly. For many years, the whole park was surveyed, but in 1998 a transect survey approach was adopted. This is cheaper and less time consuming but leaves gaps in the data spatially. Also the distance method currently employed by the park only gives estimates of total species populations but not their spatial distribution. We compare the ability of multiple indicator kriging and area-to-point Poisson kriging to accurately map species distribution in the park. A leave-one-out cross-validation approach indicates that multiple indicator kriging makes poor estimates of the number of animals, particularly the few large counts, as the indicator variograms for such high thresholds are pure nugget. Poisson kriging was applied to the prediction of two types of abundance data: spatial density and proportion of a given species. Both Poisson approaches had standardized mean absolute errors (St. MAEs) of animal counts at least an order of magnitude lower than multiple indicator kriging. The spatial density, Poisson approach (1), gave the lowest St. MAEs for the most abundant species and the proportion, Poisson approach (2), did for the least abundant species. Incorporating environmental data into Poisson approach (2) further reduced St. MAEs. PMID:25729318

  12. A comparison of multiple indicator kriging and area-to-point Poisson kriging for mapping patterns of herbivore species abundance in Kruger National Park, South Africa.

    PubMed

    Kerry, Ruth; Goovaerts, Pierre; Smit, Izak P J; Ingram, Ben R

    Kruger National Park (KNP), South Africa, provides protected habitats for the unique animals of the African savannah. For the past 40 years, annual aerial surveys of herbivores have been conducted to aid management decisions based on (1) the spatial distribution of species throughout the park and (2) total species populations in a year. The surveys are extremely time consuming and costly. For many years, the whole park was surveyed, but in 1998 a transect survey approach was adopted. This is cheaper and less time consuming but leaves gaps in the data spatially. Also the distance method currently employed by the park only gives estimates of total species populations but not their spatial distribution. We compare the ability of multiple indicator kriging and area-to-point Poisson kriging to accurately map species distribution in the park. A leave-one-out cross-validation approach indicates that multiple indicator kriging makes poor estimates of the number of animals, particularly the few large counts, as the indicator variograms for such high thresholds are pure nugget. Poisson kriging was applied to the prediction of two types of abundance data: spatial density and proportion of a given species. Both Poisson approaches had standardized mean absolute errors (St. MAEs) of animal counts at least an order of magnitude lower than multiple indicator kriging. The spatial density, Poisson approach (1), gave the lowest St. MAEs for the most abundant species and the proportion, Poisson approach (2), did for the least abundant species. Incorporating environmental data into Poisson approach (2) further reduced St. MAEs.

  13. A Conway-Maxwell-Poisson (CMP) model to address data dispersion on positron emission tomography.

    PubMed

    Santarelli, Maria Filomena; Della Latta, Daniele; Scipioni, Michele; Positano, Vincenzo; Landini, Luigi

    2016-10-01

    Positron emission tomography (PET) in medicine exploits the properties of positron-emitting unstable nuclei. The pairs of γ-rays emitted after annihilation are revealed by coincidence detectors and stored as projections in a sinogram. It is well known that radioactive decay follows a Poisson distribution; however, deviation from Poisson statistics occurs on PET projection data prior to reconstruction due to physical effects, measurement errors, correction of deadtime, scatter, and random coincidences. A model that describes the statistical behavior of measured and corrected PET data can aid in understanding the statistical nature of the data: it is a prerequisite to develop efficient reconstruction and processing methods and to reduce noise. The deviation from Poisson statistics in PET data could be described by the Conway-Maxwell-Poisson (CMP) distribution model, which is characterized by the centring parameter λ and the dispersion parameter ν, the latter quantifying the deviation from a Poisson distribution model. In particular, the parameter ν allows quantifying over-dispersion (ν<1) or under-dispersion (ν>1) of data. A simple and efficient method for λ and ν parameters estimation is introduced and assessed using Monte Carlo simulation for a wide range of activity values. The application of the method to simulated and experimental PET phantom data demonstrated that the CMP distribution parameters could detect deviation from the Poisson distribution both in raw and corrected PET data. It may be usefully implemented in image reconstruction algorithms and quantitative PET data analysis, especially in low counting emission data, as in dynamic PET data, where the method demonstrated the best accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
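The CMP mass function and its dispersion behaviour can be checked numerically by evaluating the series in log space (a sketch of the distribution only; the paper's parameter-estimation method is not reproduced here):

```python
import math

def cmp_pmf(lam, nu, max_j=60):
    """Conway-Maxwell-Poisson pmf, P(j) proportional to lam**j / (j!)**nu,
    normalised by truncating the series.  Computed in log space so the
    factorial powers cannot overflow."""
    log_w = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(max_j)]
    m = max(log_w)
    w = [math.exp(lw - m) for lw in log_w]
    z = sum(w)
    return [wi / z for wi in w]

def mean_and_variance(pmf):
    """Mean and variance of a pmf indexed by 0, 1, 2, ..."""
    mean = sum(j * p for j, p in enumerate(pmf))
    var = sum((j - mean) ** 2 * p for j, p in enumerate(pmf))
    return mean, var
```

At ν = 1 the distribution reduces to Poisson(λ), with equal mean and variance; ν < 1 inflates the variance above the mean (over-dispersion) and ν > 1 deflates it (under-dispersion).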

  14. Quantum chemistry in arbitrary dielectric environments: Theory and implementation of nonequilibrium Poisson boundary conditions and application to compute vertical ionization energies at the air/water interface

    NASA Astrophysics Data System (ADS)

    Coons, Marc P.; Herbert, John M.

    2018-06-01

    Widely used continuum solvation models for electronic structure calculations, including popular polarizable continuum models (PCMs), usually assume that the continuum environment is isotropic and characterized by a scalar dielectric constant, ɛ. This assumption is invalid at a liquid/vapor interface or any other anisotropic solvation environment. To address such scenarios, we introduce a more general formalism based on solution of Poisson's equation for a spatially varying dielectric function, ɛ(r). Inspired by nonequilibrium versions of PCMs, we develop a similar formalism within the context of Poisson's equation that includes the out-of-equilibrium dielectric response that accompanies a sudden change in the electron density of the solute, such as that which occurs in a vertical ionization process. A multigrid solver for Poisson's equation is developed to accommodate the large spatial grids necessary to discretize the three-dimensional electron density. We apply this methodology to compute vertical ionization energies (VIEs) of various solutes at the air/water interface and compare them to VIEs computed in bulk water, finding only very small differences between the two environments. VIEs computed using approximately two solvation shells of explicit water molecules are in excellent agreement with experiment for F-(aq), Cl-(aq), neat liquid water, and the hydrated electron, although errors for Li+(aq) and Na+(aq) are somewhat larger. Nonequilibrium corrections modify VIEs by up to 1.2 eV, relative to models based only on the static dielectric constant, and are therefore essential to obtain agreement with experiment. Given that the experiments (liquid microjet photoelectron spectroscopy) may be more sensitive to solutes situated at the air/water interface as compared to those in bulk water, our calculations provide some confidence that these experiments can indeed be interpreted as measurements of VIEs in bulk water.
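A 1-D analogue of a Poisson solve with a spatially varying dielectric illustrates the discretization of div(ε grad φ) (a finite-difference sketch with a direct tridiagonal solve, not the paper's 3-D multigrid solver):

```python
def solve_poisson_1d(eps_half, rho, h):
    """Finite-difference solve of -(d/dx)(eps(x) dphi/dx) = rho on a 1-D
    grid of n interior nodes with phi = 0 at both ends.  eps_half[i] is the
    dielectric at the interface i-1/2; the tridiagonal system is solved with
    the Thomas algorithm."""
    n = len(rho)
    a = [0.0] * n; b = [0.0] * n; c = [0.0] * n; d = [0.0] * n
    for i in range(n):
        el = eps_half[i]        # interface to the left of node i
        er = eps_half[i + 1]    # interface to the right of node i
        a[i] = -el / h ** 2
        b[i] = (el + er) / h ** 2
        c[i] = -er / h ** 2
        d[i] = rho[i]
    # Thomas algorithm: forward elimination then back substitution
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    phi = [0.0] * n
    phi[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        phi[i] = (d[i] - c[i] * phi[i + 1]) / b[i]
    return phi
```

With ε ≡ 1 and ρ ≡ 1 on [0, 1] the exact solution is φ(x) = x(1-x)/2, which the three-point stencil reproduces exactly because it is second-order exact on quadratics.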

  15. Quantum chemistry in arbitrary dielectric environments: Theory and implementation of nonequilibrium Poisson boundary conditions and application to compute vertical ionization energies at the air/water interface.

    PubMed

    Coons, Marc P; Herbert, John M

    2018-06-14

    Widely used continuum solvation models for electronic structure calculations, including popular polarizable continuum models (PCMs), usually assume that the continuum environment is isotropic and characterized by a scalar dielectric constant, ε. This assumption is invalid at a liquid/vapor interface or any other anisotropic solvation environment. To address such scenarios, we introduce a more general formalism based on solution of Poisson's equation for a spatially varying dielectric function, ε(r). Inspired by nonequilibrium versions of PCMs, we develop a similar formalism within the context of Poisson's equation that includes the out-of-equilibrium dielectric response that accompanies a sudden change in the electron density of the solute, such as that which occurs in a vertical ionization process. A multigrid solver for Poisson's equation is developed to accommodate the large spatial grids necessary to discretize the three-dimensional electron density. We apply this methodology to compute vertical ionization energies (VIEs) of various solutes at the air/water interface and compare them to VIEs computed in bulk water, finding only very small differences between the two environments. VIEs computed using approximately two solvation shells of explicit water molecules are in excellent agreement with experiment for F-(aq), Cl-(aq), neat liquid water, and the hydrated electron, although errors for Li+(aq) and Na+(aq) are somewhat larger. Nonequilibrium corrections modify VIEs by up to 1.2 eV, relative to models based only on the static dielectric constant, and are therefore essential to obtain agreement with experiment. Given that the experiments (liquid microjet photoelectron spectroscopy) may be more sensitive to solutes situated at the air/water interface as compared to those in bulk water, our calculations provide some confidence that these experiments can indeed be interpreted as measurements of VIEs in bulk water.

  16. Quantization of Poisson Manifolds from the Integrability of the Modular Function

    NASA Astrophysics Data System (ADS)

    Bonechi, F.; Ciccoli, N.; Qiu, J.; Tarlini, M.

    2014-10-01

    We discuss a framework for quantizing a Poisson manifold via the quantization of its symplectic groupoid, combining the tools of geometric quantization with the results of Renault's theory of groupoid C*-algebras. This setting allows very singular polarizations. In particular, we consider the case when the modular function is multiplicatively integrable, i.e., when the space of leaves of the polarization inherits a groupoid structure. If suitable regularity conditions are satisfied, then one can define the quantum algebra as the convolution algebra of the subgroupoid of leaves satisfying the Bohr-Sommerfeld conditions. We apply this procedure to the case of a family of Poisson structures on , seen as Poisson homogeneous spaces of the standard Poisson-Lie group SU( n + 1). We show that a bihamiltonian system on defines a multiplicatively integrable model on the symplectic groupoid; we compute the Bohr-Sommerfeld groupoid and show that it satisfies the properties needed for applying Renault's theory. We recover and extend Sheu's description of quantum homogeneous spaces as groupoid C*-algebras.

  17. Modification of Poisson Distribution in Radioactive Particle Counting.

    ERIC Educational Resources Information Center

    Drotter, Michael T.

    This paper focuses on radioactive particle counting statistics in laboratory and field applications, intended to aid the Health Physics technician's understanding of the effect of indeterminant errors on radioactive particle counting. It indicates that although the statistical analysis of radioactive disintegration is best described by a Poisson…

  18. Detecting isotopic ratio outliers

    NASA Astrophysics Data System (ADS)

    Bayne, C. K.; Smith, D. H.

    An alternative method is proposed for improving isotopic ratio estimates. This method mathematically models pulse-count data and uses iterative reweighted Poisson regression to estimate model parameters to calculate the isotopic ratios. This computer-oriented approach provides theoretically better methods than conventional techniques to establish error limits and to identify outliers.
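Iteratively reweighted least squares for a Poisson regression can be sketched for a single-covariate model (a generic IRLS with a log link; this is an assumed model form, not the authors' pulse-count model):

```python
import math

def fit_poisson_irls(x, y, iters=50):
    """IRLS (Fisher scoring) for a Poisson GLM with log link,
    log mu_i = a + b * x_i.  Each step solves a 2x2 weighted least-squares
    system with weights mu_i (the Poisson variance) applied to the working
    response z_i = eta_i + (y_i - mu_i) / mu_i."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        mu = [math.exp(a + b * xi) for xi in x]
        z = [(a + b * xi) + (yi - mi) / mi for xi, yi, mi in zip(x, y, mu)]
        # accumulate the weighted normal equations for [a, b]
        sw = sum(mu)
        swx = sum(m * xi for m, xi in zip(mu, x))
        swxx = sum(m * xi * xi for m, xi in zip(mu, x))
        swz = sum(m * zi for m, zi in zip(mu, z))
        swxz = sum(m * xi * zi for m, xi, zi in zip(mu, x, z))
        det = sw * swxx - swx * swx
        a = (swxx * swz - swx * swxz) / det
        b = (sw * swxz - swx * swz) / det
    return a, b
```

If the responses lie exactly on a log-linear curve, the true coefficients are a fixed point of the iteration, which makes a convenient self-check.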

  19. Acoustic Inverse Scattering for Breast Cancer Microcalcification Detection. Addendum

    DTIC Science & Technology

    2011-12-01

    the center. To conserve space, few are shown here. A graph comparing the spatial location and the error in reconstruction will follow … following graphs show the error in reconstruction as a function of position of the object along the x-axis, y-axis and the diagonal in the fourth quadrant of … the well-known Kirchhoff–Poisson formulas (see, e.g., Refs. [33,34]) allow one to represent the solution p(x,t) in terms of the spherical means

  20. Optimal weighting in fNL constraints from large scale structure in an idealised case

    NASA Astrophysics Data System (ADS)

    Slosar, Anže

    2009-03-01

    We consider the problem of optimal weighting of tracers of structure for the purpose of constraining the non-Gaussianity parameter fNL. We work within the Fisher matrix formalism expanded around a fiducial model with fNL = 0 and make several simplifying assumptions. By slicing a general sample into infinitely many samples with different biases, we derive the analytic expression for the relevant Fisher matrix element. We next consider weighting schemes that construct two effective samples from a single sample of tracers with a continuously varying bias. We show that a particularly simple ansatz for the weighting functions can recover all information about fNL in the initial sample that is recoverable using a given bias observable, and that a simple division into two equal samples is considerably suboptimal when sampling of modes is good, but only marginally suboptimal in the limit where Poisson errors dominate.

  1. Statistical model for speckle pattern optimization.

    PubMed

    Su, Yong; Zhang, Qingchuan; Gao, Zeren

    2017-11-27

    Image registration is the key technique of optical metrologies such as digital image correlation (DIC), particle image velocimetry (PIV), and speckle metrology. Its performance depends critically on the quality of the image pattern, and thus pattern optimization attracts extensive attention. In this article, a statistical model is built to optimize speckle patterns that are composed of randomly positioned speckles. It is found that the process of speckle pattern generation is essentially a filtered Poisson process. The dependence of measurement errors (including systematic errors, random errors, and overall errors) upon speckle pattern generation parameters is characterized analytically. By minimizing the errors, formulas for the optimal speckle radius are presented. Although the primary motivation is from the field of DIC, we believe that scholars in other optical measurement communities, such as PIV and speckle metrology, will benefit from these discussions.
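The generation process described (a Poisson-distributed number of speckles at random positions, each filtered by a kernel) can be sketched directly as a filtered Poisson process. The Gaussian kernel and the parameters below are illustrative assumptions, not the paper's optimized values:

```python
import math
import random

def poisson_sample(rng, lam):
    """Knuth's algorithm: multiply uniforms until the product drops
    below exp(-lam); the number of multiplications minus one is Poisson."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def speckle_pattern(width, height, mean_speckles, radius, seed=0):
    """Intensity map of a filtered Poisson process: a Poisson number of
    speckles at uniform random positions, each rendered as a Gaussian
    blob of the given radius."""
    rng = random.Random(seed)
    n = poisson_sample(rng, mean_speckles)
    centers = [(rng.uniform(0, width), rng.uniform(0, height))
               for _ in range(n)]
    img = [[0.0] * width for _ in range(height)]
    for yi in range(height):
        for xi in range(width):
            for cx, cy in centers:
                d2 = (xi - cx) ** 2 + (yi - cy) ** 2
                img[yi][xi] += math.exp(-d2 / (2.0 * radius ** 2))
    return img
```

Varying the speckle density and radius in such a generator is the knob the paper's error analysis optimizes over.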

  2. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of these are "true zeros", indicating that the drug-adverse event pairs cannot occur; they are distinguished from the remaining zero counts, which simply indicate that the drug-adverse event pairs have not occurred, or have not been reported, yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, also called signals. The maximum likelihood estimates of the model parameters are obtained using the expectation-maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
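The expectation-maximization step for an unstratified zero-inflated Poisson fit can be sketched in a few lines. This is a minimal illustration of the EM update behind the maximum likelihood estimates, not the full likelihood ratio test, and all parameter values are illustrative:

```python
import math, random

def zip_em(counts, iters=200):
    """EM for a zero-inflated Poisson: with probability pi a cell is a
    structural zero, otherwise the count is Poisson(lam)."""
    n = len(counts)
    n_zero = counts.count(0)
    pi, lam = 0.3, max(sum(counts) / n, 1e-6)   # crude starting values
    for _ in range(iters):
        # E-step: posterior probability that an observed zero is structural.
        w = pi / (pi + (1.0 - pi) * math.exp(-lam))
        # M-step: update the mixing weight and the Poisson mean.
        pi = w * n_zero / n
        lam = sum(counts) / (n - w * n_zero)
    return pi, lam

rng = random.Random(1)
def draw(pi=0.4, lam=3.0):
    """One simulated count from a ZIP(pi, lam) distribution."""
    if rng.random() < pi:
        return 0
    t, n = 0.0, 0               # Poisson(lam) via exponential interarrivals
    while True:
        t += -math.log(1.0 - rng.random())
        if t > lam:
            return n
        n += 1

data = [draw() for _ in range(5000)]
pi_hat, lam_hat = zip_em(data)
print(round(pi_hat, 2), round(lam_hat, 2))
```

With the true values pi = 0.4 and lam = 3, the recovered estimates land close to the truth; the test statistic in the paper is then built from such fitted likelihoods.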

  3. Elasticity of α-Cristobalite: A Silicon Dioxide with a Negative Poisson's Ratio

    NASA Astrophysics Data System (ADS)

    Yeganeh-Haeri, Amir; Weidner, Donald J.; Parise, John B.

    1992-07-01

    Laser Brillouin spectroscopy was used to determine the adiabatic single-crystal elastic stiffness coefficients of silicon dioxide (SiO_2) in the α-cristobalite structure. This SiO_2 polymorph, unlike other silicas and silicates, exhibits a negative Poisson's ratio; α-cristobalite contracts laterally when compressed and expands laterally when stretched. Tensorial analysis of the elastic coefficients shows that Poisson's ratio reaches a minimum value of -0.5 in some directions, whereas averaged values for the single-phased aggregate yield a Poisson's ratio of -0.16.

  4. A System of Poisson Equations for a Nonconstant Varadhan Functional on a Finite State Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavazos-Cadena, Rolando; Hernandez-Hernandez, Daniel

    2006-01-15

    Given a discrete-time Markov chain with finite state space and a stationary transition matrix, a system of 'local' Poisson equations characterizing the (exponential) Varadhan functional J(.) is given. The main results, which are derived for an arbitrary transition structure so that J(.) may be nonconstant, are as follows: (i) any solution to the local Poisson equations immediately renders Varadhan's functional, and (ii) a solution of the system always exists. The proof of this latter result is constructive and suggests a method to solve the local Poisson equations.

  5. MUSIC: MUlti-Scale Initial Conditions

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Abel, Tom

    2013-11-01

    MUSIC generates multi-scale initial conditions with multiple levels of refinement for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). MUSIC achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves on previous approaches by about two orders of magnitude in error. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier space-induced interference ringing.

  6. Poissonian renormalizations, exponentials, and power laws.

    PubMed

    Eliazar, Iddo

    2013-05-01

    This paper presents a comprehensive "renormalization study" of Poisson processes governed by exponential and power-law intensities. These Poisson processes are of fundamental importance, as they constitute the very bedrock of the universal extreme-value laws of Gumbel, Fréchet, and Weibull. Applying the method of Poissonian renormalization we analyze the emergence of these Poisson processes, unveil their intrinsic dynamical structures, determine their domains of attraction, and characterize their structural phase transitions. These structural phase transitions are shown to be governed by uniform and harmonic intensities, to have universal domains of attraction, to uniquely display intrinsic invariance, and to be intimately connected to "white noise" and to "1/f noise." Thus, we establish a Poissonian explanation to the omnipresence of white and 1/f noises.

  7. On the Dequantization of Fedosov's Deformation Quantization

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    2003-08-01

    To each natural deformation quantization on a Poisson manifold M we associate a Poisson morphism from the formal neighborhood of the zero section of the cotangent bundle to M to the formal neighborhood of the diagonal of the product M x M~, where M~ is a copy of M with the opposite Poisson structure. We call it dequantization of the natural deformation quantization. Then we "dequantize" Fedosov's quantization.

  8. An investigation of error correcting techniques for OMV and AXAF

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Fryer, John

    1991-01-01

    The original objectives of this project were to build a test system for the NASA 255/223 Reed/Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with Poisson time distributions between errors and Gaussian burst lengths. Sample means, variances, and numbers of un-correctable errors were calculated for each data set before testing.
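Error-sequence generation of the kind described (exponential, i.e. Poisson-process, gaps between bursts and Gaussian burst lengths) can be sketched as follows; the rate and burst parameters are illustrative, not those of the test apparatus:

```python
import math, random

def error_bursts(n_bits, rate, burst_mean, burst_sd, rng):
    """Error positions: exponential (Poisson-process) gaps between bursts,
    Gaussian-distributed burst lengths."""
    errors, pos = [], 0
    while True:
        pos += max(1, round(-math.log(1.0 - rng.random()) / rate))  # Poisson gap
        if pos >= n_bits:
            return errors
        length = max(1, round(rng.gauss(burst_mean, burst_sd)))     # burst length
        errors.extend(p for p in range(pos, pos + length) if p < n_bits)
        pos += length

rng = random.Random(2)
errs = error_bursts(100_000, rate=1 / 500, burst_mean=4, burst_sd=1.5, rng=rng)
# Sample statistics of the injected sequence, as in the report:
gaps = [b - a for a, b in zip(errs, errs[1:]) if b - a > 1]   # inter-burst gaps
mean_gap = sum(gaps) / len(gaps)
print(len(errs), round(mean_gap))
```

The mean inter-burst gap recovered from the generated sequence should sit near 1/rate = 500 bits, which is the kind of sanity check applied to each data set before testing.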

  9. Performance and capacity analysis of Poisson photon-counting based Iter-PIC OCDMA systems.

    PubMed

    Li, Lingbin; Zhou, Xiaolin; Zhang, Rong; Zhang, Dingchen; Hanzo, Lajos

    2013-11-04

    In this paper, an iterative parallel interference cancellation (Iter-PIC) technique is developed for optical code-division multiple-access (OCDMA) systems relying on shot-noise limited Poisson photon-counting reception. The novel semi-analytical tool of extrinsic information transfer (EXIT) charts is used for analysing both the bit error rate (BER) performance as well as the channel capacity of these systems, and the results are verified by Monte Carlo simulations. The proposed Iter-PIC OCDMA system is capable of achieving two orders of magnitude of BER improvement and a capacity improvement of 0.1 nats over conventional chip-level OCDMA systems at a coding rate of 1/10.

  10. Anisotropic norm-oriented mesh adaptation for a Poisson problem

    NASA Astrophysics Data System (ADS)

    Brèthes, Gautier; Dervieux, Alain

    2016-10-01

    We present a novel formulation for the mesh adaptation of the approximation of a Partial Differential Equation (PDE). The discussion is restricted to a Poisson problem. The proposed norm-oriented formulation extends the goal-oriented formulation since it is equation-based and uses an adjoint. At the same time, the norm-oriented formulation somewhat supersedes the goal-oriented one since it is basically a solution-convergent method. Indeed, goal-oriented methods rely on the reduction of the error in evaluating a chosen scalar output, with the consequence that, as the mesh is refined (more degrees of freedom), only this output is proven to tend to its continuous analog while the solution field itself may not converge. A remarkable quality of goal-oriented metric-based adaptation is the mathematical formulation of the mesh adaptation problem under the form of an optimization, in the well-identified set of metrics, of a well-defined functional. In the new proposed formulation, we amplify this advantage. We search, in the same well-identified set of metrics, for the minimum of a norm of the approximation error. The norm is prescribed by the user, and the method allows addressing the case of multi-objective adaptation, for example adapting the mesh for drag, lift, and moment in one shot in aerodynamics. In this work, we consider the basic linear finite-element approximation and restrict our study to the L2 norm in order to enjoy second-order convergence. Numerical examples for the Poisson problem are computed.

  11. IMFIT: A FAST, FLEXIBLE NEW PROGRAM FOR ASTRONOMICAL IMAGE FITTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erwin, Peter; Universitäts-Sternwarte München, Scheinerstrasse 1, D-81679 München

    2015-02-01

    I describe a new, open-source astronomical image-fitting program called IMFIT, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design that allows new types of image components (two-dimensional surface-brightness functions) to be easily written and added to the program. Image functions provided with IMFIT include the usual suspects for galaxy decompositions (Sérsic, exponential, Gaussian), along with Core-Sérsic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through three-dimensional luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard χ² statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-signal-to-noise ratio galaxy images using χ² minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.
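The low-count bias described in the last sentence is easy to reproduce in a toy experiment: fit a constant level to Poisson pixels once by χ² with per-pixel variances estimated from the data, and once by the Poisson maximum-likelihood (Cash-type) estimator. This is an independent illustration with made-up numbers, not IMFIT code:

```python
import math, random

def poisson(lam, rng):
    """Poisson sample via exponential interarrival times."""
    t, n = 0.0, 0
    while True:
        t += -math.log(1.0 - rng.random())
        if t > lam:
            return n
        n += 1

rng = random.Random(3)
true_mean, n_pix, n_trials = 5.0, 200, 300
chi2_fits, cash_fits = [], []
for _ in range(n_trials):
    d = [poisson(true_mean, rng) for _ in range(n_pix)]
    # chi^2 with per-pixel "Gaussian" variance estimated from the data itself
    # (sigma_i^2 = max(d_i, 1)): the weighted mean down-weights upward
    # fluctuations and biases the fitted level low.
    w = [1.0 / max(di, 1) for di in d]
    chi2_fits.append(sum(wi * di for wi, di in zip(w, d)) / sum(w))
    # Poisson maximum likelihood (Cash-type statistic): for a constant model
    # the MLE is the sample mean, which is unbiased.
    cash_fits.append(sum(d) / n_pix)

chi2_mean = sum(chi2_fits) / n_trials
cash_mean = sum(cash_fits) / n_trials
print(round(chi2_mean, 2), round(cash_mean, 2))
```

With a true level of 5 counts per pixel, the data-weighted χ² fit comes out well below 5 while the Poisson MLE recovers it, which is the bias mechanism the abstract warns about.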

  12. Monitoring Poisson observations using combined applications of Shewhart and EWMA charts

    NASA Astrophysics Data System (ADS)

    Abujiya, Mu'azu Ramat

    2017-11-01

    The Shewhart and exponentially weighted moving average (EWMA) charts for nonconformities are the most widely used procedures of choice for monitoring Poisson observations in modern industries. Individually, the Shewhart and EWMA charts are only sensitive to large and small shifts, respectively. To enhance the detection abilities of the two schemes in monitoring all kinds of shifts in Poisson count data, this study examines the performance of combined applications of the Shewhart and EWMA Poisson control charts. Furthermore, the study proposes modifications based on a well-structured statistical data collection technique, ranked set sampling (RSS), to detect shifts in the mean of a Poisson process more quickly. The relative performance of the proposed Shewhart-EWMA Poisson location charts is evaluated in terms of the average run length (ARL), standard deviation of the run length (SDRL), median run length (MRL), average ratio ARL (ARARL), average extra quadratic loss (AEQL) and performance comparison index (PCI). Consequently, all the new Poisson control charts based on the RSS method are generally superior to most of the existing schemes for monitoring Poisson processes. The use of these combined Shewhart-EWMA Poisson charts is illustrated with an example to demonstrate the practical implementation of the design procedure.
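A combined Shewhart-EWMA rule for Poisson counts can be sketched in a few lines; the design constants L and k below are illustrative, not the paper's RSS-optimized values:

```python
import math

def combined_chart(counts, lam0, weight=0.2, L=2.8, k=3.5):
    """Combined scheme: signal when either the raw count breaches the Shewhart
    limit (large shifts) or the EWMA statistic breaches its asymptotic limit
    (small sustained shifts)."""
    ewma_lim = L * math.sqrt(lam0 * weight / (2.0 - weight))
    z, signals = lam0, []
    for t, c in enumerate(counts):
        z = weight * c + (1.0 - weight) * z
        if abs(c - lam0) > k * math.sqrt(lam0) or abs(z - lam0) > ewma_lim:
            signals.append(t)
    return signals

# In-control counts around lam0 = 4, then a sustained upward shift:
counts = [4, 3, 5, 4, 4, 6, 3, 4, 5, 4] + [6, 7, 6, 8, 7, 6, 7, 8, 6, 7]
sig = combined_chart(counts, lam0=4.0)
print(sig[0])  # the EWMA side of the chart flags the sustained shift at sample 14
```

None of the shifted counts alone breaches the Shewhart limit here; the EWMA accumulates the small sustained shift and signals first, which is exactly the complementarity the combined scheme exploits.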

  13. p-brane actions and higher Roytenberg brackets

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav; Schupp, Peter; Vysoký, Jan

    2013-02-01

    Motivated by the quest to understand the analog of non-geometric flux compactification in the context of M-theory, we study higher dimensional analogs of generalized Poisson sigma models and corresponding dual string and p-brane models. We find that higher generalizations of the algebraic structures due to Dorfman, Roytenberg and Courant play an important role and establish their relation to Nambu-Poisson structures.

  14. Indentability of conventional and negative Poisson's ratio foams

    NASA Technical Reports Server (NTRS)

    Lakes, R. S.; Elms, K.

    1992-01-01

    The indentation resistance of foams, both of conventional structure and of reentrant structure giving rise to negative Poisson's ratio, is studied using holographic interferometry. In holographic indentation tests, reentrant foams had higher yield strength and lower stiffness than conventional foams of the same original relative density. Calculated energy absorption for dynamic impact is considerably higher for reentrant foam than conventional foam.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Yang; Xiao, Jianyuan; Zhang, Ruili

    Hamiltonian time integrators for the Vlasov-Maxwell equations are developed by a Hamiltonian splitting technique. The Hamiltonian functional is split into five parts, which produces five exactly solvable subsystems. Each subsystem is a Hamiltonian system equipped with the Morrison-Marsden-Weinstein Poisson bracket. Compositions of the exact solutions provide Poisson-structure-preserving/Hamiltonian methods of arbitrarily high order for the Vlasov-Maxwell equations. They are therefore accurate and conservative over long times because of their Poisson-preserving nature.

  16. Four-dimensional gravity as an almost-Poisson system

    NASA Astrophysics Data System (ADS)

    Ita, Eyo Eyo

    2015-04-01

    In this paper, we examine the phase space structure of a noncanonical formulation of four-dimensional gravity referred to as the Instanton representation of Plebanski gravity (IRPG). The typical Hamiltonian (symplectic) approach leads to an obstruction to the definition of a symplectic structure on the full phase space of the IRPG. We circumvent this obstruction, using the Lagrange equations of motion, to find the appropriate generalization of the Poisson bracket. It is shown that the IRPG does not support a Poisson bracket except on the vector constraint surface. Yet there exists a fundamental bilinear operation on its phase space which produces the correct equations of motion and induces the correct transformation properties of the basic fields. This bilinear operation is known as the almost-Poisson bracket, which fails to satisfy the Jacobi identity and in this case also the condition of antisymmetry. We place these results into the overall context of nonsymplectic systems.

  17. Poissonian renormalizations, exponentials, and power laws

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2013-05-01

    This paper presents a comprehensive “renormalization study” of Poisson processes governed by exponential and power-law intensities. These Poisson processes are of fundamental importance, as they constitute the very bedrock of the universal extreme-value laws of Gumbel, Fréchet, and Weibull. Applying the method of Poissonian renormalization we analyze the emergence of these Poisson processes, unveil their intrinsic dynamical structures, determine their domains of attraction, and characterize their structural phase transitions. These structural phase transitions are shown to be governed by uniform and harmonic intensities, to have universal domains of attraction, to uniquely display intrinsic invariance, and to be intimately connected to “white noise” and to “1/f noise.” Thus, we establish a Poissonian explanation to the omnipresence of white and 1/f noises.

  18. Symmetries of hyper-Kähler (or Poisson gauge field) hierarchy

    NASA Astrophysics Data System (ADS)

    Takasaki, K.

    1990-08-01

    Symmetry properties of the space of complex (or formal) hyper-Kähler metrics are studied in the language of hyper-Kähler hierarchies. The construction of finite symmetries is analogous to the theory of Riemann-Hilbert transformations, loop group elements now taking values in a (pseudo-) group of canonical transformations of a symplectic manifold. In spite of their highly nonlinear and involved nature, infinitesimal expressions of these symmetries are shown to have a rather simple form. These infinitesimal transformations are extended to the Plebanski key functions to give rise to a nonlinear realization of a Poisson loop algebra. The Poisson algebra structure turns out to originate in a contact structure behind a set of symplectic structures inherent in the hyper-Kähler hierarchy. Possible relations to membrane theory are briefly discussed.

  19. Stochastic Models of Quality Control on Test Misgrading.

    ERIC Educational Resources Information Center

    Wang, Jianjun

    Stochastic models are developed in this article to examine the rate of test misgrading in educational and psychological measurement. The estimation of inadvertent grading errors can serve as a basis for quality control in measurement. Limitations of traditional Poisson models have been reviewed to highlight the need to introduce new models using…

  20. Parallel Cartesian grid refinement for 3D complex flow simulations

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2013-11-01

    A second order accurate method for discretizing the Navier-Stokes equations on 3D unstructured Cartesian grids is presented. Although the grid generator is based on the oct-tree hierarchical method, a fully unstructured data-structure is adopted, enabling robust calculations for incompressible flows and avoiding both the need to synchronize the solution between different levels of refinement and the usage of prolongation/restriction operators. The current solver implements a hybrid staggered/non-staggered grid layout, employing the implicit fractional step method to satisfy the continuity equation. The pressure-Poisson equation is discretized by using a novel second order fully implicit scheme for unstructured Cartesian grids and solved using an efficient Krylov subspace solver. The momentum equation is also discretized with second order accuracy, and the high performance Newton-Krylov method is used for integrating it in time. Neumann and Dirichlet conditions are used to validate the Poisson solver against analytical functions, and grid refinement leads to a significant reduction of the solution error. The effectiveness of the fractional step method results in the stability of the overall algorithm and enables accurate multi-resolution real-life simulations. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482.
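The validation step described above (checking a Poisson solver against an analytic function) can be illustrated with a minimal Gauss-Seidel solver on a uniform grid; this is a sketch of the idea, not the paper's unstructured implicit scheme:

```python
def solve_poisson(n, f, boundary, iters=4000):
    """Gauss-Seidel iteration for u_xx + u_yy = f on the unit square, with
    Dirichlet values taken from `boundary` on the edges."""
    h = 1.0 / (n - 1)
    u = [[boundary(i * h, j * h) if i in (0, n - 1) or j in (0, n - 1) else 0.0
          for j in range(n)] for i in range(n)]
    for _ in range(iters):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                u[i][j] = 0.25 * (u[i + 1][j] + u[i - 1][j] + u[i][j + 1]
                                  + u[i][j - 1] - h * h * f(i * h, j * h))
    return u

# Manufactured solution u = x^2 + y^2, so f = 4; the 5-point stencil is exact
# for quadratics, leaving only iteration error in the comparison.
n = 17
u = solve_poisson(n, lambda x, y: 4.0, lambda x, y: x * x + y * y)
h = 1.0 / (n - 1)
err = max(abs(u[i][j] - ((i * h) ** 2 + (j * h) ** 2))
          for i in range(n) for j in range(n))
print(err < 1e-6)
```

For non-quadratic manufactured solutions the same check exhibits the second-order decay of the discretization error under grid refinement.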

  1. Threshold detection in an on-off binary communications channel with atmospheric scintillation

    NASA Technical Reports Server (NTRS)

    Webb, W. E.; Marino, J. T., Jr.

    1974-01-01

    The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated, assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis, a piecewise linear model for an adaptive threshold detection system is presented. Bit error probabilities for non-optimum threshold detection systems were also investigated.
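The threshold optimization can be sketched for the turbulence-free case, with fixed (illustrative) Poisson mean counts for the 'off' and 'on' slots; averaging over log-normal scintillation, as the report does, would add an integral over the 'on' mean:

```python
import math

def poisson_cdf(k, lam):
    """P(N <= k) for N ~ Poisson(lam)."""
    term = total = math.exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def bit_error_prob(thresh, n_off, n_on):
    """Equiprobable on-off keying: decide 'on' when the photocount >= thresh."""
    p_false_alarm = 1.0 - poisson_cdf(thresh - 1, n_off)
    p_miss = poisson_cdf(thresh - 1, n_on)
    return 0.5 * (p_false_alarm + p_miss)

# Illustrative mean counts for the 'off' (background only) and 'on' slots.
n_off, n_on = 2.0, 20.0
best = min(range(1, 40), key=lambda t: bit_error_prob(t, n_off, n_on))
print(best)  # matches the likelihood-ratio value (n_on - n_off)/ln(n_on/n_off) ~ 7.8
```

Sweeping `bit_error_prob` over thresholds for a range of signal strengths reproduces the kind of curves from which a piecewise linear adaptive-threshold model can be fitted.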

  2. Threshold detection in an on-off binary communications channel with atmospheric scintillation

    NASA Technical Reports Server (NTRS)

    Webb, W. E.

    1975-01-01

    The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated, assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis, a piecewise linear model for an adaptive threshold detection system is presented. The bit error probabilities for nonoptimum threshold detection systems were also investigated.

  3. Performance of cellular frequency-hopped spread-spectrum radio networks

    NASA Astrophysics Data System (ADS)

    Gluck, Jeffrey W.; Geraniotis, Evaggelos

    1989-10-01

    Multiple access interference is characterized for cellular mobile networks, in which users are assumed to be Poisson-distributed in the plane and employ frequency-hopped spread-spectrum signaling with transmitter-oriented assignment of frequency-hopping patterns. Exact expressions for the bit error probabilities are derived for binary coherently demodulated systems without coding. Approximations for the packet error probability are derived for coherent and noncoherent systems and these approximations are applied when forward-error-control coding is employed. In all cases, the effects of varying interference power are accurately taken into account according to some propagation law. Numerical results are given in terms of bit error probability for the exact case and throughput for the approximate analyses. Comparisons are made with previously derived bounds and it is shown that these tend to be very pessimistic.

  4. Isobaric Reconstruction of the Baryonic Acoustic Oscillation

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Yu, Hao-Ran; Zhu, Hong-Ming; Yu, Yu; Pan, Qiaoyin; Pen, Ue-Li

    2017-06-01

    In this Letter, we report a significant recovery of the linear baryonic acoustic oscillation (BAO) signature by applying the isobaric reconstruction algorithm to the nonlinear matter density field. Assuming that only the longitudinal component of the displacement is cosmologically relevant, this algorithm iteratively solves the coordinate transform between the Lagrangian and Eulerian frames without requiring any specific knowledge of the dynamics. For the dark matter field, it produces the nonlinear displacement potential with very high fidelity. The reconstruction error at the pixel level is within a few percent and is caused only by the emergence of the transverse component after shell-crossing. As it circumvents the strongest nonlinearity of the density evolution, the reconstructed field is well described by linear theory and immune from the bulk-flow smearing of the BAO signature. Therefore, this algorithm could significantly improve the measurement accuracy of the sound horizon scale s. For a perfect large-scale structure survey at redshift zero without Poisson or instrumental noise, the fractional error Δs/s is reduced by a factor of ~2.7, very close to the ideal limit with the linear power spectrum and Gaussian covariance matrix.

  5. Currie detection limits in gamma-ray spectroscopy.

    PubMed

    De Geer, Lars-Erik

    2004-01-01

    Currie hypothesis testing is applied to gamma-ray spectral data, where an optimum part of the peak is used and the background is considered well known from nearby channels. With this, the risk of making Type I errors is about 100 times lower than commonly assumed. A programme, PeakMaker, produces random peaks with given characteristics on the screen, and calculations are done to facilitate a full use of Poisson statistics in spectrum analyses. SHORT TECHNICAL NOTE SUMMARY: The Currie decision limit concept applied to spectral data is reinterpreted, which gives better consistency between the selected error risk and the observed error rates. A PeakMaker program is described and the few-count problem is analyzed.
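The role of exact Poisson statistics in setting a decision limit can be sketched as follows. The background level and nominal risk are illustrative (the note's actual analysis works on optimal peak regions); the point shown is that discreteness makes the achieved Type I risk fall well below the nominal one at low counts:

```python
import math

def poisson_tail(k, lam):
    """Exact P(N > k) for N ~ Poisson(lam)."""
    term = cdf = math.exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        cdf += term
    return 1.0 - cdf

def currie_critical_value(background, alpha=0.05):
    """Smallest integer count k_c with P(N > k_c | background) <= alpha: an
    exact-Poisson analogue of Currie's decision limit for a background that is
    well known from nearby channels."""
    k = 0
    while poisson_tail(k, background) > alpha:
        k += 1
    return k, poisson_tail(k, background)

k_c, achieved = currie_critical_value(background=0.1)
print(k_c, round(achieved, 4))   # achieved Type I risk is ~10x below the nominal 5%
```

Because counts are integers, the achievable Type I risk jumps between discrete values, and the exact-Poisson critical value typically delivers a risk far below the nominal alpha, in the spirit of the note's reinterpretation of the Currie limit.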

  6. Models for patients' recruitment in clinical trials and sensitivity analysis.

    PubMed

    Mijoule, Guillaume; Savy, Stéphanie; Savy, Nicolas

    2012-07-20

    Taking a decision on the feasibility, and estimating the duration, of patients' recruitment in a clinical trial are very important but very hard questions to answer, mainly because of the huge variability of the system. The most elaborate works on this topic are those of Anisimov and co-authors, who model the enrolment period using Gamma-Poisson processes; this allows the development of statistical tools that can help the manager of the clinical trial to answer these questions and thus to plan the trial. The main idea is to consider an ongoing study at an intermediate time, denoted t(1). Data collected on [0,t(1)] are used to calibrate the parameters of the model, which are then used to make predictions on what will happen after t(1). This method allows us to estimate the probability of ending the trial on time and suggests possible corrective actions to the trial manager, especially regarding how many centres have to be opened to finish on time. In this paper, we investigate a Pareto-Poisson model, which we compare with the Gamma-Poisson one. We discuss the accuracy of the estimation of the parameters and compare the models on a set of real case data. We make the comparison on various criteria: the expected recruitment duration, the quality of fit to the data, and the sensitivity to parameter errors. We discuss the influence of the centres' opening dates on the estimation of the duration. This is a very important question to deal with in the setting of our data set since, in fact, these dates are not known. For this discussion, we consider a uniformly distributed approach. Finally, we study the sensitivity of the expected duration of the trial with respect to the parameters of the model: we calculate to what extent an error in the estimation of the parameters generates an error in the predicted duration.
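The Gamma-Poisson recruitment model of Anisimov and co-authors can be sketched by Monte Carlo; the centre count, rate hyperparameters, and deadline below are illustrative, with all centres assumed open from time zero:

```python
import random

def trial_duration(n_target, n_centres, shape, scale, rng):
    """One Monte Carlo draw: centre rates ~ Gamma(shape, scale) patients/month;
    pooled recruitment is Poisson with the summed rate, so the waiting time to
    n_target enrolments is Gamma(n_target, 1/total_rate)."""
    total_rate = sum(rng.gammavariate(shape, scale) for _ in range(n_centres))
    return rng.gammavariate(n_target, 1.0 / total_rate)

rng = random.Random(4)
# 600 patients from 20 centres whose rates average 1 patient/month each.
durations = [trial_duration(600, 20, shape=2.0, scale=0.5, rng=rng)
             for _ in range(2000)]
mean_dur = sum(durations) / len(durations)
p_on_time = sum(d <= 35.0 for d in durations) / len(durations)
print(round(mean_dur, 1), round(p_on_time, 2))
```

Quantities like `p_on_time` are exactly what the interim analysis at t(1) updates: recalibrating the Gamma hyperparameters from early enrolment data sharpens the predicted completion probability and indicates how many extra centres would be needed.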

  7. Accounting for rate instability and spatial patterns in the boundary analysis of cancer mortality maps

    PubMed Central

    Goovaerts, Pierre

    2006-01-01

    Boundary analysis of cancer maps may highlight areas where causative exposures change through geographic space, the presence of local populations with distinct cancer incidences, or the impact of different cancer control methods. Too often, such analysis ignores the spatial pattern of incidence or mortality rates and overlooks the fact that rates computed from sparsely populated geographic entities can be very unreliable. This paper proposes a new methodology that accounts for the uncertainty and spatial correlation of rate data in the detection of significant edges between adjacent entities or polygons. Poisson kriging is first used to estimate the risk value and the associated standard error within each polygon, accounting for the population size and the risk semivariogram computed from raw rates. The boundary statistic is then defined as half the absolute difference between kriged risks. Its reference distribution, under the null hypothesis of no boundary, is derived through the generation of multiple realizations of the spatial distribution of cancer risk values. This paper presents three types of neutral models generated using methods of increasing complexity: the common random shuffle of estimated risk values, a spatial re-ordering of these risks, or p-field simulation that accounts for the population size within each polygon. The approach is illustrated using age-adjusted pancreatic cancer mortality rates for white females in 295 US counties of the Northeast (1970–1994). Simulation studies demonstrate that Poisson kriging yields more accurate estimates of the cancer risk and of how its value changes between polygons (i.e. the boundary statistic), relative to the use of raw rates or the local empirical Bayes smoother. When used in conjunction with spatial neutral models generated by p-field simulation, the boundary analysis based on Poisson kriging estimates minimizes the proportion of type I errors (i.e. edges wrongly declared significant), while the frequency of these errors is predicted well by the p-value of the statistical test. PMID:19023455
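The boundary statistic and the simplest of the three neutral models (the random shuffle) can be sketched on a toy map; Poisson kriging itself is omitted here, with the kriged risks replaced by given illustrative values:

```python
import random

def boundary_pvalues(risks, edges, n_sim, rng):
    """Boundary statistic = half the absolute difference between estimated risks
    of adjacent polygons; p-values come from the simplest neutral model in the
    paper, a random shuffle of risk values across polygons."""
    obs = {e: 0.5 * abs(risks[e[0]] - risks[e[1]]) for e in edges}
    exceed = dict.fromkeys(edges, 0)
    for _ in range(n_sim):
        shuffled = risks[:]
        rng.shuffle(shuffled)
        for a, b in edges:
            if 0.5 * abs(shuffled[a] - shuffled[b]) >= obs[(a, b)]:
                exceed[(a, b)] += 1
    return {e: (exceed[e] + 1) / (n_sim + 1) for e in edges}

rng = random.Random(5)
# Toy map: a chain of ten polygons with similar risks, plus an outlier at the end.
risks = [1.00, 1.10, 0.95, 1.05, 1.08, 1.02, 0.98, 1.12, 0.90, 2.50]
edges = [(i, i + 1) for i in range(9)]
pvals = boundary_pvalues(risks, edges, n_sim=999, rng=rng)
print(pvals[(8, 9)] < 0.05, pvals[(0, 1)] > 0.2)
```

The edge adjoining the outlier polygon is declared significant while edges between similar polygons are not; the paper's p-field neutral model refines this by also honoring population sizes when generating the null realizations.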

  8. Separate and unequal: Structural racism and infant mortality in the US.

    PubMed

    Wallace, Maeve; Crear-Perry, Joia; Richardson, Lisa; Tarver, Meshawn; Theall, Katherine

    2017-05-01

    We examined associations between state-level measures of structural racism and infant mortality among black and white populations across the US. Overall and race-specific infant mortality rates in each state were calculated from national linked birth and infant death records from 2010 to 2013. Structural racism in each state was characterized by racial inequity (ratio of black to white population estimates) in educational attainment, median household income, employment, imprisonment, and juvenile custody. Poisson regression with robust standard errors estimated infant mortality rate ratios (RR) and 95% confidence intervals (CI) associated with an IQR increase in indicators of structural racism overall and separately within black and white populations. Across all states, increasing racial inequity in unemployment was associated with a 5% increase in black infant mortality (RR=1.05, 95% CI=1.01, 1.10). Decreasing racial inequity in education was associated with an almost 10% reduction in the black infant mortality rate (RR=0.92, 95% CI=0.85, 0.99). None of the structural racism measures were significantly associated with infant mortality among whites. Structural racism may contribute to the persisting racial inequity in infant mortality. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Hamiltonian approach to Ehrenfest expectation values and Gaussian quantum states

    PubMed Central

    Bonet-Luz, Esther

    2016-01-01

    The dynamics of quantum expectation values is considered in a geometric setting. First, expectation values of the canonical observables are shown to be equivariant momentum maps for the action of the Heisenberg group on quantum states. Then, the Hamiltonian structure of Ehrenfest’s theorem is shown to be Lie–Poisson for a semidirect-product Lie group, named the Ehrenfest group. The underlying Poisson structure produces classical and quantum mechanics as special limit cases. In addition, quantum dynamics is expressed in the frame of the expectation values, in which the latter undergo canonical Hamiltonian motion. In the case of Gaussian states, expectation values dynamics couples to second-order moments, which also enjoy a momentum map structure. Eventually, Gaussian states are shown to possess a Lie–Poisson structure associated with another semidirect-product group, which is called the Jacobi group. This structure produces the energy-conserving variant of a class of Gaussian moment models that have previously appeared in the chemical physics literature. PMID:27279764

  10. A Novel Method for Preparing Auxetic Foam from Closed-cell Polymer Foam Based on Steam Penetration and Condensation (SPC) Process.

    PubMed

    Fan, Donglei; Li, Minggang; Qiu, Jian; Xing, Haiping; Jiang, Zhiwei; Tang, Tao

    2018-05-31

    Auxetic materials are a class of materials possessing a negative Poisson's ratio. Here we establish a novel method for preparing auxetic foam from closed-cell polymer foam based on a steam penetration and condensation (SPC) process. Using polyethylene (PE) closed-cell foam as an example, the resultant SPC-treated foams exhibit a negative Poisson's ratio in both stretching and compression tests. The effects of steam treatment temperature and time on the conversion efficiency to negative-Poisson's-ratio foam are investigated, and the mechanism by which the SPC method forms the re-entrant structure is discussed. The results indicate that the presence of sufficient steam within the cells is a critical factor for the negative Poisson's ratio conversion in the SPC process. The pressure difference caused by steam condensation is the driving force for the conversion from conventional closed-cell foam to negative-Poisson's-ratio foam. Furthermore, the applicability of the SPC process for fabricating auxetic foam is studied by replacing the PE foam with closed-cell polyvinyl chloride (PVC) foam, or the water steam with ethanol steam. The results verify the universality of the SPC process for fabricating auxetic foams from conventional closed-cell foams. In addition, we explore the potential application of the obtained auxetic foams in the fabrication of shape memory polymer materials.

  11. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.; ,

    1985-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace and Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions through the development of a modeling error distribution or an approximative boundary generation. This error evaluation can be used to develop highly accurate CVBEM models of the heat transport process, and the resulting model can be used as a test case for evaluating the precision of domain models based on finite elements or finite differences.

  12. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments.

    PubMed

    Fisicaro, G; Genovese, L; Andreussi, O; Marzari, N; Goedecker, S

    2016-01-07

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively within some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.
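The paper's preconditioned solver is not reproduced in the abstract; as context, its inner workhorse is conjugate-gradient iteration on a discrete Poisson operator. Below is a self-contained 1D sketch, with a finite-difference Laplacian standing in for the full generalized operator (an assumption for illustration, not the BigDFT implementation).

```python
import numpy as np

# Hedged sketch: plain conjugate gradient applied to the 1D discrete Poisson
# problem -u'' = f with homogeneous Dirichlet boundaries, discretized by
# second-order finite differences on n interior points with spacing h.
def solve_poisson_cg(f, h, tol=1e-8, maxiter=1000):
    n = len(f)

    def apply_A(u):
        # Tridiagonal (-1, 2, -1) / h^2 operator (the negative 1D Laplacian).
        Au = 2.0 * u.copy()
        Au[1:] -= u[:-1]
        Au[:-1] -= u[1:]
        return Au / h**2

    u = np.zeros(n)
    r = f - apply_A(u)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        u += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return u
```

A manufactured solution u(x) = sin(pi x), f(x) = pi^2 sin(pi x) recovers the expected second-order accuracy.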

  14. Massively Parallel Solution of Poisson Equation on Coarse Grain MIMD Architectures

    NASA Technical Reports Server (NTRS)

    Fijany, A.; Weinberger, D.; Roosta, R.; Gulati, S.

    1998-01-01

    This paper presents a new algorithm, designated the Fast Invariant Imbedding algorithm, for the solution of the Poisson equation on vector and massively parallel MIMD architectures. This algorithm achieves the same optimal computational efficiency as other fast Poisson solvers while offering a much better structure for vector and parallel implementation. Our implementation on the Intel Delta and Paragon shows that a speedup of over two orders of magnitude can be achieved even for moderate-size problems.
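The Fast Invariant Imbedding algorithm itself is not given in the abstract. For context, the sketch below implements a classical member of the "fast Poisson solver" family it is compared against: a direct sine-transform solve of the 1D Dirichlet problem (a generic stand-in for illustration, not the paper's method).

```python
import numpy as np

# Hedged sketch of a classical O(n log n) fast Poisson solver: diagonalize the
# finite-difference Laplacian with a type-I discrete sine transform (DST-I),
# divide by its eigenvalues, and transform back.
def dst1(x):
    """Type-I discrete sine transform computed via an odd-extended FFT."""
    n = len(x)
    y = np.zeros(2 * n + 2)
    y[1:n + 1] = x
    y[n + 2:] = -x[::-1]
    return -np.fft.fft(y).imag[1:n + 1] / 2.0

def fast_poisson_1d(f, h):
    """Solve -u'' = f with zero Dirichlet boundaries on n interior points."""
    n = len(f)
    k = np.arange(1, n + 1)
    eig = (2.0 - 2.0 * np.cos(np.pi * k / (n + 1))) / h**2  # FD Laplacian eigenvalues
    u_hat = dst1(f) / eig
    return dst1(u_hat) * 2.0 / (n + 1)  # DST-I is self-inverse up to 2/(n+1)
```

The same diagonalization idea extends dimension by dimension to 2D and 3D grids.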

  15. Indentability of conventional and negative Poisson's ratio foams

    NASA Technical Reports Server (NTRS)

    Lakes, R. S.; Elms, K.

    1992-01-01

    The indentation resistance of foams, both of conventional structure and of re-entrant structure giving rise to a negative Poisson's ratio, is studied using holographic interferometry. In holographic indentation tests, re-entrant foams had higher yield strengths σ_y and lower stiffness E than conventional foams of the same original relative density. Calculated energy absorption for dynamic impact is considerably higher for re-entrant foam than for conventional foam.

  16. A new multivariate zero-adjusted Poisson model with applications to biomedicine.

    PubMed

    Liu, Yin; Tian, Guo-Liang; Tang, Man-Lai; Yuen, Kam Chuen

    2018-05-25

    Recently, although advances have been made in modeling multivariate count data, existing models still have several limitations: (i) the multivariate Poisson log-normal model (Aitchison and Ho) cannot be used to fit multivariate count data with excess zero-vectors; (ii) the multivariate zero-inflated Poisson (ZIP) distribution (Li et al., 1999) cannot be used to model zero-truncated/deflated count data and is difficult to apply in high-dimensional cases; (iii) the Type I multivariate zero-adjusted Poisson (ZAP) distribution (Tian et al., 2017) can only model multivariate count data with a special correlation structure in which the correlations among components are all positive or all negative. In this paper, we first introduce a new multivariate ZAP distribution, based on a multivariate Poisson distribution, which allows a more flexible dependency structure among components; that is, some correlation coefficients can be positive while others are negative. We then develop its important distributional properties and provide efficient statistical inference methods for the multivariate ZAP model with or without covariates. Two real data examples in biomedicine illustrate the proposed methods. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Poisson's Ratio and Auxetic Properties of Natural Rocks

    NASA Astrophysics Data System (ADS)

    Ji, Shaocheng; Li, Le; Motra, Hem Bahadur; Wuttke, Frank; Sun, Shengsi; Michibayashi, Katsuyoshi; Salisbury, Matthew H.

    2018-02-01

    Here we provide an appraisal of the Poisson's ratios (υ) for natural elements, common oxides, silicate minerals, and rocks with the purpose of searching for naturally auxetic materials. The Poisson's ratios of equivalently isotropic polycrystalline aggregates were calculated from dynamically measured elastic properties. Alpha-cristobalite is currently the only known naturally occurring mineral that has exclusively negative υ values at 20-1,500°C. Quartz and potentially berlinite (AlPO4) display auxetic behavior in the vicinity of their α-β structure transition. None of the crystalline igneous and metamorphic rocks (e.g., amphibolite, gabbro, granite, peridotite, and schist) display auxetic behavior at pressures of >5 MPa and room temperature. Our experimental measurements showed that quartz-rich sedimentary rocks (i.e., sandstone and siltstone) are most likely the only rocks with negative Poisson's ratios at low confining pressures (≤200 MPa) because their main constituent mineral, α-quartz, already has an extremely low Poisson's ratio (υ = 0.08) and they contain microcracks, micropores, and secondary minerals. This finding may provide a new explanation for the formation of dome-and-basin structures in quartz-rich sedimentary rocks in response to horizontal compressional stress in the upper crust.

  18. Coupling finite element and spectral methods: First results

    NASA Technical Reports Server (NTRS)

    Bernardi, Christine; Debit, Naima; Maday, Yvon

    1987-01-01

    A Poisson equation on a rectangular domain is solved by coupling two methods: the domain is divided in two squares, a finite element approximation is used on the first square and a spectral discretization is used on the second one. Two kinds of matching conditions on the interface are presented and compared. In both cases, error estimates are proved.

  19. Hyperbolically Patterned 3D Graphene Metamaterial with Negative Poisson's Ratio and Superelasticity.

    PubMed

    Zhang, Qiangqiang; Xu, Xiang; Lin, Dong; Chen, Wenli; Xiong, Guoping; Yu, Yikang; Fisher, Timothy S; Li, Hui

    2016-03-16

    A hyperbolically patterned 3D graphene metamaterial (GM) with negative Poisson's ratio and superelasticity is highlighted. It is synthesized by a modified hydrothermal approach and subsequent oriented freeze-casting strategy. GM presents a tunable Poisson's ratio by adjusting the structural porosity, macroscopic aspect ratio (L/D), and freeze-casting conditions. Such a GM suggests promising applications as soft actuators, sensors, robust shock absorbers, and environmental remediation. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Computational wave dynamics for innovative design of coastal structures

    PubMed Central

    GOTOH, Hitoshi; OKAYASU, Akio

    2017-01-01

    For innovative design of coastal structures, Numerical Wave Flumes (NWFs), which are solvers of the Navier-Stokes equations for free-surface flows, are key tools. In this article, various methods and techniques for NWFs are overviewed. In the first half, key techniques of NWFs, namely interface capturing (MAC, VOF, C-CUP), and the significance of NWFs in comparison with conventional wave models are described. In the latter part of this article, recent improvements of the particle method, one of the cores of NWFs, are shown. Methods for attenuating unphysical pressure fluctuation and improving accuracy, such as the CMPS method for momentum conservation, a higher-order source term for the Poisson Pressure Equation (PPE), a higher-order Laplacian, an error-compensating source in the PPE, and gradient correction for ensuring Taylor-series consistency, are reviewed briefly. Finally, the latest frontier of accurate particle methods is described, including Dynamic Stabilization, which provides the minimum required artificial repulsive force to improve the stability of the computation, and Space Potential Particles, which describe the exact free-surface boundary condition. PMID:29021506

  1. Adaptive Detector Arrays for Optical Communications Receivers

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V.; Srinivasan, M.

    2000-01-01

    The structure of an optimal adaptive array receiver for ground-based optical communications is described and its performance investigated. Kolmogorov phase screen simulations are used to model the sample functions of the focal-plane signal distribution due to turbulence and to generate realistic spatial distributions of the received optical field. This novel array detector concept reduces interference from background radiation by effectively assigning higher confidence levels at each instant of time to those detector elements that contain significant signal energy and suppressing those that do not. A simpler suboptimum structure that replaces the continuous weighting function of the optimal receiver by a hard decision on the selection of the signal detector elements also is described and evaluated. Approximations and bounds to the error probability are derived and compared with the exact calculations and receiver simulation results. It is shown that, for photon-counting receivers observing Poisson-distributed signals, performance improvements of approximately 5 dB can be obtained over conventional single-detector photon-counting receivers, when operating in high background environments.
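As a hedged illustration of the photon-counting setting (not the adaptive-array receiver itself), the sketch below computes the symbol-error probability of a simple threshold detector for on-off keying with Poisson-distributed counts; `Ks`, `Kb`, and the threshold are hypothetical parameters.

```python
import math

# Hedged sketch: on-off keying with Poisson photon counts. The "off" symbol
# yields mean Kb background counts, the "on" symbol mean Ks + Kb counts, and
# the detector decides "on" whenever the count exceeds a fixed threshold.
def pois_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def ook_error_prob(Ks, Kb, threshold):
    """Average error probability for equiprobable symbols."""
    # Miss: "on" sent but count <= threshold.
    miss = sum(pois_pmf(k, Ks + Kb) for k in range(threshold + 1))
    # False alarm: "off" sent but count > threshold.
    fa = 1.0 - sum(pois_pmf(k, Kb) for k in range(threshold + 1))
    return 0.5 * (miss + fa)
```

Increasing the signal count Ks at fixed background drives the error probability down, which is the trend the array receiver exploits by concentrating on signal-bearing detector elements.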

  2. Generalized derivation extensions of 3-Lie algebras and corresponding Nambu-Poisson structures

    NASA Astrophysics Data System (ADS)

    Song, Lina; Jiang, Jun

    2018-01-01

    In this paper, we introduce the notion of a generalized derivation on a 3-Lie algebra. We construct a new 3-Lie algebra using a generalized derivation and call it the generalized derivation extension. We show that the corresponding Leibniz algebra on the space of fundamental objects is the double of a matched pair of Leibniz algebras. We also determine the corresponding Nambu-Poisson structures under some conditions.

  3. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
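The data "explosion" step described above can be sketched in a few lines. This is a hedged illustration of the general piecewise-exponential data preparation; the record layout and function name are our own, not the %PCFrailty macro's.

```python
# Hedged sketch: split one subject's follow-up time at the piece boundaries.
# Each resulting record carries the piece index, the exposure time spent in
# that piece, and an event indicator; a Poisson GLM with offset log(exposure)
# on these records is equivalent to the piecewise-constant-hazard model.
# Assumes the cut points cover the subject's follow-up.
def explode(time, event, cuts):
    records = []
    start = 0.0
    for j, end in enumerate(cuts):
        if time <= start:
            break
        exposure = min(time, end) - start
        d = int(event and time <= end)  # event falls in this piece
        records.append({"piece": j, "exposure": exposure, "event": d})
        start = end
    return records
```

For example, a subject observed for 2.5 time units with an event, under cut points [1, 2, 3], yields three records whose exposures sum to 2.5 and whose event indicator is 1 only in the last piece.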

  4. ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION

    PubMed Central

    HOLST, MICHAEL; MCCAMMON, JAMES ANDREW; YU, ZEYUN; ZHOU, YOUNGCHENG; ZHU, YUNRONG

    2011-01-01

    We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. 
The stability advantages of the new regularization scheme are demonstrated with FETK through comparisons with the original regularization approach for a model problem. The convergence and accuracy of the overall AFEM algorithm is also illustrated by numerical approximation of electrostatic solvation energy for an insulin protein. PMID:21949541

  5. Morphology and linear-elastic moduli of random network solids.

    PubMed

    Nachtrab, Susan; Kapfer, Sebastian C; Arns, Christoph H; Madadi, Mahyar; Mecke, Klaus; Schröder-Turk, Gerd E

    2011-06-17

    The effective linear-elastic moduli of disordered network solids are analyzed by voxel-based finite element calculations. We analyze network solids given by Poisson-Voronoi processes and by the structure of collagen fiber networks imaged by confocal microscopy. The solid volume fraction ϕ is varied by adjusting the fiber radius, while keeping the structural mesh or pore size of the underlying network fixed. For intermediate ϕ, the bulk and shear modulus are approximated by empirical power-laws K(ϕ) ∝ ϕ^n and G(ϕ) ∝ ϕ^m with n≈1.4 and m≈1.7. The exponents for the collagen and the Poisson-Voronoi network solids are similar, and are close to the values n=1.22 and m=2.11 found in a previous voxel-based finite element study of Poisson-Voronoi systems with different boundary conditions. However, the exponents of these empirical power-laws are at odds with the analytic values of n=1 and m=2, valid for low-density cellular structures in the limit of thin beams. We propose a functional form for K(ϕ) that models the cross-over from a power-law at low densities to a porous solid at high densities; a fit of the data to this functional form yields the asymptotic exponent n≈1.00, as expected. Further, both the intensity of the Poisson-Voronoi process and the collagen concentration in the samples, both of which alter the typical pore or mesh size, affect the effective moduli only by the resulting change of the solid volume fraction. These findings suggest that a network solid with the structure of the collagen networks can be modeled in quantitative agreement by a Poisson-Voronoi process. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
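Empirical power-law fits like K(ϕ) ∝ ϕ^n are typically obtained by ordinary least squares on log-transformed data; a minimal sketch (synthetic data, not the paper's finite element results):

```python
import numpy as np

# Hedged sketch: estimate the exponent n of K(phi) = C * phi**n by fitting a
# straight line to log K versus log phi; the slope is the exponent.
def powerlaw_exponent(phi, K):
    slope, intercept = np.polyfit(np.log(phi), np.log(K), 1)
    return slope
```

On exact power-law data the recovered slope matches the generating exponent to machine precision; with noisy moduli it returns the least-squares estimate.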

  6. Two-dimensional mesh embedding for Galerkin B-spline methods

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Moser, Robert D.

    1995-01-01

    A number of advantages result from using B-splines as basis functions in a Galerkin method for solving partial differential equations. Among them are arbitrary order of accuracy and high resolution similar to that of compact schemes but without the aliasing error. This work develops another property, namely, the ability to treat semi-structured embedded or zonal meshes for two-dimensional geometries. This can drastically reduce the number of grid points in many applications. Both integer and non-integer refinement ratios are allowed. The report begins by developing an algorithm for choosing basis functions that yield the desired mesh resolution. These functions are suitable products of one-dimensional B-splines. Finally, test cases for linear scalar equations such as the Poisson and advection equation are presented. The scheme is conservative and has uniformly high order of accuracy throughout the domain.

  7. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    PubMed

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
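A hedged sketch of the multiscale count representation underlying such models: parent counts are the sums of child counts and, given a parent count, a child is binomially distributed with success probability equal to the rate ratio whose density the mixtures above model. The helper below (our own illustration, assuming the number of bins is a power of two) builds the count pyramid:

```python
# Hedged sketch: aggregate fine-scale Poisson counts pairwise into coarser
# scales. At each level, a pair (a, b) has parent a + b; conditionally on the
# parent, the left child is Binomial(a + b, rho), where rho is the ratio of
# the left-child intensity to the parent intensity.
def haar_count_pyramid(counts):
    levels = [list(counts)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([prev[i] + prev[i + 1] for i in range(0, len(prev), 2)])
    return levels
```

This pyramid is the deterministic part of the decomposition; the statistical modeling in the paper concerns the rate ratios at each split.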

  8. Technical and biological variance structure in mRNA-Seq data: life in the real world

    PubMed Central

    2012-01-01

    Background mRNA expression data from next generation sequencing platforms are obtained in the form of counts per gene or exon. Counts have classically been assumed to follow a Poisson distribution, in which the variance is equal to the mean. The Negative Binomial distribution, which allows for over-dispersion, i.e., for the variance to be greater than the mean, is also commonly used to model count data. Results In mRNA-Seq data from 25 subjects, we found technical variation to generally follow a Poisson distribution, as has been reported previously, while biological variability was over-dispersed relative to the Poisson model. The mean-variance relationship across all genes was quadratic, in keeping with a Negative Binomial (NB) distribution. Over-dispersed Poisson and NB distributional assumptions demonstrated marked improvements in goodness-of-fit (GOF) over the standard Poisson model assumptions, but with evidence of over-fitting in some genes. Modeling of experimental effects improved GOF for high variance genes but increased the over-fitting problem. Conclusions These conclusions will guide development of analytical strategies for accurate modeling of variance structure in these data and for sample size determination, which in turn will aid in the identification of true biological signals that inform our understanding of biological systems. PMID:22769017
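A minimal, hedged illustration of the over-dispersion diagnostic implied above: for Poisson-distributed counts the Pearson dispersion statistic is near 1, while values well above 1 point to a Negative Binomial-like quadratic mean-variance relationship. This is a generic sketch, not the paper's analysis pipeline.

```python
import numpy as np

# Hedged sketch: Pearson dispersion for replicate counts of one gene.
# Under a Poisson model the expected value is about 1; over-dispersed
# (e.g. Negative Binomial) counts give values substantially above 1.
def pearson_dispersion(counts):
    mu = np.mean(counts)
    return np.sum((counts - mu) ** 2 / mu) / (len(counts) - 1)
```

In practice this statistic would be computed per gene across replicates, and genes with large values flagged as over-dispersed relative to the Poisson model.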

  9. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    NASA Astrophysics Data System (ADS)

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    2018-02-01

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV, spanning five orders of magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. The algorithm is found to outperform current leading x-ray inversion algorithms when the error due to counting statistics is high.

  10. Extending the Solvation-Layer Interface Condition Continuum Electrostatic Model to a Linearized Poisson-Boltzmann Solvent.

    PubMed

    Molavi Tabrizi, Amirhossein; Goossens, Spencer; Mehdizadeh Rahimi, Ali; Cooper, Christopher D; Knepley, Matthew G; Bardhan, Jaydeep P

    2017-06-13

    We extend the linearized Poisson-Boltzmann (LPB) continuum electrostatic model for molecular solvation to address charge-hydration asymmetry. Our new solvation-layer interface condition (SLIC)/LPB corrects for first-shell response by perturbing the traditional continuum-theory interface conditions at the protein-solvent and the Stern-layer interfaces. We also present a GPU-accelerated treecode implementation capable of simulating large proteins, and our results demonstrate that the new model exhibits significant accuracy improvements over traditional LPB models, while reducing the number of fitting parameters from dozens (atomic radii) to just five parameters, which have physical meanings related to first-shell water behavior at an uncharged interface. In particular, atom radii in the SLIC model are not optimized but uniformly scaled from their Lennard-Jones radii. Compared to explicit-solvent free-energy calculations of individual atoms in small molecules, SLIC/LPB is significantly more accurate than standard parametrizations (RMS error 0.55 kcal/mol for SLIC, compared to RMS error of 3.05 kcal/mol for standard LPB). On parametrizing the electrostatic model with a simple nonpolar component for total molecular solvation free energies, our model predicts octanol/water transfer free energies with an RMS error 1.07 kcal/mol. A more detailed assessment illustrates that standard continuum electrostatic models reproduce total charging free energies via a compensation of significant errors in atomic self-energies; this finding offers a window into improving the accuracy of Generalized-Born theories and other coarse-grained models. Most remarkably, the SLIC model also reproduces positive charging free energies for atoms in hydrophobic groups, whereas standard PB models are unable to generate positive charging free energies regardless of the parametrized radii. The GPU-accelerated solver is freely available online, as is a MATLAB implementation.

  11. Nonlocal Poisson-Fermi double-layer models: Effects of nonuniform ion sizes on double-layer structure

    NASA Astrophysics Data System (ADS)

    Xie, Dexuan; Jiang, Yi

    2018-05-01

    This paper reports a nonuniform ionic size nonlocal Poisson-Fermi double-layer model (nuNPF) and a uniform ionic size nonlocal Poisson-Fermi double-layer model (uNPF) for an electrolyte mixture of multiple ionic species, variable voltages on electrodes, and variable induced charges on boundary segments. The finite element solvers of nuNPF and uNPF are developed and applied to typical double-layer tests defined on a rectangular box, a hollow sphere, and a hollow rectangle with a charged post. Numerical results show that nuNPF can significantly improve the quality of the ionic concentrations and electric fields generated from uNPF, implying that the effect of nonuniform ion sizes is a key consideration in modeling the double-layer structure.

  12. A statistical approach for inferring the 3D structure of the genome.

    PubMed

    Varoquaux, Nelle; Ay, Ferhat; Noble, William Stafford; Vert, Jean-Philippe

    2014-06-15

    Recent technological advances allow the measurement, in a single Hi-C experiment, of the frequencies of physical contacts among pairs of genomic loci at a genome-wide scale. The next challenge is to infer, from the resulting DNA-DNA contact maps, accurate 3D models of how chromosomes fold and fit into the nucleus. Many existing inference methods rely on multidimensional scaling (MDS), in which the pairwise distances of the inferred model are optimized to resemble pairwise distances derived directly from the contact counts. These approaches, however, often optimize a heuristic objective function and require strong assumptions about the biophysics of DNA to transform interaction frequencies to spatial distances, and thereby may lead to incorrect structure reconstruction. We propose a novel approach to infer a consensus 3D structure of a genome from Hi-C data. The method incorporates a statistical model of the contact counts, assuming that the counts between two loci follow a Poisson distribution whose intensity decreases with the physical distance between the loci. The method can automatically adjust the transfer function relating the spatial distance to the Poisson intensity and infer a genome structure that best explains the observed data. We compare two variants of our Poisson method, with or without optimization of the transfer function, to four different MDS-based algorithms (two metric MDS methods using different stress functions, a non-metric version of MDS, and ChromSDE, a recently described advanced MDS method) on a wide range of simulated datasets. We demonstrate that the Poisson models reconstruct better structures than all MDS-based methods, particularly at low coverage and high resolution, and we highlight the importance of optimizing the transfer function.
On publicly available Hi-C data from mouse embryonic stem cells, we show that the Poisson methods lead to more reproducible structures than MDS-based methods when we use data generated using different restriction enzymes, and when we reconstruct structures at different resolutions. A Python implementation of the proposed method is available at http://cbio.ensmp.fr/pastis. © The Author 2014. Published by Oxford University Press.
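The statistical core described above can be sketched as a Poisson negative log-likelihood with a power-law transfer function. Here `beta` and `alpha` stand for the free transfer-function parameters, and this simplified form (our assumption for illustration, not the PASTIS code) drops the constant log(c!) term.

```python
import numpy as np

# Hedged sketch: Poisson negative log-likelihood of observed contact counts
# c_ij given candidate pairwise distances d_ij, with transfer function
# lambda_ij = beta * d_ij**alpha (alpha < 0, so intensity decays with
# distance). Constant terms in the counts are omitted.
def hic_poisson_nll(counts, dists, beta, alpha):
    lam = beta * dists ** alpha
    return np.sum(lam - counts * np.log(lam))
```

Structure inference then amounts to minimizing this objective over the coordinates that determine the distances, optionally also over `beta` and `alpha`.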

  13. Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.

    PubMed

    Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah

    2012-01-01

    Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.
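For reference, the COM-Poisson pmf has the closed form P(Y = y) proportional to lam**y / (y!)**nu; a minimal log-space evaluation (an illustrative sketch, not the article's MLE code; the truncation length is an assumption) is:

```python
import math

def com_poisson_pmf(y, lam, nu, max_terms=100):
    """COM-Poisson pmf P(Y=y) = lam**y / (y!)**nu / Z(lam, nu),
    with the infinite normalizing sum Z truncated at max_terms.
    nu = 1 recovers the ordinary Poisson; nu < 1 is overdispersed
    and nu > 1 underdispersed. Computed in log space for stability."""
    log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1)
                 for j in range(max_terms)]
    m = max(log_terms)
    log_z = m + math.log(sum(math.exp(t - m) for t in log_terms))
    return math.exp(y * math.log(lam) - nu * math.lgamma(y + 1) - log_z)
```

At nu = 1 this reproduces the ordinary Poisson pmf, while nu < 1 visibly fattens the right tail, which is the flexibility the article evaluates.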

  14. Spatial variation of natural radiation and childhood leukaemia incidence in Great Britain.

    PubMed

    Richardson, S; Monfort, C; Green, M; Draper, G; Muirhead, C

    This paper describes an analysis of the geographical variation of childhood leukaemia incidence in Great Britain over a 15 year period in relation to natural radiation (gamma and radon). Data at the level of the 459 district-level local authorities in England and Wales and regional districts in Scotland are analysed in two complementary ways: first, by Poisson regressions with the inclusion of environmental covariates and a smooth spatial structure; secondly, by a hierarchical Bayesian model in which extra-Poisson variability is modelled explicitly in terms of spatial and non-spatial components. This analysis gives a strong indication that a substantial part of the variability is accounted for by a local neighbourhood 'clustering' structure. This structure is furthermore relatively stable over the 15 year period for the lymphocytic leukaemias, which make up the majority of observed cases. We found no evidence of a positive association of childhood leukaemia incidence with outdoor or indoor gamma radiation levels. There is no consistent evidence of any association with radon levels. Indeed, in the Poisson regressions, a significant positive association was observed for only one 5-year period, a result which is not compatible with a stable environmental effect. Moreover, this positive association became clearly non-significant when over-dispersion relative to the Poisson distribution was taken into account.
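The extra-Poisson variability (over-dispersion) at issue can be diagnosed with a Pearson dispersion statistic; the sketch below (generic, with simulated data in place of the leukaemia counts) contrasts Poisson counts with negative-binomial counts of the same mean:

```python
import numpy as np

def pearson_dispersion(y, mu, n_params=1):
    """Pearson chi-square divided by residual degrees of freedom.
    Values well above 1 signal extra-Poisson (over-)dispersion,
    the situation the hierarchical model above handles explicitly."""
    return float(np.sum((y - mu) ** 2 / mu) / (len(y) - n_params))

rng = np.random.default_rng(1)
mu = 5.0
poisson_counts = rng.poisson(mu, 2000)
# Negative binomial with the same mean (5) but variance 17.5.
overdispersed = rng.negative_binomial(2, 2.0 / (2.0 + mu), 2000)
```

For the Poisson sample the statistic sits near 1; for the over-dispersed sample it sits near var/mean = 3.5, the kind of excess the Bayesian spatial components absorb.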

  15. SL(2,C) gravity on noncommutative space with Poisson structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miao Yangang; Zhang Shaojun

    2010-10-15

    Einstein's gravity theory can be formulated as an SL(2,C) gauge theory in terms of spinor notation. In this paper, we consider a noncommutative space with the Poisson structure and construct an SL(2,C) formulation of gravity on such a space. Using the covariant coordinate technique, we build a gauge-invariant action in which, according to the Seiberg-Witten map, the physical degrees of freedom are expressed in terms of their commutative counterparts up to first order in the noncommutative parameters.

  16. Bayesian dynamic modeling of time series of dengue disease case counts.

    PubMed

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the models' short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random walk time-varying coefficients for both the calendar trend and the meteorological variables. Beyond the computational challenges, interpreting the results requires a complete analysis of the dengue time series with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- and two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables.
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
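The mean absolute percentage error used for the short-term assessment is, for completeness (function name assumed):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent. Undefined when an
    observed count is zero, so low-count weeks need special care."""
    return 100.0 * sum(abs(a - p) / a
                       for a, p in zip(actual, predicted)) / len(actual)
```

For example, predicting 110 and 180 against observed counts of 100 and 200 gives a MAPE of 10%.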

  17. Soft network materials with isotropic negative Poisson's ratios over large strains.

    PubMed

    Liu, Jianxing; Zhang, Yihui

    2018-01-31

    Auxetic materials with negative Poisson's ratios have important applications across a broad range of engineering areas, such as biomedical devices, aerospace engineering and automotive engineering. A variety of design strategies have been developed to achieve artificial auxetic materials with controllable responses in the Poisson's ratio. The development of designs that can offer isotropic negative Poisson's ratios over large strains can open up new opportunities in emerging biomedical applications, which, however, remains a challenge. Here, we introduce deterministic routes to soft architected materials that can be tailored precisely to yield the values of Poisson's ratio in the range from -1 to 1, in an isotropic manner, with a tunable strain range from 0% to ∼90%. The designs rely on a network construction in a periodic lattice topology, which incorporates zigzag microstructures as building blocks to connect lattice nodes. Combined experimental and theoretical studies on broad classes of network topologies illustrate the wide-ranging utility of these concepts. Quantitative mechanics modeling under both infinitesimal and finite deformations allows the development of a rigorous design algorithm that determines the necessary network geometries to yield target Poisson ratios over desired strain ranges. Demonstrative examples in artificial skin with both the negative Poisson's ratio and the nonlinear stress-strain curve precisely matching those of the cat's skin and in unusual cylindrical structures with engineered Poisson effect and shape memory effect suggest potential applications of these network materials.

  18. Vectorized multigrid Poisson solver for the CDC CYBER 205

    NASA Technical Reports Server (NTRS)

    Barkai, D.; Brandt, M. A.

    1984-01-01

    The full multigrid (FMG) method is applied to the two-dimensional Poisson equation with Dirichlet boundary conditions. This has been chosen as a relatively simple test case for examining the efficiency of fully vectorizing the multigrid method. Data structure and programming considerations and techniques are discussed, accompanied by performance details.
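The model problem is easy to reproduce. The sketch below solves the 2-D Dirichlet Poisson problem with plain Gauss-Seidel sweeps, the smoother that multigrid accelerates (illustrative only; it shows neither the FMG cycle nor the CYBER 205 vectorization):

```python
import numpy as np

def gauss_seidel_poisson(f, h, sweeps):
    """Plain Gauss-Seidel sweeps for the 2-D Dirichlet Poisson
    problem -lap(u) = f on the unit square with zero boundary
    values. Multigrid accelerates exactly these sweeps; this is
    only the smoother, on the same model problem."""
    n = f.shape[0]
    u = np.zeros_like(f)
    for _ in range(sweeps):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                  + u[i, j - 1] + u[i, j + 1]
                                  + h * h * f[i, j])
    return u

# Model problem with known solution u = sin(pi x) sin(pi y).
n = 17
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
f = 2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
u = gauss_seidel_poisson(f, h, sweeps=500)
err = float(np.max(np.abs(u - np.sin(np.pi * X) * np.sin(np.pi * Y))))
```

On this 17x17 grid, 500 sweeps drive the iteration error below the O(h^2) discretization error; multigrid reaches the same point in O(n) work, which is what made full vectorization worthwhile.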

  19. Uncertainties in the cluster-cluster correlation function

    NASA Astrophysics Data System (ADS)

    Ling, E. N.; Frenk, C. S.; Barrow, J. D.

    1986-12-01

    The bootstrap resampling technique is applied to estimate sampling errors and significance levels of the two-point correlation functions determined for a subset of the CfA redshift survey of galaxies and a redshift sample of 104 Abell clusters. The angular correlation function for a sample of 1664 Abell clusters is also calculated. The standard errors in xi(r) for the Abell data are found to be considerably larger than quoted 'Poisson errors'. The best estimate for the ratio of the correlation length of Abell clusters (richness class R greater than or equal to 1, distance class D less than or equal to 4) to that of CfA galaxies is 4.2 (+1.4/-1.0, 68th-percentile error). The enhancement of cluster clustering over galaxy clustering is statistically significant in the presence of resampling errors. The uncertainties found do not include the effects of possible systematic biases in the galaxy and cluster catalogs and could be regarded as lower bounds on the true uncertainty range.
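The bootstrap procedure used here in place of naive Poisson errors is generic; a minimal sketch for a scalar statistic (not the correlation-function estimator itself) is:

```python
import numpy as np

def bootstrap_se(data, statistic, n_resamples=1000, seed=0):
    """Bootstrap standard error: resample the data with replacement
    and take the spread of the recomputed statistic. The paper
    applies the same idea to two-point correlation functions, where
    naive Poisson errors understate the true uncertainty."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = [statistic(data[rng.integers(0, n, n)])
            for _ in range(n_resamples)]
    return float(np.std(reps))

rng = np.random.default_rng(42)
data = rng.normal(0.0, 1.0, 400)
se_hat = bootstrap_se(data, np.mean)   # theory: 1/sqrt(400) = 0.05
```

For clustered (correlated) data the bootstrap spread exceeds the independent-sample formula, which is exactly the effect reported for the Abell clusters.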

  20. Quantized Algebras of Functions on Homogeneous Spaces with Poisson Stabilizers

    NASA Astrophysics Data System (ADS)

    Neshveyev, Sergey; Tuset, Lars

    2012-05-01

    Let G be a simply connected semisimple compact Lie group with standard Poisson structure, K a closed Poisson-Lie subgroup, 0 < q < 1. We study a quantization C(G_q/K_q) of the algebra of continuous functions on G/K. Using results of Soibelman and Dijkhuizen-Stokman we classify the irreducible representations of C(G_q/K_q) and obtain a composition series for C(G_q/K_q). We describe closures of the symplectic leaves of G/K, refining the well-known description in the case of flag manifolds in terms of the Bruhat order. We then show that the same rules describe the topology on the spectrum of C(G_q/K_q). Next we show that the family of C*-algebras C(G_q/K_q), 0 < q ≤ 1, has a canonical structure of a continuous field of C*-algebras and provides a strict deformation quantization of the Poisson algebra ℂ[G/K]. Finally, extending a result of Nagy, we show that C(G_q/K_q) is canonically KK-equivalent to C(G/K).

  1. An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.

    PubMed

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-09-01

    Poisson disk sampling has excellent spatial and spectral properties, and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported with regard to the problem of generating Poisson disks on surfaces due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to the conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in ℝ^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
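The defining Poisson disk property (no two samples closer than a radius r) can be illustrated with the classic serial baseline (our sketch, not the paper's parallel, surface-intrinsic algorithm):

```python
import random

def dart_throwing(r, n_attempts=2000, seed=0):
    """Naive serial dart throwing in the unit square: accept a
    candidate only if it lies at least r from every accepted sample.
    The paper's algorithm instead resolves conflicts in parallel via
    random candidate priorities, and works intrinsically on curved
    surfaces; this planar baseline only illustrates the Poisson
    disk property itself."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_attempts):
        p = (rng.random(), rng.random())
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= r * r
               for q in samples):
            samples.append(p)
    return samples

pts = dart_throwing(0.1)
```

Every accepted pair is at least r apart by construction; the parallel method achieves the same guarantee by letting the random priorities decide which of two conflicting candidates survives.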

  2. Relational symplectic groupoid quantization for constant poisson structures

    NASA Astrophysics Data System (ADS)

    Cattaneo, Alberto S.; Moshayedi, Nima; Wernli, Konstantin

    2017-09-01

    As a detailed application of the BV-BFV formalism for the quantization of field theories on manifolds with boundary, this note describes a quantization of the relational symplectic groupoid for a constant Poisson structure. The presence of mixed boundary conditions and the globalization of results are also addressed. In particular, the paper includes an extension to space-times with boundary of some formal geometry considerations in the BV-BFV formalism, and specifically introduces into the BV-BFV framework a "differential" version of the classical and quantum master equations. The quantization constructed in this paper induces Kontsevich's deformation quantization on the underlying Poisson manifold, i.e., the Moyal product, which is known in full detail. This allows focusing on the BV-BFV technology and testing it. For the inexperienced reader, this is also a practical and reasonably simple way to learn it.

  3. Distribution-free Inference of Zero-inflated Binomial Data for Longitudinal Studies.

    PubMed

    He, H; Wang, W J; Hu, J; Gallop, R; Crits-Christoph, P; Xia, Y L

    2015-10-01

    Count responses with structural zeros are very common in medical and psychosocial research, especially in alcohol and HIV research, and the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models are widely used for modeling such outcomes. However, as alcohol drinking outcomes such as days of drinking are counts within a given period, their distributions are bounded above by an upper limit (the total days in the period) and thus inherently follow a binomial or zero-inflated binomial (ZIB) distribution, rather than a Poisson or ZIP distribution, in the presence of structural zeros. In this paper, we develop a new semiparametric approach for modeling ZIB-like count responses for cross-sectional as well as longitudinal data. We illustrate this approach with both simulated and real study data.
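The bounded-support point is easy to see by simulation (an illustrative sketch of the ZIB data-generating process, not the paper's semiparametric estimator):

```python
import numpy as np

def sample_zib(pi_zero, n_trials, p, size, seed=0):
    """Zero-inflated binomial: with probability pi_zero emit a
    structural zero, otherwise a Binomial(n_trials, p) draw. Unlike
    ZIP, the support is bounded above by n_trials, e.g. drinking
    days within a 30-day period."""
    rng = np.random.default_rng(seed)
    structural = rng.random(size) < pi_zero
    counts = rng.binomial(n_trials, p, size)
    counts[structural] = 0
    return counts

y = sample_zib(0.3, 30, 0.5, 10000)   # mean should be ~0.7 * 15 = 10.5
```

A ZIP model fitted to such data can place positive probability on impossible counts above 30, which is the misspecification the ZIB formulation avoids.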

  4. At least some errors are randomly generated (Freud was wrong)

    NASA Technical Reports Server (NTRS)

    Sellen, A. J.; Senders, J. W.

    1986-01-01

    An experiment was carried out to expose something about human error generating mechanisms. In the context of the experiment, an error was made when a subject pressed the wrong key on a computer keyboard or pressed no key at all in the time allotted. These might be considered, respectively, errors of substitution and errors of omission. Each of seven subjects saw a sequence of three digital numbers, made an easily learned binary judgement about each, and was to press the appropriate one of two keys. Each session consisted of 1,000 presentations of randomly permuted, fixed numbers broken into 10 blocks of 100. One of two keys should have been pressed within one second of the onset of each stimulus. These data were subjected to statistical analyses in order to probe the nature of the error generating mechanisms. Goodness of fit tests for a Poisson distribution for the number of errors per 50 trial interval and for an exponential distribution of the length of the intervals between errors were carried out. There is evidence for an endogenous mechanism that may best be described as a random error generator. Furthermore, an item analysis of the number of errors produced per stimulus suggests the existence of a second mechanism operating on task driven factors producing exogenous errors. Some errors, at least, are the result of constant probability generating mechanisms with error rate idiosyncratically determined for each subject.
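The goodness-of-fit step described above can be sketched as a Pearson chi-square test of errors-per-interval counts against a Poisson pmf (our reconstruction on simulated data, not the study's analysis):

```python
import math
import numpy as np

def poisson_chi2(counts, lam, k_max):
    """Pearson chi-square statistic comparing observed events-per-
    interval counts against the Poisson(lam) pmf, pooling categories
    at k_max and above. Values near the degrees of freedom are
    consistent with a constant-probability random generator."""
    n = len(counts)
    obs = np.bincount(np.minimum(counts, k_max), minlength=k_max + 1)
    pmf = np.array([math.exp(-lam) * lam**k / math.factorial(k)
                    for k in range(k_max)])
    exp = n * np.append(pmf, 1.0 - pmf.sum())
    return float(np.sum((obs - exp) ** 2 / exp))

rng = np.random.default_rng(2)
stat_poisson = poisson_chi2(rng.poisson(2.0, 500), 2.0, k_max=6)
stat_constant = poisson_chi2(np.full(500, 2), 2.0, k_max=6)
```

Genuinely Poisson data give a small statistic, while a deterministic error pattern (every interval containing exactly two errors) is rejected decisively, mirroring the random-generator conclusion above.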

  5. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    DOE PAGES

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    2018-02-27

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV and spanning five orders of magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. Finally, the algorithm is found to out-perform current leading x-ray inversion algorithms when the error due to counting statistics is high.

  6. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV and spanning five orders of magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. Finally, the algorithm is found to out-perform current leading x-ray inversion algorithms when the error due to counting statistics is high.

  7. Green's function enriched Poisson solver for electrostatics in many-particle systems

    NASA Astrophysics Data System (ADS)

    Sutmann, Godehard

    2016-06-01

    A highly accurate method is presented for the construction of the charge density for the solution of the Poisson equation in particle simulations. The method is based on an operator adjusted source term which can be shown to produce exact results up to numerical precision in the case of a large support of the charge distribution, therefore compensating the discretization error of finite difference schemes. This is achieved by balancing an exact representation of the known Green's function of regularized electrostatic problem with a discretized representation of the Laplace operator. It is shown that the exact calculation of the potential is possible independent of the order of the finite difference scheme but the computational efficiency for higher order methods is found to be superior due to a faster convergence to the exact result as a function of the charge support.
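The remark that higher-order schemes converge faster to the exact result is easy to illustrate on a 1-D second derivative (a generic finite-difference demonstration, unrelated to the paper's specific Green's function construction):

```python
import math

def d2_order2(u, x, h):
    """3-point central difference for u''(x); error O(h**2)."""
    return (u(x - h) - 2.0 * u(x) + u(x + h)) / (h * h)

def d2_order4(u, x, h):
    """5-point central difference for u''(x); error O(h**4)."""
    return (-u(x - 2*h) + 16*u(x - h) - 30*u(x)
            + 16*u(x + h) - u(x + 2*h)) / (12.0 * h * h)

h = 0.1
err2 = abs(d2_order2(math.sin, 1.0, h) + math.sin(1.0))  # exact u'' = -sin
err4 = abs(d2_order4(math.sin, 1.0, h) + math.sin(1.0))
```

At the same grid spacing the fourth-order stencil is already orders of magnitude more accurate, which is the behavior exploited above when the source term compensates the discretization error.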

  8. The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments

    ERIC Educational Resources Information Center

    Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.

    2008-01-01

    Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…

  9. Invariant Poisson-Nijenhuis structures on Lie groups and classification

    NASA Astrophysics Data System (ADS)

    Ravanpak, Zohreh; Rezaei-Aghdam, Adel; Haghighatdoost, Ghorbanali

    We study right-invariant (respectively, left-invariant) Poisson-Nijenhuis structures (P-N) on a Lie group G and introduce their infinitesimal counterpart, the so-called r-n structures on the corresponding Lie algebra 𝔤. We show that r-n structures can be used to find compatible solutions of the classical Yang-Baxter equation (CYBE). Conversely, two compatible r-matrices of which one is invertible determine an r-n structure. We classify, up to a natural equivalence, all r-matrices and all r-n structures with invertible r on four-dimensional symplectic real Lie algebras. The result is applied to show that a number of dynamical systems which can be constructed from r-matrices on a phase space whose symmetry group is a Lie group G can be explicitly determined.

  10. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance.

    PubMed

    Poplová, Michaela; Sovka, Pavel; Cifra, Michal

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
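The Fano factor analysis mentioned above can be sketched directly (our illustrative implementation on simulated photocounts, not the authors' code):

```python
import numpy as np

def fano_factor(signal, window):
    """Fano factor (variance/mean) of event counts summed over
    non-overlapping windows. A stationary Poisson process gives
    values near 1; a residual trend inflates the variance and hence
    the factor, which is the artifact the pre-processing above is
    designed to prevent."""
    n = len(signal) // window
    counts = np.asarray(signal[: n * window]).reshape(n, window).sum(axis=1)
    return float(counts.var() / counts.mean())

rng = np.random.default_rng(3)
stationary = rng.poisson(4.0, 10000)
trending = rng.poisson(np.linspace(1.0, 7.0, 10000))  # same mean rate, rising trend
f_stat = fano_factor(stationary, 100)
f_trend = fano_factor(trending, 100)
```

Although both signals have the same average rate, the trending one shows a grossly inflated Fano factor, exactly the kind of artifact that survives naive detrending of Poisson-like data.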

  11. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance

    PubMed Central

    Poplová, Michaela; Sovka, Pavel

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal. PMID:29216207

  12. Computational prediction of new auxetic materials.

    PubMed

    Dagdelen, John; Montoya, Joseph; de Jong, Maarten; Persson, Kristin

    2017-08-22

    Auxetics comprise a rare family of materials that manifest negative Poisson's ratio, which causes an expansion instead of contraction under tension. Most known homogeneously auxetic materials are porous foams or artificial macrostructures and there are few examples of inorganic materials that exhibit this behavior as polycrystalline solids. It is now possible to accelerate the discovery of materials with target properties, such as auxetics, using high-throughput computations, open databases, and efficient search algorithms. Candidates exhibiting features correlating with auxetic behavior were chosen from the set of more than 67 000 materials in the Materials Project database. Poisson's ratios were derived from the calculated elastic tensor of each material in this reduced set of compounds. We report that this strategy results in the prediction of three previously unidentified homogeneously auxetic materials as well as a number of compounds with a near-zero homogeneous Poisson's ratio, which are here denoted "anepirretic materials". There are very few inorganic materials with an auxetic homogeneous Poisson's ratio in polycrystalline form. Here the authors develop an approach to screening materials databases for target properties such as negative Poisson's ratio by using stability and structural motifs to predict new instances of homogeneous auxetic behavior as well as a number of materials with near-zero Poisson's ratio.

  13. Data driven CAN node reliability assessment for manufacturing system

    NASA Astrophysics Data System (ADS)

    Zhang, Leiming; Yuan, Yong; Lei, Yong

    2017-01-01

    The reliability of the Controller Area Network (CAN) is critical to the performance and safety of the system. However, direct bus-off time assessment tools are lacking in practice due to inaccessibility of the node information and the complexity of the node interactions upon errors. In order to measure the mean time to bus-off (MTTB) of all the nodes, a novel data-driven node bus-off time assessment method for CAN networks is proposed that directly uses network error information. First, the corresponding network error event sequence for each node is constructed using multiple-layer network error information. Then, a generalized zero-inflated Poisson process (GZIP) model is established for each node based on the error event sequence. Finally, the stochastic model is constructed to predict the MTTB of the node. Accelerated case studies with different error injection rates are conducted on a laboratory network to demonstrate the proposed method, where the network errors are generated by a computer-controlled error injection system. Experiment results show that the MTTB of nodes predicted by the proposed method agrees well with observations in the case studies. The proposed data-driven node time-to-bus-off assessment method for CAN networks can successfully predict the MTTB of nodes by directly using network error event data.
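The target quantity, MTTB, can be illustrated with a Monte Carlo sketch of the standard CAN fault-confinement rules (transmit error counter +8 per transmit error, -1 per success, bus-off at 256); this is our illustration of the quantity only, not the paper's GZIP model:

```python
import random

def simulate_mttb(p_error, n_runs=200, seed=0):
    """Monte Carlo mean time to bus-off (MTTB), in frames, for a CAN
    node under the standard fault-confinement rules: the transmit
    error counter (TEC) rises by 8 on a transmit error, falls by 1
    on success, and the node goes bus-off at TEC >= 256. p_error
    must exceed 1/9 or the counter drifts downward and bus-off
    (in this idealized model) never occurs."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_runs):
        tec, t = 0, 0
        while tec < 256:
            t += 1
            if rng.random() < p_error:
                tec += 8
            else:
                tec = max(0, tec - 1)
        total += t
    return total / n_runs

mttb_half = simulate_mttb(0.5)
```

Higher error rates shorten the MTTB, which is the dependence the data-driven GZIP model captures from observed error event sequences instead of an assumed per-frame error probability.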

  14. On the connection between multigrid and cyclic reduction

    NASA Technical Reports Server (NTRS)

    Merriam, M. L.

    1984-01-01

    A technique is shown whereby it is possible to relate a particular multigrid process to cyclic reduction using purely mathematical arguments. This technique suggests methods for solving Poisson's equation in 1, 2, or 3 dimensions with Dirichlet or Neumann boundary conditions. In one dimension the method is exact and, in fact, reduces to cyclic reduction. This provides a valuable reference point for understanding multigrid techniques. The particular multigrid process analyzed is referred to here as Approximate Cyclic Reduction (ACR) and is one of a class known as Multigrid Reduction methods in the literature. It involves one approximation with a known error term. It is possible to relate the error term in this approximation to certain eigenvector components of the error, which are sharply reduced in amplitude by classical relaxation techniques. The approximation can thus be made a very good one.
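In one dimension the method reduces to cyclic reduction, which for the constant-coefficient Poisson matrix needs only two scalars per level; the sketch below is our reconstruction of textbook cyclic reduction, not the paper's ACR code:

```python
def cyclic_reduction_poisson(f, h):
    """Direct cyclic reduction for -u'' = f on (0, 1) with
    u(0) = u(1) = 0, discretized as -u[i-1] + 2u[i] - u[i+1] =
    h*h*f[i]. len(f) must be 2**k - 1. Constant coefficients mean
    each level needs only two scalars (a, b); in one dimension the
    method is exact, as the abstract notes."""
    n = len(f)
    d = [0.0] + [h * h * v for v in f] + [0.0]   # pad boundary zeros
    a, b = -1.0, 2.0
    stride, levels = 1, []
    while stride < (n + 1) // 2:     # forward: eliminate odd unknowns
        alpha = a / b
        for i in range(2 * stride, n + 1, 2 * stride):
            d[i] -= alpha * (d[i - stride] + d[i + stride])
        levels.append((a, b))
        a, b = -alpha * a, b - 2.0 * alpha * a
        stride *= 2
    u = [0.0] * (n + 2)
    u[(n + 1) // 2] = d[(n + 1) // 2] / b        # single remaining unknown
    while levels:                    # back substitution, finest level last
        stride //= 2
        a, b = levels.pop()
        for i in range(stride, n + 2, 2 * stride):
            u[i] = (d[i] - a * (u[i - stride] + u[i + stride])) / b
    return u[1:-1]

# -u'' = 1 has the quadratic solution u = x(1-x)/2, which the
# 3-point scheme (and hence cyclic reduction) reproduces exactly.
u_sol = cyclic_reduction_poisson([1.0] * 7, 0.125)
```

Each forward level halves the number of unknowns, which is precisely the coarse-grid structure that motivates the multigrid analogy in the abstract.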

  15. Slits, plates, and Poisson-Boltzmann theory in a local formulation of nonlocal electrostatics

    NASA Astrophysics Data System (ADS)

    Paillusson, Fabien; Blossey, Ralf

    2010-11-01

    Polar liquids like water carry a characteristic nanometric length scale, the correlation length of orientation polarizations. Continuum theories that can capture this feature commonly run under the name of “nonlocal” electrostatics since their dielectric response is characterized by a scale-dependent dielectric function ɛ(q) , where q is the wave vector; the Poisson(-Boltzmann) equation then turns into an integro-differential equation. Recently, “local” formulations have been put forward for these theories and applied to water, solvated ions, and proteins. We review the local formalism and show how it can be applied to a structured liquid in slit and plate geometries, and solve the Poisson-Boltzmann theory for a charged plate in a structured solvent with counterions. Our results establish a coherent picture of the local version of nonlocal electrostatics and show its ease of use when compared to the original formulation.

  16. Infinitesimal deformations of Poisson bi-vectors using the Kontsevich graph calculus

    NASA Astrophysics Data System (ADS)

    Buring, Ricardo; Kiselev, Arthemy V.; Rutten, Nina

    2018-02-01

    Let \\mathscr{P} be a Poisson structure on a finite-dimensional affine real manifold. Can \\mathscr{P} be deformed in such a way that it stays Poisson? The language of Kontsevich graphs provides a universal approach - with respect to all affine Poisson manifolds - to finding a class of solutions to this deformation problem. For that reasoning, several types of graphs are needed. In this paper we outline the algorithms to generate those graphs. The graphs that encode deformations are classified by the number of internal vertices k; for k ≤ 4 we present all solutions of the deformation problem. For k ≥ 5, first reproducing the pentagon-wheel picture suggested at k = 6 by Kontsevich and Willwacher, we construct the heptagon-wheel cocycle that yields a new unique solution without 2-loops and tadpoles at k = 8.

  17. Computational Cosmology at the Bleeding Edge

    NASA Astrophysics Data System (ADS)

    Habib, Salman

    2013-04-01

    Large-area sky surveys are providing a wealth of cosmological information to address the mysteries of dark energy and dark matter. Observational probes based on tracking the formation of cosmic structure are essential to this effort, and rely crucially on N-body simulations that solve the Vlasov-Poisson equation in an expanding Universe. As statistical errors from survey observations continue to shrink, and cosmological probes increase in number and complexity, simulations are entering a new regime in their use as tools for scientific inference. Changes in supercomputer architectures provide another rationale for developing new parallel simulation and analysis capabilities that can scale to computational concurrency levels measured in the millions to billions. In this talk I will outline the motivations behind the development of the HACC (Hardware/Hybrid Accelerated Cosmology Code) extreme-scale cosmological simulation framework and describe its essential features. By exploiting a novel algorithmic structure that allows flexible tuning across diverse computer architectures, including accelerated and many-core systems, HACC has attained a performance of 14 PFlops on the IBM BG/Q Sequoia system at 69% of peak, using more than 1.5 million cores.

  18. Statistical mapping of count survey data

    USGS Publications Warehouse

    Royle, J. Andrew; Link, W.A.; Sauer, J.R.; Scott, J. Michael; Heglund, Patricia J.; Morrison, Michael L.; Haufler, Jonathan B.; Wall, William A.

    2002-01-01

    We apply a Poisson mixed model to the problem of mapping (or predicting) bird relative abundance from counts collected from the North American Breeding Bird Survey (BBS). The model expresses the logarithm of the Poisson mean as a sum of a fixed term (which may depend on habitat variables) and a random effect which accounts for remaining unexplained variation. The random effect is assumed to be spatially correlated, thus providing a more general model than the traditional Poisson regression approach. Consequently, the model is capable of improved prediction when data are autocorrelated. Moreover, formulation of the mapping problem in terms of a statistical model facilitates a wide variety of inference problems which are cumbersome or even impossible using standard methods of mapping. For example, assessment of prediction uncertainty, including the formal comparison of predictions at different locations, or through time, using the model-based prediction variance is straightforward under the Poisson model (not so with many nominally model-free methods). Also, ecologists may generally be interested in quantifying the response of a species to particular habitat covariates or other landscape attributes. Proper accounting for the uncertainty in these estimated effects is crucially dependent on specification of a meaningful statistical model. Finally, the model may be used to aid in sampling design, by modifying the existing sampling plan in a manner which minimizes some variance-based criterion. Model fitting under this model is carried out using a simulation technique known as Markov Chain Monte Carlo. Application of the model is illustrated using Mourning Dove (Zenaida macroura) counts from Pennsylvania BBS routes. We produce both a model-based map depicting relative abundance, and the corresponding map of prediction uncertainty. We briefly address the issue of spatial sampling design under this model. 
Finally, we close with some discussion of mapping in relation to habitat structure. Although our models were fit in the absence of habitat information, the resulting predictions show a strong inverse relation with a map of forest cover in the state, as expected. Consequently, the results suggest that the correlated random effect in the model is broadly representing ecological variation, and that BBS data may be generally useful for studying bird-habitat relationships, even in the presence of observer errors and other widely recognized deficiencies of the BBS.
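
    The model structure described above, a log link with a fixed effect plus a spatially correlated random effect, is easy to illustrate: correlated random effects inflate the variance of the counts beyond the Poisson mean. A toy simulation, in which the smoothing scheme and parameter values are our own assumptions rather than the authors' BBS model:

```python
import math
import random

random.seed(42)

def draw_poisson(lam):
    # Knuth's method; adequate for the moderate means used here
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

n = 2000
beta0 = 2.0                       # fixed effect (intercept only, no habitat terms)
z = [random.gauss(0.0, 1.0) for _ in range(n + 2)]
# spatially correlated random effect: a moving average of iid noise,
# so neighbouring sites share part of their deviation
b = [(z[i] + z[i + 1] + z[i + 2]) / 3.0 for i in range(n)]

# counts along a 1D transect: log(mean) = beta0 + b_i
counts = [draw_poisson(math.exp(beta0 + bi)) for bi in b]
mean = sum(counts) / n
var = sum((c - mean) ** 2 for c in counts) / (n - 1)
dispersion = var / mean           # > 1: overdispersion relative to pure Poisson
```

A plain Poisson regression assumes the dispersion index is 1; the spatially correlated random effect pushes it well above 1, which is exactly the extra variation the mixed model absorbs.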

  19. Fast and Accurate Poisson Denoising With Trainable Nonlinear Diffusion.

    PubMed

    Feng, Wensen; Qiao, Peng; Chen, Yunjin

    2018-06-01

    The degradation of the acquired signal by Poisson noise is a common problem in imaging applications such as medical imaging, night vision, and microscopy. To date, most state-of-the-art Poisson denoising techniques concentrate on achieving the utmost recovery quality, with little consideration for computational efficiency. In this paper we therefore propose a Poisson denoising model offering both high computational efficiency and high recovery quality. To this end, we exploit the recently developed trainable nonlinear reaction diffusion (TNRD) model, which has proven to be an extremely fast image restoration approach whose performance surpasses recent state-of-the-art methods. However, the direct gradient descent employed in the original TNRD denoising task is not applicable to Poisson noise, so we resort to the proximal gradient descent method. We retrain the model parameters, including the linear filters and influence functions, taking into account the Poisson noise statistics, and end up with a well-trained nonlinear diffusion model specialized for Poisson denoising. The trained model provides strongly competitive results against state-of-the-art approaches while retaining a simple structure and high efficiency. Furthermore, the diffusion process is well suited to parallel computation on graphics processing units (GPUs): our GPU implementation takes less than 0.1 s to produce state-of-the-art Poisson denoising performance.
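
    The proximal gradient step mentioned above hinges on the proximal operator of the Poisson data-fidelity term x - y log x, which has a closed form obtained by solving a scalar quadratic. A minimal sketch (applied pixel-wise in practice; variable names are ours, not the paper's):

```python
import math

def prox_poisson(v, y, tau):
    """Proximal operator of f(x) = x - y*log(x), the Poisson negative
    log-likelihood (up to constants), evaluated at point v with step tau.
    Setting d/dx [f(x) + (x - v)^2 / (2*tau)] = 0 gives the quadratic
    x^2 + (tau - v)*x - tau*y = 0, whose positive root is the prox."""
    t = v - tau
    return 0.5 * (t + math.sqrt(t * t + 4.0 * tau * y))

# sanity check: the prox point zeroes the gradient of the prox objective
v, y, tau = 3.0, 5.0, 0.7
x = prox_poisson(v, y, tau)
grad = 1.0 - y / x + (x - v) / tau
```

Because the root is taken with the plus sign, the result stays strictly positive whenever the observed count y > 0, which keeps the logarithm well defined throughout the iteration.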

  20. Variational tricomplex of a local gauge system, Lagrange structure and weak Poisson bracket

    NASA Astrophysics Data System (ADS)

    Sharapov, A. A.

    2015-09-01

    We introduce the concept of a variational tricomplex, which is applicable both to variational and nonvariational gauge systems. Endowing this tricomplex with an appropriate symplectic structure and a Cauchy foliation, we establish a general correspondence between the Lagrangian and Hamiltonian pictures of one and the same (not necessarily variational) dynamics. In practical terms, this correspondence allows one to construct the generating functional of a weak Poisson structure starting from that of a Lagrange structure. As a byproduct, a covariant procedure is proposed for deriving the classical BRST charge of the BFV formalism from a given BV master action. The general approach is illustrated by the examples of Maxwell’s electrodynamics and chiral bosons in two dimensions.

  1. Effects of learning climate and registered nurse staffing on medication errors.

    PubMed

    Chang, Yunkyung; Mark, Barbara

    2011-01-01

    Despite increasing recognition of the significance of learning from errors, little is known about how learning climate contributes to error reduction. The purpose of this study was to investigate whether learning climate moderates the relationship between error-producing conditions and medication errors. A cross-sectional descriptive study was done using data from 279 nursing units in 146 randomly selected hospitals in the United States. Error-producing conditions included work environment factors (work dynamics and nurse mix), team factors (communication with physicians and nurses' expertise), personal factors (nurses' education and experience), patient factors (age, health status, and previous hospitalization), and medication-related support services. Poisson models with random effects were used with the nursing unit as the unit of analysis. A significant negative relationship was found between learning climate and medication errors. It also moderated the relationship between nurse mix and medication errors: When learning climate was negative, having more registered nurses was associated with fewer medication errors. However, no relationship was found between nurse mix and medication errors at either positive or average levels of learning climate. Learning climate did not moderate the relationship between work dynamics and medication errors. The way nurse mix affects medication errors depends on the level of learning climate. Nursing units with fewer registered nurses and frequent medication errors should examine their learning climate. Future research should be focused on the role of learning climate as related to the relationships between nurse mix and medication errors.

  2. Validation of the Poisson Stochastic Radiative Transfer Model

    NASA Technical Reports Server (NTRS)

    Zhuravleva, Tatiana; Marshak, Alexander

    2004-01-01

    A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure - the cloud aspect ratio - is determined entirely by matching measurements and calculations of the direct solar radiation. If measurements of the direct solar radiation are unavailable, we show that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud-top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.

  3. Electrostatic potential of B-DNA: effect of interionic correlations.

    PubMed Central

    Gavryushov, S; Zielenkiewicz, P

    1998-01-01

    Modified Poisson-Boltzmann (MPB) equations have been numerically solved to study ionic distributions and mean electrostatic potentials around a macromolecule of arbitrarily complex shape and charge distribution. Results for DNA are compared with those obtained by classical Poisson-Boltzmann (PB) calculations. The comparisons were made for 1:1 and 2:1 electrolytes at ionic strengths up to 1 M. It is found that ion-image charge interactions and interionic correlations, which are neglected by the PB equation, have relatively weak effects on the electrostatic potential at charged groups of the DNA. The PB equation predicts errors in the long-range electrostatic part of the free energy that are only approximately 1.5 kJ/mol per nucleotide even in the case of an asymmetrical electrolyte. In contrast, the spatial correlations between ions drastically affect the electrostatic potential at significant separations from the macromolecule leading to a clearly predicted effect of charge overneutralization. PMID:9826596

  4. Accurate, robust and reliable calculations of Poisson-Boltzmann binding energies

    PubMed Central

    Nguyen, Duc D.; Wang, Bao

    2017-01-01

    The Poisson-Boltzmann (PB) model is one of the most popular implicit solvent models in biophysical modeling and computation. The ability to provide accurate and reliable PB estimates of the electrostatic solvation free energy, ΔGel, and binding free energy, ΔΔGel, is important to computational biophysics and biochemistry. In this work, we investigate the grid dependence of our PB solver (MIBPB) with solvent-excluded surfaces (SESs) for estimating both electrostatic solvation free energies and electrostatic binding free energies. It is found that the relative absolute error of ΔGel obtained at a grid spacing of 1.0 Å, compared to ΔGel at 0.2 Å and averaged over 153 molecules, is less than 0.2%. Our results indicate that a grid spacing of 0.6 Å ensures accuracy and reliability in ΔΔGel calculations. In fact, a grid spacing of 1.1 Å appears to deliver adequate accuracy for high-throughput screening. PMID:28211071

  5. A high order semi-Lagrangian discontinuous Galerkin method for Vlasov-Poisson simulations without operator splitting

    NASA Astrophysics Data System (ADS)

    Cai, Xiaofeng; Guo, Wei; Qiu, Jing-Mei

    2018-02-01

    In this paper, we develop a high order semi-Lagrangian (SL) discontinuous Galerkin (DG) method for nonlinear Vlasov-Poisson (VP) simulations without operator splitting. In particular, we combine two recently developed novel techniques: one is the high order non-splitting SLDG transport method (Cai et al. (2017) [4]), and the other is the high order characteristics tracing technique proposed in Qiu and Russo (2017) [29]. The proposed method, with up to third order accuracy in both space and time, is locally mass conservative, free of splitting error, positivity-preserving, and stable and robust for large time-step sizes. The SLDG VP solver is applied to classic benchmark test problems such as Landau damping and two-stream instabilities. Efficiency and effectiveness of the proposed scheme are extensively tested. Tremendous CPU savings are shown by comparisons between the proposed SLDG scheme and the classical Runge-Kutta DG method.
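
    Each step of a Vlasov-Poisson solver requires a Poisson solve for the self-consistent field; on a periodic 1D domain this is diagonal in Fourier space. A minimal standard-library sketch (a naive O(N²) DFT written out for clarity; not the authors' SLDG code), verified against a manufactured solution:

```python
import cmath
import math

def solve_poisson_periodic(rho, L):
    """Solve phi'' = -rho (zero-mean right-hand side) on a periodic
    grid of length L by diagonalizing the Laplacian in Fourier space."""
    N = len(rho)
    rho_hat = [sum(rho[j] * cmath.exp(-2j * math.pi * k * j / N)
                   for j in range(N)) / N for k in range(N)]
    phi_hat = [0j] * N                      # k = 0 mode pinned (mean of phi)
    for k in range(1, N):
        m = k if k <= N // 2 else k - N     # signed wavenumber index
        kx = 2.0 * math.pi * m / L
        phi_hat[k] = rho_hat[k] / (kx * kx)  # -kx^2 phi_hat = -rho_hat
    return [sum(phi_hat[k] * cmath.exp(2j * math.pi * k * j / N)
                for k in range(N)).real for j in range(N)]

# manufactured solution: rho = cos(x) on [0, 2*pi) gives phi = cos(x)
N, L = 32, 2.0 * math.pi
xs = [L * j / N for j in range(N)]
rho = [math.cos(x) for x in xs]
phi = solve_poisson_periodic(rho, L)
err = max(abs(p - math.cos(x)) for p, x in zip(phi, xs))
```

A production code would use an FFT (O(N log N)) and couple this field solve to the transport step, but the spectral division by kx² is the same.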

  6. ELLIPTICAL WEIGHTED HOLICs FOR WEAK LENSING SHEAR MEASUREMENT. III. THE EFFECT OF RANDOM COUNT NOISE ON IMAGE MOMENTS IN WEAK LENSING ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@nao.ac.jp, E-mail: tof@astr.tohoku.ac.jp

    This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise introduces additional moments and a centroid-shift error; those first-order effects cancel on averaging, but the second-order effects do not. We derive formulae that correct this systematic error due to the random count noise in measuring the moments and ellipticity of the background image. The correction formulae obtained are expressed as combinations of complex moments of the image, and thus can correct the systematic errors caused by each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ≈ 11.7.

  7. Joint reconstruction of PET-MRI by exploiting structural similarity

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Matthias J.; Thielemans, Kris; Pizarro, Luis; Atkinson, David; Ourselin, Sébastien; Hutton, Brian F.; Arridge, Simon R.

    2015-01-01

    Recent advances in technology have enabled the combination of positron emission tomography (PET) with magnetic resonance imaging (MRI). These PET-MRI scanners simultaneously acquire functional PET and anatomical or functional MRI data. As function and anatomy are not independent of one another, the images to be reconstructed are likely to have shared structures. We aim to exploit this inherent structural similarity by reconstructing from both modalities in a joint reconstruction framework. The structural similarity between two modalities can be modelled in two different ways: edges are more likely to be at similar positions and/or to have similar orientations. We analyse the diffusion process generated by minimizing priors that encapsulate these different models. It turns out that the class of parallel level set priors always corresponds to anisotropic diffusion, which is sometimes forward and sometimes backward. We perform numerical experiments where we jointly reconstruct from blurred Radon data with Poisson noise (PET) and under-sampled Fourier data with Gaussian noise (MRI). Our results show that both modalities benefit from each other in areas of shared edge information. The joint reconstructions have fewer artefacts and sharper edges compared to separate reconstructions, and the ℓ2-error can be reduced in all of the considered cases of under-sampling.

  8. Refinement of Generalized Born Implicit Solvation Parameters for Nucleic Acids and their Complexes with Proteins

    PubMed Central

    Nguyen, Hai; Pérez, Alberto; Bermeo, Sherry; Simmerling, Carlos

    2016-01-01

    The Generalized Born (GB) implicit solvent model has undergone significant improvements in accuracy for modeling of proteins and small molecules. However, GB still remains a less widely explored option for nucleic acid simulations, in part because fast GB models are often unable to maintain stable nucleic acid structures, or they introduce structural bias in proteins, leading to difficulty in application of GB models in simulations of protein-nucleic acid complexes. Recently, GB-neck2 was developed to improve the behavior of protein simulations. In an effort to create a more accurate model for nucleic acids, a similar procedure to the development of GB-neck2 is described here for nucleic acids. The resulting parameter set significantly reduces absolute and relative energy error relative to Poisson Boltzmann for both nucleic acids and nucleic acid-protein complexes, when compared to its predecessor GB-neck model. This improvement in solvation energy calculation translates to increased structural stability for simulations of DNA and RNA duplexes, quadruplexes, and protein-nucleic acid complexes. The GB-neck2 model also enables successful folding of small DNA and RNA hairpins to near native structures as determined from comparison with experiment. The functional form and all required parameters are provided here and also implemented in the AMBER software. PMID:26574454

  9. Significant and Sustained Reduction in Chemotherapy Errors Through Improvement Science.

    PubMed

    Weiss, Brian D; Scott, Melissa; Demmel, Kathleen; Kotagal, Uma R; Perentesis, John P; Walsh, Kathleen E

    2017-04-01

    A majority of children with cancer are now cured with highly complex chemotherapy regimens incorporating multiple drugs and demanding monitoring schedules. The risk for error is high, and errors can occur at any stage in the process, from order generation to pharmacy formulation to bedside drug administration. Our objective was to describe a program to eliminate errors in chemotherapy use among children. To increase reporting of chemotherapy errors, we supplemented the hospital reporting system with a new chemotherapy near-miss reporting system. Following the Model for Improvement, we then implemented several interventions, including a daily chemotherapy huddle, improvements to the preparation and delivery of intravenous therapy, headphones for clinicians ordering chemotherapy, and standards for chemotherapy administration throughout the hospital. Twenty-two months into the project, we saw a centerline shift in our U chart of chemotherapy errors that reached the patient, from a baseline rate of 3.8 to 1.9 per 1,000 doses. This shift has been sustained for more than 4 years. In Poisson regression analyses, we found an initial increase in error rates, followed by a significant decline in errors after 16 months of improvement work (P < .001). Our improvement efforts were thus associated with significant reductions in chemotherapy errors that reached the patient. Key drivers of our success included error vigilance through a huddle, standardization, and minimization of interruptions during ordering.
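
    A U chart of the kind described tracks errors per dose against Poisson-based control limits. A minimal sketch with illustrative monthly counts (hypothetical numbers, not the study's data):

```python
import math

# illustrative monthly (errors, doses) pairs -- hypothetical, not the study's data
months = [(12, 3100), (9, 2900), (14, 3300), (7, 3000), (11, 3200)]

total_errors = sum(e for e, _ in months)
total_doses = sum(d for _, d in months)
u_bar = total_errors / total_doses          # centerline: errors per dose

signals = []
for i, (errors, doses) in enumerate(months):
    u_i = errors / doses
    sigma = math.sqrt(u_bar / doses)        # Poisson-based subgroup sigma
    ucl = u_bar + 3.0 * sigma               # 3-sigma control limits
    lcl = max(0.0, u_bar - 3.0 * sigma)
    if not (lcl <= u_i <= ucl):
        signals.append(i)                   # special-cause signal this month

rate_per_1000 = 1000.0 * u_bar              # same units as the abstract's rates
```

A sustained centerline shift like the reported 3.8 to 1.9 per 1,000 doses would be detected by standard run rules (e.g., eight consecutive points below the old centerline), not only by single points outside the limits.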

  10. A simulation study to quantify the impacts of exposure ...

    EPA Pesticide Factsheets

    Background: Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. Methods: ZIP-code-level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e., ambient versus personal), and total exposure measurement error. Empirically determined covariances of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main-pollutant RR = 1.05 per interquartile range and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors for different copollutant pairs. Results: Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3–85% for population error, and 31–85% for total error. When CO, NOx, or EC was the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants.
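
    The attenuation mechanism described above can be reproduced in a few lines: simulate a true exposure, an error-prone observed exposure, and Poisson outcomes, then compare the fitted slopes. A toy single-pollutant sketch under assumed parameter values (Newton-Raphson fit written out with the standard library; not the authors' simulation):

```python
import math
import random

random.seed(7)

def draw_poisson(lam):
    # Knuth's method; fine for the small means used here
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

n = 4000
beta0, beta1 = 0.5, 0.4
x = [random.gauss(0.0, 1.0) for _ in range(n)]        # "true" exposure
w = [xi + random.gauss(0.0, 1.0) for xi in x]         # observed, classical error
y = [draw_poisson(math.exp(beta0 + beta1 * xi)) for xi in x]

def fit_poisson(xs, ys, iters=25):
    """Single-covariate Poisson regression via Newton-Raphson."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        mu = [math.exp(a + b * xi) for xi in xs]
        g0 = sum(yi - mi for yi, mi in zip(ys, mu))               # score: intercept
        g1 = sum((yi - mi) * xi for yi, mi, xi in zip(ys, mu, xs))  # score: slope
        h00 = sum(mu)                                             # Fisher information
        h01 = sum(mi * xi for mi, xi in zip(mu, xs))
        h11 = sum(mi * xi * xi for mi, xi in zip(mu, xs))
        det = h00 * h11 - h01 * h01
        a += (h11 * g0 - h01 * g1) / det                          # 2x2 Newton step
        b += (h00 * g1 - h01 * g0) / det
    return a, b

_, b_true = fit_poisson(x, y)   # recovers roughly the true slope
_, b_obs = fit_poisson(w, y)    # attenuated toward the null
```

With equal exposure and error variances the classical attenuation factor is about one half, so the slope fitted on the error-prone exposure sits well below the slope fitted on the true exposure.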

  11. The BRST complex of homological Poisson reduction

    NASA Astrophysics Data System (ADS)

    Müller-Lennert, Martin

    2017-02-01

    BRST complexes are differential graded Poisson algebras. They are associated with a coisotropic ideal J of a Poisson algebra P and provide a description of the Poisson algebra (P/J)^J as their cohomology in degree zero. Using the notion of stable equivalence introduced in Felder and Kazhdan (Contemporary Mathematics 610, Perspectives in representation theory, 2014), we prove that any two BRST complexes associated with the same coisotropic ideal are quasi-isomorphic in the case P = R[V] where V is a finite-dimensional symplectic vector space and the bracket on P is induced by the symplectic structure on V. As a corollary, the cohomology of the BRST complexes is canonically associated with the coisotropic ideal J in the symplectic case. We do not require any regularity assumptions on the constraints generating the ideal J. We finally quantize the BRST complex rigorously in the presence of infinitely many ghost variables and discuss the uniqueness of the quantization procedure.

  12. Electrostatic forces in the Poisson-Boltzmann systems

    NASA Astrophysics Data System (ADS)

    Xiao, Li; Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2013-09-01

    Continuum modeling of electrostatic interactions based upon numerical solutions of the Poisson-Boltzmann equation has been widely used in structural and functional analyses of biomolecules. A limitation of the numerical strategies is that it is conceptually difficult to incorporate these types of models into molecular mechanics simulations, mainly because of the issue in assigning atomic forces. In this theoretical study, we first derived the Maxwell stress tensor for molecular systems obeying the full nonlinear Poisson-Boltzmann equation. We further derived formulations of analytical electrostatic forces given the Maxwell stress tensor and discussed the relations of the formulations with those published in the literature. We showed that the formulations derived from the Maxwell stress tensor require a weaker condition for its validity, applicable to nonlinear Poisson-Boltzmann systems with a finite number of singularities such as atomic point charges and the existence of discontinuous dielectric as in the widely used classical piece-wise constant dielectric models.

  13. Harnessing out-of-plane deformation to design 3D architected lattice metamaterials with tunable Poisson's ratio.

    PubMed

    Li, Tiantian; Hu, Xiaoyi; Chen, Yanyu; Wang, Lifeng

    2017-08-21

    Auxetic materials exhibiting a negative Poisson's ratio are of great research interest due to their unusual mechanical responses and a wide range of potential deployment. Efforts have been devoted to exploring novel 2D and 3D auxetic structures through rational design, optimization, and taking inspiration from nature. Here we report a 3D architected lattice system showing a negative Poisson's ratio over a wide range of applied uniaxial stretch. 3D printing, experimental tests, numerical simulation, and analytical modeling are implemented to quantify the evolution of the Poisson's ratio and reveal the underlying mechanisms responsible for this unusual behavior. We further show that the auxetic behavior can be controlled by tailoring the geometric features of the ligaments. The findings reported here provide a new routine to design architected metamaterial systems exhibiting unusual properties and having a wide range of potential applications.

  14. Evaluation of lattice sums by the Poisson sum formula

    NASA Technical Reports Server (NTRS)

    Ray, R. D.

    1975-01-01

    The Poisson sum formula was applied to the problem of summing pairwise interactions between an observer molecule and a semi-infinite regular array of solid state molecules. The transformed sum is often much more rapidly convergent than the original sum, and forms a Fourier series in the solid surface coordinates. The method is applicable to a variety of solid state structures and functional forms of the pairwise potential. As an illustration of the method, the electric field above the (100) face of the CsCl structure is calculated and compared to earlier results obtained by direct summation.
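
    The transformation described is the classical Poisson summation identity; for a Gaussian it reads sum over n of exp(-pi*a*n^2) = a^(-1/2) times the sum of exp(-pi*n^2/a), converting a slowly convergent lattice sum into a rapidly convergent one. A quick numerical check (not the paper's CsCl calculation):

```python
import math

def theta(a, terms=200):
    """Sum over all integers n of exp(-pi * a * n^2)."""
    return 1.0 + 2.0 * sum(math.exp(-math.pi * a * n * n)
                           for n in range(1, terms))

a = 0.05                               # direct sum converges slowly here
lhs = theta(a)                         # needs many terms
rhs = theta(1.0 / a) / math.sqrt(a)    # Poisson-transformed sum: 1-2 terms matter
```

For a = 0.05 the transformed sum is dominated by its n = 0 term alone, while the direct sum needs dozens of terms; this is exactly the convergence acceleration the abstract describes.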

  15. Hope Modified the Association between Distress and Incidence of Self-Perceived Medical Errors among Practicing Physicians: Prospective Cohort Study

    PubMed Central

    Hayashino, Yasuaki; Utsugi-Ozaki, Makiko; Feldman, Mitchell D.; Fukuhara, Shunichi

    2012-01-01

    The presence of hope has been found to influence an individual's ability to cope with stressful situations. The objective of this study was to evaluate the relationship between medical errors, hope, and burnout among practicing physicians using validated metrics. A prospective cohort study was conducted among hospital-based physicians practicing in Japan (N = 836). Measures included the validated Burnout Scale, self-assessment of medical errors, and the Herth Hope Index (HHI). The main outcome measure was the frequency of self-perceived medical errors, and Poisson regression analysis was used to evaluate the association between hope and medical error. A total of 361 errors were reported in 836 physician-years. We observed a significant association between hope and self-report of medical errors. Compared with the lowest tertile of the HHI, the incidence rate ratios (IRRs) of self-perceived medical errors were 0.44 (95% CI, 0.34 to 0.58) and 0.54 (95% CI, 0.42 to 0.70) for the second and third tertiles, respectively. In stratified analysis by hope score, among physicians with a low hope score, those who experienced higher burnout reported a higher incidence of errors; physicians with high hope scores did not report high incidences of errors even if they experienced high burnout. Self-perceived medical errors showed a strong association with physicians' hope, and hope modified the association between physicians' burnout and self-perceived medical errors. PMID:22530055
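
    Incidence rate ratios of the kind quoted above follow directly from event counts and person-time. A minimal sketch with hypothetical counts (not the study's data), using the standard Wald interval on the log scale:

```python
import math

# hypothetical counts: errors and physician-years in two hope groups
errors_ref, py_ref = 60, 100.0      # reference (e.g., lowest tertile)
errors_cmp, py_cmp = 30, 110.0      # comparison group

rate_ref = errors_ref / py_ref      # errors per physician-year
rate_cmp = errors_cmp / py_cmp
irr = rate_cmp / rate_ref           # incidence rate ratio

# Wald 95% CI: log(IRR) +/- 1.96 * sqrt(1/a + 1/b) for Poisson counts a, b
se_log = math.sqrt(1.0 / errors_cmp + 1.0 / errors_ref)
ci_lo = math.exp(math.log(irr) - 1.96 * se_log)
ci_hi = math.exp(math.log(irr) + 1.96 * se_log)
```

The Poisson regression in the study does the same comparison while adjusting for covariates; the unadjusted ratio above is the special case with a single categorical predictor.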

  16. Auxetic textiles.

    PubMed

    Rant, Darja; Rijavec, Tatjana; Pavko-Čuden, Alenka

    2013-01-01

    Common materials have Poisson's ratio values ranging from 0.0 to 0.5. Auxetic materials exhibit a negative Poisson's ratio: they expand laterally when stretched longitudinally and contract laterally when compressed. In recent years the use of textile technology to fabricate auxetic materials has attracted increasing attention, reflected in the growing body of research exploring the auxetic potential of various textile structures and the increasing number of papers published. Generally there are two approaches to producing auxetic textiles. The first uses auxetic fibers to produce an auxetic textile structure, whereas the other uses conventional fibers to produce a textile structure with auxetic properties. This review deals with auxetic materials in general and in the specific context of auxetic polymers, auxetic fibers, and auxetic textile structures made from conventional fibers and knitted structures with auxetic potential.

  17. Tensile properties of helical auxetic structures: A numerical study

    NASA Astrophysics Data System (ADS)

    Wright, J. R.; Sloan, M. R.; Evans, K. E.

    2010-08-01

    This paper discusses a helical auxetic structure which has a diverse range of practical applications. The mechanical properties of the system can be determined by particular combinations of geometry and component material properties; finite element analysis is used to investigate the static behavior of these structures under tension. Modeling criteria are determined and design issues are discussed. A description of the different strain-dependent mechanical phases is provided. It is shown that the stiffnesses of the component fibers and the initial helical wrap angle are critical design parameters, and that strain-dependent changes in cross-section must be taken into consideration: we observe that the structures exhibit nonlinear behavior due to nonzero component Poisson's ratios. Negative Poisson's ratios for the helical structures as low as -5 are shown. While we focus here on the structure as a yarn, our findings are, in principle, scalable.

  18. Quantifying biological samples using Linear Poisson Independent Component Analysis for MALDI-ToF mass spectra

    PubMed Central

    Deepaisarn, S; Tar, P D; Thacker, N A; Seepujak, A; McMahon, A W

    2018-01-01

    Motivation: Matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI) facilitates the analysis of large organic molecules. However, the complexity of biological samples and MALDI data acquisition leads to high levels of variation, making reliable quantification of samples difficult. We present a new analysis approach that we believe is well suited to the properties of MALDI mass spectra, based upon an Independent Component Analysis derived for Poisson-sampled data. Simple analyses have been limited to studying small numbers of mass peaks, via peak ratios, which is known to be inefficient. Conventional PCA and ICA methods have also been applied; these extract correlations between any number of peaks, but we argue make inappropriate assumptions regarding data noise, i.e. uniform and Gaussian. Results: We provide evidence that the Gaussian assumption is incorrect, motivating the need for our Poisson approach. The method is demonstrated by making proportion measurements from lipid-rich binary mixtures of lamb brain and liver, and also goat and cow milk. These allow our measurements and error predictions to be compared to ground truth. Availability and implementation: Software is available via the open source image analysis system TINA Vision, www.tina-vision.net. Contact: paul.tar@manchester.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29091994

  19. Auxetic Mechanical Metamaterials to Enhance Sensitivity of Stretchable Strain Sensors.

    PubMed

    Jiang, Ying; Liu, Zhiyuan; Matsuhisa, Naoji; Qi, Dianpeng; Leow, Wan Ru; Yang, Hui; Yu, Jiancan; Chen, Geng; Liu, Yaqing; Wan, Changjin; Liu, Zhuangjian; Chen, Xiaodong

    2018-03-01

    Stretchable strain sensors play a pivotal role in wearable devices, soft robotics, and Internet-of-Things, yet these viable applications, which require subtle strain detection under various strain, are often limited by low sensitivity. This inadequate sensitivity stems from the Poisson effect in conventional strain sensors, where stretched elastomer substrates expand in the longitudinal direction but compress transversely. In stretchable strain sensors, expansion separates the active materials and contributes to the sensitivity, while Poisson compression squeezes active materials together, and thus intrinsically limits the sensitivity. Alternatively, auxetic mechanical metamaterials undergo 2D expansion in both directions, due to their negative structural Poisson's ratio. Herein, it is demonstrated that such auxetic metamaterials can be incorporated into stretchable strain sensors to significantly enhance the sensitivity. Compared to conventional sensors, the sensitivity is greatly elevated with a 24-fold improvement. This sensitivity enhancement is due to the synergistic effect of reduced structural Poisson's ratio and strain concentration. Furthermore, microcracks are elongated as an underlying mechanism, verified by both experiments and numerical simulations. This strategy of employing auxetic metamaterials can be further applied to other stretchable strain sensors with different constituent materials. Moreover, it paves the way for utilizing mechanical metamaterials into a broader library of stretchable electronics.

  20. Casimir meets Poisson: improved quark/gluon discrimination with counting observables

    DOE PAGES

    Frye, Christopher; Larkoski, Andrew J.; Thaler, Jesse; ...

    2017-09-19

    Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a “soft drop multiplicity” which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.

  2. Factor-Analysis Methods for Higher-Performance Neural Prostheses

    PubMed Central

    Santhanam, Gopal; Yu, Byron M.; Gilja, Vikash; Ryu, Stephen I.; Afshar, Afsheen; Sahani, Maneesh; Shenoy, Krishna V.

    2009-01-01

    Neural prostheses aim to provide treatment options for individuals with nervous-system disease or injury. It is necessary, however, to increase the performance of such systems before they can be clinically viable for patients with motor dysfunction. One performance limitation is the presence of correlated trial-to-trial variability that can cause neural responses to wax and wane in concert as the subject is, for example, more attentive or more fatigued. If a system does not properly account for this variability, it may mistakenly interpret such variability as an entirely different intention by the subject. We report here the design and characterization of factor-analysis (FA)–based decoding algorithms that can contend with this confound. We characterize the decoders (classifiers) on experimental data where monkeys performed both a real reach task and a prosthetic cursor task while we recorded from 96 electrodes implanted in dorsal premotor cortex. The decoder attempts to infer the underlying factors that comodulate the neurons' responses and can use this information to substantially lower error rates (for one-of-eight reach endpoint predictions) by ≲75% (e.g., ∼20% total prediction error using traditional independent Poisson models reduced to ∼5%). We also examine additional key aspects of these new algorithms: the effect of neural integration window length on performance, an extension of the algorithms to use Poisson statistics, and the effect of training set size on the decoding accuracy of test data. We found that FA-based methods are most effective for integration windows >150 ms, although still advantageous at shorter timescales; that Gaussian-based algorithms performed better than the analogous Poisson-based algorithms; and that the FA algorithm is robust even with a limited amount of training data.
We propose that FA-based methods are effective in modeling correlated trial-to-trial neural variability and can be used to substantially increase overall prosthetic system performance. PMID:19297518
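
    The core modeling idea, shared trial-to-trial comodulation captured by a latent factor, can be sketched in a few lines of numpy. This is a stand-in for full factor analysis (power iteration on the sample covariance recovers the dominant shared direction from simulated trials); all dimensions and noise levels are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 30, 2000
loading = rng.normal(size=n_neurons)           # true shared-factor loadings
loading /= np.linalg.norm(loading)

factor = rng.normal(size=n_trials)             # latent trial-to-trial state
private = 0.5 * rng.normal(size=(n_trials, n_neurons))
x = 2.0 * np.outer(factor, loading) + private  # shared + private variability

cov = np.cov(x, rowvar=False)
v = np.ones(n_neurons)
for _ in range(200):                           # power iteration -> top eigenvector
    v = cov @ v
    v /= np.linalg.norm(v)

alignment = abs(v @ loading)                   # ~1 when the shared axis is found
```

    Once the shared axis is estimated, its contribution can be discounted at decode time, which is the intuition behind the FA-based decoders described above.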

  3. Bayesian dynamic modeling of time series of dengue disease case counts

    PubMed Central

    López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-01-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model’s short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The results showed that the best model includes first-order random walk time-varying coefficients for both the calendar trend and the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage errors at one- or two-week out-of-sample predictions for most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. 
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health. PMID:28671941
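
    A minimal static analogue of the models described above: a Poisson log-link GLM fit by Newton/IRLS on simulated counts, evaluated with the mean absolute percentage error. The covariate, coefficients, and sample size are invented, and the dynamic, time-varying-coefficient machinery is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
temp = rng.normal(size=n)                      # standardized meteorological covariate
X = np.column_stack([np.ones(n), temp])
beta_true = np.array([2.0, 0.4])
y = rng.poisson(np.exp(X @ beta_true))         # weekly dengue-like counts

beta = np.array([np.log(y.mean()), 0.0])       # safe starting point
for _ in range(20):                            # Newton/IRLS for the Poisson log-link GLM
    mu = np.exp(X @ beta)
    XtW = X.T * mu                             # X^T diag(mu)
    beta = beta + np.linalg.solve(XtW @ X, X.T @ (y - mu))

mape = np.mean(np.abs(y - np.exp(X @ beta)) / np.maximum(y, 1)) * 100
```

    In the dynamic versions discussed in the abstract, the coefficients themselves follow random walks and are updated sequentially rather than estimated once.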

  4. On the origin of dual Lax pairs and their r-matrix structure

    NASA Astrophysics Data System (ADS)

    Avan, Jean; Caudrelier, Vincent

    2017-10-01

    We establish the algebraic origin of the following observations made previously by the authors and coworkers: (i) A given integrable PDE in 1 + 1 dimensions within the Zakharov-Shabat scheme related to a Lax pair can be cast in two distinct, dual Hamiltonian formulations; (ii) Associated to each formulation is a Poisson bracket and a phase space (which are not compatible in the sense of Magri); (iii) Each matrix in the Lax pair satisfies a linear Poisson algebra à la Sklyanin characterized by the same classical r matrix. We develop the general concept of dual Lax pairs and dual Hamiltonian formulation of an integrable field theory. We elucidate the origin of the common r-matrix structure by tracing it back to a single Lie-Poisson bracket on a suitable coadjoint orbit of the loop algebra sl(2, C) ⊗ C(λ, λ^{-1}). The results are illustrated with the examples of the nonlinear Schrödinger and Gerdjikov-Ivanov hierarchies.

  5. Dynamic state estimation based on Poisson spike trains—towards a theory of optimal encoding

    NASA Astrophysics Data System (ADS)

    Susemihl, Alex; Meir, Ron; Opper, Manfred

    2013-03-01

    Neurons in the nervous system convey information to higher brain regions by the generation of spike trains. An important question in the field of computational neuroscience is how these sensory neurons encode environmental information in a way which may be simply analyzed by subsequent systems. Many aspects of the form and function of the nervous system have been understood using the concepts of optimal population coding. Most studies, however, have neglected the aspect of temporal coding. Here we address this shortcoming through a filtering theory of inhomogeneous Poisson processes. We derive exact relations for the minimal mean squared error of the optimal Bayesian filter and, by optimizing the encoder, obtain optimal codes for populations of neurons. We also show that a class of non-Markovian, smooth stimuli are amenable to the same treatment, and provide results for the filtering and prediction error which hold for a general class of stochastic processes. This sets a sound mathematical framework for a population coding theory that takes temporal aspects into account. It also formalizes a number of studies which discussed temporal aspects of coding using time-window paradigms, by stating them in terms of correlation times and firing rates. We propose that this kind of analysis allows for a systematic study of temporal coding and will bring further insights into the nature of the neural code.

  6. Feasibility Study on 3-D Printing of Metallic Structural Materials with Robotized Laser-Based Metal Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Ding, Yaoyu; Kovacevic, Radovan

    2016-07-01

    Metallic structural materials continue to open new avenues in achieving exotic mechanical properties that are naturally unavailable. They hold great potential in developing novel products in diverse industries such as the automotive, aerospace, biomedical, oil and gas, and defense. Currently, the use of metallic structural materials in industry is still limited because of difficulties in their manufacturing. This article studied the feasibility of printing metallic structural materials with robotized laser-based metal additive manufacturing (RLMAM). In this study, two metallic structural materials characterized by an enlarged positive Poisson's ratio and a negative Poisson's ratio were designed and simulated, respectively. An RLMAM system developed at the Research Center for Advanced Manufacturing of Southern Methodist University was used to print them. The results of the tensile tests indicated that the printed samples successfully achieved the corresponding mechanical properties.

  7. Elliptic Euler-Poisson-Darboux equation, critical points and integrable systems

    NASA Astrophysics Data System (ADS)

    Konopelchenko, B. G.; Ortenzi, G.

    2013-12-01

    The structure and properties of families of critical points for classes of functions W(z, z̄) obeying the elliptic Euler-Poisson-Darboux equation E(1/2, 1/2) are studied. General variational and differential equations governing the dependence of critical points on variational (deformation) parameters are found. Explicit examples of the corresponding integrable quasi-linear differential systems and hierarchies are presented. Among them are the extended dispersionless Toda/nonlinear Schrödinger hierarchies, the ‘inverse’ hierarchy, and equations associated with the real-analytic Eisenstein series E(β, β̄; 1/2). The specific bi-Hamiltonian structure of these equations is also discussed.

  8. Quasi-Hamiltonian structure and Hojman construction

    NASA Astrophysics Data System (ADS)

    Carinena, Jose F.; Guha, Partha; Ranada, Manuel F.

    2007-08-01

    Given a smooth vector field [Gamma] and assuming the knowledge of an infinitesimal symmetry X, Hojman [S. Hojman, The construction of a Poisson structure out of a symmetry and a conservation law of a dynamical system, J. Phys. A Math. Gen. 29 (1996) 667-674] proposed a method for finding both a Poisson tensor and a function H such that [Gamma] is the corresponding Hamiltonian system. In this paper, we approach the problem from a geometrical point of view. The geometrization leads to the clarification of several concepts and methods used in Hojman's paper. In particular, the relationship between the nonstandard Hamiltonian structure proposed by Hojman and the degenerate quasi-Hamiltonian structures introduced by Crampin and Sarlet [M. Crampin, W. Sarlet, Bi-quasi-Hamiltonian systems, J. Math. Phys. 43 (2002) 2505-2517] is unveiled. We also provide some applications of our construction.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ting

    Over the last two decades, our understanding of the Milky Way has been improved thanks to large data sets arising from large-area digital sky surveys. The stellar halo is now known to be inhabited by a variety of spatial and kinematic stellar substructures, including stellar streams and stellar clouds, all of which are predicted by hierarchical Lambda Cold Dark Matter models of galaxy formation. In this dissertation, we first present the analysis of spectroscopic observations of individual stars from the two candidate structures discovered using an M-giant catalog from the Two Micron All-Sky Survey. The follow-up observations show that one of the candidates is a genuine structure which might be associated with the Galactic Anticenter Stellar Structure, while the other one is a false detection due to the systematic photometric errors in the survey or dust extinction at low Galactic latitudes. We then present the discovery of an excess of main sequence turn-off stars in the direction of the constellations of Eridanus and Phoenix from the first-year data of the Dark Energy Survey (DES) – a five-year, 5,000 deg² optical imaging survey in the Southern Hemisphere. The Eridanus-Phoenix (EriPhe) overdensity is centered around l ~ 285° and b ~ -60° and the Poisson significance of the detection is at least 9σ. The EriPhe overdensity has a cloud-like morphology and the extent is at least ~ 4 kpc by ~ 3 kpc in projection, at a heliocentric distance of d ~ 16 kpc. The EriPhe overdensity is morphologically similar to the previously-discovered Virgo overdensity and Hercules-Aquila cloud. These three overdensities lie along a polar plane separated by ~ 120° and may share a common origin. In addition to the scientific discoveries, we also present the work to improve the photometric calibration in DES using auxiliary calibration systems, since the photometric errors can cause false detections in the halo substructure search. 
We present a detailed description of the two auxiliary calibration systems built at Texas A&M University. We then discuss how the auxiliary systems in DES can be used to improve the photometric calibration of the systematic chromatic errors – source color-dependent systematic errors that are caused by variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput.
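
    The quoted Poisson significance can be illustrated as follows (the expected and observed counts below are invented for the sketch; the abstract does not give them). The exact Poisson tail probability is converted to a Gaussian-equivalent sigma, which comes out somewhat smaller than the naive (N_obs − N_exp)/√N_exp estimate because the Poisson right tail is heavier:

```python
import math

def poisson_sf(n_obs, lam):
    """P(N >= n_obs) for N ~ Poisson(lam), summed term by term."""
    term = math.exp(n_obs * math.log(lam) - lam - math.lgamma(n_obs + 1))
    total, k = 0.0, n_obs
    while term > total * 1e-18 + 1e-320:
        total += term
        k += 1
        term *= lam / k
    return total

def tail_to_sigma(p):
    """Solve 0.5 * erfc(z / sqrt(2)) = p for the one-sided Gaussian z."""
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2)) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n_exp, n_obs = 500.0, 750                      # illustrative counts only
p = poisson_sf(n_obs, n_exp)                   # exact Poisson tail probability
sigma = tail_to_sigma(p)                       # Gaussian-equivalent significance
z_gauss = (n_obs - n_exp) / math.sqrt(n_exp)   # naive sqrt(N) estimate
```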

  10. Auxetic behaviour from rotating rigid units

    NASA Astrophysics Data System (ADS)

    Grima, J. N.; Alderson, A.; Evans, K. E.

    2005-03-01

    Auxetic materials exhibit the unexpected feature of becoming fatter when stretched and narrower when compressed, in other words, they exhibit a negative Poisson's ratio. This counter-intuitive behaviour imparts many beneficial effects on the material's macroscopic properties that make auxetics superior to conventional materials in many commercial applications. Recent research suggests that auxetic behaviour generally results from a cooperative effect between the material's internal structure (geometry setup) and the deformation mechanism it undergoes when submitted to a stress. Auxetic behaviour is also known to be scale-independent, and thus, the same geometry/deformation mechanism may operate at the macro-, micro- and nano- (molecular) level. A considerable amount of research has been focused on the re-entrant honeycomb structure which exhibits auxetic behaviour if deformed through hinging at the joints or flexure of the ribs, and it was proposed that this re-entrant geometry plays an important role in generating auxetic behaviour in various forms of materials ranging from nanostructured polymers to foams. This paper discusses an alternative mode of deformation involving rotating rigid units which also results in negative Poisson's ratios. In its most ideal form, this mechanism may be constructed in two dimensions using rigid polygons connected together through hinges at their vertices. On application of uniaxial loads, these rigid polygons rotate with respect to each other to form a more open structure hence giving rise to a negative Poisson's ratio. This paper also discusses the role that rotating rigid units are thought to have in various classes of materials to give rise to negative Poisson's ratios.
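
    For the idealized rotating-squares version of this mechanism, the Poisson's ratio can be checked numerically. The parametrization of the cell dimension below is an assumption made for the sketch; the essential point does not depend on it, since any configuration in which both cell dimensions stay equal by symmetry forces ν = −1:

```python
import math

# Idealized "rotating squares" unit cell: rigid squares of side l hinged at
# their vertices, mutually rotated by theta. X(theta) = Y(theta) =
# 2*l*(cos(theta/2) + sin(theta/2)) is an assumed parametrization; the key
# point is only that both cell dimensions are equal.
def cell_size(theta, l=1.0):
    return 2.0 * l * (math.cos(theta / 2) + math.sin(theta / 2))

theta, d = math.radians(30.0), 1e-7
eps_x = (cell_size(theta + d) - cell_size(theta)) / cell_size(theta)
eps_y = eps_x                       # the cell stays square by symmetry
nu = -eps_y / eps_x                 # structural Poisson's ratio -> -1
```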

  11. Poisson sigma models, reduction and nonlinear gauge theories

    NASA Astrophysics Data System (ADS)

    Signori, Daniele

    This dissertation comprises two main lines of research. Firstly, we study non-linear gauge theories for principal bundles, where the structure group is replaced by a Lie groupoid. We follow the approach of Moerdijk-Mrcun and establish its relation with the existing physics literature. In particular, we derive a new formula for the gauge transformation which closely resembles and generalizes the classical formulas found in Yang Mills gauge theories. Secondly, we give a field theoretic interpretation of the BRST (Becchi-Rouet-Stora-Tyutin) and BFV (Batalin-Fradkin-Vilkovisky) methods for the reduction of coisotropic submanifolds of Poisson manifolds. The generalized Poisson sigma models that we define are related to the deformation quantization problems of coisotropic submanifolds using homotopical algebras.

  12. A generalized Poisson solver for first-principles device simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost, E-mail: joost.vandevondele@mat.ethz.ch; Brück, Sascha

    2016-01-28

    Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.
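
    A one-dimensional sketch of the preconditioning idea (standard finite differences, an invented smooth dielectric profile, and homogeneous Dirichlet boundaries; the paper's plane-wave machinery and saddle-point constraints are not reproduced). Each sweep solves a constant-coefficient Laplace problem and corrects the iterate with the generalized-Poisson residual:

```python
import numpy as np

n = 50                                            # interior points on (0, 1)
h = 1.0 / (n + 1)
xm = (np.arange(n + 1) + 0.5) * h                 # cell-interface midpoints
em = 1.0 + 0.8 * np.sin(np.pi * xm) ** 2          # dielectric profile (illustrative)

# A phi = f with A = -d/dx(eps d/dx), phi = 0 at both ends
A = (np.diag(em[:-1] + em[1:]) - np.diag(em[1:-1], 1) - np.diag(em[1:-1], -1)) / h**2
L = em.max() * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = np.ones(n)                                    # source term

phi = np.zeros(n)
for _ in range(400):                              # Laplace-preconditioned iteration
    phi += np.linalg.solve(L, f - A @ phi)

rel_res = np.linalg.norm(f - A @ phi) / np.linalg.norm(f)
```

    Because the dielectric stays within a bounded range, the preconditioned iteration matrix has spectral radius below one and the sweep converges geometrically.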

  13. A regularization corrected score method for nonlinear regression models with covariate error.

    PubMed

    Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna

    2013-03-01

    Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. Copyright © 2013, The International Biometric Society.
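
    The attenuation that corrected-score methods combat is easy to demonstrate in the simplest (linear) case. The sketch below uses classical additive error and the textbook method-of-moments correction, not the paper's regularized corrected-score estimator; all variances are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000
x = rng.normal(0.0, 1.0, n)                # true covariate
w = x + rng.normal(0.0, 0.8, n)            # observed with classical error, var_u = 0.64
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)

naive = np.cov(w, y)[0, 1] / np.var(w)     # attenuated slope estimate
reliability = 1.0 / (1.0 + 0.64)           # sigma_x^2 / (sigma_x^2 + sigma_u^2)
corrected = naive / reliability            # method-of-moments correction
```

    In nonlinear models (logistic, Poisson) no such closed-form correction exists, which is where the corrected-score machinery with regularization comes in.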

  14. An analytical method for the inverse Cauchy problem of Lame equation in a rectangle

    NASA Astrophysics Data System (ADS)

    Grigor’ev, Yu

    2018-04-01

    In this paper, we present an analytical computational method for the inverse Cauchy problem of Lame equation in the elasticity theory. A rectangular domain is frequently used in engineering structures and we only consider the analytical solution in a two-dimensional rectangle, wherein a missing boundary condition is recovered from the full measurement of stresses and displacements on an accessible boundary. The essence of the method consists in solving three independent Cauchy problems for the Laplace and Poisson equations. For each of them, the Fourier series is used to formulate a first-kind Fredholm integral equation for the unknown function of data. Then, we use a Lavrentiev regularization method, and the termwise separable property of kernel function allows us to obtain a closed-form regularized solution. As a result, for the displacement components, we obtain solutions in the form of a sum of series with three regularization parameters. The uniform convergence and error estimation of the regularized solutions are proved.
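
    The Lavrentiev step can be sketched on a generic symmetric first-kind Fredholm system: instead of solving Kf = g, which amplifies data noise through the tiny eigenvalues of K, one solves (αI + K)f_α = g. The kernel, noise level, and α below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 80
t = np.linspace(0.0, 1.0, n)
# Symmetric positive semidefinite smoothing kernel (quadrature-weighted)
K = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.02) / n
f_true = np.sin(np.pi * t)
g = K @ f_true + 1e-4 * rng.normal(size=n)          # noisy data

alpha = 1e-3
f_reg = np.linalg.solve(alpha * np.eye(n) + K, g)   # Lavrentiev: (aI + K) f = g
f_naive = np.linalg.solve(K + 1e-12 * np.eye(n), g) # essentially unregularized

err_reg = np.linalg.norm(f_reg - f_true) / np.linalg.norm(f_true)
err_naive = np.linalg.norm(f_naive - f_true) / np.linalg.norm(f_true)
```

    The shift α bounds the noise amplification by 1/α while barely biasing the smooth components, which is why the regularized reconstruction stays close to f_true while the naive solve does not.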

  15. Spatial resolution properties of motion-compensated tomographic image reconstruction methods.

    PubMed

    Chun, Se Young; Fessler, Jeffrey A

    2012-07-01

    Many motion-compensated image reconstruction (MCIR) methods have been proposed to correct for subject motion in medical imaging. MCIR methods incorporate motion models to improve image quality by reducing motion artifacts and noise. This paper analyzes the spatial resolution properties of MCIR methods and shows that nonrigid local motion can lead to nonuniform and anisotropic spatial resolution for conventional quadratic regularizers. This undesirable property is akin to the known effects of interactions between heteroscedastic log-likelihoods (e.g., Poisson likelihood) and quadratic regularizers. This effect may lead to quantification errors in small or narrow structures (such as small lesions or rings) of reconstructed images. This paper proposes novel spatial regularization design methods for three different MCIR methods that account for known nonrigid motion. We develop MCIR regularization designs that provide approximately uniform and isotropic spatial resolution and that match a user-specified target spatial resolution. Two-dimensional PET simulations demonstrate the performance and benefits of the proposed spatial regularization design methods.

  16. Models for Serially Correlated Over or Underdispersed Unequally Spaced Longitudinal Count Data with Applications to Asthma Inhaler Use

    DTIC Science & Technology

    2007-08-01

    the gamma prior and Poisson counts are conditioned on an unobserved AR(1) process that accounts for the time since the last observation. This model did...to the observation equation. For unequally spaced observations the AR(1) errors are replaced by a continuous time AR(1) process, and the distance...unequally spaced observations are handled in the XJG model by assuming an underlying continuous time AR(1) (CAR(1)) process. It is implemented by

  17. Analysis of Aerosols and Fallout from High-Explosive Dust Clouds. Volume 2

    DTIC Science & Technology

    1977-03-01

    to the situation at hand, where S is an absolute error, ... It is to be noted, however, that the Poisson formula specifies the...sphere TNT detonation near Grand Junction, Colorado, on November 13, 1972. Data from the resulting dust cloud were collected by two aircraft and include...variations. Measurements of carbon monoxide were inconclusive due to an unusually high noise level (6 to 8 ppm, or considerably higher than the carbon

  18. Effect of collisions on photoelectron sheath in a gas

    NASA Astrophysics Data System (ADS)

    Sodha, Mahendra Singh; Mishra, S. K.

    2016-02-01

    This paper presents a study of the effect of the collision of electrons with atoms/molecules on the structure of a photoelectron sheath. Considering the half Fermi-Dirac distribution of photo-emitted electrons, an expression for the electron density in the sheath has been derived in terms of the electric potential and the structure of the sheath has been investigated by incorporating Poisson's equation in the analysis. The method of successive approximations has been used to solve Poisson's equation with the solution for the electric potential in the case of vacuum, obtained earlier [Sodha and Mishra, Phys. Plasmas 21, 093704 (2014)], being used as the zeroth order solution for the present analysis. The inclusion of collisions influences the photoelectron sheath structure significantly; a reduction in the sheath width with increasing collisions is obtained.
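
    The method of successive approximations used above can be illustrated on a generic nonlinear Poisson boundary-value problem (a toy equation, not the sheath model itself): each pass solves a linear Poisson problem whose right-hand side is evaluated at the previous iterate, seeded by the zeroth-order solution:

```python
import numpy as np

# Toy problem phi'' = exp(phi) on (0, 1) with phi(0) = phi(1) = 0,
# solved by successive approximations seeded with the trivial solution.
n = 99
h = 1.0 / (n + 1)
L = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2   # -(d2/dx2)

phi = np.zeros(n)                              # zeroth-order approximation
for _ in range(60):
    phi = np.linalg.solve(L, -np.exp(phi))     # next approximation

residual = np.max(np.abs(L @ phi + np.exp(phi)))
```

    The map is a contraction here (the Green's function norm times the Lipschitz constant of the nonlinearity is below one), so the iterates converge to the negative solution of the boundary-value problem.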

  19. New superfield extension of Boussinesq and its (x,t) interchanged equation from odd Poisson bracket

    NASA Astrophysics Data System (ADS)

    Palit, S.; Chowdhury, A. Roy

    1995-08-01

    A new superfield extension of the Boussinesq equation and its corresponding (x,t) interchanged variant are deduced from the odd Poisson-bracket-formalism, which is similar to the antibracket of Batalin and Vilkovisky. In the former case we obtain the equation deduced by Figueroa-O'Farrill et al from a different approach. In each case we have deduced the bi-Hamiltonian structure and some basic symmetries associated with them.

  20. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    PubMed

    Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K

    2016-11-25

    Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx, or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
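
    The effect-transfer mechanism behind such false positives can be seen already in a linear toy analogue (the Poisson time-series structure is not reproduced, and all coefficients below are invented): when the main pollutant is measured with error, part of its effect is picked up by a correlated copollutant whose true effect is null.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 50000
x1 = rng.normal(size=n)                           # true main pollutant
x2 = 0.7 * x1 + rng.normal(0.0, np.sqrt(1 - 0.49), n)  # correlated copollutant (null)
w1 = x1 + rng.normal(0.0, 1.0, n)                 # main pollutant measured with error
y = 0.05 * x1 + rng.normal(0.0, 1.0, n)           # outcome driven by x1 only

X = np.column_stack([np.ones(n), w1, x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]          # b[1]: attenuated, b[2]: spurious
```

    The fitted coefficient on the error-free copollutant is systematically positive even though its true effect is zero, mirroring the false-positive behavior described in the abstract.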

  1. Minimizing the stochasticity of halos in large-scale structure surveys

    NASA Astrophysics Data System (ADS)

    Hamaus, Nico; Seljak, Uroš; Desjacques, Vincent; Smith, Robert E.; Baldauf, Tobias

    2010-08-01

    In recent work (Seljak, Hamaus, and Desjacques 2009) it was found that weighting central halo galaxies by halo mass can significantly suppress their stochasticity relative to the dark matter, well below the Poisson model expectation. This is useful for constraining relations between galaxies and the dark matter, such as the galaxy bias, especially in situations where sampling variance errors can be eliminated. In this paper we extend this study with the goal of finding the optimal mass-dependent halo weighting. We use N-body simulations to perform a general analysis of halo stochasticity and its dependence on halo mass. We investigate the stochasticity matrix, defined as Cij≡⟨(δi-biδm)(δj-bjδm)⟩, where δm is the dark matter overdensity in Fourier space, δi the halo overdensity of the i-th halo mass bin, and bi the corresponding halo bias. In contrast to the Poisson model predictions we detect nonvanishing correlations between different mass bins. We also find the diagonal terms to be sub-Poissonian for the highest-mass halos. The diagonalization of this matrix results in one large and one low eigenvalue, with the remaining eigenvalues close to the Poisson prediction 1/n¯, where n¯ is the mean halo number density. The eigenmode with the lowest eigenvalue contains most of the information and the corresponding eigenvector provides an optimal weighting function to minimize the stochasticity between halos and dark matter. We find this optimal weighting function to match linear mass weighting at high masses, while at the low-mass end the weights approach a constant whose value depends on the low-mass cut in the halo mass function. This weighting further suppresses the stochasticity as compared to the previously explored mass weighting. Finally, we employ the halo model to derive the stochasticity matrix and the scale-dependent bias from an analytical perspective. 
It is remarkably successful in reproducing our numerical results and predicts that the stochasticity between halos and the dark matter can be reduced further when going to halo masses lower than we can resolve in current simulations.
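
    The Poisson-model benchmark used throughout can be sketched directly: for unweighted tracers Poisson-sampled from a biased field, the stochasticity ⟨(δh − bδm)²⟩ comes out at the shot-noise value 1/n̄ (all numbers below are toy choices; the paper's point is that suitably mass-weighted halos fall below this benchmark):

```python
import numpy as np

rng = np.random.default_rng(5)
ncell, nbar, b = 200_000, 10.0, 1.5
delta_m = rng.normal(0.0, 0.05, ncell)        # matter overdensity per cell (toy)
N = rng.poisson(nbar * (1.0 + b * delta_m))   # tracer counts, Poisson-sampled
delta_h = N / N.mean() - 1.0                  # tracer overdensity

resid = delta_h - b * delta_m                 # stochasticity residual
shot = resid.var()                            # Poisson-model expectation: 1/nbar
```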

  2. Updating a preoperative surface model with information from real-time tracked 2D ultrasound using a Poisson surface reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Deyu; Rettmann, Maryam E.; Holmes, David R.; Linte, Cristian A.; Packer, Douglas; Robb, Richard A.

    2014-03-01

    In this work, we propose a method for intraoperative reconstruction of a left atrial surface model for the application of cardiac ablation therapy. In this approach, the intraoperative point cloud is acquired by a tracked, 2D freehand intra-cardiac echocardiography device, which is registered and merged with a preoperative, high resolution left atrial surface model built from computed tomography data. For the surface reconstruction, we introduce a novel method to estimate the normal vector of the point cloud from the preoperative left atrial model, which is required for the Poisson Equation Reconstruction algorithm. In the current work, the algorithm is evaluated using a preoperative surface model from patient computed tomography data and simulated intraoperative ultrasound data. Factors such as intraoperative deformation of the left atrium, proportion of the left atrial surface sampled by the ultrasound, sampling resolution, sampling noise, and registration error were considered through a series of simulation experiments.

  3. BRST theory without Hamiltonian and Lagrangian

    NASA Astrophysics Data System (ADS)

    Lyakhovich, S. L.; Sharapov, A. A.

    2005-03-01

    We consider a generic gauge system, whose physical degrees of freedom are obtained by restriction on a constraint surface followed by factorization with respect to the action of gauge transformations; in so doing, no Hamiltonian structure or action principle is supposed to exist. For such a generic gauge system we construct a consistent BRST formulation, which includes the conventional BV Lagrangian and BFV Hamiltonian schemes as particular cases. If the original manifold carries a weak Poisson structure (a bivector field giving rise to a Poisson bracket on the space of physical observables) the generic gauge system is shown to admit deformation quantization by means of the Kontsevich formality theorem. A sigma-model interpretation of this quantization algorithm is briefly discussed.

  4. Poisson structure on a space with linear SU(2) fuzziness

    NASA Astrophysics Data System (ADS)

    Khorrami, Mohammad; Fatollahi, Amir H.; Shariati, Ahmad

    2009-07-01

    The Poisson structure is constructed for a model in which spatial coordinates of configuration space are noncommutative and satisfy the commutation relations of a Lie algebra. The case is specialized to that of the group SU(2), for which the counterpart of the angular momentum as well as the Euler parametrization of the phase space are introduced. SU(2)-invariant classical systems are discussed, and it is observed that the path of a particle can be obtained by solving a first-order equation, as is the case with such models on commutative spaces. The examples of the free particle, rotationally invariant potentials, and especially the isotropic harmonic oscillator are investigated in more detail.
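
    The Lie-algebra type noncommutativity described above, specialized to su(2), is conventionally written as follows (a standard sketch; the scale parameter λ and the index conventions are assumptions, not necessarily the paper's notation):

```latex
% Fuzzy (su(2)) coordinate algebra and its classical Poisson counterpart;
% \lambda denotes the noncommutativity length scale (notation assumed).
[\hat{x}_i, \hat{x}_j] \;=\; i\,\lambda\,\epsilon_{ijk}\,\hat{x}_k,
\qquad
\{x_i, x_j\} \;=\; \lambda\,\epsilon_{ijk}\,x_k,
\qquad i,j,k \in \{1,2,3\}.
```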

  5. Existence and uniqueness, attraction for stochastic age-structured population systems with diffusion and Poisson jump

    NASA Astrophysics Data System (ADS)

    Chen, Huabin

    2013-08-01

    In this paper, the existence, uniqueness, and attraction of the strong solution of stochastic age-structured population systems with diffusion and Poisson jump are considered. Under a non-Lipschitz condition, with the Lipschitz condition as a special case, the existence and uniqueness of solutions for such systems are first proved by using the Burkholder-Davis-Gundy inequality (B-D-G inequality) and Itô's formula. Then, by using a novel inequality technique, some sufficient conditions ensuring the existence of the domain of attraction are established. As a by-product, the exponential stability in mean square moment of the strong solution for such systems is also discussed.

  6. Functional linear models for zero-inflated count data with application to modeling hospitalizations in patients on dialysis.

    PubMed

    Sentürk, Damla; Dalrymple, Lorien S; Nguyen, Danh V

    2014-11-30

    We propose functional linear models for zero-inflated count data with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. While the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction, geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first 2 years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics, and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression. Copyright © 2014 John Wiley & Sons, Ltd.
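
    A minimal sketch of the two mixture distributions contrasted here, assuming the standard ZIP and hurdle parameterizations (the mixing-weight names `pi` and `p0` are ours); the paper's penalized functional estimation is not reproduced:

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson pmf: point mass at zero mixed with Poisson(lam).

    P(0) = pi + (1 - pi) * exp(-lam)
    P(k) = (1 - pi) * exp(-lam) * lam**k / k!   for k >= 1
    """
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi + (1 - pi) * poisson if k == 0 else (1 - pi) * poisson

def hurdle_pmf(k, lam, p0):
    """Hurdle pmf: point mass p0 at zero, zero-truncated Poisson above zero."""
    if k == 0:
        return p0
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return (1 - p0) * poisson / (1 - math.exp(-lam))  # renormalize k >= 1

lam, pi = 2.0, 0.3
print(zip_pmf(0, lam, pi))                              # inflated zero mass
print(sum(zip_pmf(k, lam, pi) for k in range(50)))      # ~1.0
print(sum(hurdle_pmf(k, lam, 0.4) for k in range(50)))  # ~1.0
```

    The practical difference shows up only at zero: the hurdle model fixes P(0) = p0 directly, while the ZIP zero mass mixes structural and sampling zeros.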

  7. A Poisson nonnegative matrix factorization method with parameter subspace clustering constraint for endmember extraction in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Ma, Jun; Yang, Gang; Du, Bo; Zhang, Liangpei

    2017-06-01

    A new Bayesian method named Poisson Nonnegative Matrix Factorization with Parameter Subspace Clustering Constraint (PNMF-PSCC) has been presented to extract endmembers from Hyperspectral Imagery (HSI). First, the method integrates the linear spectral mixture model with the Bayesian framework and formulates endmember extraction as a Bayesian inference problem. Second, the Parameter Subspace Clustering Constraint (PSCC) is incorporated into the statistical program to consider the clustering of all pixels in the parameter subspace. The PSCC enlarges differences among ground objects and helps find endmembers with smaller spectrum divergences. Meanwhile, the PNMF-PSCC method utilizes the Poisson distribution as the prior knowledge of spectral signals to better explain the quantum nature of light in the imaging spectrometer. Third, the optimization problem of PNMF-PSCC is formulated as maximizing the joint density via the Maximum A Posteriori (MAP) estimator. The program is finally solved by iteratively optimizing two sub-problems via the Alternating Direction Method of Multipliers (ADMM) framework and the FURTHESTSUM initialization scheme. Five state-of-the-art methods are implemented for comparison with PNMF-PSCC on both synthetic and real HSI datasets. Experimental results show that PNMF-PSCC outperforms all five methods in Spectral Angle Distance (SAD) and Root-Mean-Square Error (RMSE), and in particular it identifies good endmembers for ground objects with smaller spectrum divergences.
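
    The Poisson-likelihood factorization at the core of such a method can be sketched with the classic Lee-Seung multiplicative updates for KL-divergence NMF; the clustering constraint, MAP estimator, and ADMM solver of PNMF-PSCC are beyond this sketch:

```python
import numpy as np

def poisson_nmf(V, r, iters=200, seed=0):
    """Plain Poisson (KL-divergence) NMF via Lee-Seung multiplicative updates.

    Maximizing the Poisson likelihood of V given the product W @ H is
    equivalent to minimizing the generalized KL divergence D(V || W @ H).
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    for _ in range(iters):
        WH = W @ H + 1e-12
        H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + 1e-12)
        WH = W @ H + 1e-12
        W *= ((V / WH) @ H.T) / (np.ones_like(V) @ H.T + 1e-12)
    return W, H

def kl_div(V, WH):
    """Generalized KL divergence D(V || WH)."""
    WH = WH + 1e-12
    return float(np.sum(V * np.log((V + 1e-12) / WH) - V + WH))

# Low-rank synthetic "image": two endmember-like factors
rng = np.random.default_rng(1)
V = rng.random((30, 2)) @ rng.random((2, 40))
W, H = poisson_nmf(V, r=2)
print(kl_div(V, W @ H))   # small residual for an exactly rank-2 matrix
```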

  8. On the Geometry of the Hamilton-Jacobi Equation and Generating Functions

    NASA Astrophysics Data System (ADS)

    Ferraro, Sebastián; de León, Manuel; Marrero, Juan Carlos; Martín de Diego, David; Vaquero, Miguel

    2017-10-01

    In this paper we develop a geometric version of the Hamilton-Jacobi equation in the Poisson setting. Specifically, we "geometrize" what is usually called a complete solution of the Hamilton-Jacobi equation. We use some well-known results about symplectic groupoids, in particular cotangent groupoids, as a keystone for the construction of our framework. Our methodology follows the ambitious program proposed by Weinstein (In Mechanics day (Waterloo, ON, 1992), volume 7 of fields institute communications, American Mathematical Society, Providence, 1996) in order to develop geometric formulations of the dynamical behavior of Lagrangian and Hamiltonian systems on Lie algebroids and Lie groupoids. This procedure allows us to take symmetries into account, and, as a by-product, we recover results from Channell and Scovel (Phys D 50(1):80-88, 1991), Ge (Indiana Univ. Math. J. 39(3):859-876, 1990), Ge and Marsden (Phys Lett A 133(3):134-139, 1988), but even in these situations our approach is new. A theory of generating functions for the Poisson structures considered here is also developed following the same pattern, solving a longstanding problem of the area: how to obtain a generating function for the identity transformation and the nearby Poisson automorphisms of Poisson manifolds. A direct application of our results gives the construction of a family of Poisson integrators, that is, integrators that conserve the underlying Poisson geometry. These integrators are implemented in the paper in benchmark problems. Some conclusions, current and future directions of research are shown at the end of the paper.

  9. Crustal structure of the Transantarctic Mountains, Ellsworth Mountains and Marie Byrd Land, Antarctica: constraints on shear wave velocities, Poisson's ratios and Moho depths

    NASA Astrophysics Data System (ADS)

    Ramirez, C.; Nyblade, A.; Emry, E. L.; Julià, J.; Sun, X.; Anandakrishnan, S.; Wiens, D. A.; Aster, R. C.; Huerta, A. D.; Winberry, P.; Wilson, T.

    2017-12-01

    A uniform set of crustal parameters for seismic stations deployed on rock in West Antarctica and the Transantarctic Mountains (TAM) has been obtained to help elucidate similarities and differences in crustal structure within and between several tectonic blocks that make up these regions. P-wave receiver functions have been analysed using the H-κ stacking method to develop estimates of thickness and bulk Poisson's ratio for the crust, and jointly inverted with surface wave dispersion measurements to obtain depth-dependent shear wave velocity models for the crust and uppermost mantle. The results from 33 stations are reported, including three stations for which no previous results were available. The average crustal thickness is 30 ± 5 km along the TAM front, and 38 ± 2 km in the interior of the mountain range. The average Poisson's ratios for these two regions are 0.25 ± 0.03 and 0.26 ± 0.02, respectively, and they have similar average crustal Vs of 3.7 ± 0.1 km s^-1. At multiple stations within the TAM, we observe evidence for mafic layering within or at the base of the crust, which may have resulted from the Ferrar magmatic event. The Ellsworth Mountains have an average crustal thickness of 37 ± 2 km, a Poisson's ratio of 0.27, and average crustal Vs of 3.7 ± 0.1 km s^-1, similar to the TAM. This similarity is consistent with interpretations of the Ellsworth Mountains as a tectonically rotated TAM block. The Ross Island region has an average Moho depth of 25 ± 1 km, an average crustal Vs of 3.6 ± 0.1 km s^-1 and Poisson's ratio of 0.30, consistent with the mafic Cenozoic volcanism found there and its proximity to the Terror Rift. Marie Byrd Land has an average crustal thickness of 30 ± 2 km, Poisson's ratio of 0.25 ± 0.04 and crustal Vs of 3.7 ± 0.1 km s^-1. One station (SILY) in Marie Byrd Land is near an area of recent volcanism and deep (25-40 km) seismicity, and has a high Poisson's ratio, consistent with the presence of partial melt in the crust.

  10. Fractional Relativistic Yamaleev Oscillator Model and Its Dynamical Behaviors

    NASA Astrophysics Data System (ADS)

    Luo, Shao-Kai; He, Jin-Man; Xu, Yan-Li; Zhang, Xiao-Tian

    2016-07-01

    In the paper we construct a new kind of fractional dynamical model, i.e. the fractional relativistic Yamaleev oscillator model, and explore its dynamical behaviors. We find that the fractional relativistic Yamaleev oscillator model possesses a Lie algebraic structure and satisfies the generalized Poisson conservation law, and we give the Poisson conserved quantities of the model. Further, the relation between conserved quantities and integral invariants of the model is studied, and it is proved that integral invariants of the model can be constructed from the Poisson conserved quantities. Finally, the stability of the manifold of equilibrium states of the fractional relativistic Yamaleev oscillator model is studied. The paper provides a general method, i.e. the fractional generalized Hamiltonian method, for constructing a family of fractional dynamical models of an actual dynamical system.

  11. Fractional Brownian motion and long term clinical trial recruitment

    PubMed Central

    Zhang, Qiang; Lai, Dejian

    2015-01-01

    Prediction of recruitment in clinical trials has been a challenging task. Many methods have been studied, including models based on the Poisson process and its large-sample approximation by Brownian motion (BM). However, when the independent-increments structure assumed by the BM model is violated, fractional Brownian motion can be used to model and approximate the underlying Poisson processes with random rates. In this paper, fractional Brownian motion (FBM) is considered for such conditions and compared to the BM model, with illustrative examples from different trials and simulations. PMID:26347306
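
    A small sketch of sampling FBM on a fixed time grid by Cholesky-factoring its covariance, an exact but O(n^3) method; the grid and Hurst exponent below are illustrative choices, not values from the paper:

```python
import numpy as np

def fbm_paths(H, times, n_paths, seed=0):
    """Sample fractional Brownian motion by Cholesky factoring its covariance.

    Cov(B_H(s), B_H(t)) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H});
    H = 0.5 recovers ordinary Brownian motion with independent increments.
    O(n^3) Cholesky is fine for the short recruitment horizons discussed here.
    """
    t = np.asarray(times, dtype=float)
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(t)))
    z = np.random.default_rng(seed).normal(size=(len(t), n_paths))
    return L @ z   # each column is one path evaluated on `times`

times = np.linspace(0.1, 2.0, 40)
paths = fbm_paths(H=0.7, times=times, n_paths=2000)
# Sample variance at each time should track the theoretical t^{2H}
print(np.abs(paths.var(axis=1) - times ** 1.4).max())
```

    H > 0.5 gives positively correlated increments, the kind of persistence that violates the BM model's independence assumption.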

  12. BFV-BRST analysis of equivalence between noncommutative and ordinary gauge theories

    NASA Astrophysics Data System (ADS)

    Dayi, O. F.

    2000-05-01

    The constrained Hamiltonian structure of noncommutative gauge theory for the gauge group U(1) is discussed. The constraints are shown to be first class, although they do not give an Abelian algebra in terms of Poisson brackets. The related BFV-BRST charge gives a vanishing generalized Poisson bracket with itself due to the associativity of the *-product. Equivalence of noncommutative and ordinary gauge theories is formulated in generalized phase space by using the BFV-BRST charge, and a solution is obtained. Gauge fixing is discussed.

  13. Fractional Brownian motion and long term clinical trial recruitment.

    PubMed

    Zhang, Qiang; Lai, Dejian

    2011-05-01

    Prediction of recruitment in clinical trials has been a challenging task. Many methods have been studied, including models based on the Poisson process and its large-sample approximation by Brownian motion (BM). However, when the independent-increments structure assumed by the BM model is violated, fractional Brownian motion can be used to model and approximate the underlying Poisson processes with random rates. In this paper, fractional Brownian motion (FBM) is considered for such conditions and compared to the BM model, with illustrative examples from different trials and simulations.

  14. Beyond single-stream with the Schrödinger method

    NASA Astrophysics Data System (ADS)

    Uhlemann, Cora; Kopp, Michael

    2016-10-01

    We investigate large scale structure formation of collisionless dark matter in the phase space description based on the Vlasov-Poisson equation. We present the Schrödinger method, originally proposed by Widrow & Kaiser (1993) as a numerical technique based on the Schrödinger-Poisson equation, as an analytical tool which is superior to the common standard pressureless fluid model. Whereas the dust model fails and develops singularities at shell crossing, the Schrödinger method encompasses multi-streaming and even virialization.
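
    The Schrödinger-Poisson system underlying the method can be evolved with a split-step spectral scheme; the 1D periodic toy setup, the units (hbar = m = 1), and the value of G below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sp_step(psi, kgrid, dt, G=1.0):
    """One split-step (kick-drift-kick) update of the 1D periodic
    Schrodinger-Poisson system with hbar = m = 1 (toy normalization)."""
    # Half kinetic step in Fourier space: exp(-i * (k^2/2) * (dt/2))
    psi = np.fft.ifft(np.exp(-0.25j * kgrid**2 * dt) * np.fft.fft(psi))
    # Solve Poisson's equation for the density contrast: V_k = -4 pi G drho_k / k^2
    drho = np.abs(psi) ** 2 - np.mean(np.abs(psi) ** 2)
    drho_k = np.fft.fft(drho)
    with np.errstate(divide="ignore", invalid="ignore"):
        V_k = np.where(kgrid != 0, -4 * np.pi * G * drho_k / kgrid**2, 0.0)
    V = np.real(np.fft.ifft(V_k))
    # Full potential step, then the second half kinetic step
    psi = np.exp(-1j * V * dt) * psi
    return np.fft.ifft(np.exp(-0.25j * kgrid**2 * dt) * np.fft.fft(psi))

n, L = 256, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
psi = np.exp(-((x - np.pi) ** 2)) + 0j          # a Gaussian overdensity
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n))
for _ in range(100):
    psi = sp_step(psi, k, dt=1e-3)
print(np.sum(np.abs(psi) ** 2) * (L / n))        # norm is conserved (~1)
```

    Because every sub-step is a unitary phase multiplication, mass is conserved to machine precision, and |psi|^2 stays well defined through shell crossing where the dust model would blow up.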

  15. Poisson's ratio of collagen fibrils measured by small angle X-ray scattering of strained bovine pericardium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, Hannah C.; Sizeland, Katie H.; Kayed, Hanan R.

    Type I collagen is the main structural component of skin, tendons, and skin products, such as leather. Understanding the mechanical performance of collagen fibrils is important for understanding the mechanical performance of the tissues that they make up. While the mechanical properties of bulk tissue are well characterized, less is known about the mechanical behavior of individual collagen fibrils. In this study, bovine pericardium is subjected to strain while small angle X-ray scattering (SAXS) patterns are recorded using synchrotron radiation. The change in d-spacing, which is a measure of fibril extension, and the change in fibril diameter are determined from SAXS. The tissue is strained to 0.25 (25%), with a corresponding strain of 0.045 observed in the collagen fibrils. The ratio of collagen fibril width contraction to length extension, or the Poisson's ratio, is 2.1 ± 0.7 for a tissue strain from 0 to 0.25. This Poisson's ratio indicates that the volume of individual collagen fibrils decreases with increasing strain, which is quite unlike most engineering materials. This high Poisson's ratio of individual fibrils may contribute to the high Poisson's ratio observed for tissues, contributing to some of the remarkable properties of collagen-based materials.
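
    The Poisson's ratio reported here is just the negated ratio of transverse to axial strain; a back-of-the-envelope check using the fibril numbers quoted above (the transverse strain is back-calculated for illustration, not a value from the paper):

```python
def poisson_ratio(transverse_strain, axial_strain):
    """nu = -(transverse strain) / (axial strain)."""
    return -transverse_strain / axial_strain

# Axial fibril strain of 0.045 with a reported ratio of ~2.1 implies a width
# contraction of roughly -0.095 (illustrative back-calculation).
axial = 0.045
transverse = -2.1 * axial
print(poisson_ratio(transverse, axial))   # approximately 2.1

# For reference, an incompressible material has nu = 0.5; nu > 0.5 means the
# fibril loses volume under tension, as the abstract notes.
```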

  16. Probabilistic structural analysis methods for improving Space Shuttle engine reliability

    NASA Technical Reports Server (NTRS)

    Boyce, L.

    1989-01-01

    Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.

  17. Mechanical and Thermophysical Properties of Cubic Rock-Salt AlN Under High Pressure

    NASA Astrophysics Data System (ADS)

    Lebga, Noudjoud; Daoud, Salah; Sun, Xiao-Wei; Bioud, Nadhira; Latreche, Abdelhakim

    2018-03-01

    Density functional theory, density functional perturbation theory, and the Debye model have been used to investigate the structural, elastic, sound velocity, and thermodynamic properties of AlN with cubic rock-salt structure under high pressure, yielding the equilibrium structural parameters, equation of state, and elastic constants of this interesting material. The isotropic shear modulus, Pugh ratio, and Poisson's ratio were also investigated carefully. In addition, the longitudinal, transverse, and average elastic wave velocities, phonon contribution to the thermal conductivity, and interesting thermodynamic properties were predicted and analyzed in detail. The results demonstrate that the behavior of the elastic wave velocities under increasing hydrostatic pressure explains the hardening of the corresponding phonons. Based on the elastic stability criteria under pressure, it is found that AlN with cubic rock-salt structure is mechanically stable, even at pressures up to 100 GPa. Analysis of the Pugh ratio and Poisson's ratio revealed that AlN with cubic rock-salt structure behaves in brittle manner.

  18. Effect of stiffness characteristics on the response of composite grid-stiffened structures

    NASA Technical Reports Server (NTRS)

    Ambur, Damodar R.; Rehfield, Lawrence W.

    1991-01-01

    A study of the effect of stiffness discontinuities and structural parameters on the response of continuous-filament grid-stiffened flat panels is presented. The buckling load degradation due to manufacturing-introduced stiffener discontinuities associated with a filament cut-and-add approach at the stiffener intersections is investigated. The degradation of buckling resistance in isogrid flat panels subjected to uni-axial compression and combined axial compression and shear loading conditions and induced damage is quantified using FEM. The combined loading case is the most critical one. Nonsolid stiffener cross sections, such as a foam-filled blade or hat with a 0-deg dominant cap, result in grid-stiffened structures that are structurally very efficient for wing and fuselage applications. The results of a study of the ability of grid-stiffened structural concepts to enhance the effective Poisson's ratio of a panel are presented. Grid-stiffened concepts create a highly effective Poisson's ratio, which can produce large camber deformations for certain elastic tailoring applications.

  19. Evaluation of malaria rapid diagnostic test (RDT) use by community health workers: a longitudinal study in western Kenya.

    PubMed

    Boyce, Matthew R; Menya, Diana; Turner, Elizabeth L; Laktabai, Jeremiah; Prudhomme-O'Meara, Wendy

    2018-05-18

    Malaria rapid diagnostic tests (RDTs) are a simple, point-of-care technology that can improve the diagnosis and subsequent treatment of malaria. They are an increasingly common diagnostic tool, but concerns remain about their use by community health workers (CHWs). These concerns regard long-term trends in infection-prevention measures, the interpretation of test results, and adherence to treatment protocols. This study assessed whether CHWs maintained their competency at conducting RDTs over a 12-month timeframe, and whether this competency varied with specific CHW characteristics. From June to September, 2015, CHWs (n = 271) were trained to conduct RDTs using a 3-day validated curriculum and a baseline assessment was completed. Between June and August, 2016, CHWs (n = 105) were randomly selected and recruited for follow-up assessments using a 20-step checklist that classified steps as relating to safety, accuracy, and treatment; 103 CHWs participated in follow-up assessments. Poisson regressions were used to test for associations between CHW characteristics and error count data at follow-up, and Poisson regression models fit using generalized estimating equations were used to compare data across time-points. At both baseline and follow-up observations, at least 80% of CHWs correctly completed 17 of the 20 steps. CHW age of 50 years or older was associated with increased total errors and safety errors at baseline and follow-up. At follow-up, prior experience conducting RDTs was associated with fewer errors. Performance, as it related to the correct completion of all checklist steps and safety steps, did not decline over the 12 months, and performance of accuracy steps improved (mean error ratio: 0.51; 95% CI 0.40-0.63). Visual interpretation of RDT results yielded a CHW sensitivity of 92.0% and a specificity of 97.3% when compared to interpretation by the research team. None of the characteristics investigated was found to be significantly associated with RDT interpretation. With training, most CHWs performing RDTs maintain diagnostic testing competency over at least 12 months. CHWs generally perform RDTs safely and accurately interpret results. Younger age and prior experience with RDTs were associated with better testing performance. Future research should investigate the mode by which CHW characteristics impact RDT procedures.
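
    A generic Poisson-regression fit of the kind described (error counts against a CHW characteristic) can be sketched with iteratively reweighted least squares; the data, the binary age covariate, and the coefficients below are synthetic, and the study's GEE machinery for correlated repeated measures is deliberately omitted:

```python
import numpy as np

def poisson_irls(X, y, iters=50):
    """Fit log-linear Poisson regression E[y] = exp(X @ beta) by IRLS.

    A sketch of the model family used for the error-count analyses; the
    study's actual fits used generalized estimating equations, which this
    independent-observations version does not attempt.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        W = mu                                  # Poisson variance = mean
        z = X @ beta + (y - mu) / mu            # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Synthetic data: the x=1 group makes exp(0.5) times more errors on average
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=2000)
X = np.column_stack([np.ones_like(x), x]).astype(float)
y = rng.poisson(np.exp(0.2 + 0.5 * x))
beta = poisson_irls(X, y)
print(beta)    # close to the true coefficients [0.2, 0.5]
```

    Exponentiating beta[1] gives the error rate ratio between groups, the quantity reported in studies like this one.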

  20. Parametric Studies Of Lightweight Reflectors Supported On Linear Actuator Arrays

    NASA Astrophysics Data System (ADS)

    Seibert, George E.

    1987-10-01

    This paper presents the results of numerous design studies carried out at Perkin-Elmer in support of the design of large diameter controllable mirrors for use in laser beam control, surveillance, and astronomy programs. The results include relationships between actuator location and spacing and the associated degree of correctability attainable for a variety of faceplate configurations subjected to typical disturbance environments. Normalizations and design curves obtained from closed-form equations based on thin shallow shell theory and computer-based finite-element analyses are presented for use in preliminary design estimates of actuator count, faceplate structural properties, system performance prediction and weight assessments. The results of the analyses were obtained from a very wide range of mirror configurations, including both continuous and segmented mirror geometries. Typically, the designs consisted of a thin facesheet controlled by point force actuators which in turn were mounted on a structurally efficient base panel, or "reaction structure". The faceplate materials considered were fused silica, ULE fused silica, Zerodur, aluminum and beryllium. Thin solid faceplates as well as rib-reinforced cross-sections were treated, with a wide variation in thickness and/or rib patterns. The magnitude and spatial frequency distribution of the residual or uncorrected errors were related to the input error functions for mirrors of many different diameters and focal ratios. The error functions include simple sphere-to-sphere corrections, "parabolization" of spheres, and higher spatial frequency input error maps ranging from 0.5 to 7.5 cycles per diameter. The parameter which dominates all of the results obtained to date is a structural descriptor of thin shell behavior called the characteristic length. This parameter is a function of the shell's radius of curvature, thickness, and Poisson's ratio of the material used.
    The value of this constant, in itself, describes the extent to which the deflection under a point force is localized by the shell's curvature. The deflection shape is typically a near-Gaussian "bump" with a zero-crossing at a local radius of approximately 3.5 characteristic lengths. The amplitude is a function of the shell's elastic modulus, radius, and thickness, and is linearly proportional to the applied force. This basic shell behavior is well treated in an excellent set of papers by Eric Reissner entitled "Stresses and Small Displacements of Shallow Spherical Shells" [1, 2]. Building on the insight offered by these papers, we developed our design tools around two derived parameters: the ratio of the mirror's diameter to its characteristic length (D/l), and the ratio of the actuator spacing to the characteristic length (b/l). The D/l ratio determines the "finiteness" of the shell, or its dependence on edge boundary conditions. For D/l values greater than 10, the influence of edges is almost totally absent from interior behavior. The b/l ratio, the basis of all our normalizations, is the most universal term in the description of correctability, or the ratio of residual to input errors. The data presented in the paper show that the rms residual error divided by the peak amplitude of the input error function is related to the actuator-spacing-to-characteristic-length ratio by the following expression: RMS Residual Error / Initial Error Amplitude = k (b/l)^3.5 (1). The value of k ranges from approximately 0.001 for low spatial frequency initial errors up to 0.05 for higher error frequencies (e.g. 5 cycles/diameter). The studies also yielded insight into the forces required to produce typical corrections at both the center and edges of the mirror panels. Additionally, the data lend themselves to rapid evaluation of the effects of trading faceplate weight for increased actuator count,
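
    The characteristic length and the correctability relation (Eq. 1) can be turned into a quick design estimate. The shallow-shell formula below is the standard Reissner-type definition, which may differ from the paper's exact normalization by a constant factor, and the mirror dimensions are invented:

```python
def characteristic_length(R, t, nu):
    """Shallow-shell characteristic length, l = sqrt(R*t) / (12*(1-nu**2))**0.25.

    Standard Reissner-type definition; conventions differ by constant factors,
    so treat this as illustrative rather than the paper's exact normalization.
    """
    return (R * t) ** 0.5 / (12 * (1 - nu ** 2)) ** 0.25

def residual_fraction(b, l, k):
    """Correctability estimate from the paper's fitted relation (Eq. 1):
    rms residual / initial amplitude = k * (b/l)**3.5."""
    return k * (b / l) ** 3.5

# Illustrative mirror: 10 m radius of curvature, 5 mm faceplate, fused silica
l = characteristic_length(R=10.0, t=0.005, nu=0.17)
print(l)                                         # characteristic length, metres
print(residual_fraction(b=0.1, l=l, k=0.001))    # low-spatial-frequency case
```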

  1. Association between split selection instability and predictive error in survival trees.

    PubMed

    Radespiel-Tröger, M; Gefeller, O; Rabenstein, T; Hothorn, T

    2006-01-01

    To evaluate split selection instability in six survival tree algorithms and its relationship with predictive error by means of a bootstrap study. We study the following algorithms: logrank statistic with multivariate p-value adjustment without pruning (LR), Kaplan-Meier distance of survival curves (KM), martingale residuals (MR), Poisson regression for censored data (PR), within-node impurity (WI), and exponential log-likelihood loss (XL). With the exception of LR, initial trees are pruned by using split-complexity, and final trees are selected by means of cross-validation. We employ a real dataset from a clinical study of patients with gallbladder stones. The predictive error is evaluated using the integrated Brier score for censored data. The relationship between split selection instability and predictive error is evaluated by means of box-percentile plots, covariate and cutpoint selection entropy, and cutpoint selection coefficients of variation, respectively, in the root node. We found a positive association between covariate selection instability and predictive error in the root node. LR yields the lowest predictive error, while KM and MR yield the highest predictive error. The predictive error of survival trees is related to split selection instability. Based on the low predictive error of LR, we recommend the use of this algorithm for the construction of survival trees. Unpruned survival trees with multivariate p-value adjustment can perform equally well compared to pruned trees. The analysis of split selection instability can be used to communicate the results of tree-based analyses to clinicians and to support the application of survival trees.

  2. Sphericity determination using resonant ultrasound spectroscopy

    DOEpatents

    Dixon, Raymond D.; Migliori, Albert; Visscher, William M.

    1994-01-01

    A method is provided for grading production quantities of spherical objects, such as roller balls for bearings. A resonant ultrasound spectrum (RUS) is generated for each spherical object and a set of degenerate sphere-resonance frequencies is identified. From the degenerate sphere-resonance frequencies and known relationships between degenerate sphere-resonance frequencies and Poisson's ratio, a Poisson's ratio can be determined, along with a "best" spherical diameter, to form spherical parameters for the sphere. From the RUS, fine-structure resonant frequency spectra are identified for each degenerate sphere-resonance frequency previously selected. From each fine-structure spectrum and associated sphere parameter values an asphericity value is determined. The asphericity value can then be compared with predetermined values to provide a measure for accepting or rejecting the sphere.

  3. Sphericity determination using resonant ultrasound spectroscopy

    DOEpatents

    Dixon, R.D.; Migliori, A.; Visscher, W.M.

    1994-10-18

    A method is provided for grading production quantities of spherical objects, such as roller balls for bearings. A resonant ultrasound spectrum (RUS) is generated for each spherical object and a set of degenerate sphere-resonance frequencies is identified. From the degenerate sphere-resonance frequencies and known relationships between degenerate sphere-resonance frequencies and Poisson's ratio, a Poisson's ratio can be determined, along with a 'best' spherical diameter, to form spherical parameters for the sphere. From the RUS, fine-structure resonant frequency spectra are identified for each degenerate sphere-resonance frequency previously selected. From each fine-structure spectrum and associated sphere parameter values an asphericity value is determined. The asphericity value can then be compared with predetermined values to provide a measure for accepting or rejecting the sphere. 14 figs.

  4. Stability of continuous-time quantum filters with measurement imperfections

    NASA Astrophysics Data System (ADS)

    Amini, H.; Pellegrini, C.; Rouchon, P.

    2014-07-01

    The fidelity between the state of a continuously observed quantum system and the state of its associated quantum filter is shown to be always a submartingale. The observed system is assumed to be governed by a continuous-time Stochastic Master Equation (SME), driven simultaneously by Wiener and Poisson processes, that takes into account incompleteness and errors in measurements. This stability result is the continuous-time counterpart of a similar stability result already established for discrete-time quantum systems, where the measurement imperfections are modelled by a left stochastic matrix.

  5. Constraining the noise-free distribution of halo spin parameters

    NASA Astrophysics Data System (ADS)

    Benson, Andrew J.

    2017-11-01

    Any measurement made using an N-body simulation is subject to noise due to the finite number of particles used to sample the dark matter distribution function, and the lack of structure below the simulation resolution. This noise can be particularly significant when attempting to measure intrinsically small quantities, such as halo spin. In this work, we develop a model to describe the effects of particle noise on halo spin parameters. This model is calibrated using N-body simulations in which the particle noise can be treated as a Poisson process on the underlying dark matter distribution function, and we demonstrate that this calibrated model reproduces measurements of halo spin parameter error distributions previously measured in N-body convergence studies. Utilizing this model, along with previous measurements of the distribution of halo spin parameters in N-body simulations, we place constraints on the noise-free distribution of halo spins. We find that the noise-free median spin is 3 per cent lower than that measured directly from the N-body simulation, corresponding to a shift of approximately 40 times the statistical uncertainty in this measurement arising purely from halo counting statistics. We also show that measurement of the spin of an individual halo to 10 per cent precision requires at least 4 × 10^4 particles in the halo - for haloes containing 200 particles, the fractional error on spins measured for individual haloes is of order unity. N-body simulations should be viewed as the results of a statistical experiment applied to a model of dark matter structure formation. When viewed in this way, it is clear that determination of any quantity from such a simulation should be made through forward modelling of the effects of particle noise.
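
    The quoted particle counts are consistent with the usual N^(-1/2) sampling-noise scaling; a sanity check under that assumption (this simple scaling is our illustration, not the paper's calibrated noise model):

```python
import numpy as np

def fractional_spin_error(n_particles, n_ref=4e4, err_ref=0.10):
    """Scale the quoted 10% error at 4e4 particles by sqrt(n_ref / n),
    assuming pure Poisson (1/sqrt(N)) sampling noise."""
    return err_ref * np.sqrt(n_ref / n_particles)

print(fractional_spin_error(4e4))   # 0.10 by construction
print(fractional_spin_error(200))   # ~1.4: order unity, as the abstract states
```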

  6. The auxetic behavior of an expanded periodic cellular structure

    NASA Astrophysics Data System (ADS)

    Ciolan, Mihaela A.; Lache, Simona; Velea, Marian N.

    2018-02-01

    In current research on lightweight sandwich panels, periodic cellular structures are considered real trendsetters. One of the most widely used core types for sandwich panels is the honeycomb. However, due to its relatively high manufacturing cost, this structure has limited applications; research has therefore been carried out to develop alternative solutions. An example is the ExpaAsym cellular structure, developed at the Transilvania University of Braşov: a periodic cellular structure manufactured through a mechanical expansion process applied to a previously cut and perforated sheet material. The relative density of the structure was proven to be significantly lower than that of the honeycomb. A further advantage is that when the internal angle A of the unit cell is 60°, the mechanical expansion yields a hexagonal structure. The main objective of this paper is to estimate the in-plane Poisson ratios of the structure in terms of its geometrical parameters. It is shown analytically that, for certain values of the geometric parameters, the in-plane Poisson ratios take negative values when the internal angle exceeds 90°, which determines the auxetic behavior.
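The sign flip at an internal angle of 90° has a well-known analogue in the classical cellular-solids result for an idealized honeycomb (walls deforming by bending). The formula below is that textbook expression, not the ExpaAsym model derived in the paper; here the wall angle θ is measured from the horizontal, so a negative θ corresponds to a re-entrant cell, i.e. an internal angle exceeding 90°.

```python
import math

def nu12(theta_deg, h_over_l):
    """In-plane Poisson's ratio nu_12 of an idealized honeycomb
    (classical cellular-solids formula, bending-dominated walls).

    theta_deg: inclined-wall angle from the horizontal; negative values
    give a re-entrant (auxetic) cell. h_over_l: vertical-to-inclined
    wall length ratio.
    """
    t = math.radians(theta_deg)
    return math.cos(t) ** 2 / ((h_over_l + math.sin(t)) * math.sin(t))

print(nu12(30.0, 1.0))   # regular hexagonal cell: +1.0
print(nu12(-30.0, 1.0))  # re-entrant cell (internal angle > 90 deg): -3.0
```

The same geometric mechanism (inclined walls rotating outward under stretch once the cell is re-entrant) is what produces the negative in-plane ratios reported for the expanded structure.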

  7. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.
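One of the talk's flagged topics, Bayesian inference for Poisson processes, has a fully closed-form special case that makes the mechanics concrete: a Gamma prior on a Poisson rate is conjugate, so the posterior is again a Gamma with updated parameters. The prior parameters and counts below are illustrative, not from the talk.

```python
import math

# Conjugate update for a Poisson rate lambda:
# prior  Gamma(alpha, beta)  (shape, rate)
# data   k_1..k_n ~ Poisson(lambda)
# posterior Gamma(alpha + sum(k), beta + n)
alpha0, beta0 = 2.0, 1.0          # illustrative prior choice
counts = [3, 5, 4, 6, 2]          # hypothetical photon counts per exposure

alpha_post = alpha0 + sum(counts)
beta_post = beta0 + len(counts)

post_mean = alpha_post / beta_post
post_sd = math.sqrt(alpha_post) / beta_post
print(post_mean, post_sd)  # posterior shrinks toward the data mean as n grows
```

The same model is a one-liner in Stan (`k ~ poisson(lambda); lambda ~ gamma(alpha, beta);`), which is the route the tutorial's examples take for problems without conjugate structure.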

  8. Analysis of counting errors in the phase/Doppler particle analyzer

    NASA Technical Reports Server (NTRS)

    Oldenburg, John R.

    1987-01-01

    NASA is investigating the application of the Phase Doppler measurement technique to provide improved drop sizing and liquid water content measurements in icing research. The magnitude of counting errors was analyzed because these errors contribute to inaccurate liquid water content measurements. The Phase Doppler Particle Analyzer counting errors due to data transfer losses and coincidence losses were analyzed for data input rates from 10 samples/sec to 70,000 samples/sec. Coincidence losses were calculated by determining the Poisson probability of having more than one event occur during the droplet signal time. The magnitude of the coincidence loss can be determined, and for less than a 15 percent loss, corrections can be made. The data transfer losses were estimated for representative data transfer rates. With direct memory access enabled, data transfer losses are less than 5 percent for input rates below 2000 samples/sec. With direct memory access disabled, losses exceeded 20 percent at a rate of 50 samples/sec, preventing accurate number density or mass flux measurements. The data transfer losses of a new signal processor were analyzed and found to be less than 1 percent for rates under 65,000 samples/sec.
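The coincidence-loss calculation described above is a standard Poisson computation: with arrival rate λ and a signal window τ, the probability of more than one event in the window is 1 - e^(-μ)(1 + μ) with μ = λτ. The window duration below is illustrative, not a value from the report.

```python
import math

def coincidence_fraction(rate_hz, window_s):
    """Poisson probability of more than one arrival in a signal window.

    P(N > 1) = 1 - exp(-mu) * (1 + mu), with mu = rate * window.
    """
    mu = rate_hz * window_s
    return 1.0 - math.exp(-mu) * (1.0 + mu)

# Illustrative: a 10-microsecond droplet signal time over the abstract's
# range of sample rates (10 to 70,000 samples/sec).
for rate in (10, 2_000, 70_000):
    print(rate, coincidence_fraction(rate, 10e-6))
```

For small μ the loss grows as μ²/2, which is why coincidence losses are negligible at low rates and climb quickly at the top of the rate range.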

  9. Multipolar Ewald methods, 1: theory, accuracy, and performance.

    PubMed

    Giese, Timothy J; Panteva, Maria T; Chen, Haoyuan; York, Darrin M

    2015-02-10

    The Ewald, Particle Mesh Ewald (PME), and Fast Fourier–Poisson (FFP) methods are developed for systems composed of spherical multipole moment expansions. A unified set of equations is derived that takes advantage of a spherical tensor gradient operator formalism in both real space and reciprocal space to allow extension to arbitrary multipole order. The implementation of these methods into a novel linear-scaling modified “divide-and-conquer” (mDC) quantum mechanical force field is discussed. The evaluation times and relative force errors are compared between the three methods, as a function of multipole expansion order. Timings and errors are also compared within the context of the quantum mechanical force field, which encounters primary errors related to the quality of reproducing electrostatic forces for a given density matrix and secondary errors resulting from the propagation of the approximate electrostatics into the self-consistent field procedure, which yields a converged, variational, but nonetheless approximate density matrix. Condensed-phase simulations of an mDC water model are performed with the multipolar PME method and compared to an electrostatic cutoff method, which is shown to artificially increase the density of water and heat of vaporization relative to full electrostatic treatment.

  10. Error Recovery in the Time-Triggered Paradigm with FTT-CAN.

    PubMed

    Marques, Luis; Vasconcelos, Verónica; Pedreiras, Paulo; Almeida, Luís

    2018-01-11

    Data networks are naturally prone to interferences that can corrupt messages, leading to performance degradation or even to critical failure of the corresponding distributed system. To improve resilience of critical systems, time-triggered networks are frequently used, based on communication schedules defined at design-time. These networks offer prompt error detection, but slow error recovery that can only be compensated with bandwidth overprovisioning. On the contrary, the Flexible Time-Triggered (FTT) paradigm uses online traffic scheduling, which enables a compromise between error detection and recovery that can achieve timely recovery with a fraction of the needed bandwidth. This article presents a new method to recover transmission errors in a time-triggered Controller Area Network (CAN) network, based on the Flexible Time-Triggered paradigm, namely FTT-CAN. The method is based on using a server (traffic shaper) to regulate the retransmission of corrupted or omitted messages. We show how to design the server to simultaneously: (1) meet a predefined reliability goal, when considering worst case error recovery scenarios bounded probabilistically by a Poisson process that models the fault arrival rate; and, (2) limit the direct and indirect interference in the message set, preserving overall system schedulability. Extensive simulations with multiple scenarios, based on practical and randomly generated systems, show a reduction of two orders of magnitude in the average bandwidth taken by the proposed error recovery mechanism, when compared with traditional approaches available in the literature based on adding extra pre-defined transmission slots.
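The reliability-goal constraint described above can be sketched as a Poisson tail computation: given a fault arrival rate and a scheduling window, find the smallest retransmission budget whose probability of being exceeded stays below the goal. This is a minimal sketch of the budgeting idea with made-up numbers, not the paper's actual server design.

```python
import math

def poisson_tail_exceeds(mu, k):
    """P(N > k) for N ~ Poisson(mu)."""
    cdf = sum(mu**i * math.exp(-mu) / math.factorial(i) for i in range(k + 1))
    return 1.0 - cdf

def min_retransmission_budget(fault_rate_hz, window_s, goal):
    """Smallest per-window retransmission count k with P(faults > k) <= goal."""
    mu = fault_rate_hz * window_s
    k = 0
    while poisson_tail_exceeds(mu, k) > goal:
        k += 1
    return k

# Illustrative numbers: 30 faults/hour, 100 ms window, 1e-9 failure goal.
print(min_retransmission_budget(30 / 3600.0, 0.1, 1e-9))
```

Because the tail falls off factorially, a budget of only a couple of retransmission slots per window typically suffices, which is the intuition behind the two-orders-of-magnitude bandwidth saving over pre-allocated slots.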

  12. Rapid computation of directional wellbore drawdown in a confined aquifer via Poisson resummation

    NASA Astrophysics Data System (ADS)

    Blumenthal, Benjamin J.; Zhan, Hongbin

    2016-08-01

    We have derived a rapidly computed analytical solution for drawdown caused by a partially or fully penetrating directional wellbore (vertical, horizontal, or slant) via Green's function method. The mathematical model assumes an anisotropic, homogeneous, confined, box-shaped aquifer. Any dimension of the box can have one of six possible boundary conditions: 1) both sides no-flux; 2) one side no-flux - one side constant-head; 3) both sides constant-head; 4) one side no-flux; 5) one side constant-head; 6) free boundary conditions. The solution has been optimized for rapid computation via Poisson Resummation, derivation of convergence rates, and numerical optimization of integration techniques. Upon application of the Poisson Resummation method, we were able to derive two sets of solutions with inverse convergence rates, namely an early-time rapidly convergent series (solution-A) and a late-time rapidly convergent series (solution-B). From this work we were able to link Green's function method (solution-B) back to image well theory (solution-A). We then derived an equation defining when the convergence rate between solution-A and solution-B is the same, which we termed the switch time. Utilizing the more rapidly convergent solution at the appropriate time, we obtained rapid convergence at all times. We have also shown that one may simplify each of the three infinite series for the three-dimensional solution to 11 terms and still maintain a maximum relative error of less than 10⁻¹⁴.
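The early-/late-time series pair and the "switch time" have a classical prototype in the Jacobi theta function: Poisson summation converts a series that converges quickly at large t into one that converges quickly at small t, with both converging at the same rate at t = 1. This is a minimal illustration of the resummation mechanism, not the authors' drawdown solution.

```python
import math

def theta_direct(t, terms=50):
    """theta(t) = sum_{n in Z} exp(-pi n^2 t); converges fast for large t."""
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * t) for n in range(1, terms))

def theta_dual(t, terms=50):
    """Poisson-resummed form theta(t) = t^{-1/2} theta(1/t);
    converges fast for small t."""
    return theta_direct(1.0 / t, terms) / math.sqrt(t)

def theta(t, terms=50):
    # "Switch time" t = 1: both series converge at the same rate there,
    # so pick whichever converges faster on each side.
    return theta_direct(t, terms) if t >= 1.0 else theta_dual(t, terms)

# At t = 0.5 the direct series needs many terms; the dual needs only a few.
print(abs(theta_direct(0.5, 200) - theta_dual(0.5, 5)))
```

Selecting the representation by a switch time, exactly as above, is what lets the combined solution converge rapidly at all times with a handful of terms.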

  13. Evaluation of the accuracy of the Rotating Parallel Ray Omnidirectional Integration for instantaneous pressure reconstruction from the measured pressure gradient

    NASA Astrophysics Data System (ADS)

    Moreto, Jose; Liu, Xiaofeng

    2017-11-01

    The accuracy of the Rotating Parallel Ray omnidirectional integration for pressure reconstruction from the measured pressure gradient (Liu et al., AIAA paper 2016-1049) is evaluated against both the Circular Virtual Boundary omnidirectional integration (Liu and Katz, 2006 and 2013) and the conventional Poisson equation approach. A Dirichlet condition at one boundary point and Neumann conditions at all other boundary points are applied to the Poisson solver. A direct numerical simulation database of isotropic turbulence flow (JHTDB), with homogeneously distributed random noise added to the entire field of the DNS pressure gradient, is used to assess the performance of the methods. The random noise, generated by the Matlab function rand, has a magnitude varying randomly within ±40% of the maximum DNS pressure gradient. To account for the effect of the noise distribution pattern on the reconstructed pressure accuracy, a total of 1000 different noise distributions, obtained using different random number seeds, are involved in the evaluation. Final results after averaging the 1000 realizations show that the error of the reconstructed pressure, normalized by the DNS pressure variation range, is 0.15 ± 0.07 for the Poisson equation approach, 0.028 ± 0.003 for the Circular Virtual Boundary method, and 0.027 ± 0.003 for the Rotating Parallel Ray method, indicating the robustness of the Rotating Parallel Ray method in pressure reconstruction. Sponsor: The San Diego State University UGP program.

  14. Post-stratification sampling in small area estimation (SAE) model for unemployment rate estimation by Bayes approach

    NASA Astrophysics Data System (ADS)

    Hanike, Yusrianti; Sadik, Kusman; Kurnia, Anang

    2016-02-01

    This research modelled the unemployment rate in Indonesia based on the Poisson distribution, estimated through a modified combination of post-stratification sampling and a Small Area Estimation (SAE) model. Post-stratification is a sampling technique in which strata are formed after the survey data have been collected; it is used when the survey design does not directly support estimation for the area of interest. The area of interest here is the education level of the unemployed, separated into seven categories. The data were obtained from the National Labour Force Survey (Sakernas), conducted by Statistics Indonesia (BPS); this national survey yields samples that are too small at the district level, and SAE modelling is one alternative for addressing this. Accordingly, we combined post-stratification sampling with an SAE model. This research presents two main post-stratification models: in model I the education category is a dummy variable, and in model II the education category is an area random effect. Both models violated the Poisson assumption: using a Poisson-Gamma model, the overdispersion in model I was reduced from 1.23 to 0.91 chi-square/df, and the underdispersion in model II was corrected from 0.35 to 0.94 chi-square/df. Empirical Bayes was applied to estimate the proportion of unemployment in each education category. Using the Bayesian Information Criterion (BIC), model I has a smaller mean square error (MSE) than model II.
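The over/underdispersion figures quoted above (chi-square/df relative to 1) come from the Pearson dispersion statistic for count data. A minimal sketch with synthetic counts, not the Sakernas data: a Poisson-Gamma mixture is exactly a negative binomial, which is why it shows dispersion above 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def pearson_dispersion(counts, fitted_means):
    """Pearson chi-square / df for count data.
    ~1 for a well-specified Poisson model; >1 over-, <1 underdispersion."""
    counts = np.asarray(counts, dtype=float)
    mu = np.asarray(fitted_means, dtype=float)
    df = counts.size - 1  # one fitted parameter (the common mean) here
    return np.sum((counts - mu) ** 2 / mu) / df

# Pure Poisson data: dispersion ~ 1.
y_pois = rng.poisson(5.0, 2000)
d_pois = pearson_dispersion(y_pois, np.full(2000, y_pois.mean()))

# Poisson-Gamma mixture (negative binomial, mean 5, variance 10): dispersion > 1.
y_nb = rng.negative_binomial(5, 0.5, 2000)
d_nb = pearson_dispersion(y_nb, np.full(2000, y_nb.mean()))
print(d_pois, d_nb)
```

In a real SAE fit the `fitted_means` would come from the regression model rather than a common mean, but the diagnostic is the same.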

  15. SELF-GRAVITATIONAL FORCE CALCULATION OF SECOND-ORDER ACCURACY FOR INFINITESIMALLY THIN GASEOUS DISKS IN POLAR COORDINATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hsiang-Hsu; Taam, Ronald E.; Yen, David C. C., E-mail: yen@math.fju.edu.tw

    Investigating the evolution of disk galaxies and the dynamics of proto-stellar disks can involve the use of both a hydrodynamical and a Poisson solver. These systems are usually approximated as infinitesimally thin disks using two-dimensional Cartesian or polar coordinates. In Cartesian coordinates, the calculations of the hydrodynamics and self-gravitational forces are relatively straightforward for attaining second-order accuracy. However, in polar coordinates, a second-order calculation of self-gravitational forces is required for matching the second-order accuracy of hydrodynamical schemes. We present a direct algorithm for calculating self-gravitational forces with second-order accuracy without artificial boundary conditions. The Poisson integral in polar coordinates is expressed in a convolution form and the corresponding numerical complexity is nearly linear using a fast Fourier transform. Examples with analytic solutions are used to verify that the truncation error of this algorithm is of second order. The kernel integral around the singularity is applied to modify the particle method. The use of a softening length is avoided and the accuracy of the particle method is significantly improved.

  16. Atomic Charge Parameters for the Finite Difference Poisson-Boltzmann Method Using Electronegativity Neutralization.

    PubMed

    Yang, Qingyi; Sharp, Kim A

    2006-07-01

    An optimization of Rappe and Goddard's charge equilibration (QEq) method of assigning atomic partial charges is described. This optimization is designed for fast and accurate calculation of solvation free energies using the finite difference Poisson-Boltzmann (FDPB) method. The optimization is performed against experimental small molecule solvation free energies using the FDPB method and adjusting Rappe and Goddard's atomic electronegativity values. Using a test set of compounds for which experimental solvation energies are available and a rather small number of parameters, very good agreement was obtained with experiment, with a mean unsigned error of about 0.5 kcal/mol. The QEq atomic partial charge assignment method can reflect the effects of the conformational changes and solvent induction on charge distribution in molecules. In the second section of the paper we examined this feature with a study of the alanine dipeptide conformations in water solvent. The different contributions to the energy surface of the dipeptide were examined and compared with the results from fixed CHARMm charge potential, which is widely used for molecular dynamics studies.
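The linear-algebra core of a QEq-style charge assignment is electronegativity equalization: minimize a quadratic charge energy subject to a total-charge constraint, which a Lagrange multiplier turns into one bordered linear solve. The sketch below uses made-up electronegativities, hardnesses, and couplings for a three-atom fragment, and omits QEq's distance-dependent Coulomb integrals and self-consistent hydrogen treatment.

```python
import numpy as np

def equalize_charges(chi, hardness, J, total_charge=0.0):
    """Solve the electronegativity-equalization (QEq-style) linear system.

    Minimizes E = chi.q + 0.5 q.H.q subject to sum(q) = Q via a Lagrange
    multiplier; H has atomic hardnesses on the diagonal and Coulomb-like
    couplings J_ij off-diagonal.
    """
    n = len(chi)
    H = np.array(J, dtype=float)
    np.fill_diagonal(H, hardness)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = H
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.concatenate([-np.asarray(chi, dtype=float), [total_charge]])
    return np.linalg.solve(A, b)[:n]

# Hypothetical "water-like" O-H-H fragment: O more electronegative than H.
chi = [8.7, 4.5, 4.5]                    # illustrative electronegativities (eV)
eta = [13.0, 13.9, 13.9]                 # illustrative hardnesses (eV)
J = [[0, 7, 7], [7, 0, 5], [7, 5, 0]]    # illustrative couplings (eV)
q = equalize_charges(chi, eta, J)
print(q)  # O picks up negative charge, the H atoms positive; sum is zero
```

Because the coupling matrix depends on geometry, re-solving this system along a trajectory is what lets QEq charges respond to conformational change, the feature examined in the paper's alanine dipeptide study.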

  17. Three-dimensionally bonded spongy graphene material with super compressive elasticity and near-zero Poisson's ratio.

    PubMed

    Wu, Yingpeng; Yi, Ningbo; Huang, Lu; Zhang, Tengfei; Fang, Shaoli; Chang, Huicong; Li, Na; Oh, Jiyoung; Lee, Jae Ah; Kozlov, Mikhail; Chipara, Alin C; Terrones, Humberto; Xiao, Peishuang; Long, Guankui; Huang, Yi; Zhang, Fan; Zhang, Long; Lepró, Xavier; Haines, Carter; Lima, Márcio Dias; Lopez, Nestor Perea; Rajukumar, Lakshmy P; Elias, Ana L; Feng, Simin; Kim, Seon Jeong; Narayanan, N T; Ajayan, Pulickel M; Terrones, Mauricio; Aliev, Ali; Chu, Pengfei; Zhang, Zhong; Baughman, Ray H; Chen, Yongsheng

    2015-01-20

    It is a challenge to fabricate graphene bulk materials with properties arising from the nature of individual graphene sheets that assemble into monolithic three-dimensional structures. Here we report the scalable self-assembly of randomly oriented graphene sheets into additive-free, essentially homogeneous graphene sponge materials that provide a combination of both cork-like and rubber-like properties. These graphene sponges, with densities similar to that of air, display Poisson's ratios in all directions that are near-zero and largely strain-independent during reversible compression to giant strains. At the same time, they function as enthalpic rubbers, which can recover up to 98% compression in air and 90% in liquids, and operate between -196 and 900 °C. Furthermore, these sponges provide reversible liquid absorption for hundreds of cycles and then discharge it within seconds, while still providing an effective near-zero Poisson's ratio.

  18. Statistical shape analysis using 3D Poisson equation--A quantitatively validated approach.

    PubMed

    Gao, Yi; Bouix, Sylvain

    2016-05-01

    Statistical shape analysis has been an important area of research with applications in biology, anatomy, neuroscience, agriculture, paleontology, etc. Unfortunately, the proposed methods are rarely quantitatively evaluated, and as shown in recent studies, when they are evaluated, significant discrepancies exist in their outputs. In this work, we concentrate on the problem of finding the consistent location of deformation between two populations of shapes. We propose a new shape analysis algorithm along with a framework to perform a quantitative evaluation of its performance. Specifically, the algorithm constructs a Signed Poisson Map (SPoM) by solving two Poisson equations on the volumetric shapes of arbitrary topology, and statistical analysis is then carried out on the SPoMs. The method is quantitatively evaluated on synthetic shapes and applied to real shape data sets of brain structures. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. A fourth order PDE based fuzzy c- means approach for segmentation of microscopic biopsy images in presence of Poisson noise for cancer detection.

    PubMed

    Kumar, Rajesh; Srivastava, Subodh; Srivastava, Rajeev

    2017-07-01

    For cancer detection from microscopic biopsy images, the image segmentation step used to segment cells and nuclei plays an important role, and the accuracy of the segmentation approach dominates the final results. Microscopic biopsy images also carry intrinsic Poisson noise, and if it is present the segmentation results may not be accurate. The objective is to propose an efficient fuzzy c-means based segmentation approach that also handles the noise present in the image during the segmentation process itself, i.e. noise removal and segmentation are combined in one step. To address these issues, this paper proposes a fourth order partial differential equation (FPDE) based nonlinear filter adapted to Poisson noise, combined with the fuzzy c-means segmentation method. This approach effectively handles the segmentation problem of blocky artifacts while achieving a good tradeoff between Poisson noise removal and edge preservation in the microscopic biopsy images during the segmentation process for cancer detection from cells. The proposed approach is tested on a breast cancer microscopic biopsy data set with region of interest (ROI) segmented ground truth images. The data set contains 31 benign and 27 malignant images of size 896 × 768, with ROI ground truth available for all 58 images. Finally, the results obtained from the proposed approach are compared with those of popular segmentation algorithms: fuzzy c-means, color k-means, texture based segmentation, and total variation fuzzy c-means.
The experimental results show that the proposed approach provides better results in terms of various performance measures, such as the Jaccard coefficient, Dice index, Tanimoto coefficient, area under curve, accuracy, true positive rate, true negative rate, false positive rate, false negative rate, Rand index, global consistency error, and variation of information, compared with the other segmentation approaches used for cancer detection. Copyright © 2017 Elsevier B.V. All rights reserved.
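Several of the performance measures listed above reduce to simple overlap counts between the predicted and ground-truth masks. A minimal sketch for binary masks (the tiny arrays are illustrative, not from the biopsy data set):

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Jaccard and Dice coefficients for binary segmentation masks.

    Jaccard = |A & B| / |A | B|;  Dice = 2|A & B| / (|A| + |B|).
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union, 2.0 * inter / (pred.sum() + truth.sum())

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
jaccard, dice = overlap_metrics(pred, truth)
print(jaccard, dice)  # intersection 2, union 4 -> Jaccard 0.5, Dice 2/3
```

The two are monotonically related (Dice = 2J / (1 + J)), so rankings of segmentation methods usually agree between them; the remaining measures in the list are similarly derived from the confusion-matrix counts.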

  20. Estimating and Separating Noise from AIA Images

    NASA Astrophysics Data System (ADS)

    Kirk, Michael S.; Ireland, Jack; Young, C. Alex; Pesnell, W. Dean

    2016-10-01

    All digital images are corrupted by noise and SDO AIA is no different. In most solar imaging, we have the luxury of high photon counts and low background contamination, which, when combined with careful calibration, minimizes much of the impact noise has on the measurement. Outside high-intensity regions, such as in coronal holes, the noise component can become significant and complicate feature recognition and segmentation. We create a practical estimate of noise in the high-resolution AIA images across the detector CCD in all seven EUV wavelengths. A mixture of Poisson and Gaussian noise is well suited to the digital imaging environment due to the statistical distributions of photons and the characteristics of the CCD. Using state-of-the-art noise estimation techniques, the publicly available solar images, and coronal loop simulations, we construct a maximum-a-posteriori assessment of the error in these images. The estimation and mitigation of noise not only provides a clearer view of large-scale structure in the solar corona, but also provides physical constraints on fleeting EUV features observed with AIA.
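The Poisson-Gaussian mixture used above has a simple signature that noise-estimation techniques exploit: pixel variance grows linearly with signal, Var ≈ signal + σ_read². A hedged simulation sketch (generic CCD model with made-up numbers, not the AIA calibration):

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_ccd(signal, read_sigma):
    """Simulate a CCD pixel: Poisson photon noise plus Gaussian read noise."""
    return rng.poisson(signal) + rng.normal(0.0, read_sigma, np.shape(signal))

# For the Poisson-Gaussian mixture, Var(counts) = signal + read_sigma**2,
# so plotting variance against mean recovers both noise components.
read_sigma = 3.0
for level in (10.0, 100.0, 1000.0):
    samples = noisy_ccd(np.full(200_000, level), read_sigma)
    print(level, samples.var(), level + read_sigma**2)
```

Fitting that straight line over many intensity levels is, in essence, how the Poisson (slope) and Gaussian (intercept) contributions are separated before building a per-pixel noise map.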

  1. On the applicability of the standard kinetic theory to the study of nanoplasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Angola, A., E-mail: antonio.dangola@unibas.it; Boella, E.; GoLP/Instituto de Plasmas e Fusão Nuclear-Laboratório Associado, Instituto Superior Técnico, Avenida Rovisco Pais 1-1049-001 Lisboa

    Kinetic theory applies to systems with a large number of particles, while nanoplasmas generated by the interaction of ultra-short laser pulses with atomic clusters are systems composed of a relatively small number (10² to 10⁴) of electrons and ions. In the paper, the applicability of kinetic theory for studying nanoplasmas is discussed. In particular, two typical phenomena are investigated: the collisionless expansion of electrons in a spherical nanoplasma with immobile ions and the formation of shock shells during Coulomb explosions. The analysis, carried out by comparing ensemble averages obtained by solving the exact equations of motion with reference solutions of the Vlasov-Poisson model, shows that for the dynamics of the electrons the error of the usually employed models is of the order of a few percent (but the standard deviation in a single experiment can be of the order of 10%). Instead, special care must be taken in the study of shock formation, as the discrete structure of the electric charge can destroy or strongly modify the phenomenon.

  2. Weather radar equation and a receiver calibration based on a slice approach

    NASA Astrophysics Data System (ADS)

    Yurchak, B. S.

    2012-12-01

    Two circumstances are essential when exploiting radar measurement of precipitation. The first is a correct physical-mathematical model linking the parameters of the rainfall microstructure with the magnitude of the return signal (the weather radar equation, WRE). The second is a precise measurement of received power, which is achieved by calibrating the radar receiver. A WRE for a spatially extended geophysical target (SEGT), such as cloud or rain, has been derived based on the "slice" approach [1]. In this approach, the particles located close to the wavefront of the radar illumination are assumed to produce backscatter that is mainly coherent. This approach allows the contribution of the microphysical parameters of the scattering media to the radar cross section to be treated more comprehensively than in models based on the incoherent approach (e.g., the Probert-Jones equation, PJE). In the particular case when the particle number fluctuations within slices follow the Poisson law, the derived WRE reduces to the PJE. When the Poisson index (standard deviation / mean number of particles) of a slice deviates from 1, the deviation of the return power estimated by the PJE from the actual value varies from +8 dB to -12 dB. In general, the backscatter depends on the mean, variance, and third moment of the particle size distribution function (PSDF), whereas the incoherent approach assumes dependence only on the sixth moment of the PSDF (radar reflectivity Z). An additional difference from the classical estimate can be caused by correlation between slice field reflectivities [2]. Overall, deviation of the particle statistics of a slice from the Poisson law is one of the main physical factors contributing to errors in radar precipitation measurements based on the Z-conception. One component of the calibration error is caused by the difference between the processing by the weather radar receiver of the calibration pulse and of the actual return signal from the SEGT.
A receiver with a non-uniform amplitude-frequency response (AFR) processes these signals with the same input power but different radio-frequency spectra (RFS). This causes different output magnitudes due to the different distortions experienced as the RFS pass through the receiver filter. To assess the calibration error, the RFS of signals from SEGTs has been studied in theoretical, experimental, and simulation stages [3]. It is shown that the return signal carrier wave is phase modulated due to the overlapping of replicas of the RF probing pulse reflected from the SEGT's slices. The RFS depends on the phase statistics of the carrier wave and on the RFS of the probing pulse. The bandwidth of the SEGT's RFS is not greater than that of the probing pulse. The typical phase correlation interval was found to be about the same as the probing pulse duration. Application of a long calibration signal (proportional to the SEGT extension) causes an error of up to -1 dB for a conventional radar with a matched filter. To eliminate the calibration error, the power estimate of an individual return waveform should be corrected with a transformation loss coefficient calculated from the RFS and AFR parameters. To cover both the high- and low-frequency parts of a receiver in the calibration, the calibration should be performed with a long pulse composed of adjoining replicas of the probe pulse with random initial phases, each having the same magnitude governed by the power of the probe pulse.

  3. Zeroth Poisson Homology, Foliated Cohomology and Perfect Poisson Manifolds

    NASA Astrophysics Data System (ADS)

    Martínez-Torres, David; Miranda, Eva

    2018-01-01

    We prove that, for compact regular Poisson manifolds, the zeroth homology group is isomorphic to the top foliated cohomology group, and we give some applications. In particular, we show that, for regular unimodular Poisson manifolds, top Poisson and foliated cohomology groups are isomorphic. Inspired by the symplectic setting, we define what a perfect Poisson manifold is. We use these Poisson homology computations to provide families of perfect Poisson manifolds.

  4. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    PubMed

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance of the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the CO error amount, a range of error types was simulated and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology.
By modelling instrument imprecision and spatial variability as different error types, we estimate direction and magnitude of the effects of error over a range of error types.
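The attenuation from classical-type error can be reproduced with a small simulation: Poisson counts are generated from a true log-exposure, classical error is then added to the exposure, and refitting shows a shrunken slope. The Poisson regression below is a generic Newton (IRLS) fit with illustrative parameters, not the Atlanta analysis.

```python
import numpy as np

rng = np.random.default_rng(7)

def fit_poisson_glm(X, y, iters=25):
    """Poisson regression with log link, fitted by Newton's method."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)
        hess = X.T @ (mu[:, None] * X)
        beta += np.linalg.solve(hess, grad)
    return beta

n = 20_000
log_z_true = rng.normal(0.0, 1.0, n)               # true log-exposure
y = rng.poisson(np.exp(0.5 + 0.3 * log_z_true))    # daily counts

# Classical-type error: noise added to the TRUE exposure before analysis.
log_z_obs = log_z_true + rng.normal(0.0, 0.7, n)

X_true = np.column_stack([np.ones(n), log_z_true])
X_obs = np.column_stack([np.ones(n), log_z_obs])

b_true = fit_poisson_glm(X_true, y)[1]
b_obs = fit_poisson_glm(X_obs, y)[1]
print(b_true, b_obs)  # slope fitted with mismeasured exposure is attenuated
```

With Berkson-type error the noise would instead be added so that the observed value is the conditional mean of the true one, which is why its biases per unit of measurement run in the opposite direction.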

  5. Design and testing of focusing magnets for a compact electron linac

    NASA Astrophysics Data System (ADS)

    Chen, Qushan; Qin, Bin; Liu, Kaifeng; Liu, Xu; Fu, Qiang; Tan, Ping; Hu, Tongning; Pei, Yuanji

    2015-10-01

    Solenoid field errors strongly influence electron beam quality. In this paper, the design and testing of high-precision solenoids for a compact electron linac are presented. We propose an efficient and practical method, based on the reduced envelope equation, to solve for the peak field of the solenoid for relativistic electron beams. Beam dynamics simulations including the space-charge force were performed to predict the focusing effects. Detailed optimization methods were introduced to achieve an ultra-compact configuration as well as high accuracy, with the help of the POISSON and OPERA packages. Efforts were made to restrain systematic errors in the off-line testing, which showed that the short lens and the main solenoid produce peak fields of 0.13 T and 0.21 T, respectively. Data analysis involving the central and off axes was carried out and demonstrated that the test results fit the design well.

  6. Sampling errors in the measurement of rain and hail parameters

    NASA Technical Reports Server (NTRS)

    Gertzman, H. S.; Atlas, D.

    1977-01-01

    Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD^n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity) which are subject to statistical sampling errors: the number of particles sampled in each particle size interval fluctuates according to Poisson statistics, and the variance of the integral parameter is the weighted sum of the associated variances, each in proportion to its contribution to the parameter being measured. Universal curves are presented which are applicable to the exponential size distribution, permitting FSD estimation of any parameter from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall-speed law.
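
    For the exponential size distribution, the compound-Poisson algebra behind the FSD can be spot-checked by simulation. A hedged sketch (my own parameterization, with c = 1 and unit-mean diameters, not the paper's universal curves): for X(D) = D^n the result reduces to FSD^2 = (2n)!/(N(n!)^2) = C(2n, n)/N, where N is the mean particle count.

    ```python
    import numpy as np
    from math import comb, sqrt

    rng = np.random.default_rng(1)

    def fsd_monte_carlo(mean_count, n_exp, trials=40000):
        """FSD of X = sum(D_i**n_exp) over a Poisson(mean_count) number of particles,
        with diameters D drawn from a unit-mean exponential size distribution."""
        counts = rng.poisson(mean_count, trials)
        x = np.array([(rng.exponential(1.0, k) ** n_exp).sum() for k in counts])
        return x.std() / x.mean()

    mean_count, n_exp = 100, 2      # e.g. a property proportional to D^2
    fsd_mc = fsd_monte_carlo(mean_count, n_exp)
    fsd_th = sqrt(comb(2 * n_exp, n_exp) / mean_count)   # compound-Poisson result
    # fsd_mc agrees with fsd_th to within Monte Carlo noise
    ```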

  7. Hamiltonian structure of the Lotka-Volterra equations

    NASA Astrophysics Data System (ADS)

    Nutku, Y.

    1990-03-01

    The Lotka-Volterra equations governing predator-prey relations are shown to admit Hamiltonian structure with respect to a generalized Poisson bracket. These equations provide an example of a system for which the naive criterion for the existence of Hamiltonian structure fails. We show further that there is a three-component generalization of the Lotka-Volterra equations which is a bi-Hamiltonian system.
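
    Nutku's generalized Poisson bracket is not reproduced here, but the Hamiltonian character of the classical two-species system can be checked numerically: for dx/dt = x(a - by), dy/dt = y(cx - d), the quantity V = cx - d ln x + by - a ln y is conserved along trajectories. A sketch with illustrative parameters:

    ```python
    import numpy as np

    a, b, c, d = 1.0, 0.5, 0.5, 1.0   # illustrative predator-prey parameters

    def rhs(s):
        x, y = s
        return np.array([x * (a - b * y), y * (c * x - d)])

    def rk4_step(s, h):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * h * k1)
        k3 = rhs(s + 0.5 * h * k2)
        k4 = rhs(s + h * k3)
        return s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    def invariant(s):
        x, y = s
        return c * x - d * np.log(x) + b * y - a * np.log(y)

    s = np.array([1.5, 1.0])
    v0 = invariant(s)
    drift = 0.0
    for _ in range(20000):            # integrate to t = 20 with step h = 1e-3
        s = rk4_step(s, 1e-3)
        drift = max(drift, abs(invariant(s) - v0))
    # drift stays tiny: the orbit conserves the invariant, as a Hamiltonian flow should
    ```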

  8. Multiparameter linear least-squares fitting to Poisson data one count at a time

    NASA Technical Reports Server (NTRS)

    Wheaton, Wm. A.; Dunklee, Alfred L.; Jacobsen, Allan S.; Ling, James C.; Mahoney, William A.; Radocinski, Robert G.

    1995-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multicomponent linear model, with underlying physical count rates or fluxes which are to be estimated from the data. Despite its conceptual simplicity, the linear least-squares (LLSQ) method for solving this problem has generally been limited to situations in which the number n(sub i) of counts in each bin i is not too small, conventionally more than 5-30. It seems to be widely believed that the failure of the LLSQ method for small counts is due to the failure of the Poisson distribution to be even approximately normal for small numbers. The cause is more accurately the strong anticorrelation between the data and the weights w(sub i) in the weighted LLSQ method when square root of n(sub i) instead of square root of bar-n(sub i) is used to approximate the uncertainties, sigma(sub i), in the data, where bar-n(sub i) = E(n(sub i)), the expected value of n(sub i). We show in an appendix that, avoiding this approximation, the correct equations for the Poisson LLSQ (PLLSQ) problem are actually identical to those for the maximum likelihood estimate using the exact Poisson distribution. We apply the method to solve a problem in high-resolution gamma-ray spectroscopy for the JPL High-Resolution Gamma-Ray Spectrometer flown on HEAO 3. Systematic error in subtracting the strong, highly variable background encountered in the low-energy gamma-ray region can be significantly reduced by closely pairing source and background data in short segments. Significant results can be built up by weighted averaging of the net fluxes obtained from the subtraction of many individual source/background pairs. 
Extension of the approach to complex situations, with multiple cosmic sources and realistic background parameterizations, requires a means of efficiently fitting to data from single scans in the narrow (approximately 1.2 keV for HEAO 3) energy channels of a Ge spectrometer, where the expected number of counts obtained per scan may be very low. Such an analysis system is discussed and compared to the method previously used.
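
    The weight anticorrelation described above is easy to reproduce in a toy constant-rate fit. A hedged numpy sketch (invented rate, not the HEAO 3 data): weighting bins by 1/n_i, i.e. taking sigma_i ~ sqrt(n_i), biases the estimate low, while the plain mean, which is the maximum-likelihood and exact PLLSQ answer for this model, is unbiased.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    true_rate = 20.0
    n = rng.poisson(true_rate, 100000).astype(float)   # observed counts per bin
    n = n[n > 0]    # guard against the (vanishingly rare) empty bin at this rate

    # Naive weighted LLSQ for a constant rate, taking sigma_i ~ sqrt(n_i):
    # minimizing sum_i (n_i - mu)^2 / n_i yields mu_hat = N / sum_i (1 / n_i).
    w = 1.0 / n
    mu_weighted = (w * n).sum() / w.sum()

    # Maximum-likelihood estimate (identical to the exact PLLSQ solution here):
    mu_ml = n.mean()
    # mu_weighted comes out roughly one count low; mu_ml is unbiased
    ```

    The downward bias of about one count per bin persists no matter how many bins are averaged, which is why the data-weight anticorrelation, not non-normality, is the real culprit at low counts.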

  9. Six-component semi-discrete integrable nonlinear Schrödinger system

    NASA Astrophysics Data System (ADS)

    Vakhnenko, Oleksiy O.

    2018-01-01

    We suggest a six-component integrable nonlinear system on a quasi-one-dimensional lattice. Due to its symmetrical form, the general system permits a number of reductions; one of these, treated as the semi-discrete integrable nonlinear Schrödinger system on a lattice with three structural elements in the unit cell, is considered in considerable detail. Besides six truly independent basic field variables, the system is characterized by four concomitant fields whose background values produce three additional types of inter-site resonant interactions between the basic fields. As a result, the system dynamics becomes associated with a highly nonstandard form of Poisson structure. The elementary Poisson brackets between all field variables are calculated and presented explicitly. The richness of the system dynamics is demonstrated on a multi-component soliton solution written in terms of properly parameterized soliton characteristics.

  10. Unique Zigzag-Shaped Buckling Zn2C Monolayer with Strain-Tunable Band Gap and Negative Poisson Ratio.

    PubMed

    Meng, Lingbiao; Zhang, Yingjuan; Zhou, Minjie; Zhang, Jicheng; Zhou, Xiuwen; Ni, Shuang; Wu, Weidong

    2018-02-19

    Designing new materials with reduced dimensionality and distinguished properties has continuously attracted intense interest for materials innovation. Here we report a novel two-dimensional (2D) Zn2C monolayer nanomaterial with exceptional structure and properties by means of first-principles calculations. This new Zn2C monolayer is composed of quasi-tetrahedral tetracoordinate carbon and quasi-linear bicoordinate zinc, featuring a peculiar zigzag-shaped buckling configuration. The unique coordination topology endows this natural 2D semiconducting monolayer with a strongly strain-tunable band gap and unusual negative Poisson ratios. The monolayer has good dynamic and thermal stabilities and is also the lowest-energy 2D structure indicated by the particle-swarm optimization (PSO) method, implying its synthetic feasibility. With these intriguing properties, the material may find applications in nanoelectronics and micromechanics.

  11. Reconstructing Information in Large-Scale Structure via Logarithmic Mapping

    NASA Astrophysics Data System (ADS)

    Szapudi, Istvan

    We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; before, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking, the inverse error bar) is not increasing. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear dark matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point that it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear or non-linear, deterministic or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology. 
Our aim will be to work out practical methods, with the ultimate goal of cosmological parameter estimation. We will quantify with standard MCMC and Fisher methods (including the DETF figure of merit when applicable) the efficiency of our estimators, comparing with the conventional method, which uses the untransformed field. Preliminary results indicate that the increase in the DETF figure of merit for NASA's WFIRST would be 1.5-4.2, for a range of pessimistic to optimistic assumptions, respectively.
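
    The core log-mapping idea can be illustrated on a toy lognormal field, a common stand-in for the non-linear density field (the parameters below are invented): the raw field is strongly skewed, while its logarithm is Gaussian by construction.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def skewness(x):
        d = x - x.mean()
        return (d ** 3).mean() / (d ** 2).mean() ** 1.5

    # toy non-linear "density field": delta = exp(g) with Gaussian g (lognormal model)
    g = rng.normal(0.0, 1.0, 500000)
    field = np.exp(g)

    skew_raw = skewness(field)          # strongly positive: heavy right tail
    skew_log = skewness(np.log(field))  # near zero: the mapping restores Gaussianity
    ```

    For a Gaussianized field, two-point statistics again capture most of the information, which is the intuition behind the reconstruction claimed above.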

  12. DISCRETE COMPOUND POISSON PROCESSES AND TABLES OF THE GEOMETRIC POISSON DISTRIBUTION.

    DTIC Science & Technology

    A concise summary of the salient properties of discrete Poisson processes, with emphasis on comparing the geometric and logarithmic Poisson processes. The...the geometric Poisson process are given for 176 sets of parameter values. New discrete compound Poisson processes are also introduced. These...processes have properties that are particularly relevant when the summation of several different Poisson processes is to be analyzed. This study provides the
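
    As a hedged illustration of the geometric Poisson (Pólya-Aeppli) construction mentioned above, with invented parameters: a Poisson-distributed number of geometric summands has mean λ/p and variance λ(2 - p)/p² by the compound-Poisson formulas.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    lam, p = 4.0, 0.4                 # illustrative parameters
    trials = 100000

    counts = rng.poisson(lam, trials)
    # numpy's geometric is supported on {1, 2, ...}, matching the summand convention
    samples = np.array([rng.geometric(p, k).sum() for k in counts])

    mean_th = lam / p                 # 10.0
    var_th = lam * (2.0 - p) / p**2   # 40.0
    # samples.mean() and samples.var() match the compound-Poisson formulas
    ```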

  13. Developing descriptors to predict mechanical properties of nanotubes.

    PubMed

    Borders, Tammie L; Fonseca, Alexandre F; Zhang, Hengji; Cho, Kyeongjae; Rusinko, Andrew

    2013-04-22

    Descriptors and quantitative structure property relationships (QSPRs) were investigated for mechanical property prediction of carbon nanotubes (CNTs). 78 molecular dynamics (MD) simulations were carried out, and 20 descriptors were calculated to build QSPRs for Young's modulus and Poisson's ratio in two separate analyses: vacancy only and vacancy plus methyl functionalization. In the first analysis, C(N2)/C(T) (number of non-sp2 hybridized carbons per total carbons) and chiral angle were identified as critical descriptors for both Young's modulus and Poisson's ratio. Further analysis and literature findings indicate the effect of chiral angle is negligible at larger CNT radii for both properties. Raman spectroscopy can be used to measure C(N2)/C(T), providing a direct link between experimental and computational results. Poisson's ratio approaches two different limiting values as CNT radius increases: 0.23-0.25 for chiral and armchair CNTs and 0.10 for zigzag CNTs (surface defects <3%). In the second analysis, the critical descriptors were C(N2)/C(T), chiral angle, and M(N)/C(T) (number of methyl groups per total carbons). These results imply that new types of defects can be represented as new descriptors in QSPR models. Finally, results are qualified and quantified against experimental data.

  14. Evaluating for a geospatial relationship between radon levels and thyroid cancer in Pennsylvania.

    PubMed

    Goyal, Neerav; Camacho, Fabian; Mangano, Joseph; Goldenberg, David

    2015-01-01

    To determine whether there is an association between radon levels and the rise in incidence of thyroid cancer in Pennsylvania. Epidemiological study of the state of Pennsylvania. We used information from the Pennsylvania Cancer Registry and the Pennsylvania Department of Energy. From the registry, information regarding thyroid cancer incidence by county and zip code was recorded. Information regarding radon levels per county was recorded from the state. Poisson regression models were fit predicting county-level thyroid cancer incidence and change as a function of radon/lagged radon levels. To account for measurement error in the radon levels, a Bayesian model extending the Poisson models was fit. Geospatial clustering analysis was also performed. No association was noted between cumulative radon levels and thyroid cancer incidence. In the Poisson modeling, no significant association was noted between county radon level and thyroid cancer incidence (P = .23). Looking for a lag between radon exposure and its effect, no significant effect was seen with a lag of 0 to 6 years (P = .063 to P = .59). The Bayesian models also failed to show a statistically significant association. A cluster of high thyroid cancer incidence was found in western Pennsylvania. Through a variety of models, no association was elicited between annual radon levels recorded in Pennsylvania and the rising incidence of thyroid cancer. However, a cluster of thyroid cancer incidence was found in western Pennsylvania. Further studies may be helpful in looking for other exposures or associations. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  15. A comparison of methods for the analysis of binomial clustered outcomes in behavioral research.

    PubMed

    Ferrari, Alberto; Comelli, Mario

    2016-12-01

    In behavioral research, data consisting of a per-subject proportion of "successes" and "failures" over a finite number of trials often arise. These clustered binary data are usually non-normally distributed, which can distort inference if the usual general linear model is applied and the sample size is small. A number of more advanced methods are available, but they are often technically challenging, and a comparative assessment of their performance in behavioral setups has not been performed. We studied the performance of some methods applicable to the analysis of proportions, namely linear regression, Poisson regression, beta-binomial regression and generalized linear mixed models (GLMMs). We report on a simulation study evaluating power and Type I error rate of these models in hypothetical scenarios met by behavioral researchers; in addition, we describe results from the application of these methods on data from real experiments. Our results show that, while GLMMs are powerful instruments for the analysis of clustered binary outcomes, beta-binomial regression can outperform them in a range of scenarios. Linear regression gave results consistent with the nominal level of significance, but was overall less powerful. Poisson regression, instead, mostly led to anticonservative inference. GLMMs and beta-binomial regression are generally more powerful than linear regression; yet linear regression is robust to model misspecification in some conditions, whereas Poisson regression suffers heavily from violations of the assumptions when used to model proportion data. We conclude by providing directions to behavioral scientists dealing with clustered binary data and small sample sizes. Copyright © 2016 Elsevier B.V. All rights reserved.
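
    The overdispersion that makes plain binomial (and, worse, Poisson) treatment of clustered proportions anticonservative is easy to simulate. A sketch with invented parameters: drawing per-subject success probabilities from a Beta inflates the count variance above the binomial value by the factor 1 + (n - 1)/(a + b + 1).

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    a, b, n_trials, subjects = 2.0, 3.0, 20, 100000

    p_i = rng.beta(a, b, subjects)              # per-subject success probability
    y = rng.binomial(n_trials, p_i)             # per-subject success count

    p_bar = a / (a + b)                         # 0.4
    var_binom = n_trials * p_bar * (1 - p_bar)  # 4.8 if the data were plain binomial
    infl = 1 + (n_trials - 1) / (a + b + 1)     # beta-binomial inflation, ~4.17
    # y.var() lands near var_binom * infl, far above the naive binomial value
    ```

    A model that assumes the un-inflated variance will understate standard errors by roughly the square root of this factor, which is the mechanism behind the anticonservative inference reported above.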

  16. Experimental study of evaluation of mechanical parameters of heterogeneous porous structure

    NASA Astrophysics Data System (ADS)

    Gerasimov, O.; Koroleva, E.; Sachenkov, O.

    2017-06-01

    The paper deals with the problem of determining the mechanical macroparameters of a porous material when information about its structure is known. A fabric tensor and the porosity were used to describe the structure of the material. An experimental study is presented. In this research, a two-component cold-curing liquid polyurethane plastic, Lasilcast (Lc-12), was used. The samples were then scanned by computed tomography and the resulting data were analyzed. A regular subvolume was cut out after the analysis, and mechanical tests were then performed. As a result, we obtained the fabric tensor, porosity, Young's modulus and Poisson ratio of each sample. Results for several samples are presented. Taking into account the law of porosity variation, we considered the problem of evaluating the mechanical macroparameters depending on the nature of the porous structure. To evaluate the macroparameters, we built the dependence of the Young's modulus and Poisson ratio of the material on the rotation angle α and the pore ellipticity parameter λ. The sensitivity of the deformations to the elastic constants was also estimated.

  17. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks IV: structuring synaptic pathways among recurrent connections.

    PubMed

    Gilson, Matthieu; Burkitt, Anthony N; Grayden, David B; Thomas, Doreen A; van Hemmen, J Leo

    2009-12-01

    In neuronal networks, the changes of synaptic strength (or weight) performed by spike-timing-dependent plasticity (STDP) are hypothesized to give rise to functional network structure. This article investigates how this phenomenon occurs for the excitatory recurrent connections of a network with fixed input weights that is stimulated by external spike trains. We develop a theoretical framework based on the Poisson neuron model to analyze the interplay between the neuronal activity (firing rates and the spike-time correlations) and the learning dynamics, when the network is stimulated by correlated pools of homogeneous Poisson spike trains. STDP can lead to both a stabilization of all the neuron firing rates (homeostatic equilibrium) and a robust weight specialization. The pattern of specialization for the recurrent weights is determined by a relationship between the input firing-rate and correlation structures, the network topology, the STDP parameters and the synaptic response properties. We find conditions for feed-forward pathways or areas with strengthened self-feedback to emerge in an initially homogeneous recurrent network.
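
    A minimal sketch of the two ingredients named above: homogeneous Poisson spike trains and a pairwise additive STDP rule with exponential windows. The parameters and the all-to-all pairing scheme are illustrative, not those of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def poisson_train(rate_hz, duration_s):
        """Homogeneous Poisson spike train via exponential inter-spike intervals."""
        t, spikes = 0.0, []
        while True:
            t += rng.exponential(1.0 / rate_hz)
            if t > duration_s:
                return np.array(spikes)
            spikes.append(t)

    def stdp_dw(pre, post, a_plus=0.01, a_minus=0.012, tau=0.020):
        """Total weight change over all pre/post spike pairs (additive, all-to-all)."""
        dt = post[:, None] - pre[None, :]             # post-minus-pre time differences
        dw = np.where(dt > 0, a_plus * np.exp(-dt / tau),
                      -a_minus * np.exp(dt / tau))
        return dw.sum()

    pre = poisson_train(20.0, 5.0)
    dw_pot = stdp_dw(pre, pre + 0.005)   # post follows pre by 5 ms -> net potentiation
    dw_dep = stdp_dw(pre, pre - 0.005)   # post precedes pre by 5 ms -> net depression
    ```

    Causal pre-before-post timing strengthens the synapse and the reverse weakens it, the basic asymmetry from which the weight specialization discussed above emerges.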

  18. Method for resonant measurement

    DOEpatents

    Rhodes, G.W.; Migliori, A.; Dixon, R.D.

    1996-03-05

    A method of measurement of objects to determine object flaws, Poisson's ratio (σ) and shear modulus (μ) is shown and described. First, the frequency for expected degenerate responses is determined for one or more input frequencies, and then the splitting of degenerate resonant modes is observed to identify the presence of flaws in the object. Poisson's ratio and the shear modulus can be determined by identification of resonances dependent only on the shear modulus, and then using that shear modulus to find Poisson's ratio using other modes dependent on both the shear modulus and Poisson's ratio. 1 fig.

  19. Hole-ness of point clouds

    NASA Astrophysics Data System (ADS)

    Gronz, Oliver; Seeger, Manuel; Klaes, Björn; Casper, Markus C.; Ries, Johannes B.

    2015-04-01

    Accurate and dense 3D models of soil surfaces can be used in various ways: as initial shapes for erosion models, as benchmark shapes for erosion model outputs, and to derive metrics such as random roughness. One easy and low-cost method to produce these models is structure from motion (SfM). Using this method, two questions arise: Does the soil moisture, which changes the colour, albedo and reflectivity of the soil, influence the model quality? How can the model quality be evaluated? To answer these questions, a suitable data set has been produced: soil has been placed on a tray and areas with different roughness structures have been formed. For different moisture states (dry, medium, saturated) and two different lighting conditions (direct and indirect), sets of high-resolution images at the same camera positions have been taken. From the six image sets, 3D point clouds have been produced using VisualSfM. The visual inspection of the 3D models showed that all models have different areas where holes of different sizes occur. But determining a model's quality by visual inspection is obviously a subjective task. One typical approach to evaluate model quality objectively is to estimate the point density on a regular, two-dimensional grid: the number of 3D points in each grid cell projected onto a plane is calculated. This works well for surfaces that do not show vertical structures. Along vertical structures, however, many points are projected onto the same grid cell, and the point density then depends more on the shape of the surface than on the quality of the model. Another approach has been applied using the points resulting from Poisson surface reconstruction. One of this algorithm's properties is the filling of holes: new points are interpolated inside the holes. 
Using the original 3D point cloud and the interpolated Poisson point set, two analyses have been performed. For all Poisson points, the distance to the closest member of the original point cloud has been calculated; for the resulting set of distances, histograms have been produced that show the distribution of point distances. As the Poisson points also make up a connected mesh, the size and distribution of individual holes can also be estimated by labeling Poisson points that belong to the same hole: each hole gets a specific number. Afterwards, the area of the mesh formed by each set of Poisson hole points can be calculated. The result is a set of distinct holes and their sizes. The two approaches showed that the hole-ness of the point cloud depends on the soil moisture and hence on the reflectivity: the distance distribution of the model of the saturated soil shows the smallest number of large distances, the histogram of the medium state shows more large distances, and the dry model shows the largest distances. Models resulting from indirect lighting are better than models resulting from direct light for all moisture states.
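
    The first analysis above, the distance from each Poisson-reconstructed point to its nearest original point, reduces to a nearest-neighbour query. A brute-force numpy sketch with synthetic stand-in clouds (a real pipeline would use a k-d tree for speed):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def nearest_distances(query_pts, ref_pts, block=500):
        """Distance from each query point to its closest reference point (brute force)."""
        out = np.empty(len(query_pts))
        for i in range(0, len(query_pts), block):   # block to bound memory use
            q = query_pts[i:i + block]
            d2 = ((q[:, None, :] - ref_pts[None, :, :]) ** 2).sum(axis=-1)
            out[i:i + block] = np.sqrt(d2.min(axis=1))
        return out

    original = rng.uniform(0, 1, (2000, 3))      # stand-in for the SfM point cloud
    poisson_pts = rng.uniform(0, 1, (1000, 3))   # stand-in for interpolated Poisson points

    d = nearest_distances(poisson_pts, original)
    hist, edges = np.histogram(d, bins=20)
    # many counts at large d would indicate large holes in the original cloud
    ```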

  20. Negative Binomial Process Count and Mixture Modeling.

    PubMed

    Zhou, Mingyuan; Carin, Lawrence

    2015-02-01

    The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
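
    The gamma-Poisson construction at the heart of the NB process can be checked in miniature (illustrative parameters, not the paper's models): mixing the Poisson rate through a gamma yields negative binomial counts whose variance exceeds the mean by the dispersion.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    r, p = 3.0, 0.25
    size = 200000

    lam = rng.gamma(shape=r, scale=(1 - p) / p, size=size)  # gamma-distributed rates
    y = rng.poisson(lam)                                    # Poisson draw given the rate

    mean_th = r * (1 - p) / p       # 9.0, the negative binomial mean
    var_th = r * (1 - p) / p**2     # 36.0: variance well above the mean
    # y.mean() ~ mean_th and y.var() ~ var_th, i.e. y is NB(r, p)
    ```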

  1. Assessment of Linear Finite-Difference Poisson-Boltzmann Solvers

    PubMed Central

    Wang, Jun; Luo, Ray

    2009-01-01

    CPU time and memory usage are two vital issues that any numerical solver for the Poisson-Boltzmann equation has to face in biomolecular applications. In this study we systematically analyzed the CPU time and memory usage of five commonly used finite-difference solvers with a large and diversified set of biomolecular structures. Our comparative analysis shows that modified incomplete Cholesky conjugate gradient and geometric multigrid are the most efficient in the diversified test set. For the two efficient solvers, our test shows that their CPU times increase approximately linearly with the number of grid points. Their CPU times also increase almost linearly with the negative logarithm of the convergence criterion, at very similar rates. Our comparison further shows that geometric multigrid performs better in the large set of tested biomolecules. However, modified incomplete Cholesky conjugate gradient is superior to geometric multigrid in molecular dynamics simulations of tested molecules. We also investigated other significant components in numerical solutions of the Poisson-Boltzmann equation. It turns out that the time-limiting step is the free boundary condition setup for the linear systems for the selected proteins if electrostatic focusing is not used. Thus, development of future numerical solvers for the Poisson-Boltzmann equation should balance all aspects of the numerical procedures in realistic biomolecular applications. PMID:20063271

  2. Poisson's ratio over two centuries: challenging hypotheses

    PubMed Central

    Greaves, G. Neville

    2013-01-01

    This article explores Poisson's ratio, starting with the controversy concerning its magnitude and uniqueness in the context of the molecular and continuum hypotheses competing in the development of elasticity theory in the nineteenth century, moving on to its place in the development of materials science and engineering in the twentieth century, and concluding with its recent re-emergence as a universal metric for the mechanical performance of materials on any length scale. During these episodes France lost its scientific pre-eminence as paradigms switched from mathematical to observational, and accurate experiments became the prerequisite for scientific advance. The emergence of the engineering of metals followed, and subsequently the invention of composites—both somewhat separated from the discovery of quantum mechanics and crystallography, and illustrating the bifurcation of technology and science. Nowadays disciplines are reconnecting in the face of new scientific demands. During the past two centuries, though, the shape versus volume concept embedded in Poisson's ratio has remained invariant, but its application has exploded from its origins in describing the elastic response of solids and liquids, into areas such as materials with negative Poisson's ratio, brittleness, glass formation, and a re-evaluation of traditional materials. Moreover, the two contentious hypotheses have been reconciled in their complementarity within the hierarchical structure of materials and through computational modelling. PMID:24687094

  3. Crustal structure in Ethiopia and Kenya from receiver function analysis: Implications for rift development in eastern Africa

    NASA Astrophysics Data System (ADS)

    Dugda, Mulugeta T.; Nyblade, Andrew A.; Julia, Jordi; Langston, Charles A.; Ammon, Charles J.; Simiyu, Silas

    2005-01-01

    Crustal structure in Kenya and Ethiopia has been investigated using receiver function analysis of broadband seismic data to determine the extent to which Cenozoic rifting and magmatism have modified the thickness and composition of the Proterozoic crust in which the East African rift system developed. Data for this study come from broadband seismic experiments conducted in Ethiopia between 2000 and 2002 and in Kenya between 2001 and 2002. Two methods, the H-κ method and direct stacking of the waveforms, have been used to analyze the receiver functions, yielding consistent results. Crustal thickness to the east of the Kenya rift varies between 39 and 42 km, and Poisson's ratios for the crust vary between 0.24 and 0.27. To the west of the Kenya rift, Moho depths vary between 37 and 38 km, and Poisson's ratios vary between 0.24 and 0.27. These findings support previous studies showing that crust away from the Kenya rift has not been modified extensively by Cenozoic rifting and magmatism. Beneath the Ethiopian Plateau on either side of the Main Ethiopian Rift, crustal thickness ranges from 33 to 44 km, and Poisson's ratios vary from 0.23 to 0.28. Within the Main Ethiopian Rift, Moho depths vary from 27 to 38 km, and Poisson's ratios range from 0.27 to 0.35. A crustal thickness of 25 km and a Poisson's ratio of 0.36 were obtained for a single station in the Afar Depression. These results indicate that the crust beneath the Ethiopian Plateau has not been modified significantly by the Cenozoic rifting and magmatism, even though up to a few kilometers of flood basalts have been added, and that the crust beneath the rifted regions in Ethiopia has been thinned in many places and extensively modified by the addition of mafic rock. 
The latter finding is consistent with models for rift evolution, suggesting that magmatic segments within the Main Ethiopian Rift, characterized by dike intrusion and Quaternary volcanism, now act as the locus of extension rather than the rift border faults.

  4. Auxetics in smart systems and structures 2013

    NASA Astrophysics Data System (ADS)

    Scarpa, Fabrizio; Ruzzene, Massimo; Alderson, Andrew; Wojciechowski, Krzysztof W.

    2013-08-01

    Auxetics comes from the Greek auxetikos, meaning 'that which tends to expand'. The term indicates specifically materials and structures with negative Poisson's ratio (NPR). Although the Poisson's ratio is a mechanical property, auxetic solids have shown evidence of multifunctional characteristics, ranging from increased stiffness and indentation resistance, to energy absorption under static and dynamic loading, soundproofing qualities and dielectric tangent loss. NPR solids and structures have also been used in the past as material platforms to build smart structural systems. Auxetics in general can be considered also a part of the 'negative materials' field, which includes solids and structures exhibiting negative thermal expansion, negative stiffness and compressibility. All these unusual deformation characteristics have the potential to provide a significant contribution to the area of smart materials systems and structures. In this focus issue, we are pleased to present some examples of novel multifunctional behaviors provided by auxetic, negative stiffness and negative compressibility in smart systems and structures. Particular emphasis has been placed upon the multidisciplinary and systems approach provided by auxetics and negative materials, also with examples applied to energy absorption, vibration damping, structural health monitoring and active deployment aspects. Three papers in this focus issue provide significant new clarifications on the role of auxeticity in the mechanical behavior of shear deformation in plates (Lim), stress wave characteristics (Lim again), and thermoelastic damping (Maruszewski et al.). Kochmann and Venturini describe the performance of auxetic composites in finite strain elasticity. New types of microstructures for auxetic systems are depicted for the first time in three works by Ge et al., Zhang et al., and Kim and co-workers. 
Tubular auxetic structures and their mechanical performance are also analyzed by Karnessis and Burriesci. Foams with negative Poisson's ratio constitute one of the main examples of auxetic materials available. The focus issue presents two papers on this topic, one on a novel microstructure numerical modeling technique (Pozniak et al.), the other on experimental and model identification results of linear and nonlinear vibration behavior (Bianchi and Scarpa). Nonlinearity (now in wave propagation for SHM applications) is also investigated by Klepka and co-workers, this time in auxetic chiral sandwich structures. Vibration damping and nonlinear behavior is also a key feature of the auxetic structural damper with metal rubber particles proposed by Ma et al. Papers on negative material properties are introduced by the negative stiffness and high-frequency damper concept proposed by Kalathur and Lakes. A cellular structure exhibiting a zero Poisson's ratio, together with zero and negative stiffness, is presented in the work of Virk and co-workers. Negative compressibility is examined by Grima et al. in truss-type structures with constrained angle stretching. Finally, Grima and co-workers propose a concept of tunable auxetic metamaterial with magnetic inclusions for multifunctional applications. Acknowledgments: We would like to thank all the authors for their high quality contributions. Special thanks go also to the Smart Materials and Structures Editorial Board and the IOP Publishing team, with particular mention to Natasha Leeper and Bethan Davies for their continued support in arranging this focus issue in Smart Materials and Structures.

  5. Wavelets, ridgelets, and curvelets for Poisson noise removal.

    PubMed

    Zhang, Bo; Fadili, Jalal M; Starck, Jean-Luc

    2008-07-01

    In order to denoise Poisson count data, we introduce a variance-stabilizing transform (VST) applied to a filtered discrete Poisson process, yielding a near-Gaussian process with asymptotically constant variance. This new transform, which can be viewed as an extension of the Anscombe transform to filtered data, is simple, fast, and efficient in (very) low-count situations. We combine this VST with the filter banks of wavelets, ridgelets and curvelets, leading to multiscale VSTs (MS-VSTs) and nonlinear decomposition schemes. By doing so, the noise-contaminated coefficients of these MS-VST-modified transforms are asymptotically normally distributed with known variances. A classical hypothesis-testing framework is adopted to detect the significant coefficients, and a sparsity-driven iterative scheme properly reconstructs the final estimate. A range of examples shows the power of this MS-VST approach for recovering important structures of various morphologies in (very) low-count images. These results also demonstrate that the MS-VST approach is competitive with many existing denoising methods.
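
    The Anscombe transform that the MS-VST extends maps a Poisson count x to 2√(x + 3/8), whose variance is approximately one for moderate intensities. The following sketch (illustrative only, not the authors' code) checks this empirically with a standard-library Poisson sampler:

```python
import math
import random

def poisson_sample(lam, rng):
    """One Poisson(lam) draw via Knuth's product-of-uniforms method."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def anscombe(x):
    """Anscombe variance-stabilizing transform for Poisson counts."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

rng = random.Random(0)
raw = [poisson_sample(20.0, rng) for _ in range(20000)]
stab = [anscombe(x) for x in raw]
m = sum(stab) / len(stab)
var = sum((s - m) ** 2 for s in stab) / (len(stab) - 1)
print(round(var, 2))  # near 1, whereas the raw counts have variance near 20
```

The variance of the transformed counts is stabilized to roughly one independently of the Poisson mean, which is what lets a Gaussian hypothesis-testing framework be applied to the transformed coefficients.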

  6. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
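
    The sampling procedure the construction starts from, N i.i.d. Pareto(α) variables normalized by their sum, can be sketched directly with the standard library (`random.paretovariate` draws a Pareto variate with shape α); this is only the first step of the model, not the coalescent itself:

```python
import random

def normalized_pareto_frequencies(n, alpha, rng):
    """Draw n i.i.d. Pareto(alpha) variables and normalize by their sum,
    yielding a random probability vector of sampling weights."""
    xs = [rng.paretovariate(alpha) for _ in range(n)]
    total = sum(xs)
    return [x / total for x in xs]

rng = random.Random(1)
freqs = normalized_pareto_frequencies(100, 1.5, rng)
print(round(sum(freqs), 6))  # a valid frequency vector: sums to 1
```

For heavy-tailed shapes (small α) a few normalized weights tend to dominate the vector, which is the mechanism behind the multiple-merger coalescent limits described above.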

  7. Stochastic and Deterministic Models for the Metastatic Emission Process: Formalisms and Crosslinks.

    PubMed

    Gomez, Christophe; Hartung, Niklas

    2018-01-01

    Although the detection of metastases radically changes prognosis of and treatment decisions for a cancer patient, clinically undetectable micrometastases hamper a consistent classification into localized or metastatic disease. This chapter discusses mathematical modeling efforts that could help to estimate the metastatic risk in such a situation. We focus on two approaches: (1) a stochastic framework describing metastatic emission events at random times, formalized via Poisson processes, and (2) a deterministic framework describing the micrometastatic state through a size-structured density function in a partial differential equation model. Three aspects are addressed in this chapter. First, a motivation for the Poisson process framework is presented and modeling hypotheses and mechanisms are introduced. Second, we extend the Poisson model to account for secondary metastatic emission. Third, we highlight an inherent crosslink between the stochastic and deterministic frameworks and discuss its implications. For increased accessibility the chapter is split into an informal presentation of the results using a minimum of mathematical formalism and a rigorous mathematical treatment for more theoretically interested readers.
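
    The core object of the stochastic framework, metastatic emission at random times, is a Poisson process; its event times can be simulated by accumulating exponential inter-arrival gaps. The sketch below assumes a homogeneous (constant-intensity) process for simplicity, whereas the chapter's models generally use growth-dependent intensities:

```python
import random

def poisson_process_times(rate, horizon, rng):
    """Event times of a homogeneous Poisson process with the given rate on
    [0, horizon], built from exponential inter-arrival gaps."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

rng = random.Random(0)
counts = [len(poisson_process_times(2.0, 50.0, rng)) for _ in range(200)]
print(sum(counts) / len(counts))  # mean event count near rate * horizon = 100
```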

  8. Properties of the Bivariate Delayed Poisson Process

    DTIC Science & Technology

    1974-07-01

    and Lewis (1972) in their Berkeley Symposium paper and here their analysis of the bivariate Poisson processes (without Poisson noise) is carried... Poisson processes. They cannot, however, be independent Poisson processes because their events are associated in pairs by the displacement centres... process because its marginal processes for events of each type are themselves (univariate) Poisson processes. Cox and Lewis (1972) assumed a

  9. Experimental micro mechanics methods for conventional and negative Poisson's ratio cellular solids as Cosserat continua

    NASA Technical Reports Server (NTRS)

    Lakes, R.

    1991-01-01

    Continuum representations of micromechanical phenomena in structured materials are described, with emphasis on cellular solids. These phenomena are interpreted in light of Cosserat elasticity, a generalized continuum theory which admits degrees of freedom not present in classical elasticity. These are the rotation of points in the material, and a couple per unit area or couple stress. Experimental work in this area is reviewed, and other interpretation schemes are discussed. The applicability of Cosserat elasticity to cellular solids and fibrous composite materials is considered as is the application of related generalized continuum theories. New experimental results are presented for foam materials with negative Poisson's ratios.

  10. Closedness of orbits in a space with SU(2) Poisson structure

    NASA Astrophysics Data System (ADS)

    Fatollahi, Amir H.; Shariati, Ahmad; Khorrami, Mohammad

    2014-06-01

    The closedness of orbits of central forces is addressed in a three-dimensional space in which the Poisson bracket among the coordinates is that of the SU(2) Lie algebra. In particular it is shown that among problems with spherically symmetric potential energies, it is only the Kepler problem for which all bounded orbits are closed. In analogy with the case of the ordinary space, a conserved vector (apart from the angular momentum) is explicitly constructed, which is responsible for the orbits being closed. This is the analog of the Laplace-Runge-Lenz vector. The algebra of the constants of the motion is also worked out.

  11. Electronic hybridisation implications for the damage-tolerance of thin film metallic glasses.

    PubMed

    Schnabel, Volker; Jaya, B Nagamani; Köhler, Mathias; Music, Denis; Kirchlechner, Christoph; Dehm, Gerhard; Raabe, Dierk; Schneider, Jochen M

    2016-11-07

    A paramount challenge in materials science is to design damage-tolerant glasses. Poisson's ratio is commonly used as a criterion to gauge the brittle-ductile transition in glasses. However, our data, as well as results in the literature, are in conflict with the concept of Poisson's ratio serving as a universal parameter for fracture energy. Here, we identify the electronic structure fingerprint associated with damage tolerance in thin film metallic glasses. Our correlative theoretical and experimental data reveal that the fraction of bonds stemming from hybridised states compared to the overall bonding can be associated with damage tolerance in thin film metallic glasses.

  12. Curvature and gravity actions for matrix models: II. The case of general Poisson structures

    NASA Astrophysics Data System (ADS)

    Blaschke, Daniel N.; Steinacker, Harold

    2010-12-01

    We study the geometrical meaning of higher order terms in matrix models of Yang-Mills type in the semi-classical limit, generalizing recent results (Blaschke and Steinacker 2010 Class. Quantum Grav. 27 165010 (arXiv:1003.4132)) to the case of four-dimensional spacetime geometries with general Poisson structure. Such terms are expected to arise e.g. upon quantization of the IKKT-type models. We identify terms which depend only on the intrinsic geometry and curvature, including modified versions of the Einstein-Hilbert action, as well as terms which depend on the extrinsic curvature. Furthermore, a mechanism is found which implies that the effective metric G on the spacetime brane M ⊂ R^D 'almost' coincides with the induced metric g. Deviations from G = g are suppressed, and are characterized by the would-be U(1) gauge field.

  13. Particle motion around magnetized black holes: Preston-Poisson space-time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konoplya, R. A.

    We analyze the motion of massless and massive particles around black holes immersed in an asymptotically uniform magnetic field and surrounded by some mechanical structure, which provides the magnetic field. The space-time is described by the Preston-Poisson metric, which is a generalization of the well-known Ernst metric with a new parameter, the tidal force, characterizing the surrounding structure. The Hamilton-Jacobi equations allow the separation of variables in the equatorial plane. The presence of a tidal force from the surroundings considerably changes the parameters of the test particle motion: it increases the radius of circular orbits of particles and increases the binding energy of massive particles going from a given circular orbit to the innermost stable orbit near the black hole. In addition, it increases the distance of the minimal approach, time delay, and bending angle for a ray of light propagating near the black hole.

  14. A time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes with applications in substance abuse research.

    PubMed

    Yang, Songshan; Cranford, James A; Jester, Jennifer M; Li, Runze; Zucker, Robert A; Buu, Anne

    2017-02-28

    This study proposes a time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes. The motivating example demonstrates that this zero-inflated Poisson model allows investigators to study group differences in different aspects of substance use (e.g., the probability of abstinence and the quantity of alcohol use) simultaneously. The simulation study shows that the accuracy of estimation of trajectory functions improves as the sample size increases; the accuracy under equal group sizes is only higher when the sample size is small (100). In terms of the performance of the hypothesis testing, the type I error rates are close to their corresponding significance levels under all settings. Furthermore, the power increases as the alternative hypothesis deviates more from the null hypothesis, and the rate of this increasing trend is higher when the sample size is larger. Moreover, the hypothesis test for the group difference in the zero component tends to be less powerful than the test for the group difference in the Poisson component. Copyright © 2016 John Wiley & Sons, Ltd.
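
    A zero-inflated Poisson outcome mixes a structural zero (probability π, e.g. abstinence) with an ordinary Poisson count. Below is a minimal sketch of the pmf and its mean with hypothetical fixed parameters; the paper's model additionally lets these parameters vary smoothly over time and between groups:

```python
import math

def zip_pmf(y, pi_zero, lam):
    """P(Y = y) under a zero-inflated Poisson: a structural zero with
    probability pi_zero, otherwise an ordinary Poisson(lam) count."""
    poisson = math.exp(-lam) * lam ** y / math.factorial(y)
    return (pi_zero if y == 0 else 0.0) + (1.0 - pi_zero) * poisson

pi_zero, lam = 0.3, 2.5   # hypothetical values, not estimates from the paper
total = sum(zip_pmf(y, pi_zero, lam) for y in range(50))
mean = sum(y * zip_pmf(y, pi_zero, lam) for y in range(50))
print(round(total, 6), round(mean, 3))  # pmf sums to 1; mean = (1 - pi) * lam
```

The split into a zero component (π) and a Poisson component (λ) is exactly what lets the two aspects of substance use, abstinence probability and use quantity, be tested separately.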

  15. Reduction of the discretization stencil of direct forcing immersed boundary methods on rectangular cells: The ghost node shifting method

    NASA Astrophysics Data System (ADS)

    Picot, Joris; Glockner, Stéphane

    2018-07-01

    We present an analytical study of discretization stencils for the Poisson problem and the incompressible Navier-Stokes problem when used with some direct forcing immersed boundary methods. This study uses, but is not limited to, second-order discretization and Ghost-Cell Finite-Difference methods. We show that the stencil size increases with the aspect ratio of rectangular cells, which is undesirable as it breaks assumptions of some linear system solvers. To circumvent this drawback, a modification of the Ghost-Cell Finite-Difference methods is proposed to reduce the size of the discretization stencil to the one observed for square cells, i.e. with an aspect ratio equal to one. Numerical results validate this proposed method in terms of accuracy and convergence, for the Poisson problem and both Dirichlet and Neumann boundary conditions. An improvement on error levels is also observed. In addition, we show that the application of the chosen Ghost-Cell Finite-Difference methods to the Navier-Stokes problem, discretized by a pressure-correction method, requires an additional interpolation step. This extra step is implemented and validated through well known test cases of the Navier-Stokes equations.
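
    As background, the Poisson problem these stencils discretize can be illustrated in one dimension with the standard second-order stencil and Gauss-Seidel iteration; this is a toy sketch for orientation only, since the paper concerns ghost-cell stencils on rectangular 2D/3D cells:

```python
import math

def solve_poisson_1d(f_vals, h, iters=8000):
    """Gauss-Seidel solve of -u'' = f on (0, 1), u(0) = u(1) = 0,
    using the standard second-order stencil on len(f_vals) interior nodes."""
    n = len(f_vals)
    u = [0.0] * (n + 2)  # u[0] and u[n + 1] hold the Dirichlet boundary values
    for _ in range(iters):
        for i in range(1, n + 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f_vals[i - 1])
    return u

# Manufactured solution: -u'' = pi^2 sin(pi x)  =>  u(x) = sin(pi x).
n = 19
h = 1.0 / (n + 1)
f_vals = [math.pi ** 2 * math.sin(math.pi * (j + 1) * h) for j in range(n)]
u = solve_poisson_1d(f_vals, h)
err = max(abs(u[i] - math.sin(math.pi * i * h)) for i in range(n + 2))
print(err)  # dominated by the O(h^2) discretization error
```

In higher dimensions on rectangular cells, immersed boundaries replace some of these regular stencil entries with ghost-node values, which is where the stencil-growth issue analyzed in the paper arises.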

  16. Notes on testing equality and interval estimation in Poisson frequency data under a three-treatment three-period crossover trial.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2016-10-01

    When the frequency of event occurrences follows a Poisson distribution, we develop procedures for testing the equality of treatments, and interval estimators for the ratio of mean frequencies between treatments, under a three-treatment three-period crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in various situations. We note that all test procedures developed here can perform well with respect to Type I error even when the number of patients per group is moderate. We further note that the two weighted-least-squares (WLS) test procedures derived here are generally preferable to the two other test procedures commonly used in contingency table analysis. We also demonstrate that both the interval estimators based on the WLS method and those based on the Mantel-Haenszel (MH) approach can perform well, and are essentially of equal precision with respect to average length. We use a double-blind randomized three-treatment three-period crossover trial comparing salbutamol and salmeterol with a placebo with respect to the number of exacerbations of asthma to illustrate the use of these test procedures and estimators. © The Author(s) 2014.

  17. Subaru HDS transmission spectroscopy of the transiting extrasolar planet HD209458b

    NASA Astrophysics Data System (ADS)

    Narita, N.; Suto, Y.; Winn, J. N.; Turner, E. L.; Aoki, W.; Leigh, C. J.; Sato, B.; Tamura, M.; Yamada, T.

    2006-02-01

    We have searched for absorption in several common atomic species due to the atmosphere or exosphere of the transiting extrasolar planet HD 209458b, using high-precision optical spectra obtained with the Subaru High Dispersion Spectrograph (HDS). Previously we reported an upper limit on Hα absorption of 0.1% (3σ) within a 5.1 Å band. Using the same procedure, we now report upper limits on absorption due to the optical transitions of Na D, Li, Hα, Hβ, Hγ, Fe, and Ca. The 3σ upper limit for each transition is approximately 1% within a 0.3 Å band (the core of the line), and a few tenths of a per cent within a 2 Å band (the full line width). The wide-band results are close to the expected limit due to photon-counting (Poisson) statistics, although in the narrow-band case we have encountered unexplained systematic errors at a few times the Poisson level. These results are consistent with all previously reported detections (Charbonneau et al. 2002, ApJ, 568, 377) and upper limits (Bundy & Marcy 2000, PASP, 112, 1421; Moutou et al. 2001, A&A, 371, 260), but are significantly more sensitive than any previously achieved from ground-based observations.

  18. Influence of an independent quarterly audit on publicly reported vancomycin-resistant enterococci bacteremia data in Ontario, Canada.

    PubMed

    Prematunge, Chatura; Policarpio, Michelle E; Johnstone, Jennie; Adomako, Kwaku; Nadolny, Emily; Lam, Freda; Li, Ye; Brown, Kevin A; Garber, Gary

    2018-04-13

    All Ontario hospitals are mandated to self-report vancomycin-resistant enterococci (VRE) bacteremias to Ontario's Ministry of Health and Long-term Care for public reporting purposes. Independent quarterly audits of publicly reported VRE bacteremias between September 2013 and June 2015 were carried out by Public Health Ontario. VRE bacteremia case-reporting errors between January 2009 and August 2013 were identified by a single retrospective audit. Employing a quasi-experimental pre-post study design, the relative risk of VRE bacteremia reporting errors before and after quarterly audits was modeled using Poisson regression, adjusting for hospital type and case counts reported to the Ministry of Health and Long-term Care, and for autocorrelation via a generalized estimating equation. Overall, 24.5% (126 out of 514) of VRE bacteremias were reported in error; 114 out of 367 (31%) VRE bacteremias reported before quarterly audits and 12 out of 147 (8.1%) reported after audits were found to be incorrect. In the adjusted analysis, quarterly audits of VRE bacteremias were associated with significant reductions in reporting errors compared with the period before quarterly auditing (relative risk, 0.17; 95% confidence interval, 0.05-0.63). The risk of reporting errors among community hospitals was greater than that among the region's acute teaching hospitals (relative risk, 4.39; 95% CI, 3.07-5.70). This study found independent quarterly audits of publicly reported VRE bacteremias to be associated with significant reductions in reporting errors. Public reporting systems should consider adopting routine data audits and hospital-targeted training to improve data accuracy. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
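
    As a hedged illustration of the Poisson-regression effect measure: with a single binary covariate and a log link, the maximum-likelihood rate ratio equals the ratio of the two observed rates. Using the raw counts quoted above gives the crude ratio, which differs from the paper's adjusted estimate of 0.17 because the adjusted model also controls for hospital type, reported case counts, and autocorrelation:

```python
def poisson_rate_ratio(events_a, total_a, events_b, total_b):
    """Crude rate ratio: in a Poisson GLM with one binary covariate and a
    log link, exp(beta) is the ratio of the two observed error rates."""
    return (events_a / total_a) / (events_b / total_b)

# Reporting errors after vs. before the quarterly audits (counts from the study).
rr = poisson_rate_ratio(12, 147, 114, 367)
print(round(rr, 2))  # crude ratio ~ 0.26; the paper's adjusted RR is 0.17
```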

  19. Fringe Capacitance Correction for a Coaxial Soil Cell

    PubMed Central

    Pelletier, Mathew G.; Viera, Joseph A.; Schwartz, Robert C.; Lascano, Robert J.; Evett, Steven R.; Green, Tim R.; Wanjura, John D.; Holt, Greg A.

    2011-01-01

    Accurate measurement of moisture content is a prime requirement in hydrological, geophysical and biogeochemical research, as well as for material characterization and process control. Within these areas, accurate measurements of surface area and bound water content are becoming increasingly important for answering fundamental questions ranging from the characterization of cotton fiber maturity, to the accurate characterization of soil water content in soil water conservation research, to plant water utilization, to chemical reactions and the diffusion of ionic species across membranes in cells as well as in the dense suspensions that occur in surface films. One promising technique for addressing the increasing demand for higher-accuracy water content measurements is electrical permittivity characterization of materials. This technique has enjoyed a strong following in the soil-science and geological community through measurements of apparent permittivity via time-domain reflectometry (TDR), as well as in many process control applications. Recent research, however, indicates a need to increase the accuracy beyond that available from traditional TDR. The most logical pathway then becomes a transition from TDR-based measurements to network-analyzer measurements of absolute permittivity, which removes the adverse effects that high-surface-area soils and conductivity impart on measurements of apparent permittivity in traditional TDR applications. This research examines an observed experimental error for the coaxial probe, from which the modern TDR probe originated, which is hypothesized to be due to fringe capacitance. The research provides an experimental and theoretical basis for the cause of the error and a technique by which to correct the system to remove this source of error.
To test this hypothesis, a Poisson model of a coaxial cell was formulated to calculate the effective theoretical extra length caused by the fringe capacitance, which is then used to correct the experimental results. Upon correction with the Poisson-model-derived correction factor, experimental measurements using differing coaxial cell diameters and probe lengths all produce the same results, thereby lending support for an augmented technique for the measurement of absolute permittivity. PMID:22346601

  20. Dynamically accumulated dose and 4D accumulated dose for moving tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Heng; Li Yupeng; Zhang Xiaodong

    2012-12-15

    Purpose: The purpose of this work was to investigate the relationship between dynamically accumulated dose (dynamic dose) and 4D accumulated dose (4D dose) for irradiation of moving tumors, and to quantify the dose uncertainty induced by tumor motion. Methods: The authors established that regardless of treatment modality and delivery properties, the dynamic dose will converge to the 4D dose, instead of the 3D static dose, after multiple deliveries. The bounds of the dynamic dose, or the maximum estimation error using the 4D or static dose, were established for the 4D and static doses, respectively. Numerical simulations were performed (1) to prove the principle that for each phase, after multiple deliveries, the average number of deliveries for any given time converges to the total number of fractions (K) over the number of phases (N); (2) to investigate the dose difference between the 4D and dynamic doses as a function of the number of deliveries for a 'pulsed beam'; and (3) to investigate the dose difference between the 4D and dynamic doses as a function of delivery time for a 'continuous beam.' A Poisson model was developed to estimate the mean dose error as a function of the number of deliveries or delivery time for both the pulsed beam and the continuous beam. Results: The numerical simulations confirmed that the number of deliveries for each phase converges to K/N, assuming a random starting phase. Simulations for the pulsed beam and continuous beam also suggested that the dose error is a strong function of the number of deliveries and/or total delivery time and could be a function of the breathing cycle, depending on the mode of delivery. The Poisson model agrees well with the simulation. Conclusions: The dynamically accumulated dose will converge to the 4D accumulated dose after multiple deliveries, regardless of treatment modality.
Bounds on the dynamic dose can be determined using quantities derived from 4D doses, and the mean dose difference between the dynamic dose and the 4D dose as a function of the number of deliveries and/or total delivery time was also established.
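
    The first simulation result, that each of the N phases receives on average K/N of the K deliveries at any fixed time point when the starting phase is random, can be reproduced in a few lines (a sketch of the principle only, not of the dose computation):

```python
import random

def deliveries_per_phase(K, N, t, rng):
    """Over K fractions with a uniformly random starting phase, count how
    often each of the N breathing phases is irradiated at time index t."""
    counts = [0] * N
    for _ in range(K):
        start = rng.randrange(N)
        counts[(start + t) % N] += 1
    return counts

rng = random.Random(0)
counts = deliveries_per_phase(K=10000, N=10, t=3, rng=rng)
print(counts)  # every entry is close to K / N = 1000
```

Because the phase hit at a fixed time index is uniform over the N phases when the start is random, the counts converge to K/N, which is why the dynamic dose converges to the 4D accumulated dose.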

  1. Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter.

    PubMed

    Lord, Dominique

    2006-07-01

    There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made to improve the estimation tools of statistical models, the most common probabilistic structures used for modeling motor vehicle crashes remain the traditional Poisson and Poisson-gamma (or negative binomial) distributions; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice for transportation safety modelers. Crash data collected for safety studies often have the unusual attribute of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values of the mean, the dispersion parameter, and the sample size.
Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, weighted regression, and maximum likelihood. To complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used in the estimation process. The probability that the dispersion parameter is unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations for minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
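
    Of the three estimators mentioned, the method of moments is the simplest to sketch: for a Poisson-gamma (negative binomial) sample with constant mean μ, the variance is μ + αμ², so α̂ = (s² − m̄)/m̄². The code below is an illustrative reconstruction, not the paper's simulation; with a large sample and a moderate mean the estimator recovers α, while the low-mean problem arises because s² − m̄ becomes small relative to its sampling noise:

```python
import random

def poisson_gamma_sample(mu, alpha, rng):
    """One Poisson-gamma (negative binomial) count with mean mu and
    variance mu + alpha * mu**2: a Poisson draw whose rate is gamma-mixed."""
    lam = rng.gammavariate(1.0 / alpha, alpha * mu)  # shape, scale
    k, t = 0, rng.expovariate(1.0)  # count unit-rate arrivals not exceeding lam
    while t <= lam:
        k += 1
        t += rng.expovariate(1.0)
    return k

def mom_dispersion(counts):
    """Method-of-moments dispersion estimate: (s^2 - mean) / mean^2."""
    n = len(counts)
    m = sum(counts) / n
    s2 = sum((c - m) ** 2 for c in counts) / (n - 1)
    return (s2 - m) / (m * m)

rng = random.Random(0)
data = [poisson_gamma_sample(mu=2.0, alpha=0.5, rng=rng) for _ in range(20000)]
est = mom_dispersion(data)
print(round(est, 2))  # approximately recovers the true dispersion alpha = 0.5
```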

  2. In situ MEMS testing: correlation of high-resolution X-ray diffraction with mechanical experiments and finite element analysis.

    PubMed

    Schifferle, Andreas; Dommann, Alex; Neels, Antonia

    2017-01-01

    New methods are needed in microsystems technology for evaluating microelectromechanical systems (MEMS) because of their reduced size. The assessment and characterization of the mechanical and structural relations of MEMS are essential to assure the long-term functioning of devices, and have a significant impact on design and fabrication. Within this study a concept for the investigation of mechanically loaded MEMS materials at the atomic level is introduced, combining high-resolution X-ray diffraction (HRXRD) measurements with finite element analysis (FEA) and mechanical testing. In situ HRXRD measurements were performed on tensile-loaded single-crystal silicon (SCSi) specimens by means of profile scans and reciprocal space mapping (RSM) on symmetrical (004) and (440) reflections. A comprehensive evaluation of the rather complex XRD patterns and features was enabled by correlating measured with simulated, 'theoretical' patterns. The latter were calculated by a specifically developed, simple and fast approach on the basis of continuum mechanical relations. Qualitative and quantitative analysis confirmed the admissibility and accuracy of the presented method. In this context the [001] Poisson's ratio was determined with an error of less than 1.5% with respect to the analytical prediction. Consequently, the introduced procedure contributes to further investigations of weak scattering related to strain and defects in crystalline structures, and therefore supports investigations of material and device failure mechanisms.

  3. Timing performance of phase-locked loops in optical pulse position modulation communication systems

    NASA Astrophysics Data System (ADS)

    Lafaw, D. A.

    In an optical digital communication system, an accurate clock signal must be available at the receiver to provide proper synchronization with the transmitted signal. Phase synchronization is especially critical in M-ary pulse position modulation (PPM) systems where the optimum decision scheme is an energy detector which compares the energy in each of M time slots to decide which of M possible words was sent. A timing error causes energy spillover into adjacent time slots (a form of intersymbol interference) so that only a portion of the signal energy may be attributed to the correct time slot. This effect decreases the effective signal, increases the effective noise, and increases the probability of error. This report simulates a timing subsystem for a satellite-to-satellite optical PPM communication link. The receiver employs direct photodetection, preprocessing of the optical signal, and a phase-locked loop for timing synchronization. The photodetector output is modeled as a filtered, doubly stochastic Poisson shot noise process. The variance of the relative phase error is examined under varying signal strength conditions as an indication of loop performance, and simulation results are compared to theoretical relations.

  4. Absolute binding free energies between T4 lysozyme and 141 small molecules: calculations based on multiple rigid receptor configurations

    PubMed Central

    Xie, Bing; Nguyen, Trung Hai; Minh, David D. L.

    2017-01-01

    We demonstrate the feasibility of estimating protein-ligand binding free energies using multiple rigid receptor configurations. Based on T4 lysozyme snapshots extracted from six alchemical binding free energy calculations with a flexible receptor, binding free energies were estimated for a total of 141 ligands. For 24 ligands, the calculations reproduced flexible-receptor estimates with a correlation coefficient of 0.90 and a root mean square error of 1.59 kcal/mol. The accuracy of calculations based on Poisson-Boltzmann/Surface Area implicit solvent was comparable to previously reported free energy calculations. PMID:28430432

  5. Fundamentals of Free-Space Optical Communications

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Moision, Bruce; Erkmen, Baris

    2012-01-01

    Free-space optical communication systems potentially gain many dBs over RF systems. There is no upper limit on the theoretically achievable photon efficiency when the system is quantum-noise-limited: a) Intensity modulations plus photon counting can achieve arbitrarily high photon efficiency, but with sub-optimal spectral efficiency. b) Quantum-ideal number states can achieve the ultimate capacity in the limit of perfect transmissivity. Appropriate error correction codes are needed to communicate reliably near the capacity limits. Poisson-modeled noises, detector losses, and atmospheric effects must all be accounted for: a) Theoretical models are used to analyze performance degradations. b) Mitigation strategies derived from this analysis are applied to minimize these degradations.

  6. Approximating SIR-B response characteristics and estimating wave height and wavelength for ocean imagery

    NASA Technical Reports Server (NTRS)

    Tilley, David G.

    1987-01-01

    NASA Space Shuttle Challenger SIR-B ocean scenes are used to derive directional wave spectra for which speckle noise is modeled as a function of Rayleigh random phase coherence downrange and Poisson random amplitude errors inherent in the Doppler measurement of along-track position. A Fourier filter that preserves SIR-B image phase relations is used to correct the stationary and dynamic response characteristics of the remote sensor and scene correlator, as well as to subtract an estimate of the speckle noise component. A two-dimensional map of sea surface elevation is obtained after the filtered image is corrected for both random and deterministic motions.

  7. Heavy Ion Irradiation Fluence Dependence for Single-Event Upsets of NAND Flash Memory

    NASA Technical Reports Server (NTRS)

    Chen, Dakai; Wilcox, Edward; Ladbury, Raymond; Kim, Hak; Phan, Anthony; Seidleck, Christina; LaBel, Kenneth

    2016-01-01

    We investigated the single-event effect (SEE) susceptibility of the Micron 16 nm NAND flash and found that the single-event upset (SEU) cross section varied inversely with fluence: the SEU cross section decreased with increasing fluence. We attribute the effect to the variable upset sensitivities of the memory cells. The current test standards and procedures assume that SEUs follow a Poisson process and do not take into account the variability of the error rate with fluence. Therefore, heavy ion irradiation of devices with a variable upset sensitivity distribution using typical fluence levels may underestimate the cross section and on-orbit event rate.

  8. CCM Continuity Constraint Method: A finite-element computational fluid dynamics algorithm for incompressible Navier-Stokes fluid flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, P. T.

    1993-09-01

    As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state-variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H¹ Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.

  9. Fractional Poisson Fields and Martingales

    NASA Astrophysics Data System (ADS)

    Aletti, Giacomo; Leonenko, Nikolai; Merzbach, Ely

    2018-02-01

    We present new properties for the Fractional Poisson process (FPP) and the Fractional Poisson field on the plane. A martingale characterization for FPPs is given. We extend this result to Fractional Poisson fields, obtaining some other characterizations. The fractional differential equations are studied. We consider a more general Mixed-Fractional Poisson process and show that this process is the stochastic solution of a system of fractional differential-difference equations. Finally, we give some simulations of the Fractional Poisson field on the plane.

  10. Statistical characteristics of climbing fiber spikes necessary for efficient cerebellar learning.

    PubMed

    Kuroda, S; Yamamoto, K; Miyamoto, H; Doya, K; Kawato, M

    2001-03-01

    Mean firing rates (MFRs), with analogue values, have thus far been used as the information carriers of neurons in most brain theories of learning. However, neurons transmit signals by spikes, which are discrete events. The climbing fibers (CFs), which are known to be essential for cerebellar motor learning, fire at ultra-low rates (around 1 Hz), and it is not yet understood theoretically how high-frequency information can be conveyed and how learning of smooth and fast movements can be achieved. Here we address whether cerebellar learning can be achieved by CF spikes instead of the conventional MFR in an eye movement task, such as the ocular following response (OFR), and in an arm movement task. There are two major afferents to cerebellar Purkinje cells, the parallel fibers (PFs) and the CFs, and the synaptic weights between PFs and Purkinje cells have been shown to be modulated by stimulation of both types of fiber. The modulation of the synaptic weights is regulated by cerebellar synaptic plasticity. In this study we simulated cerebellar learning using CF signals as spikes instead of the conventional MFR. To generate the spikes we used the following four spike generation models: (1) a Poisson model, in which the spike interval probability follows a Poisson distribution; (2) a gamma model, in which the spike interval probability follows the gamma distribution; (3) a max model, in which a spike is generated when a synaptic input reaches its maximum; and (4) a threshold model, in which a spike is generated when the input crosses a certain small threshold. We found that, in an OFR task with a constant visual velocity, learning was successful with the stochastic models (Poisson and gamma), but not with the deterministic models (max and threshold). In an OFR task with a stepwise velocity change and in an arm movement task, learning could be achieved only with the Poisson model. In addition, for efficient cerebellar learning, the distribution of CF spike-occurrence times after stimulus onset must capture at least the first, second and third moments of the temporal distribution of error signals.
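
    The two stochastic spike generation models can be sketched in a few lines. This is a minimal stdlib-only illustration with illustrative rates and parameters; the model numbering follows the abstract, and the binned Bernoulli approximation for the Poisson model is an assumption, not the authors' implementation.

```python
import random

def poisson_spikes(rate_fn, t_end, dt=1e-3, seed=0):
    """Model 1 (Poisson): in each small bin of width dt, emit a spike with
    probability rate_fn(t) * dt, approximating an inhomogeneous Poisson
    process for small dt."""
    rng = random.Random(seed)
    spikes, t = [], 0.0
    while t < t_end:
        if rng.random() < rate_fn(t) * dt:
            spikes.append(t)
        t += dt
    return spikes

def gamma_spikes(rate, shape, t_end, seed=0):
    """Model 2 (gamma): inter-spike intervals follow
    Gamma(shape, scale = 1 / (shape * rate)), so the mean rate is preserved
    while firing regularity grows with the shape parameter."""
    rng = random.Random(seed)
    spikes, t = [], 0.0
    while True:
        t += rng.gammavariate(shape, 1.0 / (shape * rate))
        if t >= t_end:
            return spikes
        spikes.append(t)

# A 1 Hz climbing-fiber-like rate over 100 s (illustrative parameters).
n_poisson = len(poisson_spikes(lambda t: 1.0, 100.0))
n_gamma = len(gamma_spikes(1.0, 4.0, 100.0))
print(n_poisson, n_gamma)
```

Both generators produce roughly 100 spikes over 100 s at 1 Hz; the gamma train is more regular, which is the distinction the learning comparison in the abstract turns on.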

  11. Acceptance sampling for attributes via hypothesis testing and the hypergeometric distribution

    NASA Astrophysics Data System (ADS)

    Samohyl, Robert Wayne

    2017-10-01

    This paper questions some aspects of attribute acceptance sampling in light of the original concepts of hypothesis testing from Neyman and Pearson (NP). Attribute acceptance sampling in industry, as developed by Dodge and Romig (DR), generally follows the international standards of ISO 2859, and similarly the Brazilian standards NBR 5425 to NBR 5427 and the United States Standards ANSI/ASQC Z1.4. The paper evaluates and extends the area of acceptance sampling in two directions. First, by suggesting the use of the hypergeometric distribution to calculate the parameters of sampling plans avoiding the unnecessary use of approximations such as the binomial or Poisson distributions. We show that, under usual conditions, discrepancies can be large. The conclusion is that the hypergeometric distribution, ubiquitously available in commonly used software, is more appropriate than other distributions for acceptance sampling. Second, and more importantly, we elaborate the theory of acceptance sampling in terms of hypothesis testing rigorously following the original concepts of NP. By offering a common theoretical structure, hypothesis testing from NP can produce a better understanding of applications even beyond the usual areas of industry and commerce such as public health and political polling. With the new procedures, both sample size and sample error can be reduced. What is unclear in traditional acceptance sampling is the necessity of linking the acceptable quality limit (AQL) exclusively to the producer and the lot quality percent defective (LTPD) exclusively to the consumer. In reality, the consumer should also be preoccupied with a value of AQL, as should the producer with LTPD. Furthermore, we can also question why type I error is always uniquely associated with the producer as producer risk, and likewise, the same question arises with consumer risk which is necessarily associated with type II error. The resolution of these questions is new to the literature. 
The article presents R code throughout.
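
    The discrepancy between the exact hypergeometric acceptance probability and the Poisson approximation is easy to demonstrate. The sketch below (in Python rather than the article's R) evaluates a single sampling plan both ways; the plan parameters are illustrative, not one of the ISO 2859 plans.

```python
from math import comb, exp, factorial

def accept_prob_hypergeometric(N, D, n, c):
    """P(accept) for a single sampling plan: accept the lot when a sample of
    n items, drawn without replacement from a lot of N containing D
    defectives, has at most c defectives."""
    return sum(comb(D, k) * comb(N - D, n - k) for k in range(c + 1)) / comb(N, n)

def accept_prob_poisson(N, D, n, c):
    """The Poisson approximation often used in acceptance sampling tables:
    defective count approximated as Poisson(n * D / N)."""
    lam = n * D / N
    return sum(exp(-lam) * lam**k / factorial(k) for k in range(c + 1))

# Small lot with a large sampling fraction: the approximation error is
# clearly visible (illustrative plan).
exact = accept_prob_hypergeometric(N=100, D=10, n=30, c=2)
approx = accept_prob_poisson(N=100, D=10, n=30, c=2)
print(exact, approx)
```

For this plan the Poisson approximation overstates the acceptance probability by about five percentage points, illustrating the paper's point that with modern software the exact hypergeometric computation is both feasible and preferable.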

  12. Muscle Activity Map Reconstruction from High Density Surface EMG Signals With Missing Channels Using Image Inpainting and Surface Reconstruction Methods.

    PubMed

    Ghaderi, Parviz; Marateb, Hamid R

    2017-07-01

    The aim of this study was to reconstruct low-quality high-density surface EMG (HDsEMG) signals, recorded with 2-D electrode arrays, using image inpainting and surface reconstruction methods. It is common for some fraction of the electrodes to provide low-quality signals. We used a variety of image inpainting methods, based on partial differential equations (PDEs), and surface reconstruction methods to reconstruct the time-averaged or instantaneous muscle activity maps of those outlier channels. Two novel reconstruction algorithms were also proposed. HDsEMG signals were recorded from the biceps femoris and brachial biceps muscles during low-to-moderate-level isometric contractions, and some of the channels (5-25%) were randomly marked as outliers. The root-mean-square error (RMSE) between the original and reconstructed maps was then calculated. Overall, the proposed Poisson and wave PDEs outperformed the other methods (average RMSE 8.7 ± 6.1 μVrms and 7.5 ± 5.9 μVrms) for the time-averaged single-differential and monopolar map reconstruction, respectively. Biharmonic spline, the discrete cosine transform, and the Poisson PDE outperformed the other methods for the instantaneous map reconstruction. The running time of the proposed Poisson and wave PDE methods, implemented using a vectorization package, was 4.6 ± 5.7 ms and 0.6 ± 0.5 ms, respectively, for each signal epoch or time sample in each channel. The proposed reconstruction algorithms could be promising new tools for reconstructing muscle activity maps in real-time applications. Proper reconstruction methods could recover the information of low-quality channels in HDsEMG recordings.

  13. Simulation methods with extended stability for stiff biochemical kinetics.

    PubMed

    Rué, Pau; Villà-Freixa, Jordi; Burrage, Kevin

    2010-08-11

    With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, tau, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where tau can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called tau-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as tau grows. In this paper we extend Poisson tau-leap methods to a general class of Runge-Kutta (RK) tau-leap methods. We show that with the proper selection of the coefficients, the variance of the extended tau-leap can be well-behaved, leading to significantly larger step sizes. The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original tau-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
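
    The basic Poisson tau-leap step is easy to sketch for a single decay channel. The following is a minimal stdlib-only illustration (not the paper's Runge-Kutta extension): per step, the number of firings is drawn from a Poisson distribution with mean equal to the propensity times tau. The Knuth-style sampler and all parameters are illustrative assumptions.

```python
import math
import random

def sample_poisson(rng, lam):
    """Knuth-style inverse-transform Poisson sampler (stdlib only; adequate
    for the modest means that arise in this sketch)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def tau_leap_decay(x0, c, tau, t_end, seed=0):
    """Basic Poisson tau-leap for the decay reaction A -> 0 with propensity
    c * x: in each step of size tau, the number of firings is drawn from
    Poisson(c * x * tau) instead of simulating every event as in the SSA."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_end and x > 0:
        k = sample_poisson(rng, c * x * tau)
        x = max(x - k, 0)  # clamp: a large leap can overshoot zero
        t += tau
    return x

# The mean of X(t) for this system is x0 * exp(-c * t); compare against it.
x_final = tau_leap_decay(x0=10_000, c=0.1, tau=0.05, t_end=10.0)
print(x_final, 10_000 * math.exp(-1.0))
```

The leap condition is implicit in the parameter choice: c * tau = 0.005, so the propensity changes little within a step; as tau grows, the abstract's point about inflated steady-state variance becomes the dominant error.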

  14. A Matlab-based finite-difference solver for the Poisson problem with mixed Dirichlet-Neumann boundary conditions

    NASA Astrophysics Data System (ADS)

    Reimer, Ashton S.; Cheviakov, Alexei F.

    2013-03-01

    A Matlab-based finite-difference numerical solver for the Poisson equation for a rectangle and a disk in two dimensions, and a spherical domain in three dimensions, is presented. The solver is optimized for handling an arbitrary combination of Dirichlet and Neumann boundary conditions, and allows for full user control of mesh refinement. The solver routines utilize effective and parallelized sparse vector and matrix operations. Computations exhibit high speeds, numerical stability with respect to mesh size and mesh refinement, and acceptable error values even on desktop computers. Catalogue identifier: AENQ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License v3.0 No. of lines in distributed program, including test data, etc.: 102793 No. of bytes in distributed program, including test data, etc.: 369378 Distribution format: tar.gz Programming language: Matlab 2010a. Computer: PC, Macintosh. Operating system: Windows, OSX, Linux. RAM: 8 GB (8,589,934,592 bytes) Classification: 4.3. Nature of problem: To solve the Poisson problem in a standard domain with “patchy surface”-type (strongly heterogeneous) Neumann/Dirichlet boundary conditions. Solution method: Finite difference with mesh refinement. Restrictions: Spherical domain in 3D; rectangular domain or a disk in 2D. Unusual features: Choice between mldivide/iterative solver for the solution of large system of linear algebraic equations that arise. Full user control of Neumann/Dirichlet boundary conditions and mesh refinement. Running time: Depending on the number of points taken and the geometry of the domain, the routine may take from less than a second to several hours to execute.
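
    The core of such a solver is the 5-point finite-difference stencil. Below is a Python analogue (not the record's Matlab code) of a Dirichlet Poisson solve on the unit square using Gauss-Seidel sweeps, verified against a manufactured solution; the grid size and iteration count are illustrative.

```python
def solve_poisson_dirichlet(n, f, iters=2000):
    """Gauss-Seidel solution of u_xx + u_yy = f(x, y) on the unit square,
    with u = 0 on the boundary, on a uniform (n+1) x (n+1) grid.
    The 5-point stencil gives u_ij = (sum of neighbors - h^2 f_ij) / 4."""
    h = 1.0 / n
    u = [[0.0] * (n + 1) for _ in range(n + 1)]
    for _ in range(iters):
        for i in range(1, n):
            for j in range(1, n):
                u[i][j] = 0.25 * (u[i + 1][j] + u[i - 1][j]
                                  + u[i][j + 1] + u[i][j - 1]
                                  - h * h * f(i * h, j * h))
    return u

# Manufactured solution u = x(1-x)y(1-y); the 5-point stencil is exact for
# this low-degree polynomial, so only iteration error remains.
n = 16
u = solve_poisson_dirichlet(n, lambda x, y: -2.0 * (x * (1 - x) + y * (1 - y)))
exact = lambda x, y: x * (1 - x) * y * (1 - y)
err = max(abs(u[i][j] - exact(i / n, j / n))
          for i in range(n + 1) for j in range(n + 1))
print(err)
```

For general right-hand sides the error would instead be O(h^2); production solvers like the one described here replace the sweeps with sparse direct (mldivide-style) or preconditioned iterative solves.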

  15. On the Singularity of the Vlasov-Poisson System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Jian; Qin, Hong

    2013-04-26

    The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.

  16. On the singularity of the Vlasov-Poisson system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Jian; Qin, Hong; Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08550

    2013-09-15

    The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.

  17. Detection of Answer Copying Based on the Structure of a High-Stakes Test

    ERIC Educational Resources Information Center

    Belov, Dmitry I.

    2011-01-01

    This article presents the Variable Match Index (VM-Index), a new statistic for detecting answer copying. The power of the VM-Index relies on two-dimensional conditioning as well as the structure of the test. The asymptotic distribution of the VM-Index is analyzed by reduction to Poisson trials. A computational study comparing the VM-Index with the…

  18. Multiscale tomography of buried magnetic structures: its use in the localization and characterization of archaeological structures

    NASA Astrophysics Data System (ADS)

    Saracco, Ginette; Moreau, Frédérique; Mathé, Pierre-Etienne; Hermitte, Daniel; Michel, Jean-Marie

    2007-10-01

    We have previously developed a method for characterizing and localizing `homogeneous' buried sources, from the measure of potential anomalies at a fixed height above ground (magnetic, electric and gravity). This method is based on potential theory and uses the properties of the Poisson kernel (real by definition) and the continuous wavelet theory. Here, we relax the assumption on sources and introduce a method that we call the `multiscale tomography'. Our approach is based on the harmonic extension of the observed magnetic field to produce a complex source by use of a complex Poisson kernel solution of the Laplace equation for complex potential field. A phase and modulus are defined. We show that the phase provides additional information on the total magnetic inclination and the structure of sources, while the modulus allows us to characterize its spatial location, depth and `effective degree'. This method is compared to the `complex dipolar tomography', extension of the Patella method that we previously developed. We applied both methods and a classical electrical resistivity tomography to detect and localize buried archaeological structures like antique ovens from magnetic measurements on the Fox-Amphoux site (France). The estimates are then compared with the results of excavations.

  19. Effect of solid distribution on elastic properties of open-cell cellular solids using numerical and experimental methods.

    PubMed

    Zargarian, A; Esfahanian, M; Kadkhodapour, J; Ziaei-Rad, S

    2014-09-01

    The effect of solid distribution between the edges and vertices of a three-dimensional cellular solid with an open-cell structure was investigated both numerically and experimentally. Finite element analysis (FEA) with continuum elements and appropriate periodic boundary conditions was employed to calculate the elastic properties of cellular solids using the tetrakaidecahedral (Kelvin) unit cell. Relative densities between 0.01 and 0.1 and various values of solid fraction were considered. In order to validate the numerical model, three scaffolds with a relative density of 0.08, but different amounts of solid in the vertices, were fabricated via a 3-D printing technique. Good agreement was observed between the numerical simulation and experimental results. Results of the numerical simulation showed that, at low relative densities (<0.03), Young's modulus first increased as material shifted away from edges to vertices and then decreased after reaching a critical point. However, for high values of relative density, Young's modulus increased monotonically. The mechanisms of this behavior are discussed in detail. Results also indicated that Poisson's ratio decreased with increasing relative density and solid fraction in the vertices. By fitting a curve to the data obtained from the numerical simulation, and considering the relative density and solid fraction in the vertices, empirical relations were derived for Young's modulus and Poisson's ratio.

  20. An efficient parallel sampling technique for Multivariate Poisson-Lognormal model: Analysis with two crash count datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.

    Our study investigates the Multivariate Poisson-lognormal (MVPLN) model that jointly models crash frequency and severity, accounting for correlations. Ordinary univariate count models analyze crashes of different severity levels separately, ignoring the correlations among severity levels. The MVPLN model is capable of incorporating the general correlation structure and accounts for the overdispersion in the data, which leads to a superior data fit. However, the traditional estimation approach for the MVPLN model is computationally expensive, which often limits its use in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using pedestrian vehicle crash data collected in New York City from 2002 to 2006, and highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate the model fitting. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity accounting for correlations.

  1. Extracting real-crack properties from non-linear elastic behaviour of rocks: abundance of cracks with dominating normal compliance and rocks with negative Poisson ratios

    NASA Astrophysics Data System (ADS)

    Zaitsev, Vladimir Y.; Radostin, Andrey V.; Pasternak, Elena; Dyskin, Arcady

    2017-09-01

    Results of an examination of experimental data on the non-linear elasticity of rocks, using experimentally determined pressure dependences of P- and S-wave velocities from various literature sources, are presented. In total, over 90 rock samples are considered. Interpretation of the data is performed using an effective-medium description in which cracks are considered as compliant defects with explicitly introduced shear and normal compliances, without specifying a particular crack model with an a priori given ratio of the compliances. Comparison with the experimental data indicated an abundance (~80%) of cracks with normal-to-shear compliance ratios that significantly exceed the values typical of conventionally used crack models (such as penny-shaped cuts or thin ellipsoidal cracks). Correspondingly, rocks with such cracks demonstrate a strongly decreased Poisson ratio, including a significant (~45%) portion of rocks exhibiting negative Poisson ratios at lower pressures, for which the concentration of not-yet-closed cracks is maximal. The obtained results indicate the necessity of further development of crack models to account for the revealed numerous examples of cracks with strong domination of normal compliance. The discovery of such a significant number of naturally auxetic rocks contrasts with the conventional viewpoint that a negative Poisson ratio is an exotic property mostly discussed for artificial structures.

  2. An efficient parallel sampling technique for Multivariate Poisson-Lognormal model: Analysis with two crash count datasets

    DOE PAGES

    Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.

    2015-11-19

    Our study investigates the Multivariate Poisson-lognormal (MVPLN) model that jointly models crash frequency and severity, accounting for correlations. Ordinary univariate count models analyze crashes of different severity levels separately, ignoring the correlations among severity levels. The MVPLN model is capable of incorporating the general correlation structure and accounts for the overdispersion in the data, which leads to a superior data fit. However, the traditional estimation approach for the MVPLN model is computationally expensive, which often limits its use in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using pedestrian vehicle crash data collected in New York City from 2002 to 2006, and highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate the model fitting. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity accounting for correlations.
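
    The generative form of the MVPLN model (Poisson counts driven by correlated lognormal latent effects) can be sketched directly. This is a minimal stdlib-only forward simulation under the standard MVPLN specification; the two-severity setup and all parameter values are illustrative assumptions, not the fitted New York City or Washington State estimates.

```python
import math
import random

def _poisson(rng, lam):
    """Inverse-transform Poisson sampler (stdlib only; modest means)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def sample_mvpln(beta, chol, n, seed=0):
    """Draw n joint count vectors from a Multivariate Poisson-lognormal
    model: counts_k ~ Poisson(exp(beta_k + eps_k)), with latent effects
    eps ~ N(0, Sigma) and Sigma = chol * chol^T (chol lower triangular)."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        z = [rng.gauss(0.0, 1.0) for _ in beta]
        eps = [sum(chol[i][j] * z[j] for j in range(i + 1))
               for i in range(len(beta))]
        draws.append([_poisson(rng, math.exp(b + e))
                      for b, e in zip(beta, eps)])
    return draws

# Two severity levels with positively correlated latent effects
# (illustrative parameters).
chol = [[0.5, 0.0], [0.4, 0.3]]
draws = sample_mvpln([1.0, 0.5], chol, 2000)
mean1 = sum(d[0] for d in draws) / len(draws)
mean2 = sum(d[1] for d in draws) / len(draws)
print(mean1, mean2)  # near exp(beta_k + sigma_k^2 / 2): ~3.08 and ~1.87
```

The lognormal latent effects both induce overdispersion (marginal variance exceeds the mean) and correlate the two severity counts, the two features the abstract credits for the MVPLN model's superior fit.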

  3. Noncommutative gerbes and deformation quantization

    NASA Astrophysics Data System (ADS)

    Aschieri, Paolo; Baković, Igor; Jurčo, Branislav; Schupp, Peter

    2010-11-01

    We define noncommutative gerbes using the language of star products. Quantized twisted Poisson structures are discussed as an explicit realization in the sense of deformation quantization. Our motivation is the noncommutative description of D-branes in the presence of topologically non-trivial background fields.

  4. An efficient three-dimensional Poisson solver for SIMD high-performance-computing architectures

    NASA Technical Reports Server (NTRS)

    Cohl, H.

    1994-01-01

    We present an algorithm that solves the three-dimensional Poisson equation on a cylindrical grid. The technique uses a finite-difference scheme with operator splitting. This splitting maps the banded structure of the operator matrix into a two-dimensional set of tridiagonal matrices, which are then solved in parallel. Our algorithm couples FFT techniques with the well-known ADI (Alternating Direction Implicit) method for solving elliptic PDEs, and the implementation is extremely well suited for a massively parallel environment like the SIMD architecture of the MasPar MP-1. Due to the highly recursive nature of our problem, we believe that our method is highly efficient, as it avoids excessive interprocessor communication.

  5. A silicon avalanche photodiode detector circuit for Nd:YAG laser scattering

    NASA Astrophysics Data System (ADS)

    Hsieh, C.-L.; Haskovec, J.; Carlstrom, T. N.; Deboo, J. C.; Greenfield, C. M.; Snider, R. T.; Trost, P.

    1990-06-01

    A silicon avalanche photodiode with an internal gain of about 50 to 100 is used in a temperature controlled environment to measure the Nd:YAG laser Thomson scattered spectrum in the wavelength range from 700 to 1150 nm. A charge sensitive preamplifier was developed for minimizing the noise contribution from the detector electronics. Signal levels as low as 20 photoelectrons (S/N = 1) can be detected. Measurements show that both the signal and the variance of the signal vary linearly with the input light level over the range of interest, indicating Poisson statistics. The signal is processed using a 100 ns delay line and a differential amplifier which subtracts the low frequency background light component. The background signal is amplified with a computer controlled variable gain amplifier and is used for an estimate of the measurement error, calibration, and Zeff measurements of the plasma. The signal processing was analyzed using a theoretical model to aid the system design and establish the procedure for data error analysis.

  6. Silicon avalanche photodiode detector circuit for Nd:YAG laser scattering

    NASA Astrophysics Data System (ADS)

    Hsieh, C. L.; Haskovec, J.; Carlstrom, T. N.; DeBoo, J. C.; Greenfield, C. M.; Snider, R. T.; Trost, P.

    1990-10-01

    A silicon avalanche photodiode with an internal gain of about 50 to 100 is used in a temperature-controlled environment to measure the Nd:YAG laser Thomson scattered spectrum in the wavelength range from 700 to 1150 nm. A charge-sensitive preamplifier has been developed for minimizing the noise contribution from the detector electronics. Signal levels as low as 20 photoelectrons (S/N=1) can be detected. Measurements show that both the signal and the variance of the signal vary linearly with the input light level over the range of interest, indicating Poisson statistics. The signal is processed using a 100 ns delay line and a differential amplifier which subtracts the low-frequency background light component. The background signal is amplified with a computer-controlled variable gain amplifier and is used for an estimate of the measurement error, calibration, and Zeff measurements of the plasma. The signal processing has been analyzed using a theoretical model to aid the system design and establish the procedure for data error analysis.

  7. Improved detection of radioactive material using a series of measurements

    NASA Astrophysics Data System (ADS)

    Mann, Jenelle

    The goal of this project is to develop improved algorithms for detection of radioactive sources that have low signal compared to background. The detection of low-signal sources is of interest in national security applications where the source may have weak ionizing radiation emissions, is heavily shielded, or the counting time is short (such as portal monitoring). Traditionally, to distinguish signal from background, the decision threshold (y*) is calculated by taking a long background count and limiting the false positive (alpha) error to 5%. Some problems with this method include: the background is constantly changing due to natural environmental fluctuations, and large amounts of data taken as the detector continuously scans go unused. Rather than looking at a single measurement, this work investigates a series of N measurements and develops an appropriate decision threshold for exceeding the single-measurement threshold n times in a series of N. This methodology is investigated for rectangular, triangular, sinusoidal, Poisson, and Gaussian distributions.
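
    For independent measurements, the n-of-N rule reduces to a binomial tail calculation: given the per-measurement false-alarm probability, choose the smallest n whose overall false-alarm rate stays below the target. This sketch is a simplified illustration under an independence assumption, not the project's algorithm.

```python
from math import comb

def n_of_N_false_alarm(p, N, n):
    """P(at least n of N independent background-only measurements exceed
    the single-measurement decision threshold), each with probability p."""
    return sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(n, N + 1))

def required_n(p, N, alpha=0.05):
    """Smallest n such that the overall false-alarm rate of the
    'n exceedances out of N' rule is at most alpha."""
    for n in range(1, N + 1):
        if n_of_N_false_alarm(p, N, n) <= alpha:
            return n
    return N + 1  # even n = N does not meet the target

# With a 5% single-measurement false-alarm rate and N = 10 scans,
# requiring several exceedances restores the overall 5% target.
print(required_n(0.05, 10))
```

With p = 0.05 and N = 10, a single exceedance triggers falsely about 40% of the time, so the rule must demand three exceedances to hold the overall rate below 5%; the trade-off is reduced sensitivity to weak sources, which is what motivates the distribution-specific thresholds studied in the project.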

  8. Computational Analysis of Effect of Transient Fluid Force on Composite Structures

    DTIC Science & Technology

    2013-12-01

    as they well represent an E-glass fiber reinforced composite frequently used in research and industrial applications. The fluid domain was sized...provide unique perspectives on peak stress ratios. The two models both share increased structural rigidity. The cylinder is reinforced by... Poisson ratio of 0.3 and Young's modulus of 20 GPa were added to the transient structural engineering data cell (Figure 69).

  9. An Auxetic structure configured as oesophageal stent with potential to be used for palliative treatment of oesophageal cancer; development and in vitro mechanical analysis.

    PubMed

    Ali, Murtaza N; Rehman, Ihtesham Ur

    2011-11-01

    Oesophageal cancer is the ninth leading cause of malignant cancer death, and its prognosis remains poor. Dysphagia, an inability to swallow, is a presenting symptom of oesophageal cancer and is indicative of incurability. The goal of this study was to design and manufacture an auxetic structure film and to configure this film as an auxetic stent for the palliative treatment of oesophageal cancer and the prevention of dysphagia. Polypropylene was used as the material for its flexibility and non-toxicity. The auxetic (rotating-square geometry) structure was made by laser cutting the polypropylene film. This flat structure was welded into a tubular form (stent) with an adjustable temperature-controlled soldering iron station; following this, an annealing process was carried out to ease any material stresses. Poisson's ratio was estimated, and the elastic and plastic deformation behaviours of the auxetic polypropylene film were evaluated by applying repetitive uniaxial tensile loads. Observation of the structure showed that it was initially elastically deformed; thereafter, plastic deformation occurred. This research discusses a novel way of fabricating an auxetic structure (rotating squares connected through hinges) on polypropylene films, estimating the Poisson's ratio and evaluating the plastic deformation relevant to the expansion behaviour of an auxetic stent within the oesophageal lumen.

  10. On the fractal characterization of Paretian Poisson processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo I.; Sokolov, Igor M.

    2012-06-01

    Paretian Poisson processes are Poisson processes which are defined on the positive half-line, have maximal points, and are quantified by power-law intensities. Paretian Poisson processes are elemental in statistical physics, and are the bedrock of a host of power-law statistics ranging from Pareto's law to anomalous diffusion. In this paper we establish evenness-based fractal characterizations of Paretian Poisson processes. Considering an array of socioeconomic evenness-based measures of statistical heterogeneity, we show that: amongst the realm of Poisson processes which are defined on the positive half-line, and have maximal points, Paretian Poisson processes are the unique class of 'fractal processes' exhibiting scale-invariance. The results established in this paper are diametric to previous results asserting that the scale-invariance of Poisson processes-with respect to physical randomness-based measures of statistical heterogeneity-is characterized by exponential Poissonian intensities.

  11. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  12. Clustered mixed nonhomogeneous Poisson process spline models for the analysis of recurrent event panel data.

    PubMed

    Nielsen, J D; Dean, C B

    2008-09-01

    A flexible semiparametric model for analyzing longitudinal panel count data arising from mixtures is presented. Panel count data refers here to count data on recurrent events collected as the number of events that have occurred within specific follow-up periods. The model assumes that the counts for each subject are generated by mixtures of nonhomogeneous Poisson processes with smooth intensity functions modeled with penalized splines. Time-dependent covariate effects are also incorporated into the process intensity using splines. Discrete mixtures of these nonhomogeneous Poisson process spline models extract functional information from underlying clusters representing hidden subpopulations. The motivating application is an experiment to test the effectiveness of pheromones in disrupting the mating pattern of the cherry bark tortrix moth. Mature moths arise from hidden, but distinct, subpopulations and monitoring the subpopulation responses was of interest. Within-cluster random effects are used to account for correlation structures and heterogeneity common to this type of data. An estimating equation approach to inference requiring only low moment assumptions is developed and the finite sample properties of the proposed estimating functions are investigated empirically by simulation.
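As a hedged sketch of the underlying process model (not the authors' spline estimation method), a nonhomogeneous Poisson process with a smooth intensity can be simulated by Lewis-Shedler thinning; the intensity function below is an arbitrary stand-in for a fitted spline, and all names are illustrative.

```python
import math
import random

def simulate_nhpp(rate, rate_max, t_end, rng):
    """Simulate a nonhomogeneous Poisson process on [0, t_end] by
    Lewis-Shedler thinning: draw homogeneous candidate arrivals at
    rate_max and accept each with probability rate(t) / rate_max."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)           # candidate inter-arrival
        if t > t_end:
            return times
        if rng.random() < rate(t) / rate_max:    # thinning step
            times.append(t)

rng = random.Random(42)
events = simulate_nhpp(lambda t: 2.0 + math.sin(t), 3.0, 100.0, rng)
```

Panel counts as described in the abstract would then be the numbers of simulated events falling in each follow-up interval.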

  13. A comparative study of a theoretical neural net model with MEG data from epileptic patients and normal individuals.

    PubMed

    Kotini, A; Anninos, P; Anastasiadis, A N; Tamiolakis, D

    2005-09-07

    The aim of this study was to compare a theoretical neural net model with MEG data from epileptic patients and normal individuals. Our experimental study population included 10 epilepsy sufferers and 10 healthy subjects. The recordings were obtained with a one-channel biomagnetometer SQUID in a magnetically shielded room. Using the method of χ²-fitting, it was found that the MEG amplitudes in epileptic patients and normal subjects had Poisson and Gauss distributions, respectively. The Poisson connectivity derived from the theoretical neural model represents the state of epilepsy, whereas the Gauss connectivity represents normal behavior. The MEG data obtained from epileptic areas had higher amplitudes than the MEG from normal regions and were comparable with the theoretical magnetic fields from Poisson and Gauss distributions. Furthermore, the magnetic field derived from the theoretical model had amplitudes of the same order as the recorded MEG from the 20 participants. The agreement of the theoretical neural net model with real MEG data provides information about the structure of brain function in epileptic and normal states, encouraging further studies to be conducted.
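The χ²-fitting step can be sketched generically (this is a plain Pearson goodness-of-fit statistic with illustrative names and parameters, not the authors' exact procedure): histogram the observed counts, compare to Poisson expectations, and the statistic grows sharply when the wrong law or rate is assumed.

```python
import math
import random

def poisson_sample(lam, rng):
    """Sample Poisson(lam) by counting unit-rate arrivals before time lam."""
    n, t = 0, rng.expovariate(1.0)
    while t < lam:
        n += 1
        t += rng.expovariate(1.0)
    return n

def chi2_poisson(counts, lam, max_k):
    """Pearson chi-square statistic comparing the histogram of `counts`
    to Poisson(lam) expectations; bins 0..max_k with the tail pooled."""
    n = len(counts)
    obs = [0] * (max_k + 2)
    for c in counts:
        obs[min(c, max_k + 1)] += 1
    pmf = [math.exp(-lam) * lam ** k / math.factorial(k) for k in range(max_k + 1)]
    expected = [n * p for p in pmf] + [n * max(1.0 - sum(pmf), 1e-12)]
    return sum((o - e) ** 2 / e for o, e in zip(obs, expected))

rng = random.Random(11)
data = [poisson_sample(4.0, rng) for _ in range(2000)]
fit_right = chi2_poisson(data, 4.0, 10)   # small: data really are Poisson(4)
fit_wrong = chi2_poisson(data, 8.0, 10)   # large: wrong rate is rejected
```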

  14. Noncommutative Line Bundles and Gerbes

    NASA Astrophysics Data System (ADS)

    Jurčo, B.

    We introduce noncommutative line bundles and gerbes within the framework of deformation quantization. The Seiberg-Witten map is used to construct the corresponding noncommutative Čech cocycles. Morita equivalence of star products and quantization of twisted Poisson structures are discussed from this point of view.

  15. A Three-dimensional Polymer Scaffolding Material Exhibiting a Zero Poisson's Ratio.

    PubMed

    Soman, Pranav; Fozdar, David Y; Lee, Jin Woo; Phadke, Ameya; Varghese, Shyni; Chen, Shaochen

    2012-05-14

    Poisson's ratio describes the degree to which a material contracts (expands) transversally when axially strained. A material with a zero Poisson's ratio does not transversally deform in response to an axial strain (stretching). In tissue engineering applications, scaffolding having a zero Poisson's ratio (ZPR) may be more suitable for emulating the behavior of native tissues and accommodating and transmitting forces to the host tissue site during wound healing (or tissue regrowth). For example, scaffolding with a zero Poisson's ratio may be beneficial in the engineering of cartilage, ligament, corneal, and brain tissues, which are known to possess Poisson's ratios of nearly zero. Here, we report a 3D biomaterial constructed from polyethylene glycol (PEG) exhibiting in-plane Poisson's ratios of zero for large values of axial strain. We use digital micro-mirror device projection printing (DMD-PP) to create single- and double-layer scaffolds composed of semi re-entrant pores whose arrangement and deformation mechanisms contribute to the zero Poisson's ratio. Strain experiments prove the zero-Poisson's-ratio behavior of the scaffolds and show that the addition of layers does not change the Poisson's ratio. Human mesenchymal stem cells (hMSCs) cultured on biomaterials with zero Poisson's ratio demonstrate the feasibility of utilizing these novel materials for biological applications which require little to no transverse deformation resulting from axial strains. The techniques used in this work allow Poisson's ratio to be both scale-independent and independent of the choice of strut material for strains in the elastic regime, and therefore ZPR behavior can be imparted to a variety of photocurable biomaterials.
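The defining relation in the first sentence is just a ratio of measured strains; a minimal sketch (the strain values are made up for illustration):

```python
def poissons_ratio(axial_strain, transverse_strain):
    """Poisson's ratio: the negative ratio of transverse to axial strain.
    Zero transverse strain under stretching gives a zero Poisson's ratio (ZPR)."""
    return -transverse_strain / axial_strain

nu_zpr = poissons_ratio(0.10, 0.0)             # ZPR scaffold: no lateral deformation
nu_conventional = poissons_ratio(0.10, -0.03)  # lateral contraction: nu = 0.3
```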

  16. From Loss of Memory to Poisson.

    ERIC Educational Resources Information Center

    Johnson, Bruce R.

    1983-01-01

    A way of presenting the Poisson process and deriving the Poisson distribution for upper-division courses in probability or mathematical statistics is presented. The main feature of the approach lies in the formulation of Poisson postulates with immediate intuitive appeal. (MNS)
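The "loss of memory" route can be checked numerically: if inter-event gaps are i.i.d. exponential (the memoryless law), the count in a fixed window is Poisson, so its mean and variance coincide. A small simulation sketch with illustrative names and parameters:

```python
import random

def count_in_window(rate, t_end, rng):
    """Number of arrivals in [0, t_end) when inter-arrival gaps are
    i.i.d. Exponential(rate) -- the memoryless ('loss of memory')
    construction of the Poisson process."""
    t, n = rng.expovariate(rate), 0
    while t < t_end:
        n += 1
        t += rng.expovariate(rate)
    return n

rng = random.Random(0)
counts = [count_in_window(4.0, 1.0, rng) for _ in range(20000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# For a Poisson(4) count, the mean and variance should both be close to 4.
```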

  17. Pumped shot noise in adiabatically modulated graphene-based double-barrier structures.

    PubMed

    Zhu, Rui; Lai, Maoli

    2011-11-16

    Quantum pumping processes are accompanied by considerable quantum noise. Based on the scattering approach, we investigated the pumped shot noise properties in adiabatically modulated graphene-based double-barrier structures. It is found that compared with the Poisson processes, the pumped shot noise is dramatically enhanced where the dc pumped current changes flow direction, which demonstrates the effect of the Klein paradox.
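A standard way to quantify "compared with the Poisson processes" is the Fano factor F = Var/Mean of transmitted-charge counts: F = 1 for Poissonian shot noise, F > 1 for the enhancement described. The toy model below (independent transmission attempts, with a randomly switching transmission probability to mimic bunching) is an illustration of the statistic, not the paper's scattering calculation.

```python
import random

def fano_factor(counts):
    """Fano factor F = variance / mean of event counts; F = 1 marks
    Poissonian shot noise, F > 1 a super-Poissonian enhancement."""
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return var / mean

rng = random.Random(1)

def window_count(p, n_tries):
    """Transmitted count in one window: n_tries independent attempts,
    each transmitted with probability p (a toy transport model)."""
    return sum(rng.random() < p for _ in range(n_tries))

steady = [window_count(0.02, 500) for _ in range(2000)]                       # F near 1
bunched = [window_count(rng.choice([0.01, 0.03]), 500) for _ in range(2000)]  # F > 1
```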

  18. Pumped shot noise in adiabatically modulated graphene-based double-barrier structures

    NASA Astrophysics Data System (ADS)

    Zhu, Rui; Lai, Maoli

    2011-11-01

    Quantum pumping processes are accompanied by considerable quantum noise. Based on the scattering approach, we investigated the pumped shot noise properties in adiabatically modulated graphene-based double-barrier structures. It is found that compared with the Poisson processes, the pumped shot noise is dramatically enhanced where the dc pumped current changes flow direction, which demonstrates the effect of the Klein paradox.

  19. On the n-symplectic structure of faithful irreducible representations

    NASA Astrophysics Data System (ADS)

    Norris, L. K.

    2017-04-01

    Each faithful irreducible representation of an N-dimensional vector space V1 on an n-dimensional vector space V2 is shown to define a unique irreducible n-symplectic structure on the product manifold V1×V2 . The basic details of the associated Poisson algebra are developed for the special case N = n2, and 2n-dimensional symplectic submanifolds are shown to exist.

  20. Nonlocal Poisson-Fermi model for ionic solvent.

    PubMed

    Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob

    2016-07-01

    We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution.

  1. Nonlinear Poisson Equation for Heterogeneous Media

    PubMed Central

    Hu, Langhua; Wei, Guo-Wei

    2012-01-01

    The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intensive charges and possibly nonlinear, anisotropic, and heterogeneous media. A variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation to solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and by the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical predictions, experimental measurements, and those obtained from other theoretical methods in the literature. The good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. PMID:22947937
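For orientation, the linear special case is easy to solve numerically. This is a generic 1-D finite-difference sketch with a manufactured solution (names and discretization choices are ours, not the authors' solver): -u'' = f on (0, 1) with homogeneous Dirichlet boundaries.

```python
import math

def solve_poisson_1d(f, n):
    """Solve -u'' = f on (0, 1), u(0) = u(1) = 0, with second-order
    finite differences; the tridiagonal system
    -u[i-1] + 2 u[i] - u[i+1] = h^2 f(x_i)
    is solved by the Thomas algorithm in O(n)."""
    h = 1.0 / n
    x = [i * h for i in range(1, n)]
    rhs = [f(xi) * h * h for xi in x]
    c = [0.0] * (n - 1)                  # modified superdiagonal
    d = [0.0] * (n - 1)                  # modified right-hand side
    c[0], d[0] = -0.5, rhs[0] / 2.0
    for i in range(1, n - 1):            # forward elimination sweep
        denom = 2.0 + c[i - 1]
        c[i] = -1.0 / denom
        d[i] = (rhs[i] + d[i - 1]) / denom
    u = [0.0] * (n - 1)
    u[-1] = d[-1]
    for i in range(n - 3, -1, -1):       # back substitution
        u[i] = d[i] - c[i] * u[i + 1]
    return x, u

# Manufactured solution: u = sin(pi x) solves -u'' = pi^2 sin(pi x).
x, u = solve_poisson_1d(lambda t: math.pi ** 2 * math.sin(math.pi * t), 100)
err = max(abs(ui - math.sin(math.pi * xi)) for xi, ui in zip(x, u))
```

The error decreases as O(h²), which is the baseline any nonlinear extension would be benchmarked against.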

  2. Coupling Poisson rectangular pulse and multiplicative microcanonical random cascade models to generate sub-daily precipitation timeseries

    NASA Astrophysics Data System (ADS)

    Pohle, Ina; Niebisch, Michael; Müller, Hannes; Schümberg, Sabine; Zha, Tingting; Maurer, Thomas; Hinz, Christoph

    2018-07-01

    To simulate the impacts of within-storm rainfall variability on fast hydrological processes, long precipitation time series with high temporal resolution are required. Because of the limited availability of observed data, such time series are typically obtained from stochastic models. However, most existing rainfall models are limited in their ability to conserve the rainfall event statistics which are relevant for hydrological processes. Poisson rectangular pulse models are widely applied to generate long time series of alternating precipitation event durations and mean intensities, as well as interstorm period durations. Multiplicative microcanonical random cascade (MRC) models are used to disaggregate precipitation time series from coarse to fine temporal resolution. To overcome the inconsistencies between the temporal structure of the Poisson rectangular pulse model and the MRC model, we developed a new coupling approach by introducing two modifications to the MRC model. These modifications comprise (a) a modified cascade model ("constrained cascade") which preserves the event durations generated by the Poisson rectangular pulse model by constraining the first and last interval of a precipitation event to contain precipitation and (b) continuous sigmoid functions of the multiplicative weights to consider the scale-dependency in the disaggregation of precipitation events of different durations. The constrained cascade model was evaluated in its ability to disaggregate observed precipitation events in comparison to existing MRC models. For that, we used a 20-year record of hourly precipitation at six stations across Germany. The constrained cascade model showed pronouncedly better agreement with the observed data in terms of both the temporal pattern of the precipitation time series (e.g. the dry and wet spell durations and autocorrelations) and event characteristics (e.g. intra-event intermittency and intensity fluctuation within events). The constrained cascade model also slightly outperformed the other MRC models with respect to the intensity-frequency relationship. To assess the performance of the coupled Poisson rectangular pulse and constrained cascade model, precipitation events were stochastically generated by the Poisson rectangular pulse model and then disaggregated by the constrained cascade model. We found that the coupled model performs satisfactorily in terms of the temporal pattern of the precipitation time series, event characteristics and the intensity-frequency relationship.
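A Poisson rectangular pulse generator of the kind described can be sketched in a few lines; the exponential choices for gaps, durations and intensities and all names below are illustrative assumptions, not the paper's calibrated parameterization.

```python
import random

def poisson_rectangular_pulses(mean_dry, mean_dur, mean_int, t_end, rng):
    """Generate a sequence of rectangular pulse events as
    (start, duration, mean_intensity) tuples: exponential interstorm
    gaps alternate with exponential-duration constant-intensity events."""
    events, t = [], 0.0
    while True:
        t += rng.expovariate(1.0 / mean_dry)       # interstorm period
        dur = rng.expovariate(1.0 / mean_dur)      # event duration
        if t + dur > t_end:
            return events
        events.append((t, dur, rng.expovariate(1.0 / mean_int)))
        t += dur

rng = random.Random(7)
storms = poisson_rectangular_pulses(20.0, 5.0, 2.5, 10_000.0, rng)  # hours
```

Each event would then be handed to a cascade model for disaggregation to sub-daily intervals.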

  3. Obstructions for twist star products

    NASA Astrophysics Data System (ADS)

    Bieliavsky, Pierre; Esposito, Chiara; Waldmann, Stefan; Weber, Thomas

    2018-05-01

    In this short note, we point out that not every star product is induced by a Drinfel'd twist by showing that not every Poisson structure is induced by a classical r-matrix. Examples include the higher genus symplectic Pretzel surfaces and the symplectic sphere S^2.

  4. Saint-Venant end effects for materials with negative Poisson's ratios

    NASA Technical Reports Server (NTRS)

    Lakes, R. S.

    1992-01-01

    Results are presented from an analysis of Saint-Venant end effects for materials with negative Poisson's ratio. Examples are presented showing that slow decay of end stress occurs in circular cylinders of negative Poisson's ratio, whereas a sandwich panel containing rigid face sheets and a compliant core exhibits no anomalous effects for negative Poisson's ratio (but exhibits slow stress decay for core Poisson's ratios approaching 0.5). In sandwich panels with stiff but not perfectly rigid face sheets, a negative Poisson's ratio results in end stress decay which is faster than it would be otherwise. It is suggested that the slow decay previously predicted for sandwich strips in plane deformation as a result of the geometry can be mitigated by the use of a negative Poisson's ratio material for the core.

  5. Poisson's ratio of fiber-reinforced composites

    NASA Astrophysics Data System (ADS)

    Christiansson, Henrik; Helsing, Johan

    1996-05-01

    Poisson's ratio flow diagrams, that is, the Poisson's ratio versus the fiber fraction, are obtained numerically for hexagonal arrays of elastic circular fibers in an elastic matrix. High numerical accuracy is achieved through the use of an interface integral equation method. Questions concerning fixed point theorems and the validity of existing asymptotic relations are investigated and partially resolved. Our findings for the transverse effective Poisson's ratio, together with earlier results for random systems by other authors, make it possible to formulate a general statement for Poisson's ratio flow diagrams: For composites with circular fibers and where the phase Poisson's ratios are equal to 1/3, the system with the lowest stiffness ratio has the highest Poisson's ratio. For other choices of the elastic moduli for the phases, no simple statement can be made.

  6. Semi-metallic Be5C2 monolayer global minimum with quasi-planar pentacoordinate carbons and negative Poisson's ratio.

    PubMed

    Wang, Yu; Li, Feng; Li, Yafei; Chen, Zhongfang

    2016-05-03

    Designing new materials with novel topological properties and reduced dimensionality is always desirable for material innovation. Here we report the design of a two-dimensional material, namely a Be5C2 monolayer, on the basis of density functional theory computations. In the Be5C2 monolayer, each carbon atom binds with five beryllium atoms in almost the same plane, forming a quasi-planar pentacoordinate carbon moiety. The Be5C2 monolayer appears to have good stability, as revealed by its moderate cohesive energy, positive phonon modes and high melting point. It is the lowest-energy structure with the Be5C2 stoichiometry in two-dimensional space and therefore holds some promise of being realized experimentally. The Be5C2 monolayer is a gapless semiconductor with a Dirac-like point in the band structure and also has an unusual negative Poisson's ratio. If synthesized, the Be5C2 monolayer may find applications in electronics and mechanics.

  7. Lie-Hamilton systems on the plane: Properties, classification and applications

    NASA Astrophysics Data System (ADS)

    Ballesteros, A.; Blasco, A.; Herranz, F. J.; de Lucas, J.; Sardón, C.

    2015-04-01

    We study Lie-Hamilton systems on the plane, i.e. systems of first-order differential equations describing the integral curves of a t-dependent vector field taking values in a finite-dimensional real Lie algebra of planar Hamiltonian vector fields with respect to a Poisson structure. We start with the local classification of finite-dimensional real Lie algebras of vector fields on the plane obtained in González-López, Kamran, and Olver (1992) [23] and we interpret their results as a local classification of Lie systems. By determining which of these real Lie algebras consist of Hamiltonian vector fields relative to a Poisson structure, we provide the complete local classification of Lie-Hamilton systems on the plane. We present and study through our results new Lie-Hamilton systems of interest which are used to investigate relevant non-autonomous differential equations, e.g. we get explicit local diffeomorphisms between such systems. We also analyse biomathematical models, the Milne-Pinney equations, second-order Kummer-Schwarz equations, complex Riccati equations and Buchdahl equations.

  8. Statistical Analyses of Raw Material Data for MTM45-1/CF7442A-36% RW: CMH Cure Cycle

    NASA Technical Reports Server (NTRS)

    Coroneos, Rula; Pai, Shantaram S.; Murthy, Pappu

    2013-01-01

    This report describes statistical characterization of physical properties of the composite material system MTM45-1/CF7442A, which has been tested and is currently being considered for use on spacecraft structures. This composite system is made of 6K plain weave graphite fibers in a highly toughened resin system. This report summarizes the distribution types and statistical details of the tests and the conditions for the experimental data generated. These distributions will be used in multivariate regression analyses to help determine material and design allowables for similar material systems and to establish a procedure for other material systems. Additionally, these distributions will be used in future probabilistic analyses of spacecraft structures. The specific properties that are characterized are the ultimate strength, modulus, and Poisson's ratio by using a commercially available statistical package. Results are displayed using graphical and semigraphical methods and are included in the accompanying appendixes.

  9. Characterization of Nonhomogeneous Poisson Processes Via Moment Conditions.

    DTIC Science & Technology

    1986-08-01

    Poisson processes play an important role in many fields. The Poisson process is one of the simplest counting processes and is a building block for...place of independent increments. This provides a somewhat different viewpoint for examining Poisson processes. In addition, new characterizations for

  10. Poisson Mixture Regression Models for Heart Disease Prediction.

    PubMed

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise for the clusters available. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model.
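The probability mass functions underlying the compared models are simple to state. A minimal sketch (parameter estimation by EM is omitted, and the names and parameter values are hypothetical):

```python
import math

def zip_pmf(k, lam, pi_zero):
    """Zero-inflated Poisson: a structural zero with probability pi_zero,
    otherwise a Poisson(lam) count."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi_zero * (k == 0) + (1.0 - pi_zero) * poisson

def poisson_mixture_pmf(k, lams, weights):
    """Finite Poisson mixture: weighted sum of component Poisson pmfs,
    one component per latent risk cluster."""
    return sum(w * math.exp(-l) * l ** k / math.factorial(k)
               for l, w in zip(lams, weights))
```

In a concomitant-variable model, the weights themselves would additionally depend on covariates.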

  11. Constructions and classifications of projective Poisson varieties.

    PubMed

    Pym, Brent

    2018-01-01

    This paper is intended both as an introduction to the algebraic geometry of holomorphic Poisson brackets, and as a survey of results on the classification of projective Poisson manifolds that have been obtained in the past 20 years. It is based on the lecture series delivered by the author at the Poisson 2016 Summer School in Geneva. The paper begins with a detailed treatment of Poisson surfaces, including adjunction, ruled surfaces and blowups, and leading to a statement of the full birational classification. We then describe several constructions of Poisson threefolds, outlining the classification in the regular case, and the case of rank-one Fano threefolds (such as projective space). Following a brief introduction to the notion of Poisson subspaces, we discuss Bondal's conjecture on the dimensions of degeneracy loci on Poisson Fano manifolds. We close with a discussion of log symplectic manifolds with simple normal crossings degeneracy divisor, including a new proof of the classification in the case of rank-one Fano manifolds.

  12. Poisson Mixture Regression Models for Heart Disease Prediction

    PubMed Central

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise for the clusters available. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611

  13. Constructions and classifications of projective Poisson varieties

    NASA Astrophysics Data System (ADS)

    Pym, Brent

    2018-03-01

    This paper is intended both as an introduction to the algebraic geometry of holomorphic Poisson brackets, and as a survey of results on the classification of projective Poisson manifolds that have been obtained in the past 20 years. It is based on the lecture series delivered by the author at the Poisson 2016 Summer School in Geneva. The paper begins with a detailed treatment of Poisson surfaces, including adjunction, ruled surfaces and blowups, and leading to a statement of the full birational classification. We then describe several constructions of Poisson threefolds, outlining the classification in the regular case, and the case of rank-one Fano threefolds (such as projective space). Following a brief introduction to the notion of Poisson subspaces, we discuss Bondal's conjecture on the dimensions of degeneracy loci on Poisson Fano manifolds. We close with a discussion of log symplectic manifolds with simple normal crossings degeneracy divisor, including a new proof of the classification in the case of rank-one Fano manifolds.

  14. Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method

    PubMed Central

    Zhang, Tingting; Kou, S. C.

    2010-01-01

    Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure. PMID:21258615

  15. Sparse representation and dictionary learning penalized image reconstruction for positron emission tomography.

    PubMed

    Chen, Shuhang; Liu, Huafeng; Shi, Pengcheng; Chen, Yunmei

    2015-01-21

    Accurate and robust reconstruction of the radioactivity concentration is of great importance in positron emission tomography (PET) imaging. Given the Poisson nature of photon-counting measurements, we present a reconstruction framework that integrates a sparsity penalty on a dictionary into a maximum likelihood estimator. Patch sparsity on a dictionary provides the regularization for our effort, and iterative procedures are used to solve the maximum likelihood function formulated on Poisson statistics. Specifically, in our formulation, a dictionary could be trained on CT images to provide intrinsic anatomical structures for the reconstructed images, or adaptively learned from the noisy measurements of PET. The accuracy of the strategy is demonstrated with very promising results from Monte-Carlo simulations and real data.
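The Poisson maximum-likelihood backbone of such reconstructions is the classical MLEM multiplicative update; the tiny dense sketch below shows plain MLEM on a toy 2x2 system (the paper's dictionary-sparsity penalty is deliberately omitted, and all names and numbers are illustrative).

```python
def mlem(A, y, n_iter=200):
    """MLEM iterations for the Poisson model y ~ Poisson(A x):
    x <- x * A^T (y / (A x)) / (A^T 1), the multiplicative EM update."""
    m, n = len(A), len(A[0])
    x = [1.0] * n
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]   # A^T 1
    for _ in range(n_iter):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / Ax[i] for i in range(m)]
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
        x = [x[j] * back[j] / sens[j] for j in range(n)]
    return x

# Tiny noise-free example: two detector bins, two image pixels.
A = [[0.9, 0.1],
     [0.2, 0.8]]
x_true = [4.0, 6.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(2)]
x_hat = mlem(A, y)
```

With consistent noise-free data the iterates recover x_true; a penalized variant would modify the update with the gradient of the regularizer.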

  16. Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method.

    PubMed

    Zhang, Tingting; Kou, S C

    2010-01-01

    Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure.

  17. Extended generalized geometry and a DBI-type effective action for branes ending on branes

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav; Schupp, Peter; Vysoký, Jan

    2014-08-01

    Starting from the Nambu-Goto bosonic membrane action, we develop a geometric description suitable for p-brane backgrounds. With tools of generalized geometry we derive the pertinent generalization of the string open-closed relations to the p-brane case. Nambu-Poisson structures are used in this context to generalize the concept of semi-classical noncommutativity of D-branes governed by a Poisson tensor. We find a natural description of the correspondence of recently proposed commutative and noncommutative versions of an effective action for p-branes ending on a p'-brane. We calculate the power series expansion of the action in background independent gauge. Leading terms in the double scaling limit are given by a generalization of a (semi-classical) matrix model.

  18. Simultaneous reconstruction and segmentation for dynamic SPECT imaging

    NASA Astrophysics Data System (ADS)

    Burger, Martin; Rossmanith, Carolin; Zhang, Xiaoqun

    2016-10-01

    This work deals with the reconstruction of dynamic images that incorporate characteristic dynamics in certain subregions, as arising for the kinetics of many tracers in emission tomography (SPECT, PET). We make use of a basis function approach for the unknown tracer concentration by assuming that the region of interest can be divided into subregions with spatially constant concentration curves. Applying a regularised variational framework reminiscent of the Chan-Vese model for image segmentation, we simultaneously reconstruct both the labelling functions of the subregions and the subconcentrations within each region. Our particular focus is on applications in SPECT with the Poisson noise model, resulting in a Kullback-Leibler data fidelity in the variational approach. We present a detailed analysis of the proposed variational model and prove existence of minimisers as well as error estimates. The latter apply to a more general class of problems and generalise existing results in the literature, since we deal with a nonlinear forward operator and a nonquadratic data fidelity. A computational algorithm based on alternating minimisation and splitting techniques is developed for the solution of the problem and tested on appropriately designed synthetic data sets. For these, we compare the results to those of standard EM reconstructions and investigate the effects of Poisson noise in the data.

  19. Grid-Based Surface Generalized Born Model for Calculation of Electrostatic Binding Free Energies.

    PubMed

    Forouzesh, Negin; Izadi, Saeed; Onufriev, Alexey V

    2017-10-23

    Fast and accurate calculation of solvation free energies is central to many applications, such as rational drug design. In this study, we present a grid-based molecular surface implementation of the "R6" flavor of the generalized Born (GB) implicit solvent model, named GBNSR6. The speed, accuracy relative to numerical Poisson-Boltzmann treatment, and sensitivity to grid surface parameters are tested on a set of 15 small protein-ligand complexes and a set of biomolecules in the range of 268 to 25099 atoms. Our results demonstrate that the proposed model provides a relatively successful compromise between the speed and accuracy of computing polar components of the solvation free energies (ΔG_pol) and binding free energies (ΔΔG_pol). The model tolerates a relatively coarse grid size h = 0.5 Å, where the grid artifact error in computing ΔΔG_pol remains in the range of k_B T ∼ 0.6 kcal/mol. The estimated ΔΔG_pol values are well correlated (r² = 0.97) with the numerical Poisson-Boltzmann reference, while showing virtually no systematic bias and RMSE = 1.43 kcal/mol. The grid-based GBNSR6 model is available in the Amber (AmberTools) package of molecular simulation programs.

  20. The application of wavelet denoising in material discrimination system

    NASA Astrophysics Data System (ADS)

    Fu, Kenneth; Ranta, Dale; Guest, Clark; Das, Pankaj

    2010-01-01

    Recently, it has become desirable for cargo inspection imaging systems to provide a material discrimination function. This is done by scanning the cargo container with x-rays at two different energy levels. The ratio of attenuations of the two energy scans can provide information on the composition of the material. However, with the statistical error from noise, the accuracy of such systems can be low. Because the moving source emits two energies of x-rays alternately, images from the two scans will not be identical. That means edges of objects in the two images are not perfectly aligned. Moreover, digitization creates blurry-edge artifacts. Different energy x-rays produce different edge spread functions. These combined effects contribute to a source of false classification, namely the "edge effect." Other types of false classification are caused by noise, mainly Poisson noise associated with photons. The Poisson noise in x-ray images can be dealt with using either a Wiener filter or a wavelet shrinkage denoising approach. In this paper, we propose a method that uses the wavelet shrinkage denoising approach to enhance the performance of the material identification system. Test results show that this wavelet-based approach improves performance in object detection and eliminates false positives due to the edge effect.

  1. Heavy Ion Irradiation Fluence Dependence for Single-Event Upsets in a NAND Flash Memory

    NASA Technical Reports Server (NTRS)

    Chen, Dakai; Wilcox, Edward; Ladbury, Raymond L.; Kim, Hak; Phan, Anthony; Seidleck, Christina; Label, Kenneth

    2016-01-01

    We investigated the single-event effect (SEE) susceptibility of the Micron 16 nm NAND flash, and found that the single-event upset (SEU) cross section varied inversely with cumulative fluence. We attribute the effect to the variable upset sensitivities of the memory cells. Furthermore, the effect generally impacts only single-cell upsets; the rate of multiple-bit upsets remained relatively constant with fluence. The current test standards and procedures assume that SEUs follow a Poisson process and do not take into account the variability in the error rate with fluence. Therefore, traditional SEE testing techniques may underestimate the on-orbit event rate for a device with variable upset sensitivity.

  2. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.

    PubMed

    de Nijs, Robin

    2015-07-21

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and confirmed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images. It correctly simulates the statistical properties, even when the images are rounded off.
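
    The statistical property underlying Poisson resampling is binomial thinning: if a pixel count N is Poisson(λ) and each of its N events is kept independently with probability p, the kept count is exactly Poisson(pλ). A minimal NumPy sketch (image size, count level, and seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def reduce_counts(image, fraction=0.5, rng=rng):
    """Simulate a reduced-count image from full-count Poisson data.

    Thinning property: if N ~ Poisson(lam) and each of the N events is kept
    independently with probability p, the kept count is exactly
    Poisson(p * lam). Drawing Binomial(n=pixel count, p=fraction) per pixel
    therefore preserves Poisson statistics, with no rounding step needed.
    """
    return rng.binomial(image.astype(np.int64), fraction)

full = rng.poisson(50.0, size=(256, 256))   # full-count image, mean 50 per pixel
half = reduce_counts(full)                  # half-count image, mean 25 per pixel
```

    Because the resampled image is integer-valued by construction, it is immune to the rounding-off problem discussed above.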

  3. Systematic design of 3D auxetic lattice materials with programmable Poisson's ratio for finite strains

    NASA Astrophysics Data System (ADS)

    Wang, Fengwen

    2018-05-01

    This paper presents a systematic approach for designing 3D auxetic lattice materials, which exhibit constant negative Poisson's ratios over large strain intervals. A unit cell model mimicking tensile tests is established, and based on the proposed model the secant Poisson's ratio is defined as the negative ratio between the lateral and the longitudinal engineering strains. The optimization problem for designing a material unit cell with a target Poisson's ratio is formulated to minimize the average lateral engineering stresses under the prescribed deformations. Numerical results demonstrate that 3D auxetic lattice materials with constant Poisson's ratios can be achieved by the proposed optimization formulation and that two sets of material architectures are obtained by imposing different symmetry on the unit cell. Moreover, inspired by the topology-optimized material architecture, a subsequent shape optimization is proposed by parametrizing material architectures using super-ellipsoids. By designing two geometrical parameters, simple optimized material microstructures with different target Poisson's ratios are obtained. By interpolating these two parameters as polynomial functions of Poisson's ratios, material architectures for any Poisson's ratio in the interval ν ∈ [-0.78, 0.00] are explicitly presented. Numerical evaluations show that interpolated auxetic lattice materials exhibit constant Poisson's ratios in the target strain interval of [0.00, 0.20] and that 3D auxetic lattice material architectures with programmable Poisson's ratio are achievable.

  4. Nonlinear Poisson equation for heterogeneous media.

    PubMed

    Hu, Langhua; Wei, Guo-Wei

    2012-08-22

    The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intensive charges and possible nonlinear, anisotropic, and heterogeneous media. A variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation to solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and by the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical predictions, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  5. Non-linear Mechanics of Three-dimensional Architected Materials; Design of Soft and Functional Systems and Structures

    NASA Astrophysics Data System (ADS)

    Babaee, Sahab

    In the search for materials with new properties, there have been significant advances in recent years aimed at the construction of architected materials whose behavior is governed by structure, rather than composition. Through careful design of the material's architecture, new mechanical properties have been demonstrated, including negative Poisson's ratio, high stiffness-to-weight ratio and mechanical cloaking. However, most of the proposed architected materials (also known as mechanical metamaterials) have a unique structure that cannot be reconfigured after fabrication, making them suitable only for a specific task. This thesis focuses on the design of architected materials that take advantage of applied large deformation to enhance their functionality. Mechanical instabilities, which have traditionally been viewed as a failure mode, with research focusing on how to avoid them, are exploited to achieve novel and tunable functionalities. In particular, I demonstrate the design of mechanical metamaterials with tunable negative Poisson's ratio, adaptive phononic band gaps, acoustic switches, and reconfigurable origami-inspired waveguides. Remarkably, due to the large-deformation capability and full reversibility of soft materials, the responses of the proposed designs are reversible, repeatable, and scale independent. The results presented here pave the way for the design of a new class of soft, active, adaptive, programmable and tunable structures and systems with unprecedented performance and improved functionalities.

  6. Robust adaptive 3-D segmentation of vessel laminae from fluorescence confocal microscope images and parallel GPU implementation.

    PubMed

    Narayanaswamy, Arunachalam; Dwarakapuram, Saritha; Bjornsson, Christopher S; Cutler, Barbara M; Shain, William; Roysam, Badrinath

    2010-03-01

    This paper presents robust 3-D algorithms to segment vasculature that is imaged by labeling laminae, rather than the lumenal volume. The signal is weak, sparse, noisy, nonuniform, low-contrast, and exhibits gaps and spectral artifacts, so adaptive thresholding and Hessian filtering based methods are not effective. The structure deviates from a tubular geometry, so tracing algorithms are not effective. We propose a four-step approach. The first step detects candidate voxels using a robust hypothesis test based on a model that assumes Poisson noise and locally planar geometry. The second step performs an adaptive region growth to extract weakly labeled and fine vessels while rejecting spectral artifacts. The third step constructs an accurate mesh representation using marching tetrahedra, volume-preserving smoothing, and adaptive decimation algorithms, enabling interactive visualization and estimation of features such as statistical confidence, local curvature, local thickness, and local normals. The final step estimates vessel centerlines using a ray-casting and vote-accumulation algorithm, enabling topological analysis and efficient validation. Our algorithm lends itself to parallel processing, and yielded an 8× speedup on a graphics processor (GPU). On synthetic data, our meshes had average error-per-face (EPF) values of 0.1-1.6 voxels per mesh face for peak signal-to-noise ratios from 110 dB down to 28 dB. Separately, after decimating the mesh to less than 1% of its original size, the EPF remained below 1 voxel per face. When validated on real datasets, the average recall and precision values were found to be 94.66% and 94.84%, respectively.

  7. Bayesian multi-scale smoothing of photon-limited images with applications to astronomy and medicine

    NASA Astrophysics Data System (ADS)

    White, John

    Multi-scale models for smoothing Poisson signals or images have gained much attention over the past decade. A new Bayesian model is developed using the concept of the Chinese restaurant process to find structures in two-dimensional images when performing image reconstruction or smoothing. This new model performs very well when compared to other leading methodologies for the same problem. It is developed and evaluated theoretically and empirically throughout Chapter 2. The newly developed Bayesian model is extended to three-dimensional images in Chapter 3. The third dimension has numerous different applications, such as different energy spectra, another spatial index, or possibly a temporal dimension. Empirically, this method shows promise in reducing error with the use of simulation studies. A further development removes background noise in the image. This removal can further reduce the error and is done using a modeling adjustment and post-processing techniques. These details are given in Chapter 4. Applications to real world problems are given throughout. Photon-based images are common in astronomical imaging due to the collection of different types of energy such as X-rays. Applications to real astronomical images are given, consisting of X-ray images from the Chandra X-ray Observatory satellite. Diagnostic medicine uses many types of imaging, such as magnetic resonance imaging and computed tomography, that can also benefit from smoothing techniques such as the one developed here. Reducing the radiation dose a patient receives makes images noisier, but this can be mitigated through the use of image smoothing techniques. Both types of images represent the potential real-world use for these methods.

  8. Stationary and non-stationary occurrences of miniature end plate potentials are well described as stationary and non-stationary Poisson processes in the mollusc Navanax inermis.

    PubMed

    Cappell, M S; Spray, D C; Bennett, M V

    1988-06-28

    Protractor muscles in the gastropod mollusc Navanax inermis exhibit typical spontaneous miniature end plate potentials with mean amplitude 1.71 +/- 1.19 (standard deviation) mV. The evoked end plate potential is quantized, with a quantum equal to the miniature end plate potential amplitude. When their rate is stationary, the occurrence of miniature end plate potentials is a random, Poisson process. When non-stationary, spontaneous miniature end plate potential occurrence is a non-stationary Poisson process, i.e. a Poisson process whose mean frequency changes with time. This extends the random Poisson model for miniature end plate potentials to the frequently observed non-stationary occurrence. Reported deviations from a Poisson process can sometimes be accounted for by the non-stationary Poisson process, and more complex models, such as clustered release, are not always needed.

  9. Star products on graded manifolds and α′-corrections to Courant algebroids from string theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deser, Andreas, E-mail: andreas.deser@itp.uni-hannover.de

    2015-09-15

    Courant algebroids, originally used to study integrability conditions for Dirac structures, have turned out to be of central importance to study the effective supergravity limit of string theory. The search for a geometric description of T-duality leads to Double Field Theory (DFT), whose gauge algebra is governed by the C-bracket, a generalization of the Courant bracket in the sense that it reduces to the latter by solving a specific constraint. Recently, in DFT deformations of the C-bracket and O(d, d)-invariant bilinear form to first order in the closed string sigma model coupling α′ were derived by analyzing the transformation properties of the Neveu-Schwarz B-field. By choosing a particular Poisson structure on the Drinfel'd double corresponding to the Courant algebroid structure of the generalized tangent bundle, we are able to interpret the C-bracket and bilinear form in terms of Poisson brackets. As a result, we reproduce the α′-deformations for a specific solution to the strong constraint of DFT as an expansion of a graded version of the Moyal-Weyl star product.

  10. Metaplectic-c Quantomorphisms

    NASA Astrophysics Data System (ADS)

    Vaughan, Jennifer

    2015-03-01

    In the classical Kostant-Souriau prequantization procedure, the Poisson algebra of a symplectic manifold (M,ω) is realized as the space of infinitesimal quantomorphisms of the prequantization circle bundle. Robinson and Rawnsley developed an alternative to the Kostant-Souriau quantization process in which the prequantization circle bundle and metaplectic structure for (M,ω) are replaced by a metaplectic-c prequantization. They proved that metaplectic-c quantization can be applied to a larger class of manifolds than the classical recipe. This paper presents a definition for a metaplectic-c quantomorphism, which is a diffeomorphism of metaplectic-c prequantizations that preserves all of their structures. Since the structure of a metaplectic-c prequantization is more complicated than that of a circle bundle, we find that the definition must include an extra condition that does not have an analogue in the Kostant-Souriau case. We then define an infinitesimal quantomorphism to be a vector field whose flow consists of metaplectic-c quantomorphisms, and prove that the space of infinitesimal metaplectic-c quantomorphisms exhibits all of the same properties that are seen for the infinitesimal quantomorphisms of a prequantization circle bundle. In particular, this space is isomorphic to the Poisson algebra C^∞(M).

  11. The impact of safety organizing, trusted leadership, and care pathways on reported medication errors in hospital nursing units.

    PubMed

    Vogus, Timothy J; Sutcliffe, Kathleen M

    2011-01-01

    Prior research has found that safety organizing behaviors of registered nurses (RNs) positively impact patient safety. Although organizational practices often have more powerful effects when combined with other mutually reinforcing practices, little research exists on the joint benefits of safety organizing and other contextual factors believed to foster safety. Specifically, we examined the benefits of bundling safety organizing with leadership (trust in manager) and design (use of care pathways) factors on reported medication errors. A total of 1033 RNs and 78 nurse managers in 78 emergency, internal medicine, intensive care, and surgery nursing units in 10 acute-care hospitals in Indiana, Iowa, Maryland, Michigan, and Ohio completed questionnaires between December 2003 and June 2004. We performed a cross-sectional analysis of medication errors reported to the hospital incident reporting system for the 6 months after the administration of the survey, linked to survey data on safety organizing, trust in manager, use of care pathways, and RN characteristics and staffing. Multilevel Poisson regression analyses indicated that the benefits of safety organizing on reported medication errors were amplified when paired with high levels of trust in manager or the use of care pathways. Safety organizing plays a key role in improving patient safety on hospital nursing units, especially when bundled with other organizational components of a safety-supportive system.

  12. Calculation of the Poisson cumulative distribution function

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert G.; Scheuer, Ernest M.

    1990-01-01

    A method for calculating the Poisson cdf (cumulative distribution function) is presented. The method avoids computer underflow and overflow during the process. The computer program uses this technique to calculate the Poisson cdf for arbitrary inputs. An algorithm that determines the Poisson parameter required to yield a specified value of the cdf is presented.
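
    The underflow/overflow issue arises because λ^k and k! each overflow long before their ratio does. A common remedy, sketched below in Python, is to accumulate the probability mass terms in log space; the inversion step then recovers the Poisson parameter yielding a specified cdf value by bisection. The function names are illustrative, not taken from the report.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), lam > 0, computed in log space.

    Each pmf term log(lam**i * exp(-lam) / i!) = i*log(lam) - lam - lgamma(i+1)
    is accumulated with a log-sum-exp, so neither lam**i nor i! is ever formed
    directly and the routine stays finite even for very large lam and k.
    """
    if k < 0:
        return 0.0
    log_terms = [i * math.log(lam) - lam - math.lgamma(i + 1)
                 for i in range(int(k) + 1)]
    m = max(log_terms)
    return math.exp(m) * sum(math.exp(t - m) for t in log_terms)

def poisson_param_for_cdf(k, target, lo=1e-9, hi=1e6):
    """Bisect for the lam satisfying P(X <= k) == target; the cdf is
    strictly decreasing in lam for fixed k, so bisection applies directly."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(k, mid) > target:
            lo = mid        # cdf too large means lam is still too small
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    For example, poisson_cdf(0, lam) equals exp(-lam), so inverting a target cdf of exp(-2) at k = 0 recovers lam = 2.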

  13. Poisson's Ratio of a Hyperelastic Foam Under Quasi-static and Dynamic Loading

    DOE PAGES

    Sanborn, Brett; Song, Bo

    2018-06-03

    Poisson's ratio is a material constant representing compressibility of material volume. However, when soft, hyperelastic materials such as silicone foam are subjected to large deformation into densification, the Poisson's ratio may change rather significantly, which warrants careful consideration in modeling and simulation of impact/shock mitigation scenarios where foams are used as isolators. The evolution of Poisson's ratio of silicone foam materials has not yet been characterized, particularly under dynamic loading. In this study, radial and axial measurements of specimen strain are conducted simultaneously during quasi-static and dynamic compression tests to determine the Poisson's ratio of silicone foam. The Poisson's ratio of silicone foam exhibited a transition from compressible to nearly incompressible at a threshold strain that coincided with the onset of densification in the material. Poisson's ratio as a function of engineering strain was different at quasi-static and dynamic rates. Here, the Poisson's ratio behavior is presented and can be used to improve constitutive modeling of silicone foams subjected to a broad range of mechanical loading.

  15. Predicting the thermal/structural performance of the atmospheric trace molecules spectroscopy /ATMOS/ Fourier transform spectrometer

    NASA Technical Reports Server (NTRS)

    Miller, J. M.

    1980-01-01

    ATMOS is a Fourier transform spectrometer to measure atmospheric trace molecules over a spectral range of 2-16 microns. Assessment of the system performance of ATMOS includes evaluation of optical system errors induced by thermal and structural effects. To assess these errors, error budgets are assembled during system engineering tasks, and line-of-sight and wavefront deformation predictions (computed from operational thermal and vibration environments using computer models) are subsequently compared against the error budgets. This paper discusses the thermal/structural error budgets, the modelling and analysis methods used to predict thermally and structurally induced errors, and the comparisons showing that the predictions are within the error budgets.

  16. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    PubMed

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.
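
    Fitting the hyper-Poisson GLM itself requires specialized code, but the diagnostic motivating it, over- versus underdispersion, is easy to sketch: the sample variance-to-mean ratio is near 1 for Poisson counts, above 1 for overdispersed data, and below 1 for underdispersed data. A NumPy illustration with simulated counts (the distributions, parameters, and sample sizes are illustrative assumptions, not crash data):

```python
import numpy as np

rng = np.random.default_rng(1)

def dispersion_index(counts):
    """Sample variance-to-mean ratio: about 1 for Poisson counts,
    above 1 when overdispersed, below 1 when underdispersed."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# three simulated count samples, all with mean 5
poisson_like = rng.poisson(5.0, size=20000)                      # variance 5
overdispersed = rng.negative_binomial(2, 2.0 / 7.0, size=20000)  # variance 17.5
underdispersed = rng.binomial(10, 0.5, size=20000)               # variance 2.5
```

    A negative binomial model handles only the first kind of departure (index above 1); the appeal of the hyper-Poisson model described above is that one family covers both directions.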

  17. Poisson, Poisson-gamma and zero-inflated regression models of motor vehicle crashes: balancing statistical fit and theory.

    PubMed

    Lord, Dominique; Washington, Simon P; Ivan, John N

    2005-01-01

    There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states-perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate how crash data give rise to the "excess" zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.

  18. Spatio-temporal wildland arson crime functions

    Treesearch

    David T. Butry; Jeffrey P. Prestemon

    2005-01-01

    Wildland arson creates damages to structures and timber and affects the health and safety of people living in rural and wildland urban interface areas. We develop a model that incorporates temporal autocorrelations and spatial correlations in wildland arson ignitions in Florida. A Poisson autoregressive model of order p, or PAR(p)...

  19. A Hamiltonian electromagnetic gyrofluid model

    NASA Astrophysics Data System (ADS)

    Waelbroeck, F. L.; Hazeltine, R. D.; Morrison, P. J.

    2009-03-01

    An isothermal truncation of the electromagnetic gyrofluid model of Snyder and Hammett [Phys. Plasmas 8, 3199 (2001)] is shown to be Hamiltonian. The corresponding noncanonical Lie-Poisson bracket and its Casimir invariants are presented. The invariants are used to obtain a set of coupled Grad-Shafranov equations describing equilibria and propagating coherent structures.

  20. Household air pollution and stillbirths in India: analysis of the DLHS-II National Survey.

    PubMed

    Lakshmi, P V M; Virdi, Navkiran Kaur; Sharma, Atul; Tripathy, Jaya Prasad; Smith, Kirk R; Bates, Michael N; Kumar, Rajesh

    2013-02-01

    Several studies have linked biomass cooking fuel with adverse pregnancy outcomes such as preterm births, low birth weight and post-neonatal infant mortality, but very few have studied the associations with cooking fuel independent of other factors associated with stillbirths. We analyzed the data from 188,917 ever-married women aged 15-49 included in India's 2003-2004 District Level Household Survey-II to investigate the association between household use of cooking fuels (liquid petroleum gas/electricity, kerosene, biomass) and risk of stillbirth. Prevalence ratios (PRs) were obtained using Poisson regression with robust standard errors after controlling for several potentially confounding factors (socio-demographic and maternal health characteristics). Risk factors significantly associated with occurrence of stillbirth in the Poisson regression model with robust standard errors were: literacy status of the mother and father, lighting fuel and cooking fuel used, gravida status, history of previous abortion, whether the woman had an antenatal check up, age at last pregnancy >35 years, labor complications, bleeding complications, fetal and other complications, prematurity and home delivery. After controlling for the effect of these factors, women who cook with firewood (PR 1.24; 95% CI: 1.08-1.41, p=0.003) or kerosene (PR 1.36; 95% CI: 1.10-1.67, p=0.004) were more likely to have experienced a stillbirth than those who cook with LPG/electricity. Kerosene lamp use was also associated with stillbirths compared to electric lighting (PR 1.15; 95% CI: 1.06-1.25, p=0.001). The population attributable risk of firewood as cooking fuel for stillbirths in India was 11%, and 1% for kerosene cooking. Biomass and kerosene cooking fuels are associated with stillbirth occurrence in this population sample. Assuming these associations are causal, about 12% of stillbirths in India could be prevented by providing access to cleaner cooking fuel. Copyright © 2012 Elsevier Inc. All rights reserved.
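
    The technique named above, Poisson regression of a binary outcome with robust (sandwich) standard errors to estimate prevalence ratios, can be sketched compactly. The NumPy implementation below is a minimal illustration on simulated data, not the survey analysis itself; the IRLS fitter, sample size, and true prevalence ratio of 1.5 are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def poisson_glm_robust(X, y, n_iter=25):
    """Poisson regression (log link) via iteratively reweighted least squares,
    with robust (sandwich) standard errors.

    With a binary outcome, exp(beta) estimates a prevalence ratio; the
    sandwich variance corrects the naive Poisson standard errors for the
    misspecified (non-Poisson) outcome distribution.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        beta += np.linalg.solve((X.T * mu) @ X, X.T @ (y - mu))  # Newton step
    mu = np.exp(X @ beta)
    bread = np.linalg.inv((X.T * mu) @ X)
    meat = (X.T * (y - mu) ** 2) @ X
    robust_se = np.sqrt(np.diag(bread @ meat @ bread))           # sandwich SEs
    return beta, robust_se

# simulated binary outcome with a true prevalence ratio of 1.5 for exposure
n = 20000
exposed = rng.integers(0, 2, size=n)
y = rng.binomial(1, 0.10 * 1.5 ** exposed)    # P = 0.10 unexposed, 0.15 exposed
X = np.column_stack([np.ones(n), exposed.astype(float)])
beta, se = poisson_glm_robust(X, y)
pr = np.exp(beta[1])                          # estimated prevalence ratio
```

    The robust correction matters precisely because the outcome is 0/1 rather than Poisson distributed, the situation in the study above.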

  1. A Martingale Characterization of Mixed Poisson Processes.

    DTIC Science & Technology

    1985-10-01

    Pfeifer, Dietmar (Technical University Aachen)

    Mixed Poisson processes play an important role in many branches of applied probability, for instance in insurance mathematics and physics (see Albrecht

  2. Generation of Non-Homogeneous Poisson Processes by Thinning: Programming Considerations and Comparision with Competing Algorithms.

    DTIC Science & Technology

    1978-12-01

    The method is valid for Poisson processes with any given intensity function. The basic thinning algorithm is modified to exploit several refinements which reduce computer execution time by approximately one-third. The basic and modified thinning programs are compared with the Poisson-decomposition and gap-statistics algorithm, which is easily implemented for Poisson processes with intensity functions of the form exp(a0 + a1*t + a2*t^2). The thinning programs are competitive in both execution …
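
    As a sketch of the basic thinning idea (the report's refinements and timing comparisons are not reproduced here), the following simulates a non-homogeneous Poisson process with an intensity of the exp(a0 + a1*t + a2*t^2) form mentioned above; the coefficient values are arbitrary illustrations.

```python
import numpy as np

def thin_nhpp(intensity, lam_max, t_end, rng):
    """Simulate a non-homogeneous Poisson process on [0, t_end] by thinning.

    Candidates come from a homogeneous process with rate lam_max, which must
    bound intensity(t); each candidate at time t is kept with probability
    intensity(t) / lam_max.
    """
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)   # next candidate arrival
        if t > t_end:
            return np.array(times)
        if rng.random() < intensity(t) / lam_max:
            times.append(t)

rng = np.random.default_rng(1)
# illustrative intensity exp(a0 + a1*t + a2*t^2), peaked at t = -a1/(2*a2) = 5
a0, a1, a2 = 1.0, 0.5, -0.05
intensity = lambda t: np.exp(a0 + a1 * t + a2 * t * t)
lam_max = intensity(5.0)                      # maximum on [0, 10]
events = thin_nhpp(intensity, lam_max, 10.0, rng)
```

    The tighter the bound lam_max, the fewer candidates are rejected, which is the motivation for the refinements discussed in the report.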

  3. Exact solution for the Poisson field in a semi-infinite strip.

    PubMed

    Cohen, Yossi; Rothman, Daniel H

    2017-04-01

    The Poisson equation is associated with many physical processes. Yet exact analytic solutions for the two-dimensional Poisson field are scarce. Here we derive an analytic solution for the Poisson equation with constant forcing in a semi-infinite strip. We provide a method that can be used to solve the field in other intricate geometries. We show that the Poisson flux exhibits an inverse square-root singularity at the tip of a slit, and identify a characteristic length scale over which a small perturbation, in the form of a new slit, is screened by the field. We suggest that this length scale expresses itself as a characteristic spacing between tips in real Poisson networks that grow in response to fluxes at tips.

  4. Exploiting negative Poisson's ratio to design 3D-printed composites with enhanced mechanical properties

    DOE PAGES

    Li, Tiantian; Chen, Yanyu; Hu, Xiaoyi; ...

    2018-02-03

    Auxetic materials exhibiting a negative Poisson's ratio are shown to have better indentation resistance, impact shielding capability, and enhanced toughness. Here, we report a class of high-performance composites in which auxetic lattice structures are used as the reinforcements and a nearly incompressible soft material is employed as the matrix. This coupled geometry and material design concept is enabled by state-of-the-art additive manufacturing techniques. Guided by experimental tests and finite element analyses, we systematically study the compressive behavior of the 3D-printed auxetics-reinforced composites and achieve a significant enhancement of their stiffness and energy absorption. This improved mechanical performance is due to the negative Poisson's ratio effect of the auxetic reinforcements, which puts the matrix in a state of biaxial compression and hence provides additional support. This mechanism is further supported by the investigation of the effect of auxetic degree on the stiffness and energy absorption capability. The findings reported here pave the way for developing a new class of auxetic composites that significantly expand their design space and possible applications through a combination of rational design and 3D printing.

  5. Variational Gaussian approximation for Poisson data

    NASA Astrophysics Data System (ADS)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
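
    For a single count observation the optimization described above reduces to a two-parameter problem, which makes the idea easy to sketch. The model below (one Poisson count with a log-link Gaussian prior) and all numerical values are illustrative, not the paper's actual setup.

```python
import numpy as np
from scipy.optimize import minimize

y, sigma0 = 7.0, 2.0          # observed count; prior x ~ N(0, sigma0^2)

def neg_elbo(params):
    """Negative evidence lower bound for q(x) = N(m, s^2) under
    y ~ Poisson(exp(x)), x ~ N(0, sigma0^2), dropping y-only constants."""
    m, log_s = params
    s2 = np.exp(2.0 * log_s)
    expected_loglik = y * m - np.exp(m + s2 / 2.0)   # E_q[y*x - e^x]
    kl = (np.log(sigma0) - log_s
          + (s2 + m**2) / (2.0 * sigma0**2) - 0.5)   # KL(q || prior)
    return -(expected_loglik - kl)

res = minimize(neg_elbo, x0=[0.0, 0.0])
m_opt, s_opt = res.x[0], np.exp(res.x[1])
```

    Maximizing the lower bound (equivalently, minimizing the KL divergence to the posterior) pulls the mean m toward the posterior mode of the log-intensity while also fitting a curvature-matched spread s.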

  6. Monitoring Poisson's Ratio Degradation of FRP Composites under Fatigue Loading Using Biaxially Embedded FBG Sensors.

    PubMed

    Akay, Erdem; Yilmaz, Cagatay; Kocaman, Esat S; Turkmen, Halit S; Yildiz, Mehmet

    2016-09-19

    The significance of strain measurement is obvious for the analysis of Fiber-Reinforced Polymer (FRP) composites. Conventional strain measurement methods are sufficient for static testing in general. Nevertheless, if the requirements exceed the capabilities of these conventional methods, more sophisticated techniques are necessary to obtain strain data. Fiber Bragg Grating (FBG) sensors have many advantages for strain measurement over conventional ones. Thus, the present paper suggests a novel method for biaxial strain measurement using embedded FBG sensors during the fatigue testing of FRP composites. Poisson's ratio and its reduction were monitored for each cyclic loading by using embedded FBG sensors for a given specimen and correlated with the fatigue stages determined based on the variations of the applied fatigue loading and temperature due to the autogenous heating to predict an oncoming failure of the continuous fiber-reinforced epoxy matrix composite specimens under fatigue loading. The results show that FBG sensor technology has a remarkable potential for monitoring the evolution of Poisson's ratio on a cycle-by-cycle basis, which can reliably be used towards tracking the fatigue stages of composite for structural health monitoring purposes.

  7. Spatiotemporal hurdle models for zero-inflated count data: Exploring trends in emergency department visits.

    PubMed

    Neelon, Brian; Chang, Howard H; Ling, Qiang; Hastings, Nicole S

    2016-12-01

    Motivated by a study exploring spatiotemporal trends in emergency department use, we develop a class of two-part hurdle models for the analysis of zero-inflated areal count data. The models consist of two components: one for the probability of any emergency department use, and one for the number of emergency department visits given use. Through a hierarchical structure, the models incorporate both patient- and region-level predictors, as well as spatially and temporally correlated random effects for each model component. The random effects are assigned multivariate conditionally autoregressive priors, which induce dependence between the components and provide spatial and temporal smoothing across adjacent spatial units and time periods, resulting in improved inferences. To accommodate potential overdispersion, we consider a range of parametric specifications for the positive counts, including truncated negative binomial and generalized Poisson distributions. We adopt a Bayesian inferential approach, and posterior computation is handled conveniently within standard Bayesian software. Our results indicate that the negative binomial and generalized Poisson hurdle models vastly outperform the Poisson hurdle model, demonstrating that overdispersed hurdle models provide a useful approach to analyzing zero-inflated spatiotemporal data. © The Author(s) 2014.
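
    The two-part structure is easy to sketch without the spatial machinery: a Bernoulli model for any use and a zero-truncated count model for the number of visits given use. The simulation below uses a truncated Poisson (the paper's preferred specifications are truncated negative binomial and generalized Poisson) with invented parameter values.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(2)
n, p_use, lam = 5000, 0.4, 3.0    # illustrative parameter values

# Hurdle simulation: zero with prob 1 - p_use, else a zero-truncated Poisson
use = rng.binomial(1, p_use, n)
pos = rng.poisson(lam, n)
while np.any(pos[use == 1] == 0):             # redraw zeros until truncated
    idx = (use == 1) & (pos == 0)
    pos[idx] = rng.poisson(lam, idx.sum())
y = use * pos

# Part 1: probability of crossing the hurdle (MLE is the sample proportion)
p_hat = (y > 0).mean()

# Part 2: MLE for the zero-truncated Poisson on the positive counts
yp = y[y > 0]
def nll(lmb):
    # log pmf: y*log(lmb) - lmb - log(y!) - log(1 - exp(-lmb))
    return -np.sum(yp * np.log(lmb) - lmb - gammaln(yp + 1)
                   - np.log1p(-np.exp(-lmb)))
res = minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded")
lam_hat = res.x
```

    In the paper's hierarchical version both parts get their own covariates and correlated random effects; here each part is just a scalar, but the likelihood factorization is the same.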

  8. Exploiting negative Poisson's ratio to design 3D-printed composites with enhanced mechanical properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Tiantian; Chen, Yanyu; Hu, Xiaoyi

    Auxetic materials exhibiting a negative Poisson's ratio are shown to have better indentation resistance, impact shielding capability, and enhanced toughness. Here, we report a class of high-performance composites in which auxetic lattice structures are used as the reinforcements and a nearly incompressible soft material is employed as the matrix. This coupled geometry and material design concept is enabled by state-of-the-art additive manufacturing techniques. Guided by experimental tests and finite element analyses, we systematically study the compressive behavior of the 3D-printed auxetics-reinforced composites and achieve a significant enhancement of their stiffness and energy absorption. This improved mechanical performance is due to the negative Poisson's ratio effect of the auxetic reinforcements, which puts the matrix in a state of biaxial compression and hence provides additional support. This mechanism is further supported by the investigation of the effect of auxetic degree on the stiffness and energy absorption capability. The findings reported here pave the way for developing a new class of auxetic composites that significantly expand their design space and possible applications through a combination of rational design and 3D printing.

  9. Counteracting structural errors in ensemble forecast of influenza outbreaks.

    PubMed

    Pei, Sen; Shaman, Jeffrey

    2017-10-13

    For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate is substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models.

  10. A Fast Surrogate-facilitated Data-driven Bayesian Approach to Uncertainty Quantification of a Regional Groundwater Flow Model with Structural Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.

    2016-12-01

    Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error, which may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. Bayesian inference is facilitated using high-performance computing and fast surrogate models, constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by classical Bayesian calibration that does not account for model structural error. In addition, the method with the error model gives significantly more accurate predictions along with reasonable credible intervals.

  11. Foot Structure in Japanese Speech Errors: Normal vs. Pathological

    ERIC Educational Resources Information Center

    Miyakoda, Haruko

    2008-01-01

    Although many studies of speech errors have been presented in the literature, most have focused on errors occurring at either the segmental or feature level. Few, if any, studies have dealt with the prosodic structure of errors. This paper aims to fill this gap by taking up the issue of prosodic structure in Japanese speech errors, with a focus on…

  12. Multivariate poisson lognormal modeling of crashes by type and severity on rural two lane highways.

    PubMed

    Wang, Kai; Ivan, John N; Ravishanker, Nalini; Jackson, Eric

    2017-02-01

    In an effort to improve traffic safety, there has been considerable interest in estimating crash prediction models and identifying factors contributing to crashes. To account for crash frequency variations among crash types and severities, crash prediction models have been estimated by type and severity. Univariate crash count models have been used by researchers to estimate crashes by crash type or severity, in which the crash counts by type or severity are assumed to be independent of one another and modelled separately. When considering crash types and severities simultaneously, this may neglect the potential correlations between crash counts due to the presence of shared unobserved factors across crash types or severities for a specific roadway intersection or segment, and might lead to biased parameter estimation and reduced model accuracy. The focus of this study is to estimate crashes by both crash type and crash severity using the Integrated Nested Laplace Approximation (INLA) Multivariate Poisson Lognormal (MVPLN) model, and to identify the different effects of contributing factors on different crash type and severity counts on rural two-lane highways. The INLA MVPLN model can simultaneously model crash counts by crash type and crash severity by accounting for the potential correlations among them, and significantly decreases the computational time compared with a fully Bayesian fitting of the MVPLN model using the Markov Chain Monte Carlo (MCMC) method. This paper describes estimation of MVPLN models for three-way stop controlled (3ST) intersections, four-way stop controlled (4ST) intersections, four-way signalized (4SG) intersections, and roadway segments on rural two-lane highways. Annual Average Daily Traffic (AADT) and variables describing roadway conditions (including presence of lighting, presence of left-turn/right-turn lane, lane width and shoulder width) were used as predictors.
A Univariate Poisson Lognormal (UPLN) model was estimated by crash type and severity for each highway facility, and their prediction results are compared with the MVPLN model based on the Average Predicted Mean Absolute Error (APMAE) statistic. A UPLN model for total crashes was also estimated to compare the coefficients of contributing factors with the models that estimate crashes by crash type and severity. The model coefficient estimates show that the signs of coefficients for presence of left-turn lane, presence of right-turn lane, lane width and speed limit differ across crash type or severity counts, which suggests that estimating crashes by crash type or severity might be more helpful in identifying crash contributing factors. The standard errors of covariates in the MVPLN model are slightly lower than in the UPLN model when the covariates are statistically significant, and the crash counts by crash type and severity are significantly correlated. The model prediction comparisons illustrate that the MVPLN model outperforms the UPLN model in prediction accuracy. Therefore, when predicting crash counts by crash type and crash severity for rural two-lane highways, the MVPLN model should be considered to avoid estimation error and to account for the potential correlations among crash type counts and crash severity counts. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Equilibrium structures of carbon diamond-like clusters and their elastic properties

    NASA Astrophysics Data System (ADS)

    Lisovenko, D. S.; Baimova, Yu. A.; Rysaeva, L. Kh.; Gorodtsov, V. A.; Dmitriev, S. V.

    2017-04-01

    Three-dimensional carbon diamond-like phases consisting of sp3-hybridized atoms, obtained by linking of carcasses of fullerene-like molecules, are studied by methods of molecular dynamics modeling. For eight cubic and one hexagonal diamond-like phases on the basis of four types of fullerene-like molecules, equilibrium configurations are found and the elastic constants are calculated. The results obtained by the method of molecular dynamics are used for analytical calculations of the elastic characteristics of the diamond-like phases with cubic and hexagonal anisotropy. It is found that, for a certain choice of the dilatation axis, three of these phases have negative Poisson's ratio, i.e., are partial auxetics. The variability of the engineering elasticity coefficients (Young's modulus, Poisson's ratio, shear modulus, and bulk modulus) is analyzed.

  14. A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution.

    PubMed

    Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep

    2017-01-01

    The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section.
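
    A minimal example of the second class (mixing independent Poissons over a shared latent variable) is the Poisson log-normal construction: a common Gaussian factor in the log-rates induces both overdispersion and cross-count correlation. The coefficients below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50000

# A shared Gaussian latent factor enters both log-rates
z = rng.normal(size=n)
y1 = rng.poisson(np.exp(0.5 + 0.6 * z))
y2 = rng.poisson(np.exp(1.0 + 0.6 * z))

corr = np.corrcoef(y1, y2)[0, 1]
overdispersion = y1.var() / y1.mean()   # > 1, unlike a pure Poisson
```

    Conditionally on z the counts are independent Poissons; marginally they are correlated and overdispersed, behavior that a product of independent Poisson marginals cannot capture.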

  15. Imfit: A Fast, Flexible Program for Astronomical Image Fitting

    NASA Astrophysics Data System (ADS)

    Erwin, Peter

    2014-08-01

    Imfit is an open-source astronomical image-fitting program specialized for galaxies but potentially useful for other sources; it is fast, flexible, and highly extensible. Its object-oriented design allows new types of image components (2D surface-brightness functions) to be easily written and added to the program. Image functions provided with Imfit include Sersic, exponential, and Gaussian galaxy decompositions along with Core-Sersic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through 3D luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard chi^2 statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or the Cash statistic; the latter is particularly appropriate for cases of Poisson data in the low-count regime. The C++ source code for Imfit is available under the GNU Public License.
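
    The Cash statistic mentioned above is, up to a model-independent constant, -2 times the Poisson log-likelihood. The sketch below uses the common d*ln(d/m) form with synthetic low-count data; the bin values are invented, and this is not Imfit's code.

```python
import numpy as np
from scipy.special import xlogy

def cash_stat(data, model):
    """Cash statistic for Poisson data: 2 * sum(m - d + d*ln(d/m)).

    xlogy handles the d = 0 bins (0 * ln 0 -> 0), so the statistic stays
    finite in the low-count regime where chi^2 breaks down.
    """
    d = np.asarray(data, float)
    m = np.asarray(model, float)
    return 2.0 * np.sum(m - d + xlogy(d, d / m))

rng = np.random.default_rng(4)
true_model = np.full(100, 0.8)              # < 1 expected count per pixel
data = rng.poisson(true_model)

c_true = cash_stat(data, true_model)        # good model: small statistic
c_wrong = cash_stat(data, 3.0 * true_model) # poor model: much larger
```

    Each bin's contribution is non-negative and vanishes when model equals data, so minimizing the Cash statistic is equivalent to Poisson maximum likelihood.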

  16. Uncertainty based pressure reconstruction from velocity measurement with generalized least squares

    NASA Astrophysics Data System (ADS)

    Zhang, Jiacheng; Scalo, Carlo; Vlachos, Pavlos

    2017-11-01

    A method using generalized least squares reconstruction of the instantaneous pressure field from velocity measurements and velocity uncertainty is introduced and applied to both planar and volumetric flow data. Pressure gradients are computed on a staggered grid from the flow acceleration. The variance-covariance matrix of the pressure gradients is evaluated from the velocity uncertainty by approximating the pressure gradient error as a linear combination of velocity errors. An overdetermined system of linear equations relating the pressure to the computed pressure gradients is formulated and then solved using generalized least squares with the variance-covariance matrix of the pressure gradients. By comparing the reconstructed pressure field against other methods such as solving the pressure Poisson equation, omni-directional integration, and ordinary least squares reconstruction, the generalized least squares method is found to be more robust to noise in the velocity measurement. The improvement in the reconstructed pressure becomes more pronounced as the velocity measurement becomes less accurate and more heteroscedastic. The uncertainty of the reconstructed pressure field is also quantified and compared across the different methods.

  17. Poisson Coordinates.

    PubMed

    Li, Xian-Ying; Hu, Shi-Min

    2013-02-01

    Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.
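
    The Poisson integral formula underlying these coordinates is easy to verify numerically on the unit disk. The sketch below is not the paper's construction (which generalizes to arbitrary domains); it simply recovers the harmonic extension of a boundary function.

```python
import numpy as np

def poisson_integral(boundary_f, r, theta, n_quad=2000):
    """Harmonic extension at polar point (r, theta) inside the unit disk:
    u = (1/2pi) * integral of P_r(theta - phi) * f(phi) dphi,
    with Poisson kernel P_r = (1 - r^2) / (1 - 2 r cos(.) + r^2)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_quad, endpoint=False)
    kernel = (1 - r**2) / (1 - 2 * r * np.cos(theta - phi) + r**2)
    return np.mean(kernel * boundary_f(phi))

# Boundary data cos(phi) has the harmonic extension u(r, theta) = r*cos(theta)
u = poisson_integral(np.cos, r=0.5, theta=0.7)
exact = 0.5 * np.cos(0.7)
```

    The periodic trapezoidal rule converges spectrally here, so even a modest quadrature grid reproduces the harmonic function essentially to machine precision.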

  18. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. 
The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
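
    The low median power for small effects reported above can be reproduced from first principles. The sketch below uses a normal approximation to the two-sided two-sample t-test; the effect size d = 0.2 and the group sizes are illustrative choices, not values from the reviewed papers.

```python
import numpy as np
from scipy.stats import norm

def power_two_sample(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample t-test (normal approx.)."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = effect_size * np.sqrt(n_per_group / 2)   # noncentrality parameter
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

# Power to detect a small effect (d = 0.2) with 64 per group: badly underpowered
p_small = power_two_sample(0.2, 64)
# Roughly 393 per group are needed for 80% power at d = 0.2
p_adequate = power_two_sample(0.2, 393)
```

    Running the numbers this way makes the case for a priori power analyses concrete: small effects demand group sizes several times larger than those typical of the reviewed studies.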

  19. Peak fitting and integration uncertainties for the Aerodyne Aerosol Mass Spectrometer

    NASA Astrophysics Data System (ADS)

    Corbin, J. C.; Othman, A.; Haskins, J. D.; Allan, J. D.; Sierau, B.; Worsnop, D. R.; Lohmann, U.; Mensah, A. A.

    2015-04-01

    The errors inherent in the fitting and integration of the pseudo-Gaussian ion peaks in Aerodyne High-Resolution Aerosol Mass Spectrometers (HR-AMS's) have not been previously addressed as a source of imprecision for these instruments. This manuscript evaluates the significance of these uncertainties and proposes a method for their estimation in routine data analysis. Peak-fitting uncertainties, the most complex source of integration uncertainties, are found to be dominated by errors in m/z calibration. These calibration errors comprise significant amounts of both imprecision and bias, and vary in magnitude from ion to ion. The magnitude of these m/z calibration errors is estimated for an exemplary data set, and used to construct a Monte Carlo model which reproduced well the observed trends in fits to the real data. The empirically-constrained model is used to show that the imprecision in the fitted height of isolated peaks scales linearly with the peak height (i.e., as n^1), thus contributing a constant-relative-imprecision term to the overall uncertainty. This constant relative imprecision term dominates the Poisson counting imprecision term (which scales as n^0.5) at high signals. The previous HR-AMS uncertainty model therefore underestimates the overall fitting imprecision. The constant relative imprecision in fitted peak height for isolated peaks in the exemplary data set was estimated as ~4% and the overall peak-integration imprecision was approximately 5%. We illustrate the importance of this constant relative imprecision term by performing Positive Matrix Factorization (PMF) on a synthetic HR-AMS data set with and without its inclusion. Finally, the ability of an empirically-constrained Monte Carlo approach to estimate the fitting imprecision for an arbitrary number of known overlapping peaks is demonstrated. Software is available upon request to estimate these error terms in new data sets.
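
    The scaling argument (a Poisson counting term going as n^0.5 plus a constant-relative fitting term) can be illustrated with a toy Monte Carlo. The 4% relative term matches the value quoted for the exemplary data set, but the simulation itself is schematic, not the paper's empirically-constrained model.

```python
import numpy as np

rng = np.random.default_rng(5)
rel_fit_err = 0.04            # constant-relative (peak-fitting-like) term

results = {}
for n_true in (10, 1000, 100000):
    counts = rng.poisson(n_true, 20000).astype(float)
    counts *= 1.0 + rel_fit_err * rng.normal(size=20000)  # multiplicative error
    results[n_true] = counts.std() / counts.mean()

# Relative imprecision ~ sqrt(1/n + 0.04^2): Poisson-dominated at low signal,
# plateauing near 4% at high signal instead of continuing to fall as n^-0.5
```

    The plateau is exactly why a pure counting-statistics (n^0.5) uncertainty model underestimates the imprecision of strong signals.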

  20. Elastic properties and phase transitions of Fe7C3 and new constraints on the light element budget of the Earth's inner core

    NASA Astrophysics Data System (ADS)

    Prescher, C.; Bykova, E.; Kupenko, I.; Glazyrin, K.; Kantor, A.; McCammon, C. A.; Mookherjee, M.; Miyajima, N.; Cerantola, V.; Nakajima, Y.; Prakapenka, V.; Rüffer, R.; Chumakov, A.; Dubrovinsky, L. S.

    2013-12-01

    The Earth's inner core consists mainly of iron (or an iron-nickel alloy) with some amount of light element(s), whose nature remains controversial. Seismological data suggest that the material forming the Earth's inner core (pressures over 330 GPa and temperatures above 5000 K) has an enigmatically high Poisson's ratio of ~0.44, while iron or its alloys with Si, S, O, or H are expected to have a Poisson's ratio well below 0.39 at the appropriate thermodynamic conditions. We present an experimental study on a new high-pressure variant in the iron carbide system. We have synthesized and solved the structure of the high-pressure orthorhombic phase o-Fe7C3, and investigated its stability and behavior at pressures over 180 GPa and temperatures above 3500 K by means of several methods, including single-crystal X-ray diffraction, Mössbauer spectroscopy, and nuclear resonance scattering. O-Fe7C3 is structurally stable to at least outer core conditions and demonstrates magnetic or electronic transitions at ~18 GPa and ~70 GPa. The high-pressure phase of o-Fe7C3 above 70 GPa exhibits anomalous elastic properties. When extrapolated to the conditions of the Earth's inner core, it shows shear wave velocities and Poisson's ratios close to the values inferred by seismological models. Our results not only support earlier works suggesting that carbon may be an important component of the Earth's core, but also show that it may drastically change iron's elastic properties, thus explaining the anomalous elastic properties of the Earth's inner core.

  1. STI Screening Uptake and Knowledge of STI Symptoms among Female Sex Workers Participating in a Community Randomized Trial in Peru

    PubMed Central

    Kohler, Pamela K.; Campos, Pablo E.; Garcia, Patricia J.; Carcamo, Cesar P.; Buendia, Clara; Hughes, James P.; Mejia, Carolina; Garnett, Geoff P.; King, K.

    2016-01-01

    This study aims to evaluate condom use, STI screening, and knowledge of STI symptoms among female sex workers (FSW) in Peru associated with sex work venue and a community randomized trial of STI control. One component of the Peru PREVEN intervention conducted mobile-team outreach to FSW to reduce STIs and increase condom use and access to government clinics for STI screening and evaluation. Prevalence ratios were calculated using multivariate Poisson regression models with robust standard errors, clustering by city. As-treated analyses were conducted to assess outcomes associated with reported exposure to the intervention. Care-seeking was more frequent in intervention communities, but differences were not statistically significant. FSW reporting exposure to the intervention had significantly higher likelihood of condom use, STI screening at public health clinics, and symptom recognition compared to those not exposed. Compared with street or bar-based FSW, brothel-based FSW reported significantly higher rates of condom use with last client, recent screening exams for STIs and HIV testing. Brothel-based FSW also more often reported knowledge of STIs and recognition of STI symptoms in women and in men. Interventions to promote STI-detection and prevention among FSW in Peru should consider structural or regulatory factors related to sex work venue. PMID:25941053

  2. Fedosov’s formal symplectic groupoids and contravariant connections

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    2006-10-01

    Using Fedosov's approach we give a geometric construction of a formal symplectic groupoid over any Poisson manifold endowed with a torsion-free Poisson contravariant connection. In the case of Kähler-Poisson manifolds this construction provides, in particular, the formal symplectic groupoids with separation of variables. We show that the dual of a semisimple Lie algebra does not admit torsion-free Poisson contravariant connections.

  3. Complete synchronization of the global coupled dynamical network induced by Poisson noises.

    PubMed

    Guo, Qing; Wan, Fangyi

    2017-01-01

    Complete synchronization of a globally coupled dynamical network induced by different Poisson noises is investigated. Based on the stability theory of stochastic differential equations driven by Poisson processes, we prove that Poisson noises can induce synchronization, and we establish sufficient conditions for complete synchronization with probability 1. Furthermore, numerical examples are provided to show the agreement between the theoretical and numerical analyses.

  4. The optimal modified variational iteration method for the Lane-Emden equations with Neumann and Robin boundary conditions

    NASA Astrophysics Data System (ADS)

    Singh, Randhir; Das, Nilima; Kumar, Jitendra

    2017-06-01

    An effective analytical technique is proposed for the solution of the Lane-Emden equations. The proposed technique is based on the variational iteration method (VIM) and the convergence control parameter h. To avoid solving a sequence of nonlinear algebraic equations or complicated integrals for the unknown constant, the boundary conditions are used before designing the recursive scheme for the solution. Series solutions are obtained that converge rapidly to the exact solution. Convergence analysis and error bounds are discussed. The accuracy and applicability of the method are examined by solving three singular problems: (i) the nonlinear Poisson-Boltzmann equation, (ii) the distribution of heat sources in the human head, and (iii) a second-kind Lane-Emden equation.

  5. A method for the retrieval of atomic oxygen density and temperature profiles from ground-based measurements of the O(+)(2D-2P) 7320 A twilight airglow

    NASA Technical Reports Server (NTRS)

    Fennelly, J. A.; Torr, D. G.; Richards, P. G.; Torr, M. R.; Sharp, W. E.

    1991-01-01

    This paper describes a technique for extracting thermospheric profiles of the atomic-oxygen density and temperature, using ground-based measurements of the O(+)(2D-2P) doublet at 7320 and 7330 A in the twilight airglow. In this method, a local photochemical model is used to calculate the 7320-A intensity; the method also utilizes an iterative inversion procedure based on the Levenberg-Marquardt method described by Press et al. (1986). The results demonstrate that, if the measurements are only limited by errors due to Poisson noise, the altitude profiles of neutral temperature and atomic oxygen concentration can be determined accurately using currently available spectrometers.
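    The iterative Levenberg-Marquardt inversion of a Poisson-noise-limited intensity profile can be sketched with a toy forward model. The exponential profile and all parameter values below are hypothetical stand-ins for the paper's photochemical model, chosen only to show the fitting loop:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the forward model: an exponential emission profile with
# unknown peak density n0 and scale height T (both hypothetical).
def forward(params, z):
    n0, T = params
    return n0 * np.exp(-(z - 250.0) / T)

rng = np.random.default_rng(1)
z = np.linspace(250.0, 400.0, 40)                     # altitude grid, km
truth = np.array([200.0, 45.0])
data = rng.poisson(forward(truth, z)).astype(float)   # Poisson (photon) noise

# Levenberg-Marquardt inversion of the noisy "intensity" profile
fit = least_squares(lambda p: forward(p, z) - data,
                    x0=[100.0, 30.0], method="lm")
```

With measurement error dominated by Poisson counting statistics, the recovered parameters scatter around the truth at roughly the square root of the counts, which is the sense in which the abstract's accuracy claim should be read.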

  6. A High Order Discontinuous Galerkin Method for 2D Incompressible Flows

    NASA Technical Reports Server (NTRS)

    Liu, Jia-Guo; Shu, Chi-Wang

    1999-01-01

    In this paper we introduce a high order discontinuous Galerkin method for two dimensional incompressible flow in vorticity streamfunction formulation. The momentum equation is treated explicitly, utilizing the efficiency of the discontinuous Galerkin method. The streamfunction is obtained by a standard Poisson solver using continuous finite elements. There is a natural matching between these two finite element spaces, since the normal component of the velocity field is continuous across element boundaries. This allows for a correct upwinding gluing in the discontinuous Galerkin framework, while still maintaining total energy conservation with no numerical dissipation and total enstrophy stability. The method is suitable for inviscid or high Reynolds number flows. Optimal error estimates are proven and verified by numerical experiments.

  7. New variable selection methods for zero-inflated count data with applications to the substance abuse field

    PubMed Central

    Buu, Anne; Johnson, Norman J.; Li, Runze; Tan, Xianming

    2011-01-01

    Zero-inflated count data are very common in health surveys. This study develops new variable selection methods for the zero-inflated Poisson regression model. Our simulations demonstrate the negative consequences that arise from ignoring zero-inflation. Among the competing methods, the one-step SCAD method is recommended because it has the highest specificity, sensitivity, and exact fit, and the lowest estimation error. The design of the simulations is based on the special features of two large national databases commonly used in the alcoholism and substance abuse field so that our findings can be easily generalized to real settings. Applications of the methodology are demonstrated by empirical analyses on data from a well-known alcohol study. PMID:21563207
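    The zero-inflated Poisson model underlying this work (before any variable selection) mixes a point mass at zero with a Poisson component. A minimal sketch fitting it by direct maximum likelihood on simulated data — the mixing proportion and rate below are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_nll(params, y):
    """Negative log-likelihood of a zero-inflated Poisson: with probability
    pi the count is a structural zero, otherwise it is Poisson(lam)."""
    logit_pi, log_lam = params                    # unconstrained parametrization
    pi = 1.0 / (1.0 + np.exp(-logit_pi))
    lam = np.exp(log_lam)
    pois_logpmf = -lam + y * np.log(lam) - gammaln(y + 1)
    ll = np.where(y == 0,
                  np.log(pi + (1.0 - pi) * np.exp(-lam)),  # zero can come from either part
                  np.log1p(-pi) + pois_logpmf)
    return -ll.sum()

rng = np.random.default_rng(4)
n = 5000
y = np.where(rng.random(n) < 0.3, 0, rng.poisson(2.0, n))  # 30% structural zeros

res = minimize(zip_nll, x0=[0.0, 0.0], args=(y,))
pi_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
```

Fitting an ordinary Poisson model to such data inflates the apparent variance and biases the rate downward, which is the "negative consequence" the simulations in the abstract quantify.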

  8. Optimal estimation of large structure model errors. [in Space Shuttle controller design

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1979-01-01

    In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.

  9. Application of the Conway-Maxwell-Poisson generalized linear model for analyzing motor vehicle crashes.

    PubMed

    Lord, Dominique; Guikema, Seth D; Geedipally, Srinivas Reddy

    2008-05-01

    This paper documents the application of the Conway-Maxwell-Poisson (COM-Poisson) generalized linear model (GLM) for modeling motor vehicle crashes. The COM-Poisson distribution, originally developed in 1962, has recently been re-introduced by statisticians for analyzing count data subject to over- and under-dispersion. This innovative distribution is an extension of the Poisson distribution. The objectives of this study were to evaluate the application of the COM-Poisson GLM for analyzing motor vehicle crashes and compare the results with the traditional negative binomial (NB) model. The comparison analysis was carried out using the most common functional forms employed by transportation safety analysts, which link crashes to the entering flows at intersections or on segments. To accomplish the objectives of the study, several NB and COM-Poisson GLMs were developed and compared using two datasets. The first dataset contained crash data collected at signalized four-legged intersections in Toronto, Ont. The second dataset included data collected for rural four-lane divided and undivided highways in Texas. Several methods were used to assess the statistical fit and predictive performance of the models. The results of this study show that COM-Poisson GLMs perform as well as NB models in terms of GOF statistics and predictive performance. Given that the COM-Poisson distribution can also handle under-dispersed data (while the NB distribution cannot, or has difficulty converging), which have sometimes been observed in crash databases, the COM-Poisson GLM offers a better alternative to the NB model for modeling motor vehicle crashes, especially given the important limitations recently documented in the safety literature about the latter type of model.
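    The COM-Poisson pmf, P(Y = y) ∝ λ^y / (y!)^ν, has an intractable normalizing constant Z(λ, ν) that is routinely evaluated by truncating its series. A sketch (truncation point and parameter values are illustrative assumptions):

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import poisson

def com_poisson_pmf(y, lam, nu, kmax=300):
    """COM-Poisson pmf: lam**y / (y!)**nu / Z(lam, nu), with the normalizing
    constant Z approximated by truncating its series at kmax terms."""
    k = np.arange(kmax + 1)
    logw = k * np.log(lam) - nu * gammaln(k + 1)     # log of unnormalized weights
    logZ = np.logaddexp.reduce(logw)                 # stable log-sum-exp
    y = np.asarray(y)
    return np.exp(y * np.log(lam) - nu * gammaln(y + 1) - logZ)

k = np.arange(50)
p_under = com_poisson_pmf(k, 5.0, 2.0)               # nu > 1: under-dispersed
mean = (k * p_under).sum()
var = (k**2 * p_under).sum() - mean**2
```

Setting ν = 1 recovers the ordinary Poisson pmf, while ν > 1 produces variance below the mean — the under-dispersed regime the abstract says the NB model cannot handle.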

  10. Conditional Poisson models: a flexible alternative to conditional logistic case cross-over analysis.

    PubMed

    Armstrong, Ben G; Gasparrini, Antonio; Tobias, Aurelio

    2014-11-24

    The time stratified case cross-over approach is a popular alternative to conventional time series regression for analysing associations between time series of environmental exposures (air pollution, weather) and counts of health outcomes. These are almost always analyzed using conditional logistic regression on data expanded to case-control (case crossover) format, but this has some limitations. In particular, adjusting for overdispersion and auto-correlation in the counts is not possible. It has been established that a Poisson model for counts with stratum indicators gives identical estimates to those from conditional logistic regression and does not have these limitations, but it is little used, probably because of the overheads in estimating many stratum parameters. The conditional Poisson model avoids estimating stratum parameters by conditioning on the total event count in each stratum, thus simplifying the computing and increasing the number of strata for which fitting is feasible compared with the standard unconditional Poisson model. Unlike the conditional logistic model, the conditional Poisson model does not require expanding the data, and can adjust for overdispersion and auto-correlation. It is available in Stata, R, and other packages. By applying the models to real data and using simulations, we demonstrate that conditional Poisson models are simpler to code and quicker to run than conditional logistic analyses and can be fitted to larger data sets than is possible with standard Poisson models. Allowing for overdispersion or autocorrelation was possible with the conditional Poisson model, but when not required this model gave identical estimates to those from conditional logistic regression. Conditional Poisson regression models provide an alternative to case crossover analysis of stratified time series data with some advantages.
The conditional Poisson model can also be used in other contexts in which primary control for confounding is by fine stratification.
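    The equivalence the abstract relies on — an unconditional Poisson model with one indicator per stratum versus a conditional Poisson model that conditions on stratum totals — can be checked numerically. A sketch on simulated data (stratum count, sample sizes, and the 0.5 log-rate-ratio are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
S, per = 20, 30
strat = np.repeat(np.arange(S), per)       # stratum index for each observation
x = rng.normal(size=S * per)               # exposure series
y = rng.poisson(np.exp(rng.normal(size=S)[strat] + 0.5 * x))

def cond_nll(beta):
    """Conditional Poisson: condition on each stratum's total event count,
    leaving a multinomial-type likelihood free of stratum parameters."""
    eta = beta[0] * x
    ll = 0.0
    for s in range(S):
        m = strat == s
        ll += (y[m] * eta[m]).sum() - y[m].sum() * np.logaddexp.reduce(eta[m])
    return -ll

def full_nll(params):
    """Unconditional Poisson with one intercept per stratum plus the exposure."""
    eta = params[strat] + params[S] * x
    return (np.exp(eta) - y * eta).sum()

b_cond = minimize(cond_nll, x0=[0.0]).x[0]           # one parameter
b_full = minimize(full_nll, x0=np.zeros(S + 1)).x[S]  # S + 1 parameters
```

The conditional fit estimates a single parameter regardless of the number of strata, which is exactly the computational saving the abstract describes.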

  11. Do bacterial cell numbers follow a theoretical Poisson distribution? Comparison of experimentally obtained numbers of single cells with random number generation via computer simulation.

    PubMed

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu

    2016-12-01

    We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype sufficiently followed a Poisson distribution. Furthermore, to examine the validity of the Poisson distribution parameters obtained from the experimental bacterial cell counts, we compared these with the parameters of a Poisson distribution estimated using random number generation via computer simulation. The Poisson distribution parameters experimentally obtained from bacterial cell counts were within the range of the parameters estimated using a computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers can be applied to single-cell studies with a few bacterial cells. In particular, the procedure presented in this study enables us to develop an inactivation model at the single-cell level that can estimate the variability of surviving bacterial numbers during the bacterial death process.

  12. On time-dependent Hamiltonian realizations of planar and nonplanar systems

    NASA Astrophysics Data System (ADS)

    Esen, Oğul; Guha, Partha

    2018-04-01

    In this paper, we elucidate the key role played by cosymplectic geometry in the theory of time-dependent Hamiltonian systems in 2D. We generalize the cosymplectic structures to time-dependent Nambu-Poisson Hamiltonian systems and the corresponding Jacobi last multiplier for 3D systems. We illustrate our constructions with various examples.

  13. Symmetries of the Space of Linear Symplectic Connections

    NASA Astrophysics Data System (ADS)

    Fox, Daniel J. F.

    2017-01-01

    There is constructed a family of Lie algebras that act in a Hamiltonian way on the symplectic affine space of linear symplectic connections on a symplectic manifold. The associated equivariant moment map is a formal sum of the Cahen-Gutt moment map, the Ricci tensor, and a translational term. The critical points of a functional constructed from it interpolate between the equations for preferred symplectic connections and the equations for critical symplectic connections. The commutative algebra of formal sums of symmetric tensors on a symplectic manifold carries a pair of compatible Poisson structures, one induced from the canonical Poisson bracket on the space of functions on the cotangent bundle polynomial in the fibers, and the other induced from the algebraic fiberwise Schouten bracket on the symmetric algebra of each fiber of the cotangent bundle. These structures are shown to be compatible, and the required Lie algebras are constructed as central extensions of their linear combinations restricted to formal sums of symmetric tensors whose first order term is a multiple of the differential of its zeroth order term.

  14. Microstructure and micromechanical elastic properties of weak layers

    NASA Astrophysics Data System (ADS)

    Köchle, Berna; Matzl, Margret; Proksch, Martin; Schneebeli, Martin

    2014-05-01

    Weak layers are the mechanically most important stratigraphic layers for avalanches. Yet, little is known about their exact geometry and their micromechanical properties. Distinguishing weak layers or interfaces is essential to assess stability. However, except by destructive mechanical tests, they cannot be easily identified and characterized in the field. We cast natural weak layers and their adjacent layers in the field during two winter seasons and scanned them non-destructively with X-ray computed tomography at a resolution of 10-20 µm. Reconstructed three-dimensional models of centimeter-sized layered samples allow for calculating the change of structural properties. We found that structural transitions cannot always be expressed by geometric properties such as density or grain size. In addition, we calculated the Young's modulus and Poisson's ratio of the individual layers with voxel-based finite element simulations. As any material has its characteristic elastic parameters, these may potentially differentiate individual layers, and therefore different microstructures. Our results show that Young's modulus correlates well with density but does not indicate snow's microstructure, in contrast to Poisson's ratio, which tends to be lower for strongly anisotropic forms like cup crystals and facets.

  15. Estimating relative risks in multicenter studies with a small number of centers - which methods to use? A simulation study.

    PubMed

    Pedroza, Claudia; Truong, Van Thi Thanh

    2017-11-02

    Analyses of multicenter studies often need to account for center clustering to ensure valid inference. For binary outcomes, it is particularly challenging to properly adjust for center when the number of centers or total sample size is small, or when there are few events per center. Our objective was to evaluate the performance of generalized estimating equation (GEE) log-binomial and Poisson models, generalized linear mixed models (GLMMs) assuming binomial and Poisson distributions, and a Bayesian binomial GLMM to account for center effect in these scenarios. We conducted a simulation study with few centers (≤30) and 50 or fewer subjects per center, using both a randomized controlled trial and an observational study design to estimate relative risk. We compared the GEE and GLMM models with a log-binomial model without adjustment for clustering in terms of bias, root mean square error (RMSE), and coverage. For the Bayesian GLMM, we used informative neutral priors that are skeptical of large treatment effects that are almost never observed in studies of medical interventions. All frequentist methods exhibited little bias, and the RMSE was very similar across the models. The binomial GLMM had poor convergence rates, ranging from 27% to 85%, but performed well otherwise. The results show that both GEE models need to use small sample corrections for robust SEs to achieve proper coverage of 95% CIs. The Bayesian GLMM had similar convergence rates but resulted in slightly more biased estimates for the smallest sample sizes. However, it had the smallest RMSE and good coverage across all scenarios. These results were very similar for both study designs. For the analyses of multicenter studies with a binary outcome and few centers, we recommend adjustment for center with either a GEE log-binomial or Poisson model with appropriate small sample corrections or a Bayesian binomial GLMM with informative priors.

  16. Normal and compound poisson approximations for pattern occurrences in NGS reads.

    PubMed

    Zhai, Zhiyuan; Reinert, Gesine; Song, Kai; Waterman, Michael S; Luan, Yihui; Sun, Fengzhu

    2012-06-01

    Next generation sequencing (NGS) technologies are now widely used in many biological studies. In NGS, sequence reads are randomly sampled from the genome sequence of interest. Most computational approaches for NGS data first map the reads to the genome and then analyze the data based on the mapped reads. Since many organisms have unknown genome sequences and many reads cannot be uniquely mapped to the genomes even if the genome sequences are known, alternative analytical methods are needed for the study of NGS data. Here we suggest using word patterns to analyze NGS data. Word pattern counting (the study of the probabilistic distribution of the number of occurrences of word patterns in one or multiple long sequences) has played an important role in molecular sequence analysis. However, no studies are available on the distribution of the number of occurrences of word patterns in NGS reads. In this article, we build probabilistic models for the background sequence and the sampling process of the sequence reads from the genome. Based on the models, we provide normal and compound Poisson approximations for the number of occurrences of word patterns from the sequence reads, with bounds on the approximation error. The main challenge is to consider the randomness in generating the long background sequence, as well as in the sampling of the reads using NGS. We show the accuracy of these approximations under a variety of conditions for different patterns with various characteristics. Under realistic assumptions, the compound Poisson approximation seems to outperform the normal approximation in most situations. These approximate distributions can be used to evaluate the statistical significance of the occurrence of patterns from NGS data. The theory and the computational algorithm for calculating the approximate distributions are then used to analyze ChIP-Seq data using transcription factor GABP. 
Software is available online (www-rcf.usc.edu/~fsun/Programs/NGS_motif_power/NGS_motif_power.html). In addition, Supplementary Material can be found online (www.liebertonline.com/cmb).
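    A compound Poisson distribution of the kind used in these approximations — a Poisson number of clumps, each contributing an integer clump size — can be evaluated exactly with Panjer's recursion. The geometric clump-size pmf below is a hypothetical choice (it yields the Polya-Aeppli distribution often used for overlapping word patterns), not the paper's exact construction:

```python
import numpy as np
from scipy.stats import poisson

def compound_poisson_pmf(lam, f, nmax):
    """pmf of S = X_1 + ... + X_N with N ~ Poisson(lam) and X_i iid with
    pmf f on {0, 1, 2, ...}, computed by Panjer's recursion."""
    f = np.asarray(f, dtype=float)
    p = np.zeros(nmax + 1)
    p[0] = np.exp(-lam * (1.0 - f[0]))            # P(S = 0)
    for n in range(1, nmax + 1):
        j = np.arange(1, min(n, len(f) - 1) + 1)
        p[n] = (lam / n) * np.sum(j * f[j] * p[n - j])
    return p

# Hypothetical geometric clump sizes: P(X = j) = 0.3 * 0.7**(j - 1), j >= 1
f_geom = np.concatenate([[0.0], 0.3 * 0.7 ** np.arange(39)])
pmf = compound_poisson_pmf(2.0, f_geom, 60)
```

With a degenerate clump size of one, the recursion reduces to the ordinary Poisson pmf, which makes a convenient correctness check.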

  17. Investigation of Structural Properties of Carbon-Epoxy Composites Using Fiber-Bragg Gratings

    NASA Technical Reports Server (NTRS)

    Grant, J.; Kaul, R.; Taylor, S.; Jackson, K.; Sharma, A.; Burdine, Robert V. (Technical Monitor)

    2002-01-01

    Fiber Bragg-gratings are embedded in carbon-epoxy laminates as well as bonded on the surface of cylindrical structures fabricated from such composites. Structural properties of these composites are investigated. The measurements include the stress-strain relation in laminates and Poisson's ratio in several specimens with varying orientation of the optical fiber Bragg-sensor with respect to the carbon fiber in an epoxy matrix. Additionally, Bragg gratings are bonded on the surface of cylinders fabricated from carbon-epoxy composites, and the longitudinal and hoop strain on the surface is measured.

  18. Integrability and Poisson Structures of Three Dimensional Dynamical Systems and Equations of Hydrodynamic Type

    NASA Astrophysics Data System (ADS)

    Gumral, Hasan

    Poisson structure of completely integrable 3 dimensional dynamical systems can be defined in terms of an integrable 1-form. We take advantage of this fact and use the theory of foliations in discussing the geometrical structure underlying complete and partial integrability. We show that the Halphen system can be formulated in terms of a flat SL(2,R)-valued connection and belongs to a non-trivial Godbillon-Vey class. On the other hand, for the Euler top and a special case of 3-species Lotka-Volterra equations, which are contained in the Halphen system as limiting cases, this structure degenerates into the form of globally integrable bi-Hamiltonian structures. The globally integrable bi-Hamiltonian case is a linear unfolding, and the sl_2 structure a quadratic unfolding, of an integrable 1-form in 3 + 1 dimensions. We complete the discussion of the Hamiltonian structure of 2-component equations of hydrodynamic type by presenting the Hamiltonian operators for Euler's equation and a continuum limit of the Toda lattice. We present further infinite sequences of conserved quantities for shallow water equations and show that their generalizations by Kodama admit bi-Hamiltonian structure. We present a simple way of constructing the second Hamiltonian operators for N-component equations admitting some scaling properties. The Kodama reduction of the dispersionless-Boussinesq equations and the Lax reduction of the Benney moment equations are shown to be equivalent by a symmetry transformation. They can be cast into the form of a triplet of conservation laws which enable us to recognize a non-trivial scaling symmetry. The resulting bi-Hamiltonian structure generates three infinite sequences of conserved densities.

  19. A Method of Poisson's Ratio Imaging Within a Material Part

    NASA Technical Reports Server (NTRS)

    Roth, Don J. (Inventor)

    1994-01-01

    The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the data.

  20. Method of Poisson's ratio imaging within a material part

    NASA Technical Reports Server (NTRS)

    Roth, Don J. (Inventor)

    1996-01-01

    The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the image.
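    For an isotropic material, the conversion underlying such imaging — from per-location longitudinal and shear wave speeds to a Poisson's ratio map — follows from linear elasticity. A sketch with hypothetical per-pixel velocity values:

```python
import numpy as np

def poissons_ratio(v_l, v_s):
    """Poisson's ratio of an isotropic solid from longitudinal (v_l) and
    shear (v_s) wave speeds: nu = (v_l**2 - 2*v_s**2) / (2*(v_l**2 - v_s**2))."""
    r2 = (np.asarray(v_l) / np.asarray(v_s)) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

# A toy 2x2 "image": per-pixel velocity maps (values hypothetical, in m/s)
v_l = np.array([[6300.0, 6000.0], [5900.0, 6100.0]])
v_s = np.array([[3100.0, 3200.0], [3100.0, 3050.0]])
nu_image = poissons_ratio(v_l, v_s)
```

The function is vectorized, so applying it to full velocity maps produces the Poisson's ratio image directly; note the special case v_l = sqrt(3) * v_s, which gives nu = 0.25.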

  1. A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution

    PubMed Central

    Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep

    2017-01-01

    The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section. PMID:28983398

  2. Higher spin Chern-Simons theory and the super Boussinesq hierarchy

    NASA Astrophysics Data System (ADS)

    Gutperle, Michael; Li, Yi

    2018-05-01

    In this paper, we construct a map between a solution of supersymmetric Chern-Simons higher spin gravity based on the superalgebra sl(3|2) with Lifshitz scaling and the N = 2 super Boussinesq hierarchy. We show that under this map the time evolution equations of both theories coincide. In addition, we identify the Poisson structure of the Chern-Simons theory induced by gauge transformation with the second Hamiltonian structure of the super Boussinesq hierarchy.

  3. Crustal structure of Precambrian terranes in the southern African subcontinent with implications for secular variation in crustal genesis

    NASA Astrophysics Data System (ADS)

    Kachingwe, Marsella; Nyblade, Andrew; Julià, Jordi

    2015-07-01

    New estimates of crustal thickness, Poisson's ratio and crustal shear wave velocity have been obtained for 39 stations in Angola, Botswana, the Democratic Republic of Congo, Malawi, Mozambique, Namibia, Rwanda, Tanzania and Zambia by modelling P-wave receiver functions using the H-κ stacking method and jointly inverting the receiver functions with Rayleigh-wave phase and group velocities. These estimates, combined with similar results from previous studies, have been examined for secular trends in Precambrian crustal structure within the southern African subcontinent. In both Archean and Proterozoic terranes we find similar Moho depths [38-39 ± 3 km SD (standard deviation)], crustal Poisson's ratio (0.26 ± 0.01 SD), mean crustal shear wave velocity (3.7 ± 0.1 km s-1 SD), and amounts of heterogeneity in the thickness of the mafic lower crust, as defined by shear wave velocities ≥4.0 km s-1. In addition, the amount of variability in these crustal parameters is similar within each individual age grouping as between age groupings. Thus, the results provide little evidence for secular variation in Precambrian crustal structure, including between Meso- and Neoarchean crust. This finding suggests that (1) continental crust has been generated by similar processes since the Mesoarchean or (2) plate tectonic processes have reworked and modified the crust through time, erasing variations in structure resulting from crustal genesis.

  4. Non-linear properties of metallic cellular materials with a negative Poisson's ratio

    NASA Technical Reports Server (NTRS)

    Choi, J. B.; Lakes, R. S.

    1992-01-01

    Negative Poisson's ratio copper foam was prepared and characterized experimentally. The transformation into re-entrant foam was accomplished by applying sequential permanent compressions above the yield point to achieve a triaxial compression. The Poisson's ratio of the re-entrant foam depended on strain and attained a relative minimum at strains near zero. Poisson's ratio as small as -0.8 was achieved. The strain dependence of properties occurred over a narrower range of strain than in the polymer foams studied earlier. Annealing of the foam resulted in a slightly greater magnitude of negative Poisson's ratio and greater toughness at the expense of a decrease in the Young's modulus.

  5. Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes

    NASA Astrophysics Data System (ADS)

    Orsingher, Enzo; Polito, Federico

    2012-08-01

    In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes N_α(t), N_β(t), t>0, we have that N_α(N_β(t)) is equal in distribution to Σ_{j=1}^{N_β(t)} X_j, where the X_j are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form N_α(τ_k^ν), ν∈(0,1], where τ_k^ν is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form Θ(N(t)), t>0, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
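    The distributional identity in this abstract — a Poisson process evaluated at a Poisson time equals a random sum of iid Poisson variables — can be checked by simulation. A sketch with hypothetical rates (both sides share mean αβt and variance αβt(1 + α)):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta, t, reps = 2.0, 3.0, 1.0, 50_000

# Left side: the outer Poisson process evaluated at an inner Poisson time
left = rng.poisson(alpha * rng.poisson(beta * t, reps))

# Right side: a random sum of N_beta(t) iid Poisson(alpha) variables
right = np.array([rng.poisson(alpha, n).sum()
                  for n in rng.poisson(beta * t, reps)])
```

Comparing empirical moments (or full histograms) of `left` and `right` illustrates the equality in distribution, though simulation of course only verifies it approximately.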

  6. XCAMS: The compact 14C accelerator mass spectrometer extended for 10Be and 26Al at GNS Science, New Zealand

    NASA Astrophysics Data System (ADS)

    Zondervan, A.; Hauser, T. M.; Kaiser, J.; Kitchen, R. L.; Turnbull, J. C.; West, J. G.

    2015-10-01

    A detailed description is given of the 0.5 MV tandem accelerator mass spectrometry (AMS) system for 10Be, 14C, 26Al, installed in early 2010 at GNS Science, New Zealand. Its design follows that of previously commissioned Compact 14C-only AMS (CAMS) systems based on the Pelletron tandem accelerator. The only basic departure from that design is an extension of the rare-isotope achromat with a 45° magnet and a two-anode gas-ionisation detector, to provide additional filtering for 10Be. Realised performance of the three AMS modes is discussed in terms of acceptance-test scores, 14C Poisson and non-Poisson errors, and 10Be detection limit and sensitivity. Operational details and hardware improvements, such as 10Be beam transport and particle detector setup, are highlighted. Statistics of repeat measurements of all graphitised 14C calibration cathodes since start-up show that 91% of their total uncertainty values are less than 0.3%, indicating that the rare-isotope beamline extension has not affected precision of 14C measurement. For 10Be, the limit of detection in terms of the isotopic abundance ratio 10Be/9Be is 6 × 10^-15 at at^-1 and the total efficiency of counting atoms in the sample cathode is 1/8500 (0.012%).

  7. Unsteady electroosmosis in a microchannel with Poisson-Boltzmann charge distribution.

    PubMed

    Chang, Chien C; Kuo, Chih-Yu; Wang, Chang-Yi

    2011-11-01

    The present study is concerned with unsteady electroosmotic flow (EOF) in a microchannel with the electric charge distribution described by the Poisson-Boltzmann (PB) equation. The nonlinear PB equation is solved by a systematic perturbation with respect to the parameter λ, which measures the strength of the wall zeta potential relative to the thermal potential. In the small-λ limit (λ<1), we recover the linearized PB equation - the Debye-Hückel approximation. The solutions obtained by using only three terms in the perturbation series are shown to be accurate, with errors <1% for λ up to 2. The accurate solution to the PB equation is then used to solve the electrokinetic fluid transport equation for two types of unsteady flow: transient flow driven by a suddenly applied voltage and oscillatory flow driven by a time-harmonic voltage. The solution for the transient flow has important implications on EOF as an effective means for transporting electrolytes in microchannels with various electrokinetic widths. On the other hand, the solution for the oscillatory flow is shown to have important physical implications on EOF in mixing electrolytes in terms of the amplitude and phase of the resulting time-harmonic EOF rate, which depends on the applied frequency and the electrokinetic width of the microchannel as well as on the parameter λ.
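    The relation between the full nonlinear PB equation and its Debye-Hückel linearization can be illustrated numerically in a nondimensional slit channel. The geometry and parameter values below are hypothetical simplifications of the paper's setup:

```python
import numpy as np
from scipy.integrate import solve_bvp

kappa, lam = 2.0, 0.5      # electrokinetic width and scaled zeta potential (assumed)

def rhs(y, psi):
    # Full nonlinear PB equation in nondimensional form: psi'' = kappa**2 * sinh(psi)
    return np.vstack([psi[1], kappa**2 * np.sinh(psi[0])])

def bc(pa, pb):
    # Fixed (scaled) wall potential on both channel walls at y = -1 and y = +1
    return np.array([pa[0] - lam, pb[0] - lam])

y = np.linspace(-1.0, 1.0, 101)
dh = lam * np.cosh(kappa * y) / np.cosh(kappa)   # Debye-Hückel (linearized) solution
sol = solve_bvp(rhs, bc, y, np.vstack([dh, np.zeros_like(y)]))
err = np.max(np.abs(sol.sol(y)[0] - dh))         # nonlinear correction magnitude
```

For this moderate λ the linearized profile is already close to the full PB solution, consistent with the abstract's point that a few perturbation terms beyond Debye-Hückel suffice for λ up to about 2.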

  8. Quantitative Imaging of Young's Modulus of Soft Tissues from Ultrasound Water Jet Indentation: A Finite Element Study

    PubMed Central

    Lu, Min-Hua; Mao, Rui; Lu, Yin; Liu, Zheng; Wang, Tian-Fu; Chen, Si-Ping

    2012-01-01

    Indentation testing is a widely used approach to quantitatively evaluate the mechanical characteristics of soft tissues. Young's modulus of soft tissue can be calculated from force-deformation data with known tissue thickness and Poisson's ratio using Hayes' equation. Our group previously developed a noncontact indentation system using a water jet as a soft indenter as well as the coupling medium for the propagation of high-frequency ultrasound. The novel system has shown its ability to detect the early degeneration of articular cartilage. However, there is still a lack of a quantitative method to extract the intrinsic mechanical properties of soft tissue from water jet indentation. The purpose of this study is to investigate the relationship between the loading-unloading curves and the mechanical properties of soft tissues to provide an imaging technique for tissue mechanical properties. A 3D finite element model of water jet indentation was developed with consideration of the finite deformation effect. An improved Hayes' equation has been derived by introducing a new scaling factor which depends on Poisson's ratio ν, the aspect ratio a/h (the radius of the indenter over the thickness of the test tissue), and the deformation ratio d/h. With this model, the Young's modulus of soft tissue can be quantitatively evaluated and imaged with an error of no more than 2%. PMID:22927890
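
    Hayes' equation expresses the indentation load as P = 2Eaωκ/(1 − ν²), so Young's modulus follows from the measured force and deformation once the scaling factor κ is known. A minimal sketch; the input numbers and the κ value are illustrative assumptions, not values from the paper:

```python
def youngs_modulus_hayes(force, radius, deform, nu, kappa):
    """Young's modulus from Hayes' indentation equation,
        E = F * (1 - nu**2) / (2 * a * w * kappa),
    with indenter radius a, indentation depth w, and scaling factor kappa
    (a function of nu, a/h and, in the improved equation above, d/h).
    The kappa value used below is an illustrative assumption."""
    return force * (1.0 - nu ** 2) / (2.0 * radius * deform * kappa)

# Hypothetical measurement: 0.1 N load, 1 mm indenter radius,
# 0.2 mm indentation, nu = 0.45, assumed kappa = 1.5.
E = youngs_modulus_hayes(0.1, 1e-3, 0.2e-3, 0.45, 1.5)
print(E)   # ~1.3e5 Pa, i.e. ~0.13 MPa
```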

  9. A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise

    NASA Astrophysics Data System (ADS)

    Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno

    2017-09-01

    While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decompositions in the projection domain produce a projection mass density (PMD) per material. From the decomposed projections, a tomographic reconstruction creates a 3D material density volume. The decomposition is made possible by minimizing a cost function. The variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing data. That is why, in this paper, a new data fidelity term is used to take the photon noise into account. In this work two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods were used to decompose materials from a numerical phantom of a mouse. Soft tissues and bones are decomposed in the projection domain; then a tomographic reconstruction creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
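
    The advantage of a KL (Poisson-likelihood) fidelity over WLS at low photon counts shows up even in a toy one-parameter problem: with the usual data-based weights 1/y, WLS is biased low for Poisson data, while the KL minimizer is the unbiased sample mean. A sketch under these simplifying assumptions (a constant-mean model, not the paper's Gauss-Newton material decomposition):

```python
import numpy as np

rng = np.random.default_rng(0)

def wls_estimate(y):
    """Weighted least squares with the usual data-based weights 1/y
    (a Gaussian approximation to Poisson noise)."""
    w = 1.0 / np.maximum(y, 1.0)
    return np.sum(w * y) / np.sum(w)

def kl_estimate(y):
    """For a constant-mean model, the minimizer of the Kullback-Leibler
    (Poisson likelihood) fidelity is the sample mean."""
    return np.mean(y)

true_mean = 3.0   # low photon-count regime
wls = np.array([wls_estimate(rng.poisson(true_mean, 100)) for _ in range(2000)])
kl = np.array([kl_estimate(rng.poisson(true_mean, 100)) for _ in range(2000)])
print(round(wls.mean(), 2), round(kl.mean(), 2))
# WLS is biased low at low counts; the KL estimate stays unbiased.
```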

  10. Physically consistent data assimilation method based on feedback control for patient-specific blood flow analysis.

    PubMed

    Ii, Satoshi; Adib, Mohd Azrul Hisham Mohd; Watanabe, Yoshiyuki; Wada, Shigeo

    2018-01-01

    This paper presents a novel data assimilation method for patient-specific blood flow analysis based on feedback control theory, called the physically consistent feedback control-based data assimilation (PFC-DA) method. In the PFC-DA method, the signal, which is the residual error term of the velocity when comparing the numerical and reference measurement data, is cast as a source term in a Poisson equation for the scalar potential field that induces flow in a closed system. The pressure values at the inlet and outlet boundaries are recursively calculated from this scalar potential field. Hence, the flow field is physically consistent because it is driven by the calculated inlet and outlet pressures, without any artificial body forces. Compared with existing variational approaches, although the PFC-DA method does not guarantee the optimal solution, only one additional Poisson equation for the scalar potential field is required, providing a remarkable improvement at a small additional computational cost per iteration. Through numerical examples for 2D and 3D exact flow fields, with both noise-free and noisy reference data, as well as a blood flow analysis of a cerebral aneurysm using actual patient data, the robustness and accuracy of this approach are shown. Moreover, the feasibility of patient-specific practical blood flow analysis is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Universal solvation model based on solute electron density and on a continuum model of the solvent defined by the bulk dielectric constant and atomic surface tensions.

    PubMed

    Marenich, Aleksandr V; Cramer, Christopher J; Truhlar, Donald G

    2009-05-07

    We present a new continuum solvation model based on the quantum mechanical charge density of a solute molecule interacting with a continuum description of the solvent. The model is called SMD, where the "D" stands for "density" to denote that the full solute electron density is used without defining partial atomic charges. "Continuum" denotes that the solvent is not represented explicitly but rather as a dielectric medium with surface tension at the solute-solvent boundary. SMD is a universal solvation model, where "universal" denotes its applicability to any charged or uncharged solute in any solvent or liquid medium for which a few key descriptors are known (in particular, dielectric constant, refractive index, bulk surface tension, and acidity and basicity parameters). The model separates the observable solvation free energy into two main components. The first component is the bulk electrostatic contribution arising from a self-consistent reaction field treatment that involves the solution of the nonhomogeneous Poisson equation for electrostatics in terms of the integral-equation-formalism polarizable continuum model (IEF-PCM). The cavities for the bulk electrostatic calculation are defined by superpositions of nuclear-centered spheres. The second component is called the cavity-dispersion-solvent-structure term and is the contribution arising from short-range interactions between the solute and solvent molecules in the first solvation shell. This contribution is a sum of terms that are proportional (with geometry-dependent proportionality constants called atomic surface tensions) to the solvent-accessible surface areas of the individual atoms of the solute. 
The SMD model has been parametrized with a training set of 2821 solvation data including 112 aqueous ionic solvation free energies, 220 solvation free energies for 166 ions in acetonitrile, methanol, and dimethyl sulfoxide, 2346 solvation free energies for 318 neutral solutes in 91 solvents (90 nonaqueous organic solvents and water), and 143 transfer free energies for 93 neutral solutes between water and 15 organic solvents. The elements present in the solutes are H, C, N, O, F, Si, P, S, Cl, and Br. The SMD model employs a single set of parameters (intrinsic atomic Coulomb radii and atomic surface tension coefficients) optimized over six electronic structure methods: M05-2X/MIDI!6D, M05-2X/6-31G*, M05-2X/6-31+G**, M05-2X/cc-pVTZ, B3LYP/6-31G*, and HF/6-31G*. Although the SMD model has been parametrized using the IEF-PCM protocol for bulk electrostatics, it may also be employed with other algorithms for solving the nonhomogeneous Poisson equation for continuum solvation calculations in which the solute is represented by its electron density in real space. This includes, for example, the conductor-like screening algorithm. With the 6-31G* basis set, the SMD model achieves mean unsigned errors of 0.6-1.0 kcal/mol in the solvation free energies of tested neutrals and mean unsigned errors of 4 kcal/mol on average for ions with either Gaussian03 or GAMESS.

  12. Universal Solvation Model Based on Solute Electron Density and on a Continuum Model of the Solvent Defined by the Bulk Dielectric Constant and Atomic Surface Tensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marenich, Aleksandr; Cramer, Christopher J; Truhlar, Donald G

    2009-04-30

    We present a new continuum solvation model based on the quantum mechanical charge density of a solute molecule interacting with a continuum description of the solvent. The model is called SMD, where the “D” stands for “density” to denote that the full solute electron density is used without defining partial atomic charges. “Continuum” denotes that the solvent is not represented explicitly but rather as a dielectric medium with surface tension at the solute-solvent boundary. SMD is a universal solvation model, where “universal” denotes its applicability to any charged or uncharged solute in any solvent or liquid medium for which a few key descriptors are known (in particular, dielectric constant, refractive index, bulk surface tension, and acidity and basicity parameters). The model separates the observable solvation free energy into two main components. The first component is the bulk electrostatic contribution arising from a self-consistent reaction field treatment that involves the solution of the nonhomogeneous Poisson equation for electrostatics in terms of the integral-equation-formalism polarizable continuum model (IEF-PCM). The cavities for the bulk electrostatic calculation are defined by superpositions of nuclear-centered spheres. The second component is called the cavity-dispersion-solvent-structure term and is the contribution arising from short-range interactions between the solute and solvent molecules in the first solvation shell. This contribution is a sum of terms that are proportional (with geometry-dependent proportionality constants called atomic surface tensions) to the solvent-accessible surface areas of the individual atoms of the solute. 
The SMD model has been parametrized with a training set of 2821 solvation data including 112 aqueous ionic solvation free energies, 220 solvation free energies for 166 ions in acetonitrile, methanol, and dimethyl sulfoxide, 2346 solvation free energies for 318 neutral solutes in 91 solvents (90 nonaqueous organic solvents and water), and 143 transfer free energies for 93 neutral solutes between water and 15 organic solvents. The elements present in the solutes are H, C, N, O, F, Si, P, S, Cl, and Br. The SMD model employs a single set of parameters (intrinsic atomic Coulomb radii and atomic surface tension coefficients) optimized over six electronic structure methods: M05-2X/MIDI!6D, M05-2X/6-31G*, M05-2X/6-31+G**, M05-2X/cc-pVTZ, B3LYP/6-31G*, and HF/6-31G*. Although the SMD model has been parametrized using the IEF-PCM protocol for bulk electrostatics, it may also be employed with other algorithms for solving the nonhomogeneous Poisson equation for continuum solvation calculations in which the solute is represented by its electron density in real space. This includes, for example, the conductor-like screening algorithm. With the 6-31G* basis set, the SMD model achieves mean unsigned errors of 0.6-1.0 kcal/mol in the solvation free energies of tested neutrals and mean unsigned errors of 4 kcal/mol on average for ions with either Gaussian03 or GAMESS.

  13. Charge Structure and Counterion Distribution in Hexagonal DNA Liquid Crystal

    PubMed Central

    Dai, Liang; Mu, Yuguang; Nordenskiöld, Lars; Lapp, Alain; van der Maarel, Johan R. C.

    2007-01-01

    A hexagonal liquid crystal of DNA fragments (double-stranded, 150 basepairs) with tetramethylammonium (TMA) counterions was investigated with small angle neutron scattering (SANS). We obtained the structure factors pertaining to the DNA and counterion density correlations with contrast matching in the water. Molecular dynamics (MD) computer simulation of a hexagonal assembly of nine DNA molecules showed that the inter-DNA distance fluctuates with a correlation time around 2 ns and a standard deviation of 8.5% of the interaxial spacing. The MD simulation also showed a minimal effect of the fluctuations in inter-DNA distance on the radial counterion density profile and significant penetration of the grooves by TMA. The radial density profile of the counterions was also obtained from a Monte Carlo (MC) computer simulation of a hexagonal array of charged rods with fixed interaxial spacing. Strong ordering of the counterions between the DNA molecules and the absence of charge fluctuations at longer wavelengths was shown by the SANS number and charge structure factors. The DNA-counterion and counterion structure factors are interpreted with the correlation functions derived from the Poisson-Boltzmann equation, MD, and MC simulation. Best agreement is observed between the experimental structure factors and the prediction based on the Poisson-Boltzmann equation and/or MC simulation. The SANS results show that TMA is too large to penetrate the grooves to a significant extent, in contrast to what is shown by MD simulation. PMID:17098791

  14. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates.

    PubMed

    Fottrell, Edward; Byass, Peter; Berhane, Yemane

    2008-03-25

    As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. 
The low sensitivity of parameter estimates and regression analyses to significant amounts of randomly introduced errors indicates a high level of robustness of the dataset. This apparent inertia of population parameter estimates to simulated errors is largely due to the size of the dataset. Tolerable margins of random error in DSS data may exceed 20%. While this is not an argument in favour of poor quality data, reducing the time and valuable resources spent on detecting and correcting random errors in routine DSS operations may be justifiable as the returns from such procedures diminish with increasing overall accuracy. The money and effort currently spent on endlessly correcting DSS datasets would perhaps be better spent on increasing the surveillance population size and geographic spread of DSSs and analysing and disseminating research findings.
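
    The error-injection idea is easy to reproduce on synthetic data: corrupt one variable at a controlled rate and compare the resulting rate ratio with the clean one. The sketch below (hypothetical rates, and a crude two-group rate ratio standing in for Poisson regression) shows the kind of attenuation such a probe measures; the Butajira analysis found that, with its much larger dataset and error mix, estimates were largely unaffected:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic surveillance dataset: exposure doubles the mortality rate.
# All rates are hypothetical illustration values.
n = 200_000
exposed = rng.random(n) < 0.5
died = rng.random(n) < np.where(exposed, 0.02, 0.01)   # deaths per person-year

def rate_ratio(exp_flag, died):
    """Crude mortality rate ratio; with equal follow-up this matches a
    one-covariate Poisson regression estimate."""
    return died[exp_flag].mean() / died[~exp_flag].mean()

rr_clean = rate_ratio(exposed, died)

# Randomly flip 20% of the exposure labels, mimicking recording errors.
flip = rng.random(n) < 0.20
rr_noisy = rate_ratio(exposed ^ flip, died)
print(round(rr_clean, 2), round(rr_noisy, 2))
# The corrupted labels attenuate the rate ratio toward 1 but preserve its direction.
```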

  15. Non-Poisson Processes: Regression to Equilibrium Versus Equilibrium Correlation Functions

    DTIC Science & Technology

    2004-07-07

    Physica A 347 (2005) 268–288 ... Non-Poisson processes: regression ... PACS: 05.40.-a; 89.75.-k; 02.50.Ey. Keywords: Stochastic processes; Non-Poisson processes; Liouville and Liouville-like equations; Correlation function ... which is not legitimate with renewal non-Poisson processes, is a correct property if the deviation from the exponential relaxation is obtained by time

  16. Probabilistic Estimation of Rare Random Collisions in 3 Space

    DTIC Science & Technology

    2009-03-01

    extended Poisson process as a feature of probability theory. With the bulk of research in extended Poisson processes going into parameter estimation, the ... application of extended Poisson processes to spatial processes is largely untouched. Faddy performed a short study of spatial data, but overtly ... the theory of extended Poisson processes. To date, the processes are limited in that the rates only depend on the number of arrivals at some time

  17. LSI arrays for space stations

    NASA Technical Reports Server (NTRS)

    Gassaway, J. D.

    1976-01-01

    Two approaches have been taken to study CCDs and some of their fundamental limitations. First, a numerical analysis approach was developed to solve the coupled transport and Poisson equations for a thorough analysis of charge transfer in a CCD structure. The approach is formulated by treating the minority carriers as a surface distribution at the Si-SiO2 interface and setting up coupled difference equations for the charge and the potential. The SOR (successive over-relaxation) method is proposed for solving the two-dimensional Poisson equation for the potential. Methods are suggested for handling the discontinuities to improve convergence. Second, CCD shift registers were fabricated with parameters which should allow complete charge transfer independent of the transfer electrode gap width. A test instrument was designed and constructed which can be used to test this, or any similar, three-phase CCD shift register.
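
    The SOR step for the 2D Poisson equation mentioned above can be sketched as follows; this is the generic textbook iteration with a zero Dirichlet boundary, not the CCD-specific formulation with interface charge:

```python
import numpy as np

def sor_poisson(rho, h, omega=1.8, tol=1e-7, max_sweeps=2000):
    """Successive over-relaxation for the 2D Poisson equation
    lap(phi) = -rho on a square grid with phi = 0 on the boundary.
    Generic textbook iteration, not the CCD-specific solver."""
    phi = np.zeros_like(rho, dtype=float)
    n, m = rho.shape
    for _ in range(max_sweeps):
        max_change = 0.0
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                # Gauss-Seidel update, over-relaxed by the factor omega.
                new = (1.0 - omega) * phi[i, j] + omega * 0.25 * (
                    phi[i + 1, j] + phi[i - 1, j] + phi[i, j + 1] + phi[i, j - 1]
                    + h * h * rho[i, j])
                max_change = max(max_change, abs(new - phi[i, j]))
                phi[i, j] = new
        if max_change < tol:
            break
    return phi

# Point source in the middle of a 33x33 grid.
rho = np.zeros((33, 33))
rho[16, 16] = 1.0
phi = sor_poisson(rho, h=1.0 / 32)
print(phi[16, 16])    # peak potential at the source
```

For a grid of this size, an over-relaxation factor near 1.8 is close to optimal and cuts the sweep count by roughly an order of magnitude compared with plain Gauss-Seidel.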

  18. Reversible dilatancy in entangled single-wire materials.

    PubMed

    Rodney, David; Gadot, Benjamin; Martinez, Oriol Riu; du Roscoat, Sabine Rolland; Orgéas, Laurent

    2016-01-01

    Designing structures that dilate rapidly in both tension and compression would benefit devices such as smart filters, actuators or fasteners. This property, however, requires an unusual Poisson ratio, or Poisson function at finite strains, which has to vary with applied strain and exceed the familiar bounds: below 0 in tension and above 1/2 in compression. Here, by combining mechanical tests and discrete element simulations, we show that a simple three-dimensional architected material, made of a self-entangled single long coiled wire, behaves in between discrete and continuum media, with a large and reversible dilatancy in both tension and compression. This unusual behaviour arises from an interplay between the elongation of the coiled wire and rearrangements due to steric effects, which, unlike in traditional discrete media, are hysteretically reversible when the architecture is made of an elastic fibre.

  19. Geometrical Effects on Nonlinear Electrodiffusion in Cell Physiology

    NASA Astrophysics Data System (ADS)

    Cartailler, J.; Schuss, Z.; Holcman, D.

    2017-12-01

    We report here new electrical laws, derived from nonlinear electrodiffusion theory, about the effect of local geometrical structure, such as curvature, on the electrical properties of a cell. We adopt the Poisson-Nernst-Planck equations for charge concentration and electric potential as a model of electrodiffusion. In the case at hand, the entire boundary is impermeable to ions and the electric field satisfies the compatibility condition of Poisson's equation. We construct an asymptotic approximation for certain singular limits of the steady-state solution in a ball with an attached cusp-shaped funnel on its surface. As the number of charges increases, they concentrate at the end of the cusp-shaped funnel. These results can be used in the design of nanopipettes and help to understand the local voltage changes inside dendrites and axons with heterogeneous local geometry.

  20. De Donder-Weyl Hamiltonian formalism of MacDowell-Mansouri gravity

    NASA Astrophysics Data System (ADS)

    Berra-Montiel, Jasel; Molgado, Alberto; Serrano-Blanco, David

    2017-12-01

    We analyse the behaviour of the MacDowell-Mansouri action with internal symmetry group SO(4,1) under the De Donder-Weyl Hamiltonian formulation. The field equations, known in this formalism as the De Donder-Weyl equations, are obtained by means of the graded Poisson-Gerstenhaber bracket structure present within the De Donder-Weyl formulation. The decomposition of the internal algebra so(4,1) ≃ so(3,1) ⊕ R^(3,1) allows the symmetry breaking SO(4,1) → SO(3,1), which reduces the original action to the Palatini action without the topological term. We demonstrate that, in contrast to the Lagrangian approach, this symmetry breaking can be performed indistinctly in the polysymplectic formalism either before or after the variation of the De Donder-Weyl Hamiltonian has been done, recovering Einstein's equations via the Poisson-Gerstenhaber bracket.

  1. Application of the sine-Poisson equation in solar magnetostatics

    NASA Technical Reports Server (NTRS)

    Webb, G. M.; Zank, G. P.

    1990-01-01

    Solutions of the sine-Poisson equations are used to construct a class of isothermal magnetostatic atmospheres, with one ignorable coordinate corresponding to a uniform gravitational field in a plane geometry. The distributed current in the model (j) is directed along the x-axis, where x is the horizontal ignorable coordinate; (j) varies as the sine of the magnetostatic potential and falls off exponentially with distance vertical to the base, with an e-folding distance equal to the gravitational scale height. Solutions for the magnetostatic potential A corresponding to the one-soliton, two-soliton, and breather solutions of the sine-Gordon equation are studied. Depending on the values of the free parameters in the soliton solutions, horizontally periodic magnetostatic structures are obtained possessing either a single X-type neutral point, multiple neutral X-points, or solutions without X-points.
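
    The one-soliton building block referred to above is the sine-Gordon solution A(x) = 4 arctan(eˣ), which satisfies A'' = sin A. A quick numerical check of that identity (illustrative only, not the paper's magnetostatic construction):

```python
import numpy as np

# One-soliton solution of the sine-Gordon equation A'' = sin(A):
#   A(x) = 4 * arctan(exp(x)),
# the building block of the magnetostatic potentials discussed above.
x = np.linspace(-5.0, 5.0, 2001)
A = 4.0 * np.arctan(np.exp(x))

h = x[1] - x[0]
A_xx = (A[:-2] - 2.0 * A[1:-1] + A[2:]) / h ** 2     # second difference
residual = np.max(np.abs(A_xx - np.sin(A[1:-1])))
print(residual)   # vanishes to discretization accuracy
```

The kink interpolates between A → 0 as x → −∞ and A → 2π as x → +∞, which is what produces the periodic X-point structures when such solutions are superposed.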

  2. The Effect of Photon Statistics and Pulse Shaping on the Performance of the Wiener Filter Crystal Identification Algorithm Applied to LabPET Phoswich Detectors

    NASA Astrophysics Data System (ADS)

    Yousefzadeh, Hoorvash Camilia; Lecomte, Roger; Fontaine, Réjean

    2012-06-01

    A fast Wiener filter-based crystal identification (WFCI) algorithm was recently developed to discriminate crystals with close scintillation decay times in phoswich detectors. Despite the promising performance of WFCI, the influence of various physical factors and electrical noise sources of the data acquisition chain (DAQ) on the crystal identification process was not fully investigated. This paper examines the effect of different noise sources, such as photon statistics, avalanche photodiode (APD) excess multiplication noise, and front-end electronic noise, as well as the influence of different shaping filters on the performance of the WFCI algorithm. To this end, a PET-like signal simulator based on a model of the LabPET DAQ, a small animal APD-based digital PET scanner, was developed. Simulated signals were generated under various noise conditions with CR-RC shapers of order 1, 3, and 5 having different time constants (τ). Applying the WFCI algorithm to these simulated signals showed that the non-stationary Poisson photon statistics is the main contributor to the identification error of WFCI algorithm. A shaping filter of order 1 with τ = 50 ns yielded the best WFCI performance (error 1%), while a longer shaping time of τ = 100 ns slightly degraded the WFCI performance (error 3%). Filters of higher orders with fast shaping time constants (10-33 ns) also produced good WFCI results (error 1.4% to 1.6%). This study shows the advantage of the pulse simulator in evaluating various DAQ conditions and confirms the influence of the detection chain on the WFCI performance.
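
    The role of non-stationary Poisson photon statistics in crystal identification can be illustrated with a toy simulation: draw photon arrival times from an exponential decay, histogram them, and classify by the nearest normalized template. The decay constants and the least-squares classifier below are illustrative stand-ins, not the LabPET detectors or the WFCI algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

t = np.arange(0.0, 400.0, 1.0)    # time grid in ns
# Illustrative decay constants for the two phoswich crystals (assumed values).
templates = {40.0: np.exp(-t / 40.0), 65.0: np.exp(-t / 65.0)}

def pulse(decay_ns, n_photons):
    """Toy scintillation pulse: photon arrival times drawn from an exponential
    decay (non-stationary Poisson statistics), histogrammed onto the grid."""
    arrivals = rng.exponential(decay_ns, n_photons)
    counts, _ = np.histogram(arrivals, bins=np.append(t, t[-1] + 1.0))
    return counts.astype(float)

def identify(sig):
    """Nearest normalized template in the least-squares sense."""
    s = sig / sig.sum()
    return min(templates,
               key=lambda d: np.sum((s - templates[d] / templates[d].sum()) ** 2))

errors = sum(identify(pulse(40.0, 300)) != 40.0 for _ in range(200))
print(errors / 200)   # misidentification fraction driven purely by photon statistics
```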

  3. Discrete Model for the Structure and Strength of Cementitious Materials

    NASA Astrophysics Data System (ADS)

    Balopoulos, Victor D.; Archontas, Nikolaos; Pantazopoulou, Stavroula J.

    2017-12-01

    Cementitious materials are characterized by brittle behavior in direct tension and by transverse dilatation (due to microcracking) under compression. Microcracking causes increasingly larger transverse strains and a phenomenological Poisson's ratio that gradually increases to about ν = 0.5 and beyond, at the limit point in compression. This behavior is due to the underlying structure of cementitious pastes which is simulated here with a discrete physical model. The computational model is generic, assembled from a statistically generated, continuous network of flaky dendrites consisting of cement hydrates that emanate from partially hydrated cement grains. In the actual amorphous material, the dendrites constitute the solid phase of the cement gel and interconnect to provide the strength and stiffness against load. The idealized dendrite solid is loaded in compression and tension to compute values for strength and Poisson's effects. Parametric studies are conducted, to calibrate the statistical parameters of the discrete model with the physical and mechanical characteristics of the material, so that the familiar experimental trends may be reproduced. The model provides a framework for the study of the mechanical behavior of the material under various states of stress and strain and can be used to model the effects of additives (e.g., fibers) that may be explicitly simulated in the discrete structure.

  4. Bi-Axial Strain Response of Structural Materials and Superconducting Nb3Sn Wires at 295 K, 7 K, and 4 K

    NASA Astrophysics Data System (ADS)

    Nyilas, A.; Weiss, K. P.

    2008-03-01

    A new extensometer capable of measuring diametral strains during axial loading of structural materials and superconducting composite wires has been developed. Using this new transducer it is possible to determine both the averaged axial strain and the transverse strain. The diametral extensometer, with a mass of around 1 g, is foreseen to be clamped onto the wire inside the averaging double-extensometer sensing device system. The sensitivity of this new diametral extensometer is very high, nearly a factor of ten higher than that of the axial extensometer system. In addition, for structural and composite materials an adjustable diametral extensometer enabling testing of specimens between 5 mm and 15 mm in diameter has also been developed and tested successfully at 4 K. For the materials 304 L, Inconel 718, and a modified Type 316LN stainless steel cast alloy, Poisson's coefficient could be determined at 295 K. Type 310 S stainless steel was investigated at 7 K and at 4 K using the adjustable extensometer to determine Poisson's coefficient as well. Furthermore, different types of superconducting A15-phase composite wires with diameters between 0.8 and 1.3 mm were characterized in axial and diametral orientation.

  5. Mechanical properties of additively manufactured octagonal honeycombs.

    PubMed

    Hedayati, R; Sadighi, M; Mohammadi-Aghdam, M; Zadpoor, A A

    2016-12-01

    Honeycomb structures have found numerous applications as structural and biomedical materials due to their favourable properties such as low weight, high stiffness, and porosity. Application of additive manufacturing and 3D printing techniques allows for manufacturing of honeycombs with arbitrary shape and wall thickness, opening the way for optimizing the mechanical and physical properties for specific applications. In this study, the mechanical properties of honeycomb structures with a new geometry, called octagonal honeycomb, were investigated using analytical, numerical, and experimental approaches. An additive manufacturing technique, namely fused deposition modelling, was used to fabricate the honeycombs from polylactic acid (PLA). The honeycomb structures were then mechanically tested under compression, and the mechanical properties of the structures were determined. In addition, the Euler-Bernoulli and Timoshenko beam theories were used for deriving analytical relationships for the elastic modulus, yield stress, Poisson's ratio, and buckling stress of this new design of honeycomb structures. Finite element models were also created to analyse the mechanical behaviour of the honeycombs computationally. The analytical solutions obtained using Timoshenko beam theory were close to computational results in terms of elastic modulus, Poisson's ratio and yield stress, especially for relative densities smaller than 25%. The analytical solutions based on Timoshenko beam theory and the computational results were in good agreement with experimental observations. Finally, the elastic properties of the proposed honeycomb structure were compared to those of other honeycomb structures such as square, triangular, hexagonal, mixed, diamond, and Kagome. The octagonal honeycomb showed yield stress and elastic modulus values very close to those of regular hexagonal honeycombs and lower than the other considered honeycombs. Copyright © 2016 Elsevier B.V. All rights reserved.
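
    For comparison with the regular hexagonal honeycomb mentioned at the end, the classical Gibson-Ashby (Euler-Bernoulli) beam formulas give its in-plane elastic modulus and Poisson's ratio in closed form. A sketch of that baseline (not the paper's octagonal-honeycomb relations):

```python
import math

def hex_honeycomb(t_over_l):
    """In-plane properties of a regular hexagonal honeycomb from the classical
    Gibson-Ashby (Euler-Bernoulli) beam formulas: the baseline the octagonal
    design is compared against, not the paper's own relations."""
    theta = math.radians(30.0)            # cell wall angle; h/l = 1
    c, s = math.cos(theta), math.sin(theta)
    E_rel = t_over_l ** 3 * c / ((1.0 + s) * s * s)   # E*/Es
    nu = c * c / ((1.0 + s) * s)                      # in-plane Poisson's ratio
    return E_rel, nu

E_rel, nu = hex_honeycomb(0.1)
print(round(E_rel, 5), round(nu, 3))   # 0.00231 1.0
```

The cubic dependence on wall slenderness t/l reflects bending-dominated deformation, and the in-plane Poisson's ratio of a regular hexagon is exactly 1.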

  6. JPL-ANTOPT antenna structure optimization program

    NASA Technical Reports Server (NTRS)

    Strain, D. M.

    1994-01-01

    New antenna path-length error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for static loading cases. These scalar displacements can be subject to constraint during the optimization process. Path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC/NASTRAN, Version 67.5.

  7. Poisson-type inequalities for growth properties of positive superharmonic functions.

    PubMed

    Luan, Kuan; Vieira, John

    2017-01-01

    In this paper, we present new Poisson-type inequalities for Poisson integrals with continuous data on the boundary. The obtained inequalities are used to obtain growth properties at infinity of positive superharmonic functions in a smooth cone.

  8. Error modeling and sensitivity analysis of a parallel robot with SCARA(selective compliance assembly robot arm) motions

    NASA Astrophysics Data System (ADS)

    Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua

    2014-07-01

    Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely used in the field of high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures they contain to a single link. As such an error model fails to reflect the error features of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on it is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including the joints of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and some sensitivity indices defined in the statistical sense, a sensitivity analysis is carried out. Accordingly, atlases are depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have a greater impact on the accuracy of the moving platform are identified, and some sensitive areas, where the pose errors of the moving platform are extremely sensitive to the geometric errors, are also figured out. By taking into account error factors that are generally neglected in existing modeling methods, the proposed modeling method can thoroughly disclose the process of error transmission and enhance the efficacy of accuracy design and calibration.
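
    Statistical sensitivity indices of this kind can be illustrated on a much simpler mechanism: perturb one geometric parameter at a time by a small random error and average the resulting pose error over a set of poses. The planar 2R arm below is a hypothetical stand-in for the SCARA parallel robot, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(3)

def fk(params, q1, q2):
    """Planar 2R arm with geometric error parameters: link lengths l1, l2 and
    joint zero offsets b1, b2. A hypothetical stand-in mechanism."""
    l1, l2, b1, b2 = params
    a1 = q1 + b1
    a2 = a1 + q2 + b2
    return np.array([l1 * np.cos(a1) + l2 * np.cos(a2),
                     l1 * np.sin(a1) + l2 * np.sin(a2)])

nominal = np.array([0.35, 0.30, 0.0, 0.0])
poses = [(0.3, 0.8), (1.0, -0.5), (-0.7, 1.2), (2.0, 2.0)]

def sensitivity(k, sigma=1e-4, trials=400):
    """Mean pose error per unit error in parameter k, averaged over poses
    (a statistical sensitivity index)."""
    errs = []
    for q1, q2 in poses:
        p0 = fk(nominal, q1, q2)
        for _ in range(trials):
            p = nominal.copy()
            p[k] += rng.normal(0.0, sigma)
            errs.append(np.linalg.norm(fk(p, q1, q2) - p0))
    return np.mean(errs) / sigma

for k, name in enumerate(["l1", "l2", "b1", "b2"]):
    print(name, round(sensitivity(k), 3))
# The base joint offset b1 dominates: an error atlas of this kind shows
# where accuracy design and calibration effort pays off most.
```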

  9. Information transmission using non-Poisson regular firing.

    PubMed

    Koyama, Shinsuke; Omi, Takahiro; Kass, Robert E; Shinomoto, Shigeru

    2013-04-01

    In many cortical areas, neural spike trains do not follow a Poisson process. In this study, we investigate a possible benefit of non-Poisson spiking for information transmission by studying the minimal rate fluctuation that can be detected by a Bayesian estimator. The idea is that an inhomogeneous Poisson process may make it difficult for downstream decoders to resolve subtle changes in rate fluctuation, but by using a more regular non-Poisson process, the nervous system can make rate fluctuations easier to detect. We evaluate the degree to which regular firing reduces the rate fluctuation detection threshold. We find that the threshold for detection is reduced in proportion to the coefficient of variation of interspike intervals.
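    The contrast between Poisson and regular non-Poisson firing can be sketched with a gamma renewal process, a standard model of regular spiking (not necessarily the one used in the paper): shape 1 recovers a Poisson process with an interspike-interval (ISI) coefficient of variation (CV) near 1, while larger shapes give more regular trains with CV = 1/sqrt(shape). All rates and durations below are illustrative.

```python
import numpy as np

def renewal_spike_train(rate_hz, shape, duration_s, rng):
    """Simulate a renewal spike train with gamma-distributed ISIs.

    shape=1 is a Poisson process; shape>1 gives more regular firing."""
    mean_isi = 1.0 / rate_hz
    # Draw more ISIs than needed, then truncate to the duration.
    isis = rng.gamma(shape, mean_isi / shape, size=int(2 * rate_hz * duration_s))
    times = np.cumsum(isis)
    return times[times < duration_s]

def cv_of_isis(times):
    """Coefficient of variation of the interspike intervals."""
    isis = np.diff(times)
    return isis.std() / isis.mean()

rng = np.random.default_rng(0)
poisson_train = renewal_spike_train(20.0, 1.0, 200.0, rng)  # CV near 1
regular_train = renewal_spike_train(20.0, 4.0, 200.0, rng)  # CV near 0.5
```

In the paper's terms, the lower CV of the regular train is what reduces the rate-fluctuation detection threshold for a downstream decoder.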

  10. Local concurrent error detection and correction in data structures using virtual backpointers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, C.C.J.; Chen, P.P.; Fuchs, W.K.

    1989-11-01

    A new technique, based on virtual backpointers, is presented in this paper for local concurrent error detection and correction in linked data structures. Two new data structures utilizing virtual backpointers, the Virtual Double-Linked List and the B-Tree with Virtual Backpointers, are described. For these structures, double errors within a fixed-size checking window can be detected in constant time and single errors detected during forward moves can be corrected in constant time.

  11. The influence of the structure and culture of medical group practices on prescription drug errors.

    PubMed

    Kralewski, John E; Dowd, Bryan E; Heaton, Alan; Kaissi, Amer

    2005-08-01

    This project was designed to identify the magnitude of prescription drug errors in medical group practices and to explore the influence of the practice structure and culture on those error rates. Seventy-eight practices serving an upper Midwest managed care (Care Plus) plan during 2001 were included in the study. Using Care Plus claims data, prescription drug error rates were calculated at the enrollee level and then were aggregated to the group practice that each enrollee selected to provide and manage their care. Practice structure and culture data were obtained from surveys of the practices. Data were analyzed using multivariate regression. Both the culture and the structure of these group practices appear to influence prescription drug error rates. Seeing more patients per clinic hour, more prescriptions per patient, and being cared for in a rural clinic were all strongly associated with more errors. Conversely, having a case manager program is strongly related to fewer errors in all of our analyses. The culture of the practices clearly influences error rates, but the findings are mixed. Practices with cohesive cultures have lower error rates but, contrary to our hypothesis, cultures that value physician autonomy and individuality also have lower error rates than those with a more organizational orientation. Our study supports the contention that there are a substantial number of prescription drug errors in the ambulatory care sector. Even by the strictest definition, there were about 13 errors per 100 prescriptions for Care Plus patients in these group practices during 2001. Our study demonstrates that the structure of medical group practices influences prescription drug error rates. In some cases, this appears to be a direct relationship, such as the effects of having a case manager program on fewer drug errors, but in other cases the effect appears to be indirect through the improvement of drug prescribing practices. 
An important aspect of this study is that it provides insights into the relationships between the structure and culture of medical group practices and prescription drug errors, and it provides direction for future research. Research focused on the factors behind the high error rates in rural areas, and on how the interaction of practice structural and cultural attributes influences error rates, would add important insights to our findings. For medical practice directors, our data show that they should focus on patient care coordination to reduce errors.

  12. Graphic Simulations of the Poisson Process.

    DTIC Science & Technology

    1982-10-01

    [Garbled OCR front matter; recoverable table-of-contents entries: Random Numbers and Transformations; The Random Number Generator; III. Poisson Processes User Guide.] In the superimposed mode, two Poisson processes are active, each with a different rate parameter (call them Type I and Type II, with respective rates L1 and L2). The probability p that a given event is of Type I is generated by the equation p = L1 / (L1 + L2).
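    The superimposed-mode equation recovered above can be checked directly: merging two Poisson processes with rates L1 and L2 yields a Poisson process with rate L1 + L2 in which each event is independently Type I with probability p = L1 / (L1 + L2). A minimal simulation (rates and duration are illustrative):

```python
import random

def superimposed_poisson(l1, l2, t_end, rng):
    """Simulate the superposition of two Poisson processes with rates l1, l2.

    The merged process has rate l1 + l2; each event is labeled Type I
    with probability p = l1 / (l1 + l2)."""
    events = []
    t = 0.0
    while True:
        t += rng.expovariate(l1 + l2)   # exponential gap of the merged process
        if t > t_end:
            return events
        kind = 1 if rng.random() < l1 / (l1 + l2) else 2
        events.append((t, kind))

rng = random.Random(42)
events = superimposed_poisson(3.0, 1.0, 1000.0, rng)
frac_type1 = sum(1 for _, k in events if k == 1) / len(events)
# frac_type1 should be close to p = 3 / (3 + 1) = 0.75
```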

  13. Universal Poisson Statistics of mRNAs with Complex Decay Pathways.

    PubMed

    Thattai, Mukund

    2016-01-19

    Messenger RNA (mRNA) dynamics in single cells are often modeled as a memoryless birth-death process with a constant probability per unit time that an mRNA molecule is synthesized or degraded. This predicts a Poisson steady-state distribution of mRNA number, in close agreement with experiments. This is surprising, since mRNA decay is known to be a complex process. The paradox is resolved by realizing that the Poisson steady state generalizes to arbitrary mRNA lifetime distributions. A mapping between mRNA dynamics and queueing theory highlights an identifiability problem: a measured Poisson steady state is consistent with a large variety of microscopic models. Here, I provide a rigorous and intuitive explanation for the universality of the Poisson steady state. I show that the mRNA birth-death process and its complex decay variants all take the form of the familiar Poisson law of rare events, under a nonlinear rescaling of time. As a corollary, not only steady states but also transients are Poisson distributed. Deviations from the Poisson form occur only under two conditions: promoter fluctuations leading to transcriptional bursts, or nonindependent degradation of mRNA molecules. These results place severe limits on the power of single-cell experiments to probe microscopic mechanisms, and they highlight the need for single-molecule measurements. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
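    The baseline birth-death prediction is easy to verify numerically. A minimal Gillespie-style simulation (rates illustrative, not from the paper): synthesis at constant rate k_syn, degradation at rate k_deg per molecule, steady state Poisson with mean k_syn/k_deg and hence Fano factor (variance/mean) near 1. Time-weighted averages are used because sampling at event times would be biased toward high-rate states.

```python
import random

def birth_death_moments(k_syn, k_deg, t_end, rng):
    """Gillespie simulation of the memoryless mRNA birth-death process.

    Returns the time-averaged copy-number mean and Fano factor
    (variance / mean), computed after a burn-in of t_end / 2."""
    t, n = 0.0, 0
    w = m1 = m2 = 0.0
    while t < t_end:
        total = k_syn + k_deg * n        # total event rate in state n
        dt = rng.expovariate(total)
        if t > t_end / 2:                # time-weighted moments after burn-in
            w += dt
            m1 += n * dt
            m2 += n * n * dt
        t += dt
        if rng.random() < k_syn / total:
            n += 1                       # synthesis event
        else:
            n -= 1                       # degradation event
    mean = m1 / w
    return mean, (m2 / w - mean * mean) / mean

rng = random.Random(1)
mean, fano = birth_death_moments(10.0, 1.0, 2000.0, rng)
# mean should be near k_syn / k_deg = 10, fano near 1 (Poisson signature)
```

Per the abstract, replacing the exponential lifetime with a complex decay pathway leaves this Poisson signature intact, which is exactly the identifiability problem discussed.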

  14. Effects of skilled nursing facility structure and process factors on medication errors during nursing home admission.

    PubMed

    Lane, Sandi J; Troyer, Jennifer L; Dienemann, Jacqueline A; Laditka, Sarah B; Blanchette, Christopher M

    2014-01-01

    Older adults are at greatest risk of medication errors during the transition period of the first 7 days after admission and readmission to a skilled nursing facility (SNF). The aim of this study was to evaluate structure- and process-related factors that contribute to medication errors and harm during transition periods at an SNF. Data for medication errors and potential medication errors during the 7-day transition period for residents entering North Carolina SNFs were from the Medication Error Quality Initiative-Individual Error database from October 2006 to September 2007. The impact of SNF structure and process measures on the number of reported medication errors and harm from errors was examined using bivariate and multivariate model methods. A total of 138 SNFs reported 581 transition period medication errors; 73 (12.6%) caused harm. Chain affiliation was associated with a reduction in the volume of errors during the transition period. One third of all reported transition errors occurred during the medication administration phase of the medication use process, where dose omissions were the most common type of error; however, dose omissions caused harm less often than wrong-dose errors did. Prescribing errors were much less common than administration errors but were much more likely to cause harm. Both structure and process measures of quality were related to the volume of medication errors. However, process quality measures may play a more important role in predicting harm from errors during the transition of a resident into an SNF. Medication errors during transition could be reduced by improving both prescribing processes and transcription and documentation of orders.

  15. A semi-nonparametric Poisson regression model for analyzing motor vehicle crash data.

    PubMed

    Ye, Xin; Wang, Ke; Zou, Yajie; Lord, Dominique

    2018-01-01

    This paper develops a semi-nonparametric Poisson regression model to analyze motor vehicle crash frequency data collected from rural multilane highway segments in California, US. Motor vehicle crash frequency on rural highway is a topic of interest in the area of transportation safety due to higher driving speeds and the resultant severity level. Unlike the traditional Negative Binomial (NB) model, the semi-nonparametric Poisson regression model can accommodate an unobserved heterogeneity following a highly flexible semi-nonparametric (SNP) distribution. Simulation experiments are conducted to demonstrate that the SNP distribution can well mimic a large family of distributions, including normal distributions, log-gamma distributions, bimodal and trimodal distributions. Empirical estimation results show that such flexibility offered by the SNP distribution can greatly improve model precision and the overall goodness-of-fit. The semi-nonparametric distribution can provide a better understanding of crash data structure through its ability to capture potential multimodality in the distribution of unobserved heterogeneity. When estimated coefficients in empirical models are compared, SNP and NB models are found to have a substantially different coefficient for the dummy variable indicating the lane width. The SNP model with better statistical performance suggests that the NB model overestimates the effect of lane width on crash frequency reduction by 83.1%.
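    The unobserved-heterogeneity idea underlying both models in the abstract can be illustrated with the simpler of the two: drawing each segment's crash rate from a gamma distribution and then a Poisson count around that rate yields the Negative Binomial model, with Fano factor (variance/mean) = 1 + mean/shape > 1. The SNP model generalizes this by replacing the gamma mixing density with a far more flexible one. All parameter values below are illustrative.

```python
import math
import random

def poisson_draw(lam, rng):
    """Knuth's multiplication method; adequate for small lam."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(7)
mean_rate, shape, n = 3.0, 2.0, 20000
# Gamma-mixed Poisson: heterogeneous rate per "segment", then a count.
counts = [poisson_draw(rng.gammavariate(shape, mean_rate / shape), rng)
          for _ in range(n)]
m = sum(counts) / n
fano = sum((c - m) ** 2 for c in counts) / n / m
# Expected: m near 3.0 and fano near 1 + mean/shape = 2.5 (overdispersion),
# versus fano = 1 for a pure Poisson model.
```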

  16. Transport Equation Based Wall Distance Computations Aimed at Flows With Time-Dependent Geometry

    NASA Technical Reports Server (NTRS)

    Tucker, Paul G.; Rumsey, Christopher L.; Bartels, Robert E.; Biedron, Robert T.

    2003-01-01

    Eikonal, Hamilton-Jacobi and Poisson equations can be used for economical nearest wall distance computation and modification. Economical computations may be especially useful for aeroelastic and adaptive grid problems for which the grid deforms, and the nearest wall distance needs to be repeatedly computed. Modifications are directed at remedying turbulence model defects. For complex grid structures, implementation of the Eikonal and Hamilton-Jacobi approaches is not straightforward. This prohibits their use in industrial CFD solvers. However, both the Eikonal and Hamilton-Jacobi equations can be written in advection and advection-diffusion forms, respectively. These, like the Poisson's Laplacian, are commonly occurring industrial CFD solver elements. Use of the NASA CFL3D code to solve the Eikonal and Hamilton-Jacobi equations in advective-based forms is explored. The advection-based distance equations are found to have robust convergence. Geometries studied include single and two element airfoils, wing body and double delta configurations along with a complex electronics system. It is shown that for Eikonal accuracy, upwind metric differences are required. The Poisson approach is found effective and, since it does not require offset metric evaluations, easiest to implement. The sensitivity of flow solutions to wall distance assumptions is explored. Generally, results are not greatly affected by wall distance traits.
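    A small numerical sketch of the Poisson wall-distance idea, under the commonly used formulation (assumed here, not quoted from the paper): solve phi'' = -1 with phi = 0 at the walls, then recover the distance from d = -|phi'| + sqrt(phi'^2 + 2*phi). In a 1D channel of unit width this reproduces min(y, 1 - y) exactly.

```python
import math

# Solve phi'' = -1 on [0, 1] with phi = 0 at both walls via Jacobi iteration.
n = 41
h = 1.0 / (n - 1)
phi = [0.0] * n
for _ in range(5000):
    new = phi[:]
    for i in range(1, n - 1):
        new[i] = 0.5 * (phi[i - 1] + phi[i + 1] + h * h)
    phi = new

def wall_distance(i):
    """Wall distance at interior grid point i from the Poisson solution."""
    grad = (phi[i + 1] - phi[i - 1]) / (2 * h)   # central difference
    return -abs(grad) + math.sqrt(grad * grad + 2.0 * phi[i])

d_quarter = wall_distance(10)        # grid point at y = 0.25
d_three_quarter = wall_distance(30)  # grid point at y = 0.75
# Both should be near 0.25 = distance to the nearest wall.
```

As the abstract notes, the appeal of this route is that a Poisson solve is already a standard element of industrial CFD codes; the formula above is only exact near walls in general geometries, which matches the paper's observation that turbulence models tolerate approximate distances.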

  18. A Spatial Poisson Hurdle Model for Exploring Geographic Variation in Emergency Department Visits

    PubMed Central

    Neelon, Brian; Ghosh, Pulak; Loebs, Patrick F.

    2012-01-01

    We develop a spatial Poisson hurdle model to explore geographic variation in emergency department (ED) visits while accounting for zero inflation. The model consists of two components: a Bernoulli component that models the probability of any ED use (i.e., at least one ED visit per year), and a truncated Poisson component that models the number of ED visits given use. Together, these components address both the abundance of zeros and the right-skewed nature of the nonzero counts. The model has a hierarchical structure that incorporates patient- and area-level covariates, as well as spatially correlated random effects for each areal unit. Because regions with high rates of ED use are likely to have high expected counts among users, we model the spatial random effects via a bivariate conditionally autoregressive (CAR) prior, which introduces dependence between the components and provides spatial smoothing and sharing of information across neighboring regions. Using a simulation study, we show that modeling the between-component correlation reduces bias in parameter estimates. We adopt a Bayesian estimation approach, and the model can be fit using standard Bayesian software. We apply the model to a study of patient and neighborhood factors influencing emergency department use in Durham County, North Carolina. PMID:23543242
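    The two-component structure described in the abstract is easy to make concrete with a sampler (ignoring the spatial CAR prior, which is beyond a short sketch): a Bernoulli gate for "any ED use", then a zero-truncated Poisson for the number of visits among users. Parameter values are illustrative only.

```python
import math
import random

def poisson_draw(lam, rng):
    """Knuth's multiplication method; adequate for small lam."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def hurdle_draw(p_any, lam, rng):
    """One draw from a Poisson hurdle model."""
    if rng.random() >= p_any:
        return 0                 # Bernoulli component: no ED use this year
    while True:                  # rejection-sample the zero-truncated Poisson
        k = poisson_draw(lam, rng)
        if k > 0:
            return k

rng = random.Random(11)
draws = [hurdle_draw(0.3, 2.0, rng) for _ in range(20000)]
zero_frac = sum(d == 0 for d in draws) / len(draws)
users = [d for d in draws if d > 0]
mean_users = sum(users) / len(users)
# zero_frac should be near 1 - p_any = 0.7, and mean_users near the
# truncated-Poisson mean lam / (1 - exp(-lam)) ≈ 2.31.
```

In the full model each areal unit gets its own (p_any, lam) via covariates and correlated spatial random effects, which is what lets information be shared across neighboring regions.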

  19. The solution of large multi-dimensional Poisson problems

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    The Buneman algorithm for solving Poisson problems can be adapted to solve large Poisson problems on computers with a rotating drum memory so that the computation is done with very little time lost due to rotational latency of the drum.

  20. A Legendre–Fourier spectral method with exact conservation laws for the Vlasov–Poisson system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manzini, Gianmarco; Delzanno, Gian Luca; Vencels, Juris

    In this study, we present the design and implementation of an L2-stable spectral method for the discretization of the Vlasov–Poisson model of a collisionless plasma in one space and velocity dimension. The velocity and space dependence of the Vlasov equation are resolved through a truncated spectral expansion based on Legendre and Fourier basis functions, respectively. The Poisson equation, which is coupled to the Vlasov equation, is also resolved through a Fourier expansion. The resulting system of ordinary differential equations is discretized by the implicit second-order accurate Crank–Nicolson time discretization. The non-linear dependence between the Vlasov and Poisson equations is iteratively solved at any time cycle by a Jacobian-Free Newton–Krylov method. In this work we analyze the structure of the main conservation laws of the resulting Legendre–Fourier model, e.g., mass, momentum, and energy, and prove that they are exactly satisfied in the semi-discrete and discrete settings. The L2-stability of the method is ensured by discretizing the boundary conditions of the distribution function at the boundaries of the velocity domain with a suitable penalty term. The impact of the penalty term on the conservation properties is investigated theoretically and numerically. An implementation of the penalty term that does not affect the conservation of mass, momentum, and energy is also proposed and studied. A collisional term is introduced in the discrete model to control the filamentation effect, but does not affect the conservation properties of the system. Numerical results on a set of standard test problems illustrate the performance of the method.

  1. A Legendre–Fourier spectral method with exact conservation laws for the Vlasov–Poisson system

    DOE PAGES

    Manzini, Gianmarco; Delzanno, Gian Luca; Vencels, Juris; ...

    2016-04-22

    In this study, we present the design and implementation of an L2-stable spectral method for the discretization of the Vlasov–Poisson model of a collisionless plasma in one space and velocity dimension. The velocity and space dependence of the Vlasov equation are resolved through a truncated spectral expansion based on Legendre and Fourier basis functions, respectively. The Poisson equation, which is coupled to the Vlasov equation, is also resolved through a Fourier expansion. The resulting system of ordinary differential equations is discretized by the implicit second-order accurate Crank–Nicolson time discretization. The non-linear dependence between the Vlasov and Poisson equations is iteratively solved at any time cycle by a Jacobian-Free Newton–Krylov method. In this work we analyze the structure of the main conservation laws of the resulting Legendre–Fourier model, e.g., mass, momentum, and energy, and prove that they are exactly satisfied in the semi-discrete and discrete settings. The L2-stability of the method is ensured by discretizing the boundary conditions of the distribution function at the boundaries of the velocity domain with a suitable penalty term. The impact of the penalty term on the conservation properties is investigated theoretically and numerically. An implementation of the penalty term that does not affect the conservation of mass, momentum, and energy is also proposed and studied. A collisional term is introduced in the discrete model to control the filamentation effect, but does not affect the conservation properties of the system. Numerical results on a set of standard test problems illustrate the performance of the method.
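    The Fourier building block used for the Poisson part of such schemes can be sketched in a few lines (this is the generic spectral Poisson solve, not the paper's code): on a periodic domain, each Fourier mode of d²phi/dx² = -rho is obtained by dividing the source mode by k².

```python
import numpy as np

# Solve d^2 phi / dx^2 = -rho on a periodic domain spectrally.
n, L = 64, 2 * np.pi
x = np.arange(n) * L / n
rho = np.cos(3 * x)                      # source with known solution cos(3x)/9

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
rho_hat = np.fft.fft(rho)
phi_hat = np.zeros_like(rho_hat)
nonzero = k != 0                          # the k = 0 mode is fixed by gauge choice
phi_hat[nonzero] = rho_hat[nonzero] / k[nonzero] ** 2   # -k^2 phi_hat = -rho_hat
phi = np.fft.ifft(phi_hat).real

max_err = float(np.max(np.abs(phi - np.cos(3 * x) / 9)))  # spectral accuracy
```

For a band-limited source like this one the error is at machine precision, which is the "exact" resolution of the field that the conservation proofs in the paper rely on.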

  2. A new method for extracting near-surface mass-density anomalies from land-based gravity data, based on a special case of Poisson's PDE at the Earth's surface: A case study of salt diapirs in the south of Iran

    NASA Astrophysics Data System (ADS)

    AllahTavakoli, Y.; Safari, A.; Ardalan, A.; Bahroudi, A.

    2015-12-01

    The current research provides a method for tracking near-surface mass-density anomalies using only land-based gravity data, based on a special version of Poisson's Partial Differential Equation (PDE) for the gravitational field at the Earth's surface. The research demonstrates how Poisson's PDE makes it possible to extract near-surface mass-density anomalies from land-based gravity data. Herein, this version of Poisson's PDE is mathematically introduced at the Earth's surface and then used to develop the new method for approximating the mass-density via derivatives of the Earth's gravitational field (i.e. via the gradient tensor). The author believes that the PDE can give us new knowledge about the behavior of the Earth's gravitational field at the Earth's surface, which can be useful for developing new methods of Earth's mass-density determination. In a case study, the proposed method is applied to a set of gravity stations located in the south of Iran. The results were numerically validated against established knowledge of the geological structures in the area of the case study. The method was also compared with two standard methods of mass-density determination. All the numerical experiments show that the proposed approach is well-suited for tracking near-surface mass-density anomalies using only gravity data. Finally, the approach is also applied to petroleum exploration studies of salt diapirs in the south of Iran.

  3. Brain, music, and non-Poisson renewal processes

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Ignaccolo, Massimiliano; Rider, Mark S.; Ross, Mary J.; Winsor, Phil; Grigolini, Paolo

    2007-06-01

    In this paper we show that both music composition and brain function, as revealed by the electroencephalogram (EEG) analysis, are renewal non-Poisson processes living in the nonergodic dominion. To reach this important conclusion we process the data with the minimum spanning tree method, so as to detect significant events, thereby building a sequence of times, which is the time series to analyze. Then we show that in both cases, EEG and music composition, these significant events are the signature of a non-Poisson renewal process. This conclusion is reached using a technique of statistical analysis recently developed by our group, the aging experiment (AE). First, we find that in both cases the distances between two consecutive events are described by nonexponential histograms, thereby proving the non-Poisson nature of these processes. The corresponding survival probabilities Ψ(t) are well fitted by stretched exponentials: Ψ(t) ∝ exp(−(γt)^α), with 0.5 < α < 1. The second step rests on the adoption of AE, which shows that these are renewal processes. We show that the stretched exponential, due to its renewal character, is the emerging tip of an iceberg, whose underwater part has slow tails with an inverse power-law structure with power index μ = 1 + α. Adopting the AE procedure we find that both EEG and music composition yield μ < 2. On the basis of the recently discovered complexity matching effect, according to which a complex system S with μ_S < 2 responds only to a complex driving signal P with μ_P ⩽ μ_S, we conclude that the results of our analysis may explain the influence of music on the human brain.
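    The stretched-exponential survival law quoted above, Ψ(t) = exp(−(γt)^α), is easy to sample and check by inverse-transform sampling: if u is uniform on (0, 1], then t = (−ln u)^(1/α) / γ has exactly that survival function. The parameter values below are illustrative, within the paper's quoted range 0.5 < α < 1.

```python
import math
import random

rng = random.Random(3)
g, alpha, n = 1.0, 0.7, 50000

# Inverse-transform sampling of waiting times with survival exp(-(g t)^alpha).
times = [(-math.log(1.0 - rng.random())) ** (1.0 / alpha) / g for _ in range(n)]

t0 = 1.0
empirical = sum(t > t0 for t in times) / n       # empirical survival at t0
predicted = math.exp(-((g * t0) ** alpha))       # Psi(t0)
# empirical and predicted should agree to within sampling noise.
```

Note that matching the survival curve alone does not establish the renewal property; that is what the paper's aging experiment (AE) is for.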

  4. Protein dielectric constants determined from NMR chemical shift perturbations.

    PubMed

    Kukic, Predrag; Farrell, Damien; McIntosh, Lawrence P; García-Moreno E, Bertrand; Jensen, Kristine Steen; Toleikis, Zigmantas; Teilum, Kaare; Nielsen, Jens Erik

    2013-11-13

    Understanding the connection between protein structure and function requires a quantitative understanding of electrostatic effects. Structure-based electrostatic calculations are essential for this purpose, but their use has been limited by a long-standing discussion on which value to use for the dielectric constants (ε(eff) and ε(p)) required in Coulombic and Poisson-Boltzmann models. The currently used values for ε(eff) and ε(p) are essentially empirical parameters calibrated against thermodynamic properties that are indirect measurements of protein electric fields. We determine optimal values for ε(eff) and ε(p) by measuring protein electric fields in solution using direct detection of NMR chemical shift perturbations (CSPs). We measured CSPs in 14 proteins to get a broad and general characterization of electric fields. Coulomb's law reproduces the measured CSPs optimally with a protein dielectric constant (ε(eff)) from 3 to 13, with an optimal value across all proteins of 6.5. However, when the water-protein interface is treated with finite difference Poisson-Boltzmann calculations, the optimal protein dielectric constant (ε(p)) ranged from 2 to 5 with an optimum of 3. It is striking how similar this value is to the dielectric constant of 2-4 measured for protein powders and how different it is from the ε(p) of 6-20 used in models based on the Poisson-Boltzmann equation when calculating thermodynamic parameters. Because the value of ε(p) = 3 is obtained by analysis of NMR chemical shift perturbations instead of thermodynamic parameters such as pK(a) values, it is likely to describe only the electric field and thus represent a more general, intrinsic, and transferable ε(p) common to most folded proteins.

  5. A line-source method for aligning on-board and other pinhole SPECT systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Susu; Bowsher, James; Yin, Fang-Fang

    2013-12-15

    Purpose: In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system—to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)—is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems.Methods: An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot.Results: In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. 
When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist.Conclusions: Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC.
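    The (α, ρ) line-detection step can be illustrated with a toy Radon/Hough-style search using made-up numbers (nothing here is from the paper's implementation): given 2D points from one projected line source, scan candidate angles a and pick the one where the normal coordinate ρ = x·cos(a) + y·sin(a) is the same for every point; that (a, ρ) pair is the line's Radon-space signature.

```python
import math

# Synthetic points on a line with normal angle 124.38 degrees, offset 0.25.
a_true, rho_true = math.radians(124.38), 0.25
direction = (-math.sin(a_true), math.cos(a_true))
points = [(rho_true * math.cos(a_true) + t * direction[0],
           rho_true * math.sin(a_true) + t * direction[1])
          for t in [i / 10 for i in range(-10, 11)]]

# Scan angles on a 1-degree grid; at the true angle every point gives the
# same rho, so the spread of rho values is minimized there.
best = None
for deg in range(180):
    a = math.radians(deg)
    rhos = [x * math.cos(a) + y * math.sin(a) for x, y in points]
    spread = max(rhos) - min(rhos)
    if best is None or spread < best[0]:
        best = (spread, deg, sum(rhos) / len(rhos))

_, est_deg, est_rho = best   # should recover ~124 degrees and rho ~0.25
```

The paper's pipeline then feeds such (α, ρ) pairs for several line sources into a nonlinear least-squares fit of the six alignment parameters; this sketch only covers the line-detection front end.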

  6. A line-source method for aligning on-board and other pinhole SPECT systems

    PubMed Central

    Yan, Susu; Bowsher, James; Yin, Fang-Fang

    2013-01-01

    Purpose: In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system—to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)—is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. Methods: An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. Results: In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. 
When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Conclusions: Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC. PMID:24320537

  7. A line-source method for aligning on-board and other pinhole SPECT systems.

    PubMed

    Yan, Susu; Bowsher, James; Yin, Fang-Fang

    2013-12-01


  8. On the Determination of Poisson Statistics for Haystack Radar Observations of Orbital Debris

    NASA Technical Reports Server (NTRS)

    Stokely, Christopher L.; Benbrook, James R.; Horstman, Matt

    2007-01-01

    A convenient and powerful method is used to determine whether radar detections of orbital debris are observed according to Poisson statistics. This is done by analyzing the time intervals between detection events. For Poisson statistics, the probability distribution of the time interval between events is shown to be an exponential distribution. This distribution is a special case of the Erlang distribution that is used in estimating traffic loads on telecommunication networks. Poisson statistics form the basis of many orbital debris models, but the statistical basis of these models had not been clearly demonstrated empirically until now. Interestingly, during the fiscal year 2003 observations with the Haystack radar in a fixed staring mode, no statistically significant deviations from the behavior expected under Poisson statistics are observed, either independent of or dependent on altitude and inclination. One might expect some significant clustering of events in time as a result of satellite breakups, but the presence of Poisson statistics indicates that such debris disperse rapidly with respect to Haystack's very narrow radar beam. An exception to Poisson statistics is observed in the months following the intentional breakup of the Fengyun satellite in January 2007.
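    The exponential inter-arrival property is easy to verify numerically. The sketch below (illustrative, not the authors' analysis) simulates a Poisson process as Bernoulli trials on a fine time grid and checks that the gaps between events have the mean 1/λ and the unit coefficient of variation characteristic of an exponential distribution:

```python
import random
import statistics

random.seed(42)
lam = 2.0         # event rate (events per unit time)
dt = 0.005        # time step; lam*dt must be << 1
steps = 1_000_000  # simulated span of 5000 time units

# Record event times where a Bernoulli(lam*dt) trial fires.
times, t = [], 0.0
for _ in range(steps):
    t += dt
    if random.random() < lam * dt:
        times.append(t)

gaps = [b - a for a, b in zip(times, times[1:])]
mean = statistics.fmean(gaps)
cv = statistics.stdev(gaps) / mean   # ~1 for an exponential law
print(mean, cv)   # mean ~ 1/lam = 0.5, cv ~ 1
```

A coefficient of variation well above 1 would instead indicate clustering, which is what the Fengyun breakup produces.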

  9. Simulation Methods for Poisson Processes in Nonstationary Systems.

    DTIC Science & Technology

    1978-08-01

    A relatively efficient new method for simulation of one-dimensional and two-dimensional nonhomogeneous Poisson processes is described. An algorithm for simulation of nonhomogeneous Poisson processes with a log-linear rate function is stated; the method is based on an identity relating the …
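    A standard way to simulate a nonhomogeneous Poisson process with a bounded rate is thinning: draw candidate events from a homogeneous process at the peak rate and accept each with probability λ(t)/λmax. The sketch below uses the log-linear rate λ(t) = exp(a + b·t) mentioned in the abstract; it is the generic thinning algorithm, not necessarily the report's identity-based method:

```python
import math
import random

def thin_nhpp(rate, rate_max, T, rng):
    """Simulate event times of a nonhomogeneous Poisson process on [0, T]
    by thinning a homogeneous process of rate rate_max."""
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)          # candidate from dominating process
        if t > T:
            return events
        if rng.random() < rate(t) / rate_max:   # keep with prob lambda(t)/lambda_max
            events.append(t)

rng = random.Random(1)
a, b, T = 0.0, 1.0, 2.0
rate = lambda t: math.exp(a + b * t)            # log-linear rate function
expected = (math.exp(a + b * T) - math.exp(a)) / b   # integral of the rate

counts = [len(thin_nhpp(rate, rate(T), T, rng)) for _ in range(2000)]
print(sum(counts) / len(counts), expected)      # both close to 6.39
```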

  10. Poisson geometry from a Dirac perspective

    NASA Astrophysics Data System (ADS)

    Meinrenken, Eckhard

    2018-03-01

    We present proofs of classical results in Poisson geometry using techniques from Dirac geometry. This article is based on mini-courses at the Poisson summer school in Geneva, June 2016, and at the workshop Quantum Groups and Gravity at the University of Waterloo, April 2016.

  11. Identification of a Class of Filtered Poisson Processes.

    DTIC Science & Technology

    1981-01-01

    De Brucq, Denis; Gualtierotti, Antonio. A class of filtered Poisson processes is introduced: the amplitude has a law which is spherically invariant and the filter is real, linear and causal. It is shown how such a model can be identified from experimental data.

  12. Interactive Graphic Simulation of Rolling Element Bearings. Phase I. Low Frequency Phenomenon and RAPIDREB Development.

    DTIC Science & Technology

    1981-11-01

    Fragment of the program's Fortran input-card documentation: HOUSING ELASTIC MODULUS (F/L**2); HOUSING POISSON'S RATIO; HOUSING MATERIAL DENSITY (M/L**3); … CAGE POISSON'S RATIO; CAGE MATERIAL DENSITY (M/L**3).

  13. Cumulative Poisson Distribution Program

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert

    1990-01-01

    Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for chi-square (χ²) distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
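    The overflow/underflow problem CUMPOIS addresses is easy to see: λ^k and k! overflow floating point long before the cdf itself does. A common remedy (a sketch, not the CUMPOIS algorithm itself) is to accumulate the terms exp(i·ln λ − λ − ln i!) in log space:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), lam > 0, computed stably in log space."""
    log_terms = [i * math.log(lam) - lam - math.lgamma(i + 1) for i in range(k + 1)]
    m = max(log_terms)            # log-sum-exp trick avoids under/overflow
    return math.exp(m) * sum(math.exp(t - m) for t in log_terms)

print(poisson_cdf(3, 5.0))        # ~0.2650
print(poisson_cdf(1000, 1000.0))  # just above 0.5; naive lam**k / k! would overflow
```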

  14. Modeling animal-vehicle collisions using diagonal inflated bivariate Poisson regression.

    PubMed

    Lao, Yunteng; Wu, Yao-Jan; Corey, Jonathan; Wang, Yinhai

    2011-01-01

    Two types of animal-vehicle collision (AVC) data are commonly adopted for AVC-related risk analysis research: reported AVC data and carcass removal data. One issue with these two data sets is that they were found to have significant discrepancies by previous studies. In order to model these two types of data together and provide a better understanding of highway AVCs, this study adopts a diagonal inflated bivariate Poisson regression method, an inflated version of bivariate Poisson regression model, to fit the reported AVC and carcass removal data sets collected in Washington State during 2002-2006. The diagonal inflated bivariate Poisson model not only can model paired data with correlation, but also handle under- or over-dispersed data sets as well. Compared with three other types of models, double Poisson, bivariate Poisson, and zero-inflated double Poisson, the diagonal inflated bivariate Poisson model demonstrates its capability of fitting two data sets with remarkable overlapping portions resulting from the same stochastic process. Therefore, the diagonal inflated bivariate Poisson model provides researchers a new approach to investigating AVCs from a different perspective involving the three distribution parameters (λ(1), λ(2) and λ(3)). The modeling results show the impacts of traffic elements, geometric design and geographic characteristics on the occurrences of both reported AVC and carcass removal data. It is found that the increase of some associated factors, such as speed limit, annual average daily traffic, and shoulder width, will increase the numbers of reported AVCs and carcass removals. Conversely, the presence of some geometric factors, such as rolling and mountainous terrain, will decrease the number of reported AVCs. Published by Elsevier Ltd.
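    The three parameters (λ1, λ2, λ3) correspond to the usual trivariate-reduction construction of the bivariate Poisson: X1 = Y1 + Y3 and X2 = Y2 + Y3 with independent components Yi ~ Poisson(λi), so the shared component λ3 is exactly the covariance between the two counts (e.g., between reported AVCs and carcass removals). A simulation sketch with illustrative parameters, using Knuth's small-λ Poisson sampler:

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's product-of-uniforms Poisson sampler (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(7)
lam1, lam2, lam3 = 1.0, 2.0, 1.5
n = 20000
pairs = []
for _ in range(n):
    y3 = poisson_sample(lam3, rng)                 # shared component
    pairs.append((poisson_sample(lam1, rng) + y3,  # X1 = Y1 + Y3
                  poisson_sample(lam2, rng) + y3)) # X2 = Y2 + Y3

m1 = sum(x for x, _ in pairs) / n
m2 = sum(y for _, y in pairs) / n
cov = sum((x - m1) * (y - m2) for x, y in pairs) / n
print(m1, m2, cov)   # means ~ lam1+lam3 = 2.5 and lam2+lam3 = 3.5; covariance ~ lam3 = 1.5
```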

  15. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, C. C.; Chen, P. P.; Fuchs, W. K.

    1987-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time, and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared-memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
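    The paper's virtual-backpointer encoding is not reproduced here, but the idea of local, per-move checking can be illustrated on an ordinary doubly linked list: during a forward move, the O(1) invariant node.next.prev == node exposes a corrupted link at the step that touches it, with no global audit. A simplified sketch (the Virtual Double Linked List stores this redundancy more compactly than explicit back pointers):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None
        self.prev = None

def build(values):
    """Build a doubly linked list and return its head."""
    head, tail = None, None
    for v in values:
        node = Node(v)
        node.prev = tail
        if tail:
            tail.next = node
        else:
            head = node
        tail = node
    return head

def traverse_checked(head):
    """Forward traversal with an O(1) consistency check per move."""
    out, node = [], head
    while node:
        if node.next is not None and node.next.prev is not node:
            raise ValueError(f"pointer corruption detected at node {node.value}")
        out.append(node.value)
        node = node.next
    return out

head = build([1, 2, 3, 4])
print(traverse_checked(head))          # [1, 2, 3, 4]

head.next.next = head.next.next.next   # corrupt: node 2 now skips node 3
try:
    traverse_checked(head)
except ValueError as e:
    print(e)                           # detected at node 2
```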

  16. Hamiltonian structure of three-dimensional gravity in Vielbein formalism

    NASA Astrophysics Data System (ADS)

    Hajihashemi, Mahdi; Shirzad, Ahmad

    2018-01-01

    Considering Chern-Simons-like gravity theories in three dimensions as first-order systems, we analyze the Hamiltonian structure of three theories: Topological Massive Gravity, New Massive Gravity, and Zwei-Dreibein Gravity. We show that these systems demonstrate a new feature of constrained systems, in which a new kind of constraint emerges due to factorization of the determinant of the matrix of Poisson brackets of constraints. We find the desired number of degrees of freedom as well as the generating functional of local Lorentz transformations and diffeomorphisms through the canonical structure of the system. We also compare the Hamiltonian structure of the linearized versions of the considered models with the original ones.

  17. Identification d’une Classe de Processus de Poisson Filtres (Identification of a Class of Filtered Poisson Processes).

    DTIC Science & Technology

    1983-05-20

    A class of filtered Poisson processes is introduced: the amplitude has a law which is spherically invariant and the filter is real, linear and causal. It is shown how such a model can be identified from experimental data. (Author)

  18. Optimisation of GaN LEDs and the reduction of efficiency droop using active machine learning

    DOE PAGES

    Rouet-Leduc, Bertrand; Barros, Kipton Marcos; Lookman, Turab; ...

    2016-04-26

    A fundamental challenge in the design of LEDs is to maximise electro-luminescence efficiency at high current densities. We simulate GaN-based LED structures that delay the onset of efficiency droop by spreading carrier concentrations evenly across the active region. Statistical analysis and machine learning effectively guide the selection of the next LED structure to be examined based upon its expected efficiency as well as model uncertainty. This active learning strategy rapidly constructs a model that predicts Poisson-Schrödinger simulations of devices, and that simultaneously produces structures with higher simulated efficiencies.

  19. Order-disorder effects on the elastic properties of CuMPt6 (M=Cr and Co) compounds

    NASA Astrophysics Data System (ADS)

    Huang, Shuo; Li, Rui-Zi; Qi, San-Tao; Chen, Bao; Shen, Jiang

    2014-04-01

    The elastic properties of CuMPt6 (M=Cr and Co) in the disordered face-centered cubic (fcc) structure and the ordered Cu3Au-type structure are studied with the lattice-inversion embedded-atom method. The calculated lattice constant and Debye temperature agree quite well with the comparable experimental data. The obtained formation enthalpy demonstrates that the Cu3Au-type structure is energetically more favorable. Numerical estimates of the elastic constants, bulk/shear modulus, Young's modulus, Poisson's ratio, elastic anisotropy, and Debye temperature for both compounds are performed, and the results suggest that the disordered fcc structure is much softer than the ordered Cu3Au-type structure.

  20. Quasi-Likelihood Techniques in a Logistic Regression Equation for Identifying Simulium damnosum s.l. Larval Habitats Intra-cluster Covariates in Togo.

    PubMed

    Jacob, Benjamin G; Novak, Robert J; Toe, Laurent; Sanfo, Moussa S; Afriyie, Abena N; Ibrahim, Mohammed A; Griffith, Daniel A; Unnasch, Thomas R

    2012-01-01

    The standard methods for regression analyses of clustered riverine larval habitat data of Simulium damnosum s.l., a major black-fly vector of onchocerciasis, postulate models relating observational ecological-sampled parameter estimators to prolific habitats without accounting for residual intra-cluster error correlation effects. Generally, this correlation comes from two sources: (1) the design of the random effects and their assumed covariance from the multiple levels within the regression model; and (2) the correlation structure of the residuals. Unfortunately, inconspicuous errors in residual intra-cluster correlation estimates can overstate precision in forecasted S. damnosum s.l. riverine larval habitat explanatory attributes regardless of how they are treated (e.g., independent, autoregressive, Toeplitz, etc.). In this research, the geographical locations of multiple riverine-based S. damnosum s.l. larval ecosystem habitats sampled from 2 pre-established epidemiological sites in Togo were identified and recorded from July 2009 to June 2010. Initially, the data were aggregated in PROC GENMOD. An agglomerative hierarchical residual cluster-based analysis was then performed. The sampled clustered study site data were then analyzed for statistical correlations using Monthly Biting Rates (MBR). Euclidean distance measurements and terrain-related geomorphological statistics were then generated in ArcGIS. A digital overlay was then performed, also in ArcGIS, using the georeferenced ground coordinates of high- and low-density clusters stratified by Annual Biting Rates (ABR). These data were overlain onto multitemporal sub-meter pixel resolution satellite data (i.e., QuickBird 0.61 m wavebands). Orthogonal spatial filter eigenvectors were then generated in SAS/GIS. 
Univariate and non-linear regression-based models (i.e., Logistic, Poisson, and Negative Binomial) were also employed to determine probability distributions and to identify statistically significant parameter estimators from the sampled data. Thereafter, Durbin-Watson test statistics were used to test the null hypothesis that the regression residuals were not autocorrelated against the alternative that the residuals followed an autoregressive process, in AUTOREG. Bayesian uncertainty matrices were also constructed, employing normal priors for each of the sampled estimators, in PROC MCMC. The residuals revealed both spatially structured and unstructured error effects in the high- and low-ABR-stratified clusters. The analyses also revealed that the estimators levels of turbidity and presence of rocks were statistically significant for the high-ABR-stratified clusters, while the estimators distance between habitats and floating vegetation were important for the low-ABR-stratified cluster. Varying and constant coefficient regression models, ABR-stratified GIS-generated clusters, sub-meter resolution satellite imagery, a robust residual intra-cluster diagnostic test, MBR-based histograms, eigendecomposition spatial filter algorithms, and Bayesian matrices can enable accurate autoregressive estimation of latent uncertainty effects and other residual error probabilities (i.e., heteroskedasticity) for testing correlations between georeferenced S. damnosum s.l. riverine larval habitat estimators. The asymptotic distribution of the resulting residual-adjusted intra-cluster predictor error autocovariate coefficients can thereafter be established, while estimates of the asymptotic variance can lead to the construction of approximate confidence intervals for accurately targeting productive S. damnosum s.l. habitats based on spatiotemporal field-sampled count data.
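    Among the diagnostics listed, the Durbin-Watson statistic is the most mechanical: DW = Σ(e_t − e_{t−1})² / Σ e_t², near 2 for uncorrelated residuals, near 0 under positive autocorrelation, and near 4 under negative autocorrelation. A minimal sketch of the textbook formula (not the SAS AUTOREG implementation):

```python
def durbin_watson(residuals):
    """Durbin-Watson statistic for a sequence of regression residuals."""
    num = sum((b - a) ** 2 for a, b in zip(residuals, residuals[1:]))
    den = sum(e * e for e in residuals)
    return num / den

print(durbin_watson([1, 1, 1, -1, -1, -1]))  # 0.667: strong positive autocorrelation
print(durbin_watson([1, -1, 1, -1, 1, -1]))  # 3.33: strong negative autocorrelation
```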

  1. Algorithm Calculates Cumulative Poisson Distribution

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.

    1992-01-01

    Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).

  2. A Fourier spectral-discontinuous Galerkin method for time-dependent 3-D Schrödinger-Poisson equations with discontinuous potentials

    NASA Astrophysics Data System (ADS)

    Lu, Tiao; Cai, Wei

    2008-10-01

    In this paper, we propose a high-order Fourier spectral-discontinuous Galerkin method for time-dependent Schrödinger-Poisson equations in 3-D spaces. The Fourier spectral Galerkin method is used for the two periodic transverse directions and a high-order discontinuous Galerkin method for the longitudinal propagation direction. Such a combination results in a diagonal form for the differential operators along the transverse directions and a flexible method to handle the discontinuous potentials present in quantum heterojunction and superlattice structures. As the derivative matrices are required for various time integration schemes such as the exponential time differencing and Crank-Nicolson methods, explicit derivative matrices of the discontinuous Galerkin method of various orders are derived. Numerical results, using the proposed method with various time integration schemes, are provided to validate the method.

  3. Study of the Anisotropic Elastoplastic Properties of β-Ga2O3 Films Synthesized on SiC/Si Substrates

    NASA Astrophysics Data System (ADS)

    Grashchenko, A. S.; Kukushkin, S. A.; Nikolaev, V. I.; Osipov, A. V.; Osipova, E. V.; Soshnikov, I. P.

    2018-05-01

    The structural and mechanical properties of gallium oxide films grown on silicon crystallographic planes (001), (011), and (111) with a buffer layer of silicon carbide are investigated. Nanoindentation was used to study the elastoplastic properties of gallium oxide and also to determine the elastic recovery parameter of the films under study. The tensile strength, hardness, elasticity tensor, compliance tensor, Young's modulus, Poisson's ratio, and other characteristics of gallium oxide were calculated using quantum chemistry methods. It was found that the gallium oxide crystal is auxetic because, for some stretching directions, the Poisson's ratio takes on negative values. The calculated values correspond quantitatively to the experimental data. It is concluded that the elastoplastic properties of gallium oxide films approximately correspond to the properties of bulk crystals and that a change in the orientation of the silicon surface leads to a significant change in the orientation of gallium oxide.

  4. Primer ID Validates Template Sampling Depth and Greatly Reduces the Error Rate of Next-Generation Sequencing of HIV-1 Genomic RNA Populations

    PubMed Central

    Zhou, Shuntai; Jones, Corbin; Mieczkowski, Piotr

    2015-01-01

    ABSTRACT Validating the sampling depth and reducing sequencing errors are critical for studies of viral populations using next-generation sequencing (NGS). We previously described the use of Primer ID to tag each viral RNA template with a block of degenerate nucleotides in the cDNA primer. We now show that low-abundance Primer IDs (offspring Primer IDs) are generated due to PCR/sequencing errors. These artifactual Primer IDs can be removed using a cutoff model for the number of reads required to make a template consensus sequence. We have modeled the fraction of sequences lost due to Primer ID resampling. For a typical sequencing run, less than 10% of the raw reads are lost to offspring Primer ID filtering and resampling. The remaining raw reads are used to correct for PCR resampling and sequencing errors. We also demonstrate that Primer ID reveals bias intrinsic to PCR, especially at low template input or utilization. cDNA synthesis and PCR convert ca. 20% of RNA templates into recoverable sequences, and 30-fold sequence coverage recovers most of these template sequences. We have directly measured the residual error rate to be around 1 in 10,000 nucleotides. We use this error rate and the Poisson distribution to define the cutoff to identify preexisting drug resistance mutations at low abundance in an HIV-infected subject. Collectively, these studies show that >90% of the raw sequence reads can be used to validate template sampling depth and to dramatically reduce the error rate in assessing a genetically diverse viral population using NGS. IMPORTANCE Although next-generation sequencing (NGS) has revolutionized sequencing strategies, it suffers from serious limitations in defining sequence heterogeneity in a genetically diverse population, such as HIV-1, due to PCR resampling and PCR/sequencing errors. The Primer ID approach reveals the true sampling depth and greatly reduces errors. 
Knowing the sampling depth allows the construction of a model of how to maximize the recovery of sequences from input templates and to reduce resampling of the Primer ID so that appropriate multiplexing can be included in the experimental design. With the defined sampling depth and measured error rate, we are able to assign cutoffs for the accurate detection of minority variants in viral populations. This approach allows the power of NGS to be realized without having to guess about sampling depth or to ignore the problem of PCR resampling, while also being able to correct most of the errors in the data set. PMID:26041299
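    The final step described above (combining the measured error rate with the Poisson distribution to define a detection cutoff) can be sketched as follows. If the residual error rate is about 1 in 10,000 nucleotides, the number of spurious copies of a given mutation at a position is approximately Poisson with mean equal to error rate times template depth; the cutoff is the smallest count whose Poisson tail probability falls below the chosen false-positive level. The numbers below are illustrative, not the paper's:

```python
import math

def poisson_tail(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))

def detection_cutoff(error_rate, depth, alpha=0.001):
    """Smallest mutation count unlikely (< alpha) to arise from error alone."""
    lam = error_rate * depth      # expected spurious counts per position
    k = 1
    while poisson_tail(k, lam) >= alpha:
        k += 1
    return k

# 1 error per 10,000 nt at a depth of 10,000 template consensus sequences => lam = 1
print(detection_cutoff(1e-4, 10000))   # 6: a variant seen >= 6 times is unlikely to be error
```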

  5. Method for Real-Time Model Based Structural Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Urnes, James M., Sr. (Inventor); Smith, Timothy A. (Inventor); Reichenbach, Eric Y. (Inventor)

    2015-01-01

    A system and methods for real-time model-based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during an operation of the vehicle is received, and the real-time measurement is compared to expected operation data for the location to provide a modeling error signal. A statistical significance of the modeling error signal is calculated to provide an error significance, and a persistence of the error significance is determined. A structural anomaly is indicated if the persistence exceeds a persistence threshold value.
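    The chain of modeling error, statistical significance, persistence, and anomaly is easy to sketch. The toy illustration below is not the patented implementation; it assumes a known expected value and standard deviation per location and uses a simple consecutive-exceedance counter for persistence:

```python
def detect_anomaly(measurements, expected, sigma, z_thresh=3.0, persist_thresh=3):
    """Flag a structural anomaly when the modeling error stays
    statistically significant for persist_thresh consecutive samples."""
    run = 0
    for i, m in enumerate(measurements):
        z = abs(m - expected) / sigma          # significance of the modeling error
        run = run + 1 if z > z_thresh else 0   # persistence of the significance
        if run >= persist_thresh:
            return i                           # index where the anomaly is declared
    return None

# Expected strain 10.0, sigma 0.5: a transient spike is ignored,
# a sustained deviation is flagged.
data = [10.1, 9.8, 13.0, 10.2, 12.5, 12.7, 12.6, 12.8]
print(detect_anomaly(data, 10.0, 0.5))   # 6 (third consecutive significant sample)
```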

  6. State Estimation for Linear Systems Driven Simultaneously by Wiener and Poisson Processes.

    DTIC Science & Technology

    1978-12-01

    The state estimation problem of linear stochastic systems driven simultaneously by Wiener and Poisson processes is considered, especially the case...where the incident intensities of the Poisson processes are low and the system is observed in an additive white Gaussian noise. The minimum mean squared

  7. The Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.

    ERIC Educational Resources Information Center

    Everett, James E.

    1993-01-01

    Addresses objections to the validity of assuming a Poisson loglinear model as the generating process for citations from one journal into another. Fluctuations in citation rate, serial dependence on citations, impossibility of distinguishing between rate changes and serial dependence, evidence for changes in Poisson rate, and transitivity…

  8. Method for resonant measurement

    DOEpatents

    Rhodes, George W.; Migliori, Albert; Dixon, Raymond D.

    1996-01-01

    A method of measurement of objects to determine object flaws, Poisson's ratio (σ), and shear modulus (μ) is shown and described. First, the frequency for expected degenerate responses is determined for one or more input frequencies, and then splitting of degenerate resonant modes is observed to identify the presence of flaws in the object. Poisson's ratio and the shear modulus can be determined by identification of resonances dependent only on the shear modulus, and then using that shear modulus to find Poisson's ratio using other modes dependent on both the shear modulus and Poisson's ratio.

  9. Zero-inflated Conway-Maxwell Poisson Distribution to Analyze Discrete Data.

    PubMed

    Sim, Shin Zhu; Gupta, Ramesh C; Ong, Seng Huat

    2018-01-09

    In this paper, we study the zero-inflated Conway-Maxwell Poisson (ZICMP) distribution and develop a regression model. Score and likelihood ratio tests are also implemented for testing the inflation/deflation parameter. Simulation studies are carried out to examine the performance of these tests. A data example is presented to illustrate the concepts. In this example, the proposed model is compared to the well-known zero-inflated Poisson (ZIP) and the zero-inflated generalized Poisson (ZIGP) regression models. It is shown that the fit by ZICMP is comparable to or better than these models.
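    For context, the zero-inflated Poisson (ZIP) baseline that ZICMP is compared against mixes a point mass at zero with an ordinary Poisson: P(0) = π + (1 − π)e^(−λ) and P(k) = (1 − π)e^(−λ)λ^k/k! for k ≥ 1. A short sketch of that pmf:

```python
import math

def zip_pmf(k, pi, lam):
    """Zero-inflated Poisson pmf: extra mass pi at zero, Poisson(lam) otherwise."""
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    return pi + (1 - pi) * poisson if k == 0 else (1 - pi) * poisson

pi, lam = 0.3, 2.0
print(zip_pmf(0, pi, lam))                          # ~0.395, vs 0.135 for plain Poisson(2)
print(sum(zip_pmf(k, pi, lam) for k in range(60)))  # ~1.0
```

ZICMP generalizes the Poisson component to Conway-Maxwell Poisson, which also accommodates over- and underdispersion.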

  10. Applying the compound Poisson process model to the reporting of injury-related mortality rates.

    PubMed

    Kegler, Scott R

    2007-02-16

    Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
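    The practical effect of the compound Poisson adjustment is on the variance. If incidents arrive as Poisson events and incident i contributes s_i deaths, the variance of the total case count is estimated by Σ s_i² rather than by the case count Σ s_i, so the interval widens whenever multiple-fatality incidents are present (and it reduces to the simple Poisson interval when every s_i = 1). A hedged arithmetic sketch with illustrative numbers, not NVDRS data:

```python
import math

def rate_ci(incident_sizes, population, z=1.96, per=100000):
    """Approximate CI for a mortality rate under the compound Poisson model.
    Var(total cases) is estimated by sum(s_i^2), not by the case count."""
    cases = sum(incident_sizes)
    var_cases = sum(s * s for s in incident_sizes)  # equals `cases` if all s_i == 1
    rate = cases / population * per
    half = z * math.sqrt(var_cases) / population * per
    return rate, rate - half, rate + half

# 95 single-fatality incidents plus one 5-fatality incident, population 1,000,000
sizes = [1] * 95 + [5]
rate, lo, hi = rate_ci(sizes, 1_000_000)
print(rate, lo, hi)  # same point estimate as the simple Poisson model,
                     # but a wider interval: variance 120 vs. case count 100
```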

  11. Comparison of the Nernst-Planck model and the Poisson-Boltzmann model for electroosmotic flows in microchannels.

    PubMed

    Park, H M; Lee, J S; Kim, T W

    2007-11-15

    In the analysis of electroosmotic flows, the internal electric potential is usually modeled by the Poisson-Boltzmann equation. The Poisson-Boltzmann equation is derived from the assumption of thermodynamic equilibrium where the ionic distributions are not affected by fluid flows. Although this is a reasonable assumption for steady electroosmotic flows through straight microchannels, there are some important cases where convective transport of ions has nontrivial effects. In these cases, it is necessary to adopt the Nernst-Planck equation instead of the Poisson-Boltzmann equation to model the internal electric field. In the present work, the predictions of the Nernst-Planck equation are compared with those of the Poisson-Boltzmann equation for electroosmotic flows in various microchannels where the convective transport of ions is not negligible.
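    As a concrete, heavily simplified illustration of the Poisson-Boltzmann side of the comparison: the linearized (Debye-Hückel) equation in a 1-D slit channel, ψ'' = κ²ψ with ψ(±h) = ζ, has the closed form ψ(y) = ζ·cosh(κy)/cosh(κh), and a finite-difference solve reproduces it. This is a generic numerical sketch, not the authors' model, which keeps the full nonlinear Boltzmann term and, for Nernst-Planck, convective ion transport:

```python
import math

def solve_linear_pb(kappa, zeta, h, n):
    """Finite-difference solution of psi'' = kappa^2 * psi on [-h, h],
    psi(+-h) = zeta, via the Thomas tridiagonal algorithm."""
    dx = 2 * h / (n - 1)
    diag = -(2 + (kappa * dx) ** 2)
    m = n - 2                     # interior unknowns; boundaries move to the RHS
    rhs = [0.0] * m
    rhs[0] -= zeta
    rhs[-1] -= zeta
    # Thomas sweep; sub- and super-diagonals are both 1.
    c, d = [0.0] * m, [0.0] * m
    c[0], d[0] = 1.0 / diag, rhs[0] / diag
    for i in range(1, m):
        denom = diag - c[i - 1]
        c[i] = 1.0 / denom
        d[i] = (rhs[i] - d[i - 1]) / denom
    psi = [0.0] * m
    psi[-1] = d[-1]
    for i in range(m - 2, -1, -1):
        psi[i] = d[i] - c[i] * psi[i + 1]
    return [zeta] + psi + [zeta], dx

kappa, zeta, h, n = 3.0, 1.0, 1.0, 201
psi, dx = solve_linear_pb(kappa, zeta, h, n)
err = max(abs(p - zeta * math.cosh(kappa * (-h + i * dx)) / math.cosh(kappa * h))
          for i, p in enumerate(psi))
print(err)   # small O(dx^2) discretization error
```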

  12. Efficiency optimization of a fast Poisson solver in beam dynamics simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula

    2016-01-01

    Calculating the solution of Poisson's equation relating to space charge force is still the major time consumption in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver in beam dynamics simulations: the integrated Green's function method. We introduce three optimization steps of the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of discrete Fourier transform for the Green's function; using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy. This provides a fast routine for high performance calculation of the space charge effect in accelerators.
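    The core of the Green's-function approach can be shown in one dimension, where G(x) = |x|/2 satisfies G'' = δ, so φ_i = h·Σ_j G(x_i − x_j)ρ_j solves the discrete Poisson equation exactly under the standard second-difference operator. The paper's optimizations replace this O(N²) direct sum with DCT/FFT-based convolution; the sketch below shows only the baseline identity, not the optimized routine:

```python
def poisson_1d_green(rho, h):
    """Direct O(n^2) convolution with the 1-D free-space Green's
    function G(x) = |x|/2, for which G'' = delta."""
    n = len(rho)
    return [h * sum(abs(i - j) * h / 2.0 * rho[j] for j in range(n))
            for i in range(n)]

n, h = 64, 0.1
rho = [1.0 if 20 <= i < 30 else 0.0 for i in range(n)]   # a block of charge
phi = poisson_1d_green(rho, h)

# Interior check: the discrete Laplacian of phi recovers rho exactly.
lap = [(phi[i - 1] - 2 * phi[i] + phi[i + 1]) / h**2 for i in range(1, n - 1)]
err = max(abs(l, ) if False else abs(l - r) for l, r in zip(lap, rho[1:-1]))
print(err)   # ~machine precision
```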

  13. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    PubMed

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in pixel, wavelet transform, and variance stabilization domains reveals that the tails of Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmooths real sensor data, we propose a mixture of Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique.

  14. A generalized right truncated bivariate Poisson regression model with applications to health data.

    PubMed

    Islam, M Ataharul; Chowdhury, Rafiqul I

    2017-01-01

    A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and over- or underdispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using a marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on the number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute, and it is evident from the results that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model.

  15. A generalized right truncated bivariate Poisson regression model with applications to health data

    PubMed Central

    Islam, M. Ataharul; Chowdhury, Rafiqul I.

    2017-01-01

    A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and over- or underdispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using a marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on the number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute, and it is evident from the results that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model. PMID:28586344

  16. Persistently Auxetic Materials: Engineering the Poisson Ratio of 2D Self-Avoiding Membranes under Conditions of Non-Zero Anisotropic Strain.

    PubMed

    Ulissi, Zachary W; Govind Rajan, Ananth; Strano, Michael S

    2016-08-23

    Entropic surfaces represented by fluctuating two-dimensional (2D) membranes are predicted to have desirable mechanical properties when unstressed, including a negative Poisson's ratio ("auxetic" behavior). Herein, we present calculations of the strain-dependent Poisson ratio of self-avoiding 2D membranes demonstrating desirable auxetic properties over a range of mechanical strain. Finite-size membranes with unclamped boundary conditions have a positive Poisson's ratio due to spontaneous non-zero mean curvature, which can be suppressed with an explicit bending rigidity, in agreement with prior findings. Applying longitudinal strain along a single axis suppresses this mean curvature and the entropic out-of-plane fluctuations, resulting in a molecular-scale mechanism for realizing a negative Poisson's ratio above a critical strain, with values significantly more negative than the previously observed zero-strain limit for infinite sheets. We find that auxetic behavior persists over surprisingly high strains of more than 20% for the smallest surfaces, with desirable finite-size scaling producing surfaces with a negative Poisson's ratio over a wide range of strains. These results promise the design of surfaces and composite materials with a tunable Poisson's ratio by prestressing platelet inclusions or controlling the surface rigidity of a matrix of 2D materials.
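
    The quantity being reported is the standard definition nu = -d(eps_transverse)/d(eps_longitudinal): auxetic means the membrane widens when stretched, so the slope of transverse versus axial strain is positive and nu is negative. A toy estimate from hypothetical strain data (the 0.08 response slope is invented for illustration, not a value from the paper):

```python
import numpy as np

# Hypothetical measurements: axial strain applied along x, transverse
# strain measured along y. A positive slope of eps_y vs eps_x means
# auxetic (negative Poisson's ratio) behavior.
eps_x = np.linspace(0.0, 0.20, 11)   # applied longitudinal strain, up to 20%
eps_y = 0.08 * eps_x                 # invented auxetic transverse response

# Poisson's ratio nu = -d(eps_y)/d(eps_x), estimated by least squares.
slope = np.polyfit(eps_x, eps_y, 1)[0]
nu = -slope
print(nu)  # negative, i.e. auxetic
```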

  17. Progress on Discrete Fracture Network models with implications on the predictions of permeability and flow channeling structure

    NASA Astrophysics Data System (ADS)

    Darcel, C.; Davy, P.; Le Goc, R.; Maillot, J.; Selroos, J. O.

    2017-12-01

    We present progress on Discrete Fracture Network (DFN) flow modeling, including realistic advanced DFN spatial structures and local fracture transmissivity properties, through an application to the Forsmark site in Sweden. DFN models are a framework for combining fracture datasets from different sources and scales and for interpolating between them by combining statistical distributions and stereological relations. The resulting DFN upscaling function (the size density distribution) is a model component key to extrapolating fracture size densities between data gaps, from borehole core up to site scale. Another important feature of DFN models lies in the spatial correlations between fractures, with still unevaluated consequences for flow predictions. Indeed, although common Poisson (i.e. spatially random) models are widely used, they do not reflect the geological evidence for more complex structures. To model these, we define a DFN growth process from kinematic rules for nucleation, growth and stopping conditions. It mimics in a simplified way the geological fracturing processes and produces DFN characteristics (both the upscaling function and spatial correlations) fully consistent with field observations. DFN structures are first compared for constant transmissivities. Flow simulations for the kinematic and equivalent Poisson DFN models show striking differences: with the kinematic DFN, connectivity and permeability are significantly smaller, down to a difference of one order of magnitude, and flow is much more channelized. Further flow analyses are performed with more realistic transmissivity distribution conditions (sealed parts, relations to fracture sizes, orientations and the in-situ stress field). The relative importance of the overall DFN structure in the final flow predictions is discussed.
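
    The Poisson baseline that the kinematic model is compared against can be sketched directly: fracture count drawn from a Poisson law on the domain area, centers placed uniformly at random, trace lengths from a power-law density, orientations uniform. All parameter values below are illustrative, not calibrated to the Forsmark data.

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_dfn_2d(density, domain=100.0, lmin=1.0, a=2.5):
    """Baseline spatially random (Poisson) DFN in a square domain:
    fracture count ~ Poisson(density * area), centers uniform,
    trace lengths from a power-law density n(l) ~ l^(-a) with l >= lmin,
    orientations uniform. Parameters are illustrative only."""
    n = rng.poisson(density * domain**2)
    centers = rng.uniform(0.0, domain, size=(n, 2))
    # Inverse-transform sampling of the power-law length density:
    # survival S(l) = (lmin / l)^(a-1)  =>  l = lmin * (1-u)^(-1/(a-1)).
    u = rng.random(n)
    lengths = lmin * (1.0 - u) ** (-1.0 / (a - 1.0))
    angles = rng.uniform(0.0, np.pi, n)
    return centers, lengths, angles

centers, lengths, angles = poisson_dfn_2d(density=0.05)
print(len(lengths), float(lengths.min()))
```

    A kinematic growth model replaces the independent placements above with nucleation, growth, and arrest rules, which is what introduces the spatial correlations absent from this baseline.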

  18. A probabilistic tornado wind hazard model for the continental United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hossain, Q; Kimball, J; Mensing, R

    A probabilistic tornado wind hazard model for the continental United States (CONUS) is described. The model incorporates both aleatory (random) and epistemic uncertainties associated with quantifying the tornado wind hazard parameters. The temporal occurrence of tornadoes within CONUS is assumed to be a Poisson process. A spatial distribution of tornado touchdown locations is developed empirically based on the observed historical events within CONUS. The hazard model is an areal probability model that takes into consideration the size and orientation of the facility, the length and width of the tornado damage area (idealized as a rectangle and dependent on the tornado intensity scale), wind speed variation within the damage area, tornado intensity classification errors (i.e., errors in assigning a Fujita intensity scale based on surveyed damage), and the tornado path direction. Epistemic uncertainties in describing the distributions of the aleatory variables are accounted for by using more than one distribution model to describe aleatory variations. The epistemic uncertainties are based on inputs from a panel of experts. A computer program, TORNADO, has been developed incorporating this model; features of this program are also presented.
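
    The Poisson-process occurrence assumption has a simple operational consequence: with a constant annual strike rate lambda, the probability of at least one strike in t years is 1 - exp(-lambda * t). The rate and exposure period below are illustrative, not values from the report.

```python
import math

def strike_probability(annual_rate, years):
    """P(at least one strike in `years`) under a homogeneous Poisson
    occurrence model with constant annual strike rate `annual_rate`."""
    return 1.0 - math.exp(-annual_rate * years)

# Illustrative numbers: a point-facility strike rate of 1e-4 per year
# evaluated over a 50-year design life.
p50 = strike_probability(1e-4, 50)
print(p50)  # close to the small-rate approximation 1e-4 * 50 = 0.005
```

    For rare events the result is nearly rate times exposure time; the exponential form matters when the cumulative rate stops being small.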

  19. A simulation study of nonparametric total deviation index as a measure of agreement based on quantile regression.

    PubMed

    Lin, Lawrence; Pan, Yi; Hedayat, A S; Barnhart, Huiman X; Haber, Michael

    2016-01-01

    Total deviation index (TDI) captures a prespecified quantile of the absolute deviation of paired observations from raters, observers, methods, assays, instruments, etc. We compare the performance of TDI using nonparametric quantile regression to the TDI assuming normality (Lin, 2000). This simulation study considers three distributions: normal, Poisson, and uniform at quantile levels of 0.8 and 0.9 for cases with and without contamination. Study endpoints include the bias of TDI estimates (compared with their respective theoretical values), the standard error of TDI estimates (compared with their true simulated standard errors), test size (compared with 0.05), and power. Nonparametric TDI using quantile regression, although it slightly underestimates and delivers slightly less power for data without contamination, works satisfactorily under all simulated cases even for moderate (say, ≥40) sample sizes. The performance of the TDI based on a quantile of 0.8 is in general superior to that of 0.9. The performances of nonparametric and parametric TDI methods are compared with a real data example. Nonparametric TDI can be very useful when the underlying distribution of the difference is not normal, especially when it has a heavy tail.
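
    In the simplest (covariate-free) case the nonparametric TDI is just an empirical quantile of the absolute paired differences; the paper's quantile-regression estimator generalizes this to settings with covariates. A sketch with simulated paired readings (distribution parameters are illustrative):

```python
import numpy as np

def tdi_nonparametric(x, y, p=0.9):
    """Nonparametric total deviation index: the p-th empirical quantile
    of the absolute paired differences |x - y|. This sample quantile is
    a covariate-free stand-in for the quantile-regression estimator."""
    return float(np.quantile(np.abs(np.asarray(x) - np.asarray(y)), p))

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 5000)
y = x + rng.normal(0.0, 0.5, 5000)  # paired readings differing by N(0, 0.5^2)

# For d ~ N(0, sigma^2), the 0.9 quantile of |d| is about 1.645 * sigma,
# so the parametric (normal-theory) TDI here is roughly 0.82.
tdi90 = tdi_nonparametric(x, y, 0.9)
print(tdi90)
```

    Interpretation: 90% of paired readings disagree by less than `tdi90`; the nonparametric version stays valid when the differences are non-normal or heavy-tailed.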

  20. Predictive models of safety based on audit findings: Part 2: Measurement of model validity.

    PubMed

    Hsiao, Yu-Lin; Drury, Colin; Wu, Changxu; Paquet, Victor

    2013-07-01

    Part 1 of this study sequence developed a human factors/ergonomics (HF/E) based classification system (termed HFACS-MA) for safety audit findings and established its measurement reliability. In Part 2, we used the human error categories of HFACS-MA as predictors of future safety performance. Audit records and monthly safety incident reports from two airlines submitted to their regulatory authority were available for analysis, covering over 6.5 years. Two participants derived consensus classifications of HF/E errors from the audit reports using HFACS-MA. We adopted Neural Network and Poisson regression methods to establish nonlinear and linear prediction models, respectively. These models were tested for the validity of their predictions of the safety data, and only the Neural Network method resulted in substantially significant predictive ability for each airline. Alternative predictions from counts of audit findings and from the time sequence of safety data produced some significant results, but of much smaller magnitude than HFACS-MA. The use of HF/E analysis of audit findings provided proactive predictors of future safety performance in the aviation maintenance field. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
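
    The linear (Poisson regression) arm of the comparison can be sketched from scratch: monthly incident counts regressed on audit-finding counts through a log link, fit by iteratively reweighted least squares. The data below are simulated with invented coefficients; the Neural Network arm is not reproduced.

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Poisson regression (log link) fit by iteratively reweighted
    least squares, the standard GLM algorithm."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        W = mu                        # Poisson working weights
        z = X @ beta + (y - mu) / mu  # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(3)
n = 2000
findings = rng.poisson(4.0, n)        # simulated monthly HF/E finding counts
rate = np.exp(0.2 + 0.15 * findings)  # invented incident-rate relationship
incidents = rng.poisson(rate)         # simulated monthly incident counts

X = np.column_stack([np.ones(n), findings.astype(float)])
beta = poisson_irls(X, incidents.astype(float))
print(beta)  # should recover roughly [0.2, 0.15]
```

    Each fitted coefficient is a log rate ratio: here exp(beta[1]) is the multiplicative change in expected incidents per additional audit finding.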
