Sample records for density function pdf

  1. Probability density function approach for compressible turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Tsai, Y.-L. P.; Raju, M. S.

    1994-01-01

The objective of the present work is to extend the probability density function (PDF) turbulence model to compressible reacting flows. The probability density functions of the species mass fractions and enthalpy are obtained by solving a PDF evolution equation using a Monte Carlo scheme. The PDF solution procedure is coupled with a compressible finite-volume flow solver which provides the velocity and pressure fields. A modeled PDF equation for compressible flows, capable of treating flows with shock waves and suitable to the present coupling scheme, is proposed and tested. Convergence of the combined finite-volume Monte Carlo solution procedure is discussed. Two supersonic diffusion flames are studied using the proposed PDF model and the results are compared with experimental data; marked improvements over solutions without PDF are observed.

  2. Multifractal analysis with the probability density function at the three-dimensional Anderson transition.

    PubMed

    Rodriguez, Alberto; Vasquez, Louella J; Römer, Rudolf A

    2009-03-13

The probability density function (PDF) for critical wave function amplitudes is studied in the three-dimensional Anderson model. We present a formal expression relating the PDF and the multifractal spectrum f(alpha) in which the role of finite-size corrections is properly analyzed. We show the non-Gaussian nature and the existence of a symmetry relation in the PDF. From the PDF, we extract information about f(alpha) at criticality such as the presence of negative fractal dimensions and the possible existence of termination points. A PDF-based multifractal analysis is shown to be a valid alternative to the standard approach based on the scaling of inverse participation ratios.

  3. On the probability distribution function of the mass surface density of molecular clouds. I.

    NASA Astrophysics Data System (ADS)

    Fischera, Jörg

    2014-05-01

The probability distribution function (PDF) of the mass surface density is an essential characteristic of the structure of molecular clouds or the interstellar medium in general. Observations of the PDF of molecular clouds indicate a composition of a broad distribution around the maximum and a decreasing tail at high mass surface densities. The first component is attributed to the random distribution of gas, which is modeled using a log-normal function, while the second component is attributed to condensed structures modeled using a simple power law. The aim of this paper is to provide an analytical model of the PDF of condensed structures which can be used by observers to extract information about the condensations. The condensed structures are considered to be either spheres or cylinders with a radial density profile truncated at the cloud radius r_cl. The assumed profile is of the form ρ(r) = ρ_c/(1 + (r/r_0)^2)^(n/2) for arbitrary power n, where ρ_c and r_0 are the central density and the inner radius, respectively. An implicit function is obtained which either truncates (sphere) or has a pole (cylinder) at the maximal mass surface density. The PDF of spherical condensations, and the asymptotic PDF of cylinders in the limit of infinite overdensity ρ_c/ρ(r_cl), flattens for steeper density profiles and has power-law asymptotes at low and high mass surface densities and a well-defined maximum. The power index γ of the asymptote Σ^(-γ) of the logarithmic PDF (Σ P(Σ)) in the limit of high mass surface densities is given by γ = (n + 1)/(n - 1) - 1 (spheres) or by γ = n/(n - 1) - 1 (cylinders in the limit of infinite overdensity). Appendices are available in electronic form at http://www.aanda.org
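
As a quick numerical companion to the profile and asymptote formulas above, the sketch below (an illustration in Python/NumPy with arbitrary parameter values, not code from the paper) evaluates ρ(r) and the power index γ for a given n:

```python
import numpy as np

def rho(r, rho_c, r0, n):
    """Truncated radial density profile: rho_c / (1 + (r/r0)^2)^(n/2)."""
    return rho_c / (1.0 + (r / r0) ** 2) ** (n / 2.0)

def gamma_sphere(n):
    """Power index of the high-Sigma asymptote Sigma^(-gamma) for spheres."""
    return (n + 1.0) / (n - 1.0) - 1.0

def gamma_cylinder(n):
    """Same index for cylinders in the limit of infinite overdensity."""
    return n / (n - 1.0) - 1.0

# Arbitrary illustrative values (rho_c = 1, r0 = 1, n = 3).
r = np.linspace(0.0, 5.0, 6)
print(rho(r, rho_c=1.0, r0=1.0, n=3))
print(gamma_sphere(3), gamma_cylinder(3))  # 1.0 and 0.5
```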

  4. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    PubMed

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographic (sEMG) data in several contexts such as fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters like skewness and kurtosis. In experimental conditions, these parameters are confronted with small sample sizes in the estimation process. This small sample size induces errors in the estimated HOS parameters, hindering real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem more robust than HOS parameters to the small-sample-size effect and more accurate in sEMG PDF shape screening applications.
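
The small-sample instability of HOS parameters that motivates this work is easy to reproduce. The sketch below (Python with SciPy; sample sizes and distribution parameters are hypothetical, and this is the HOS baseline rather than the authors' CSM-based statistics) shows how the spread of skewness and kurtosis estimates shrinks with sample size:

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)

def hos_spread(sampler, n, trials=1000):
    """Monte Carlo spread (std) of skewness/kurtosis estimates at sample size n."""
    s = [skew(sampler(n)) for _ in range(trials)]
    k = [kurtosis(sampler(n)) for _ in range(trials)]
    return np.std(s), np.std(k)

# Normal and log-normal samplers mimic the sEMG PDF shapes discussed above.
normal = lambda n: rng.normal(size=n)
lognormal = lambda n: rng.lognormal(sigma=0.5, size=n)

for n in (50, 500, 5000):
    print(n, hos_spread(normal, n), hos_spread(lognormal, n))
```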

  5. PDF approach for compressible turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Tsai, Y.-L. P.; Raju, M. S.

    1993-01-01

The objective of the present work is to develop a probability density function (pdf) turbulence model for compressible reacting flows for use with a CFD flow solver. The probability density functions of the species mass fraction and enthalpy are obtained by solving a pdf evolution equation using a Monte Carlo scheme. The pdf solution procedure is coupled with a compressible CFD flow solver which provides the velocity and pressure fields. A modeled pdf equation for compressible flows, capable of capturing shock waves and suitable to the present coupling scheme, is proposed and tested. Convergence of the combined finite-volume Monte Carlo solution procedure is discussed, and an averaging procedure is developed to provide smooth Monte Carlo solutions and ensure convergence. Two supersonic diffusion flames are studied using the proposed pdf model and the results are compared with experimental data; marked improvements over CFD solutions without pdf are observed. Preliminary applications of pdf to 3D flows are also reported.

  6. On the probability distribution function of the mass surface density of molecular clouds. II.

    NASA Astrophysics Data System (ADS)

    Fischera, Jörg

    2014-11-01

The probability distribution function (PDF) of the mass surface density of molecular clouds provides essential information about the structure of molecular cloud gas and condensed structures out of which stars may form. In general, the PDF shows two basic components: a broad distribution around the maximum with resemblance to a log-normal function, and a tail at high mass surface densities attributed to turbulence and self-gravity. In a previous paper, the PDF of condensed structures was analyzed and an analytical formula presented based on a truncated radial density profile, ρ(r) = ρ_c/(1 + (r/r_0)^2)^(n/2), with central density ρ_c and inner radius r_0, widely used in astrophysics as a generalization of physical density profiles. In this paper, the results are applied to analyze the PDF of self-gravitating, isothermal, pressurized, spherical (Bonnor-Ebert spheres) and cylindrical condensed structures, with emphasis on the dependence of the PDF on the external pressure p_ext and on the overpressure q^(-1) = p_c/p_ext, where p_c is the central pressure. Apart from individual clouds, we also consider ensembles of spheres or cylinders, where effects on the mean PDF caused by a variation of the pressure ratio, a distribution of condensed cores within a turbulent gas, and (in the case of cylinders) a distribution of inclination angles are analyzed. The probability distribution of pressure ratios q^(-1) is assumed to be given by P(q^(-1)) ∝ q^(-k_1)/(1 + (q_0/q)^γ)^((k_1 + k_2)/γ), where k_1, γ, k_2, and q_0 are fixed parameters. The PDF of individual spheres with overpressures below ~100 is well represented by the PDF of a sphere with an analytical density profile with n = 3. At higher pressure ratios, the PDF at mass surface densities Σ ≪ Σ(0), where Σ(0) is the central mass surface density, asymptotically approaches the PDF of a sphere with n = 2. Consequently, the power-law asymptote at mass surface densities above the peak steepens from P_sph(Σ) ∝ Σ^(-2) to P_sph(Σ) ∝ Σ^(-3). The corresponding asymptote of the PDF of cylinders at large q^(-1) is approximately given by P_cyl(Σ) ∝ Σ^(-4/3) (1 - (Σ/Σ(0))^(2/3))^(-1/2). The distribution of overpressures q^(-1) produces a power-law asymptote at high mass surface densities given by ∝ Σ^(-2k_2 - 1) (spheres) or ∝ Σ^(-2k_2) (cylinders). Appendices are available in electronic form at http://www.aanda.org

  7. Probability density function formalism for optical coherence tomography signal analysis: a controlled phantom study.

    PubMed

    Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-06-15

    The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications.
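
For readers unfamiliar with the K distribution mentioned above, the sketch below evaluates its standard compound (gamma-mixed exponential) intensity PDF and checks normalization. It uses Python/SciPy, the parameter values are arbitrary, and the exact parameterization used in the Letter may differ:

```python
import numpy as np
from scipy.special import gamma, kv

def k_dist_pdf(I, M, mean_I=1.0):
    """K-distribution intensity PDF (gamma-mixed exponential form).

    M is the shape parameter; the PDF tends to the negative-exponential
    (fully developed speckle) limit as M grows large.
    """
    arg = 2.0 * np.sqrt(M * I / mean_I)
    return (2.0 / gamma(M)) * (M / mean_I) ** ((M + 1) / 2.0) * \
        I ** ((M - 1) / 2.0) * kv(M - 1, arg)

I = np.linspace(1e-4, 12.0, 4000)
p = k_dist_pdf(I, M=1.5)
print(np.trapz(p, I))   # close to 1: normalization sanity check
```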

  8. Recent advances in PDF modeling of turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Leonard, Andrew D.; Dai, F.

    1995-01-01

This viewgraph presentation concludes that a Monte Carlo probability density function (PDF) solution successfully couples with an existing finite-volume code; that the PDF solution method applied to turbulent reacting flows shows good agreement with data; and that PDF methods must be run on parallel machines for practical use.

  9. Simulations of Spray Reacting Flows in a Single Element LDI Injector With and Without Invoking an Eulerian Scalar PDF Method

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

This paper presents numerical simulations of the Jet-A spray reacting flow in a single-element lean direct injection (LDI) injector using the National Combustion Code (NCC), with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated using the Reynolds-averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions or species mass fractions are calculated by solving the equation of an ensemble-averaged density-weighted fine-grained probability density function, referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and is briefly described. Detailed comparisons between the results and available experimental data are carried out. Some positive findings on invoking the Eulerian scalar PDF method, both in improving the simulation quality and in reducing the computing cost, are observed.

  10. Density probability distribution functions of diffuse gas in the Milky Way

    NASA Astrophysics Data System (ADS)

    Berkhuijsen, E. M.; Fletcher, A.

    2008-10-01

In a search for the signature of turbulence in the diffuse interstellar medium (ISM) in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| >= 5° are considered separately. The PDF at high |b| is twice as wide as that at low |b|. The width of the PDF of the DIG is about 30 per cent smaller than that of the warm HI at the same latitudes. The results reported here provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of density structure in the diffuse gas.
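
A lognormal description like the one reported here can be fitted in a few lines. The sketch below (Python/SciPy, with a synthetic stand-in for the derived average densities, not the paper's data) fixes the location at zero, as appropriate for a strictly positive density:

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(1)

# Synthetic stand-in for average volume densities along lines of sight;
# real values would come from dispersion measures and HI column densities.
densities = rng.lognormal(mean=-1.0, sigma=0.9, size=400)

# Fit with the location fixed at zero, as appropriate for a strictly
# positive quantity such as gas density.
shape, loc, scale = lognorm.fit(densities, floc=0.0)
print("width of the log-density PDF (sigma):", shape)
print("median density:", scale)
```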

  11. On the probability density function and characteristic function moments of image steganalysis in the log prediction error wavelet subband

    NASA Astrophysics Data System (ADS)

    Bao, Zhenkun; Li, Xiaolong; Luo, Xiangyang

    2017-01-01

Extracting informative statistical features is the most essential technical issue in steganalysis. Among various steganalysis methods, probability density function (PDF) and characteristic function (CF) moments are two important types of features due to their excellent ability to distinguish cover images from stego ones. The two types of features are quite similar in definition. The only difference is that the PDF moments are computed in the spatial domain, while the CF moments are computed in the Fourier-transformed domain. The comparison between PDF and CF moments is thus an interesting question in steganalysis. Several theoretical results have been derived, and CF moments are proved better than PDF moments in some cases. However, in the log prediction error wavelet subband of the wavelet decomposition, some experiments show the opposite result, which has lacked a rigorous explanation. To solve this problem, a comparison result based on rigorous proof is presented: the first-order PDF moment is proved better than the CF moment, while the second-order CF moment is better than the PDF moment. This result opens the theoretical discussion on steganalysis and the question of finding suitable statistical features.
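
The PDF-moment versus CF-moment distinction can be made concrete with a small sketch (Python/NumPy). Definitions of these moments vary across the steganalysis literature; the convention below, computing PDF moments from the histogram and CF moments from its Fourier transform, is one common choice and is only illustrative:

```python
import numpy as np

def pdf_and_cf_moments(coeffs, bins=256, order=1):
    """Order-n PDF moment (spatial domain) and CF moment (Fourier domain).

    Conventions vary in the literature; this is one common illustrative
    choice, not necessarily the paper's exact definition.
    """
    h, edges = np.histogram(coeffs, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = h / h.sum()                          # normalized histogram (the PDF)
    pdf_moment = np.sum(np.abs(centers) ** order * p)

    H = np.fft.rfft(p)                       # sampled characteristic function
    f = np.fft.rfftfreq(p.size)
    cf_moment = np.sum(f ** order * np.abs(H)) / np.sum(np.abs(H))
    return pdf_moment, cf_moment

rng = np.random.default_rng(2)
print(pdf_and_cf_moments(rng.laplace(size=10000)))
```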

  12. Probabilistic density function method for nonlinear dynamical systems driven by colored noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barajas-Solano, David A.; Tartakovsky, Alexandre M.

    2016-05-01

We present a probability density function (PDF) method for a system of nonlinear stochastic ordinary differential equations driven by colored noise. The method provides an integro-differential equation for the temporal evolution of the joint PDF of the system's state, which we close by means of a modified Large-Eddy-Diffusivity-type closure. Additionally, we introduce the generalized local linearization (LL) approximation for deriving a computable PDF equation in the form of a second-order partial differential equation (PDE). We demonstrate that the proposed closure and localization accurately describe the dynamics of the PDF in phase space for systems driven by noise with arbitrary auto-correlation time. We apply the proposed PDF method to the analysis of a set of Kramers equations driven by exponentially auto-correlated Gaussian colored noise to study the dynamics and stability of a power grid.

  13. Improvements and new features in the PDF module

    NASA Technical Reports Server (NTRS)

    Norris, Andrew T.

    1995-01-01

    This viewgraph presentation discusses what models are used in this package and what their advantages and disadvantages are, how the probability density function (PDF) model is implemented and the features of the program, and what can be expected in the future from the NASA Lewis PDF code.

  14. Modeling of turbulent supersonic H2-air combustion with a multivariate beta PDF

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Hassan, H. A.

    1993-01-01

    Recent calculations of turbulent supersonic reacting shear flows using an assumed multivariate beta PDF (probability density function) resulted in reduced production rates and a delay in the onset of combustion. This result is not consistent with available measurements. The present research explores two possible reasons for this behavior: use of PDF's that do not yield Favre averaged quantities, and the gradient diffusion assumption. A new multivariate beta PDF involving species densities is introduced which makes it possible to compute Favre averaged mass fractions. However, using this PDF did not improve comparisons with experiment. A countergradient diffusion model is then introduced. Preliminary calculations suggest this to be the cause of the discrepancy.

  15. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  16. Probabilistic Density Function Method for Stochastic ODEs of Power Systems with Uncertain Power Input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil

Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the probability density function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.

  17. A PDF closure model for compressible turbulent chemically reacting flows

    NASA Technical Reports Server (NTRS)

    Kollmann, W.

    1992-01-01

The objective of the proposed research project was the analysis of single point closures based on probability density function (pdf) and characteristic functions and the development of a prediction method for the joint velocity-scalar pdf in turbulent reacting flows. Turbulent flows of boundary layer type and stagnation point flows with and without chemical reactions were calculated as principal applications. Pdf methods for compressible reacting flows were developed and tested in comparison with available experimental data. The research work carried out in this project concentrated on the closure of pdf equations for incompressible and compressible turbulent flows with and without chemical reactions.

  18. Parameterizing deep convection using the assumed probability density function method

    DOE PAGES

    Storer, R. L.; Griffin, B. M.; Höft, J.; ...

    2014-06-11

Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  19. Parameterizing deep convection using the assumed probability density function method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storer, R. L.; Griffin, B. M.; Höft, J.

    2015-01-06

Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  20. Parameterizing deep convection using the assumed probability density function method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storer, R. L.; Griffin, B. M.; Hoft, Jan

    2015-01-06

Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  1. Progress in the development of PDF turbulence models for combustion

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

A combined Monte Carlo-computational fluid dynamics (CFD) algorithm was developed recently at Lewis Research Center (LeRC) for turbulent reacting flows. In this algorithm, conventional CFD schemes are employed to obtain the velocity field and other velocity-related turbulent quantities, and a Monte Carlo scheme is used to solve the evolution equation for the probability density function (pdf) of species mass fraction and temperature. In combustion computations, the predictions of chemical reaction rates (the source terms in the species conservation equation) are poor if conventional turbulence models are used. The main difficulty lies in the fact that the reaction rate is highly nonlinear, and the use of averaged temperature produces excessively large errors. Moment closure models for the source terms have attained only limited success. The probability density function (pdf) method seems to be the only alternative at the present time that uses local instantaneous values of the temperature, density, etc., in predicting chemical reaction rates, and thus may be the only viable approach for more accurate turbulent combustion calculations. Assumed pdf's are useful in simple problems; however, for more general combustion problems, the solution of an evolution equation for the pdf is necessary.
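
The nonlinearity argument is easy to verify numerically. The sketch below (Python/NumPy, with hypothetical Arrhenius parameters not taken from the presentation) compares the mean of an Arrhenius rate over temperature fluctuations with the rate evaluated at the mean temperature:

```python
import numpy as np

rng = np.random.default_rng(3)

def arrhenius(T, A=1.0, T_act=15000.0):
    """Arrhenius-type rate k(T) = A * exp(-T_act / T) (hypothetical constants)."""
    return A * np.exp(-T_act / T)

# Fluctuating temperature sample: 1500 K mean, 200 K rms (illustrative).
T = rng.normal(1500.0, 200.0, size=200_000)

mean_of_rate = arrhenius(T).mean()   # average over the temperature PDF
rate_of_mean = arrhenius(T.mean())   # conventional averaged-temperature rate
print(mean_of_rate / rate_of_mean)   # substantially > 1: the averaging error
```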

  2. Probabilistic density function method for nonlinear dynamical systems driven by colored noise.

    PubMed

    Barajas-Solano, David A; Tartakovsky, Alexandre M

    2016-05-01

    We present a probability density function (PDF) method for a system of nonlinear stochastic ordinary differential equations driven by colored noise. The method provides an integrodifferential equation for the temporal evolution of the joint PDF of the system's state, which we close by means of a modified large-eddy-diffusivity (LED) closure. In contrast to the classical LED closure, the proposed closure accounts for advective transport of the PDF in the approximate temporal deconvolution of the integrodifferential equation. In addition, we introduce the generalized local linearization approximation for deriving a computable PDF equation in the form of a second-order partial differential equation. We demonstrate that the proposed closure and localization accurately describe the dynamics of the PDF in phase space for systems driven by noise with arbitrary autocorrelation time. We apply the proposed PDF method to analyze a set of Kramers equations driven by exponentially autocorrelated Gaussian colored noise to study nonlinear oscillators and the dynamics and stability of a power grid. Numerical experiments show the PDF method is accurate when the noise autocorrelation time is either much shorter or longer than the system's relaxation time, while the accuracy decreases as the ratio of the two timescales approaches unity. Similarly, the PDF method accuracy decreases with increasing standard deviation of the noise.
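
A brute-force Monte Carlo reference of the kind such closures are tested against can be written compactly. The sketch below (Python/NumPy) simulates a toy linear ODE driven by exponentially autocorrelated Ornstein-Uhlenbeck noise and histograms the state to estimate its PDF; the system and parameters are hypothetical, not the paper's power-grid model:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(tau=0.5, sigma=1.0, dt=1e-3, steps=20_000, paths=2_000):
    """Euler simulation of dx/dt = -x + xi(t) with OU colored noise xi."""
    x = np.zeros(paths)
    xi = np.zeros(paths)
    a = np.exp(-dt / tau)                 # exact OU autoregression factor
    s = sigma * np.sqrt(1.0 - a * a)      # keeps xi's stationary std at sigma
    for _ in range(steps):
        xi = a * xi + s * rng.normal(size=paths)
        x += dt * (-x + xi)
    return x

samples = simulate()
hist, edges = np.histogram(samples, bins=60, density=True)
peak = np.argmax(hist)
print("PDF peak near x =", 0.5 * (edges[peak] + edges[peak + 1]))
```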

  3. Properties of the probability density function of the non-central chi-squared distribution

    NASA Astrophysics Data System (ADS)

    András, Szilárd; Baricz, Árpád

    2008-10-01

In this paper we consider the probability density function (pdf) of a non-central χ² distribution with an arbitrary number of degrees of freedom. For this function we prove that it can be represented as a finite sum, and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
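
The log-concavity claim can be spot-checked numerically with SciPy's non-central chi-squared implementation. In the sketch below (the non-centrality value is arbitrary), the discrete second difference of the log-PDF should be non-positive for df >= 2 but not for df = 1:

```python
import numpy as np
from scipy.stats import ncx2

# Spot-check of log-concavity: the discrete second difference of the
# log-PDF must be non-positive for a log-concave density. The paper's
# claim holds for df >= 2; df = 1 is included as a counterexample.
x = np.linspace(0.05, 40.0, 2000)
for df in (1, 2, 5):
    second_diff = np.diff(ncx2.logpdf(x, df, nc=3.0), n=2)
    print(df, bool(np.all(second_diff <= 1e-9)))
```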

  4. Numerical solutions of the complete Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.

    1993-01-01

The objective of this study is to compare the use of assumed pdf (probability density function) approaches for modeling supersonic turbulent reacting flowfields with the more elaborate approach in which the pdf evolution equation is solved. Assumed pdf approaches for averaging the chemical source terms require modest increases in CPU time, typically of the order of 20 percent above treating the source terms as 'laminar.' However, it is difficult to assume a form for these pdf's a priori that correctly mimics the behavior of the actual pdf governing the flow. Solving the evolution equation for the pdf is a theoretically sound approach, but because of the large dimensionality of this function, its solution requires a Monte Carlo method which is computationally expensive and slow to converge. Preliminary results show both pdf approaches to yield similar solutions for the mean flow variables.

  5. The H I-to-H2 Transition in a Turbulent Medium

    NASA Astrophysics Data System (ADS)

    Bialy, Shmuel; Burkhart, Blakesley; Sternberg, Amiel

    2017-07-01

    We study the effect of density fluctuations induced by turbulence on the H I/H2 structure in photodissociation regions (PDRs) both analytically and numerically. We perform magnetohydrodynamic numerical simulations for both subsonic and supersonic turbulent gas and chemical H I/H2 balance calculations. We derive atomic-to-molecular density profiles and the H I column density probability density function (PDF) assuming chemical equilibrium. We find that, while the H I/H2 density profiles are strongly perturbed in turbulent gas, the mean H I column density is well approximated by the uniform-density analytic formula of Sternberg et al. The PDF width depends on (a) the radiation intensity-to-mean density ratio, (b) the sonic Mach number, and (c) the turbulence decorrelation scale, or driving scale. We derive an analytic model for the H I PDF and demonstrate how our model, combined with 21 cm observations, can be used to constrain the Mach number and driving scale of turbulent gas. As an example, we apply our model to observations of H I in the Perseus molecular cloud. We show that a narrow observed H I PDF may imply small-scale decorrelation, pointing to the potential importance of subcloud-scale turbulence driving.

  6. A computer simulated phantom study of tomotherapy dose optimization based on probability density functions (PDF) and potential errors caused by low reproducibility of PDF.

    PubMed

    Sheng, Ke; Cai, Jing; Brookeman, James; Molloy, Janelle; Christopher, John; Read, Paul

    2006-09-01

Lung tumor motion trajectories measured by four-dimensional CT or dynamic MRI can be converted to a probability density function (PDF), which describes the probability of the tumor being at a certain position, for PDF based treatment planning. Using this method in simulated sequential tomotherapy, we study the dose reduction of normal tissues and, more importantly, the effect of PDF reproducibility on the accuracy of dosimetry. For these purposes, realistic PDFs were obtained from two dynamic MRI scans of a healthy volunteer within a 2 week interval. The first PDF was accumulated from a 300 s scan and the second PDF was calculated from variable scan times from 5 s (one breathing cycle) to 300 s. Optimized beam fluences based on the second PDF were delivered to the hypothetical gross target volume (GTV) of a lung phantom that moved following the first PDF. The reproducibility between the two PDFs varied from low (78%) to high (94.8%) when the second scan time increased from 5 s to 300 s. When a highly reproducible PDF was used in optimization, the dose coverage of the GTV was maintained; phantom lung receiving 10%-20% of the prescription dose was reduced by 40%-50% and the mean phantom lung dose was reduced by 9.6%. However, optimization based on a PDF with low reproducibility resulted in a 50% underdosed GTV. The dosimetric error increased nearly exponentially as the PDF error increased. Therefore, although the dose to the tissue surrounding the tumor can be theoretically reduced by PDF based treatment planning, the reliability and applicability of this method depend strongly on whether a reproducible PDF exists and is measurable. By correlating the dosimetric error with the PDF error, a useful guideline for PDF data acquisition and patient qualification for PDF based planning can be derived.

  7. A computer simulated phantom study of tomotherapy dose optimization based on probability density functions (PDF) and potential errors caused by low reproducibility of PDF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Ke; Cai Jing; Brookeman, James

    2006-09-15

Lung tumor motion trajectories measured by four-dimensional CT or dynamic MRI can be converted to a probability density function (PDF), which describes the probability of the tumor being at a certain position, for PDF based treatment planning. Using this method in simulated sequential tomotherapy, we study the dose reduction of normal tissues and, more importantly, the effect of PDF reproducibility on the accuracy of dosimetry. For these purposes, realistic PDFs were obtained from two dynamic MRI scans of a healthy volunteer within a 2 week interval. The first PDF was accumulated from a 300 s scan and the second PDF was calculated from variable scan times from 5 s (one breathing cycle) to 300 s. Optimized beam fluences based on the second PDF were delivered to the hypothetical gross target volume (GTV) of a lung phantom that moved following the first PDF. The reproducibility between the two PDFs varied from low (78%) to high (94.8%) when the second scan time increased from 5 s to 300 s. When a highly reproducible PDF was used in optimization, the dose coverage of the GTV was maintained; phantom lung receiving 10%-20% of the prescription dose was reduced by 40%-50% and the mean phantom lung dose was reduced by 9.6%. However, optimization based on a PDF with low reproducibility resulted in a 50% underdosed GTV. The dosimetric error increased nearly exponentially as the PDF error increased. Therefore, although the dose to the tissue surrounding the tumor can be theoretically reduced by PDF based treatment planning, the reliability and applicability of this method depend strongly on whether a reproducible PDF exists and is measurable. By correlating the dosimetric error with the PDF error, a useful guideline for PDF data acquisition and patient qualification for PDF based planning can be derived.

  8. Probability density functions of power-in-bucket and power-in-fiber for an infrared laser beam propagating in the maritime environment.

    PubMed

    Nelson, Charles; Avramov-Zamurovic, Svetlana; Korotkova, Olga; Malek-Madani, Reza; Sova, Raymond; Davidson, Frederic

    2013-11-01

    Irradiance fluctuations of an infrared laser beam from a shore-to-ship data link ranging from 5.1 to 17.8 km are compared to lognormal (LN), gamma-gamma (GG) with aperture averaging, and gamma-Laguerre (GL) distributions. From our data analysis, the LN and GG probability density function (PDF) models were generally in good agreement in near-weak to moderate fluctuations. This was also true in moderate to strong fluctuations when the spatial coherence radius was smaller than the detector aperture size, with the exception of the 2.54 cm power-in-bucket (PIB) where the LN PDF model fit best. For moderate to strong fluctuations, the GG PDF model tended to outperform the LN PDF model when the spatial coherence radius was greater than the detector aperture size. Additionally, the GL PDF model had the best or next to best overall fit in all cases with the exception of the 2.54 cm PIB where the scintillation index was highest. The GL PDF model also appears to be robust for off-of-beam center laser beam applications.

  9. The pdf approach to turbulent flow

    NASA Technical Reports Server (NTRS)

    Kollmann, W.

    1990-01-01

This paper provides a detailed discussion of the theory and application of probability density function (pdf) methods, which provide a complete statistical description of turbulent flow fields at a single point or a finite number of points. The basic laws governing the flow of Newtonian fluids are set up in the Eulerian and the Lagrangian frame, and the exact and linear equations for the characteristic functionals in those frames are discussed. Pdf equations in both frames are derived as Fourier transforms of the equations of the characteristic functions. Possible formulations for the nonclosed terms in the pdf equation are discussed, their properties are assessed, and closure models for the molecular-transport and the fluctuating pressure-gradient terms are reviewed. The application of pdf methods to turbulent combustion flows, supersonic flows, and the interaction of turbulence with shock waves is discussed.

  10. Calculations of the flow properties of a confined diffusion flame

    NASA Technical Reports Server (NTRS)

    Kim, Yongmo; Chung, T. J.; Sohn, Jeong L.

    1989-01-01

A finite element algorithm for the computation of confined, axisymmetric, turbulent diffusion flames is developed. The mean mixture properties were obtained by three methods based on the diffusion flame concept: without using a probability density function (PDF), with a double-delta PDF, and with a beta PDF. A comparison is made between the combustion models, and the effects of turbulence on combustion are discussed.

  11. The shapes of column density PDFs. The importance of the last closed contour

    NASA Astrophysics Data System (ADS)

    Alves, João; Lombardi, Marco; Lada, Charles J.

    2017-10-01

    The probability distribution function of column density (PDF) has become the tool of choice for cloud structure analysis and star formation studies. Its simplicity is attractive, and the PDF could offer access to cloud physical parameters otherwise difficult to measure, but there has been some confusion in the literature on the definition of its completeness limit and shape at the low column density end. In this letter we use the natural definition of the completeness limit of a column density PDF, the last closed column density contour inside a surveyed region, and apply it to a set of large-scale maps of nearby molecular clouds. We conclude that there is no observational evidence for log-normal PDFs in these objects. We find that all studied molecular clouds have PDFs well described by power laws, including the diffuse cloud Polaris. Our results call for a new physical interpretation of the shape of the column density PDFs. We find that the slope of a cloud PDF is invariant to distance but not to the spatial arrangement of cloud material, and as such it is still a useful tool for investigating cloud structure.

  12. The H I-to-H2 Transition in a Turbulent Medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bialy, Shmuel; Sternberg, Amiel; Burkhart, Blakesley, E-mail: shmuelbi@mail.tau.ac.il

    2017-07-10

We study the effect of density fluctuations induced by turbulence on the H I/H2 structure in photodissociation regions (PDRs) both analytically and numerically. We perform magnetohydrodynamic numerical simulations for both subsonic and supersonic turbulent gas and chemical H I/H2 balance calculations. We derive atomic-to-molecular density profiles and the H I column density probability density function (PDF) assuming chemical equilibrium. We find that, while the H I/H2 density profiles are strongly perturbed in turbulent gas, the mean H I column density is well approximated by the uniform-density analytic formula of Sternberg et al. The PDF width depends on (a) the radiation intensity-to-mean density ratio, (b) the sonic Mach number, and (c) the turbulence decorrelation scale, or driving scale. We derive an analytic model for the H I PDF and demonstrate how our model, combined with 21 cm observations, can be used to constrain the Mach number and driving scale of turbulent gas. As an example, we apply our model to observations of H I in the Perseus molecular cloud. We show that a narrow observed H I PDF may imply small-scale decorrelation, pointing to the potential importance of subcloud-scale turbulence driving.

  13. Modeling of turbulent supersonic H2-air combustion with an improved joint beta PDF

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Hassan, H. A.

    1991-01-01

Attempts at modeling recent experiments of Cheng et al. indicated that discrepancies between theory and experiment can be a result of the form of the assumed probability density function (PDF) and/or the turbulence model employed. Improvements in both the form of the assumed PDF and the turbulence model are presented. The results are again used to compare with measurements. Initial comparisons are encouraging.

  14. 2MASS wide-field extinction maps. V. Corona Australis

    NASA Astrophysics Data System (ADS)

    Alves, João; Lombardi, Marco; Lada, Charles J.

    2014-05-01

We present a near-infrared extinction map of a large region (~870 deg²) covering the isolated Corona Australis complex of molecular clouds. We reach a 1-σ error of 0.02 mag in the K-band extinction with a resolution of 3 arcmin over the entire map. We find that the Corona Australis cloud is about three times as large as revealed by previous CO and dust emission surveys. The cloud consists of a 45 pc long complex of filamentary structure from the well-known star-forming western end (the head, N ≥ 10^23 cm^-2) to the diffuse eastern end (the tail, N ≤ 10^21 cm^-2). Remarkably, about two thirds of the complex, both in size and mass, lie beneath A_V ~ 1 mag. We find that the probability density function (PDF) of the cloud cannot be described by a single log-normal function. Similar to prior studies, we found a significant excess at high column densities, but a log-normal + power-law tail fit does not work well at low column densities. We show that at low column densities near the peak of the observed PDF, both the amplitude and shape of the PDF are dominated by noise in the extinction measurements, making it impractical to derive the intrinsic cloud PDF below A_K < 0.15 mag. Above A_K ~ 0.15 mag, essentially the molecular component of the cloud, the PDF appears to be best described by a power law with index -3, but it could also be described as the tail of a broad, relatively low amplitude log-normal PDF that peaks at very low column densities. FITS files of the extinction maps are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/565/A18

  15. The PDF method for turbulent combustion

    NASA Technical Reports Server (NTRS)

    Pope, S. B.

    1991-01-01

    Probability Density Function (PDF) methods provide a means of calculating the properties of turbulent reacting flows. They have been successfully applied to many turbulent flames, including some with finite rate kinetic effects. Here the methods are reviewed with an emphasis on computational issues and their application to turbulent combustion.

  16. Conservational PDF Equations of Turbulence

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2010-01-01

Recently we have revisited the traditional probability density function (PDF) equations for the velocity and species in turbulent incompressible flows. They are all unclosed due to the appearance of various conditional means which are modeled empirically. However, we have observed that it is possible to establish a closed velocity PDF equation and a closed joint velocity and species PDF equation through conditions derived from the integral form of the Navier-Stokes equations. Although, in theory, the resulting PDF equations are neither general nor unique, they nevertheless lead to the exact transport equations for the first moment as well as all higher-order moments. We refer to these PDF equations as the conservational PDF equations. This observation is worth further exploration for its validity and CFD application.

  17. Kinetic and dynamic probability-density-function descriptions of disperse turbulent two-phase flows

    NASA Astrophysics Data System (ADS)

    Minier, Jean-Pierre; Profeta, Christophe

    2015-11-01

This article analyzes the status of two classical one-particle probability density function (PDF) descriptions of the dynamics of discrete particles dispersed in turbulent flows. The first PDF formulation considers only the process made up of particle position and velocity, Z_p = (x_p, U_p), and is represented by its PDF p(t; y_p, V_p), which is the solution of a kinetic PDF equation obtained through a flux closure based on the Furutsu-Novikov theorem. The second PDF formulation includes fluid variables in the particle state vector, for example the fluid velocity seen by particles, Z_p = (x_p, U_p, U_s), and consequently handles an extended PDF p(t; y_p, V_p, V_s), which is the solution of a dynamic PDF equation. For high-Reynolds-number fluid flows, a typical formulation of the latter category relies on a Langevin model for the trajectories of the fluid seen or, conversely, on a Fokker-Planck equation for the extended PDF. In the present work, a new derivation of the kinetic PDF equation is worked out and new physical expressions for the dispersion tensors entering the kinetic PDF equation are obtained by starting from the extended PDF and integrating over the fluid seen. This demonstrates that, under the same assumption of a Gaussian colored noise and irrespective of the specific stochastic model chosen for the fluid seen, the kinetic PDF description is the marginal of a dynamic PDF one. However, a detailed analysis reveals that kinetic PDF models of particle dynamics in turbulent flows described by statistical correlations constitute incomplete stand-alone PDF descriptions and, moreover, that present kinetic PDF equations are mathematically ill posed. This is shown to be a consequence of the non-Markovian character of the stochastic process retained to describe the system and of the use of an external colored noise. Furthermore, these developments bring out that well-posed PDF descriptions hinge on a proper choice of the variables selected to describe physical systems, and guidelines are formulated to emphasize the key role played by the notion of slow and fast variables.

  18. Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.

    PubMed

    Joshi, Niranjan; Kadir, Timor; Brady, Michael

    2011-08-01

    Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous space representation of the discrete space and sampled signal by using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals, for different interpolation methods. We then show that the original procedure to calculate the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We have also outlined specific algorithmic details of the procedures enabling quick implementation. Our reformulation of the original concept has directly demonstrated a close link between the NP Windows method and the Kernel Density Estimator.
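
For contrast with NP Windows, a plain Gaussian kernel density estimator, the method the authors relate their reformulation to, takes only a few lines (Python/NumPy sketch; data and bandwidth are hypothetical):

```python
import numpy as np

def kde(samples, grid, bandwidth):
    """Plain Gaussian kernel density estimate evaluated on a grid."""
    z = (grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return kernels.mean(axis=1) / bandwidth

rng = np.random.default_rng(5)
samples = rng.normal(size=200)       # hypothetical observed signal samples
grid = np.linspace(-4.0, 4.0, 101)
estimate = kde(samples, grid, bandwidth=0.3)
print(np.trapz(estimate, grid))      # ~1: integrates like a proper density
```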

  19. STAR FORMATION IN TURBULENT MOLECULAR CLOUDS WITH COLLIDING FLOW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsumoto, Tomoaki; Dobashi, Kazuhito; Shimoikura, Tomomi, E-mail: matsu@hosei.ac.jp

    2015-03-10

Using self-gravitational hydrodynamical numerical simulations, we investigated the evolution of high-density turbulent molecular clouds swept by a colliding flow. The interaction of shock waves due to turbulence produces networks of thin filamentary clouds with a sub-parsec width. The colliding flow accumulates the filamentary clouds into a sheet cloud and promotes active star formation for initially high-density clouds. Clouds with a colliding flow exhibit a finer filamentary network than clouds without a colliding flow. The probability distribution functions (PDFs) for the density and column density can be fitted by lognormal functions for clouds without colliding flow. When the initial turbulence is weak, the column density PDF has a power-law wing at high column densities. The colliding flow considerably deforms the PDF, such that the PDF exhibits a double peak. The stellar mass distributions reproduced here are consistent with the classical initial mass function with a power-law index of –1.35 when the initial clouds have a high density. The distribution of stellar velocities agrees with the gas velocity distribution, which can be fitted by Gaussian functions for clouds without colliding flow. For clouds with colliding flow, the velocity dispersion of gas tends to be larger than the stellar velocity dispersion. The signatures of colliding flows and turbulence appear in channel maps reconstructed from the simulation data. Clouds without colliding flow exhibit a cloud-scale velocity shear due to the turbulence. In contrast, clouds with colliding flow show a prominent anti-correlated distribution of thin filaments between the different velocity channels, suggesting collisions between the filamentary clouds.

  20. Volatility in financial markets: stochastic models and empirical results

    NASA Astrophysics Data System (ADS)

    Miccichè, Salvatore; Bonanno, Giovanni; Lillo, Fabrizio; Mantegna, Rosario N.

    2002-11-01

    We investigate the historical volatility of the 100 most capitalized stocks traded in US equity markets. An empirical probability density function (pdf) of volatility is obtained and compared with the theoretical predictions of a lognormal model and of the Hull and White model. The lognormal model well describes the pdf in the region of low values of volatility whereas the Hull and White model better approximates the empirical pdf for large values of volatility. Both models fail in describing the empirical pdf over a moderately large volatility range.

  1. Comments on PDF methods

    NASA Technical Reports Server (NTRS)

    Chen, J.-Y.

    1992-01-01

    Viewgraphs are presented on the following topics: the grand challenge of combustion engineering; research of probability density function (PDF) methods at Sandia; experiments of turbulent jet flames (Masri and Dibble, 1988); departures from chemical equilibrium; modeling turbulent reacting flows; superequilibrium OH radical; pdf modeling of turbulent jet flames; scatter plot for CH4 (methane) and O2 (oxygen); methanol turbulent jet flames; comparisons between predictions and experimental data; and turbulent C2H4 jet flames.

  2. Vertical overlap of probability density functions of cloud and precipitation hydrometeors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ovchinnikov, Mikhail; Lim, Kyo-Sun Sunny; Larson, Vincent E.

Coarse-resolution climate models increasingly rely on probability density functions (PDFs) to represent subgrid-scale variability of prognostic variables. While PDFs characterize the horizontal variability, a separate treatment is needed to account for the vertical structure of clouds and precipitation. When sub-columns are drawn from these PDFs for microphysics or radiation parameterizations, appropriate vertical correlations must be enforced via PDF overlap specifications. This study evaluates the representation of PDF overlap in the Subgrid Importance Latin Hypercube Sampler (SILHS) employed in the assumed PDF turbulence and cloud scheme called the Cloud Layers Unified By Binormals (CLUBB). PDF overlap in CLUBB-SILHS simulations of continental and tropical oceanic deep convection is compared with overlap of PDFs of various microphysics variables in cloud-resolving model (CRM) simulations of the same cases that explicitly predict the 3D structure of cloud and precipitation fields. CRM results show that PDF overlap varies significantly between different hydrometeor types, as well as between PDFs of mass and number mixing ratios for each species, a distinction that the current SILHS implementation does not make. In CRM simulations that explicitly resolve cloud and precipitation structures, faster-falling species, such as rain and graupel, exhibit significantly higher coherence in their vertical distributions than slowly falling cloud liquid and ice. These results suggest that to improve the overlap treatment in the sub-column generator, the PDF correlations need to depend on hydrometeor properties, such as fall speeds, in addition to the currently implemented dependency on the turbulent convective length scale.

  3. Probability Density Functions of Observed Rainfall in Montana

    NASA Technical Reports Server (NTRS)

    Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.

    1995-01-01

The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Advances in technology make image analysis on large samples of radar echoes possible. The data provided by such an analysis readily allow development of distributions of radar reflectivity factors (and, by extension, rain rate). Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, a single PDF would exist for all cases, or many PDFs would share the same functional form with only systematic variations in parameters (such as size or shape). Satisfying either of these cases will validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89 percent of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.
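
Pearson Type 1 corresponds to a (shifted, scaled) beta distribution, so a method-of-moments fit like the one used here can be sketched as follows (Python/NumPy; the synthetic rain-rate sample and the support choice are hypothetical, not the study's radar data):

```python
import numpy as np

def fit_beta_moments(x, lo, hi):
    """Method-of-moments fit of a beta (Pearson Type I) PDF on [lo, hi]."""
    u = (np.asarray(x) - lo) / (hi - lo)        # rescale data to [0, 1]
    m, v = u.mean(), u.var()
    c = m * (1.0 - m) / v - 1.0                 # common moment factor
    return m * c, (1.0 - m) * c                 # shape parameters (alpha, beta)

# Hypothetical stand-in for rain rates derived from radar reflectivity.
rng = np.random.default_rng(6)
rates = rng.gamma(shape=2.0, scale=5.0, size=1000)
alpha, beta = fit_beta_moments(rates, lo=0.0, hi=1.01 * rates.max())
print(alpha, beta)
```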

  4. Applications of the line-of-response probability density function resolution model in PET list mode reconstruction.

    PubMed

    Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E

    2015-01-07

Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners: the HRRT and the Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied only slightly, from 1.7 mm to 1.9 mm, in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage of performing crystal-layer-dependent resolution modeling. The contrast improvement from using LOR-PDF was verified statistically by replicate reconstructions. In addition, [(11)C]AFM rats imaged on the HRRT and [(11)C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between high-uptake regions of only a few millimeters in diameter and the background was observed in LOR-PDF reconstructions than in other methods.

  5. Applications of the line-of-response probability density function resolution model in PET list mode reconstruction

    PubMed Central

    Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E

    2016-01-01

    Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work has introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the lines of response (LORs) to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners - the HRRT and the Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied only slightly, from 1.7 mm to 1.9 mm, in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage of performing crystal-layer-dependent resolution modeling. The contrast improvement from using LOR-PDF was verified statistically by replicate reconstructions. In addition, [11C]AFM rats imaged on the HRRT and [11C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between the background and high-uptake regions only a few millimeters in diameter was observed in LOR-PDF reconstructions than with other methods. PMID:25490063

  6. An actuarial approach to retrofit savings in buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subbarao, Krishnappa; Etingov, Pavel V.; Reddy, T. A.

    An actuarial method has been developed for determining energy savings from retrofits from energy use data for a number of buildings. This method should be contrasted with the traditional method of using pre- and post-retrofit data on the same building. This method supports the U.S. Department of Energy Building Performance Database of real building performance data and related tools that enable engineering and financial practitioners to evaluate retrofits. The actuarial approach derives, from the database, probability density functions (PDFs) for energy savings from retrofits by creating peer groups for the user's pre- and post-retrofit buildings. From the energy use distributions of the two groups, the savings PDF is derived. This provides the basis for engineering analysis as well as financial risk analysis leading to investment decisions. Several technical issues are addressed: the savings PDF is obtained from the pre- and post-PDFs through a convolution; smoothing using kernel density estimation is applied to make the PDF more realistic; the low data density problem can be mitigated through a neighborhood methodology; correlations between pre- and post-retrofit buildings are addressed to improve the savings PDF; and sample size effects are addressed through Kolmogorov-Smirnov tests and quantile-quantile plots.
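    A minimal sketch of the central convolution step, assuming independent pre- and post-retrofit peer groups: the PDF of the savings (pre minus post) is the cross-correlation of the two KDE-smoothed energy-use densities. The lognormal samples and units below are placeholders, not Building Performance Database data.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# Hypothetical annual energy-use intensities (kWh/m^2) for two peer groups.
pre = rng.lognormal(mean=5.0, sigma=0.25, size=400)     # pre-retrofit peers
post = rng.lognormal(mean=4.85, sigma=0.25, size=350)   # post-retrofit peers

grid = np.linspace(0, 400, 2001)
dx = grid[1] - grid[0]
p_pre = gaussian_kde(pre)(grid)    # KDE smoothing, as in the paper
p_post = gaussian_kde(post)(grid)

# Savings S = E_pre - E_post; for independent groups its PDF is the
# cross-correlation of the two densities.
p_sav = np.correlate(p_pre, p_post, mode="full") * dx
s_grid = np.arange(-(len(grid) - 1), len(grid)) * dx

p_sav /= np.trapz(p_sav, s_grid)   # renormalize after grid truncation
prob_positive = np.trapz(p_sav[s_grid > 0], s_grid[s_grid > 0])
print(f"P(savings > 0) ~= {prob_positive:.2f}")
```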

  7. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
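    The sketch below shows the multicanonical idea on a toy observable (the intensity of a sum of random phasors, a crude stand-in for accumulated FWM terms, not the paper's receiver model): bin weights are iterated until the Metropolis walker visits all bins roughly uniformly, after which the unbiased PDF is recovered far into the tail, where plain Monte Carlo would need prohibitively many samples.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy decision variable: normalized intensity of a sum of N random phasors.
N = 16
nbins = 50
edges = np.linspace(0.0, 10.0, nbins + 1)
centers = 0.5 * (edges[:-1] + edges[1:])

def intensity(phases):
    return abs(np.exp(1j * phases).sum()) ** 2 / N

def bin_of(x):
    return int(np.clip(np.searchsorted(edges, x) - 1, 0, nbins - 1))

def sweep(phases, log_w, steps):
    """Metropolis walk whose stationary density is proportional to
    p(x) * exp(log_w[bin(x)]); returns the state and visit histogram."""
    hist = np.zeros(nbins)
    b = bin_of(intensity(phases))
    for _ in range(steps):
        prop = phases.copy()
        prop[rng.integers(N)] = rng.uniform(0.0, 2.0 * np.pi)
        b_new = bin_of(intensity(prop))
        if np.log(rng.uniform()) < log_w[b_new] - log_w[b]:
            phases, b = prop, b_new
        hist[b] += 1
    return phases, hist

phases = rng.uniform(0.0, 2.0 * np.pi, N)
log_w = np.zeros(nbins)                 # flat weights = plain Monte Carlo

for _ in range(10):                     # iterate weights until visits flatten
    phases, hist = sweep(phases, log_w, 20000)
    seen = hist > 0
    log_w[seen] -= np.log(hist[seen])   # push weights down where we lingered
    log_w -= log_w.max()

phases, hist = sweep(phases, log_w, 200000)   # production run
pdf = hist * np.exp(-log_w)             # unbias the weighted visit counts
pdf /= pdf.sum() * (edges[1] - edges[0])
for c, p in zip(centers[-5:], pdf[-5:]):
    print(f"I = {c:5.2f}   pdf ~ {p:.3e}")
```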

  8. First photon detection in time-resolved transillumination imaging: a theoretical evaluation.

    PubMed

    Behin-Ain, S; van Doorn, T; Patterson, J R

    2004-09-07

    First photon detection, as a special case of time-resolved transillumination imaging, is studied through the derivation of the temporal probability density function (pdf) for the first arriving photon. Pdfs were generated for different laser intensities and media, and for the second and later arriving photons. The arrival time of the first detected photon decreased as the laser power increased and also when the scattering and absorption coefficients decreased. The pdf for an embedded, totally absorbing 3 mm inhomogeneity may be distinguished from the pdf of a homogeneous turbid medium similar to the human breast in dimensions and optical properties.
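    The mechanism behind first-photon statistics is the order-statistics relation for the minimum of n i.i.d. arrival times: f1(t) = n f(t) (1 - F(t))^(n-1). The sketch below applies it to a hypothetical gamma-shaped temporal spread function (a placeholder, not the paper's diffusion model); the trend of earlier first arrivals at higher laser power falls out directly.

```python
import numpy as np

# Single-photon temporal spread function through a turbid slab,
# here a hypothetical gamma-shaped pulse.
t = np.linspace(0, 5e-9, 5000)                    # seconds
dt = t[1] - t[0]
f = t ** 2 * np.exp(-t / 3e-10)                   # unnormalized shape
f /= f.sum() * dt                                 # single-photon pdf
F = np.cumsum(f) * dt                             # its CDF

def first_photon_pdf(f, F, n_photons):
    """pdf of the minimum of n i.i.d. arrival times:
    f1(t) = n f(t) (1 - F(t))**(n - 1)."""
    return n_photons * f * (1.0 - F) ** (n_photons - 1)

for n in (1, 100, 10000):                         # laser power ~ photon number
    f1 = first_photon_pdf(f, F, n)
    mean_arrival = np.sum(t * f1) * dt
    print(f"n = {n:6d}   mean first-arrival = {mean_arrival * 1e12:7.1f} ps")
# The mean first-arrival time drops as the intensity (n) grows, matching
# the trend reported in the abstract.
```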

  9. Combined PDF and Rietveld studies of ADORable zeolites and the disordered intermediate IPC-1P.

    PubMed

    Morris, Samuel A; Wheatley, Paul S; Položij, Miroslav; Nachtigall, Petr; Eliášová, Pavla; Čejka, Jiří; Lucas, Tim C; Hriljac, Joseph A; Pinar, Ana B; Morris, Russell E

    2016-09-28

    The disordered intermediate of the ADORable zeolite UTL has been structurally confirmed using the pair distribution function (PDF) technique. The intermediate, IPC-1P, is a disordered layered compound formed by the hydrolysis of UTL in 0.1 M hydrochloric acid solution. Its structure is unsolvable by traditional X-ray diffraction techniques. The PDF technique was first benchmarked against high-quality synchrotron Rietveld refinements of IPC-2 (OKO) and IPC-4 (PCR) - two end products of IPC-1P condensation that share very similar structural features. An IPC-1P starting model derived from density functional theory was used for the PDF refinement, which yielded a final fit of Rw = 18% and a geometrically reasonable structure. This confirms that the layers stay intact throughout the ADOR process and shows that PDF analysis is a viable technique for layered zeolite structure determination.

  10. Cosmological constraints from the convergence 1-point probability distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, Kenneth; Blazek, Jonathan; Honscheid, Klaus

    2017-06-29

    Here, we examine the cosmological information available from the 1-point probability density function (PDF) of the weak-lensing convergence field, utilizing fast L-PICOLA simulations and a Fisher analysis. We find competitive constraints in the Ωm–σ8 plane from the convergence PDF with 188 arcmin^2 pixels compared to the cosmic shear power spectrum with an equivalent number of modes (ℓ < 886). The convergence PDF also partially breaks the degeneracy cosmic shear exhibits in that parameter space. A joint analysis of the convergence PDF and shear 2-point function also reduces the impact of shape measurement systematics, to which the PDF is less susceptible, and improves the total figure of merit by a factor of 2–3, depending on the level of systematics. Finally, we present a correction factor necessary for calculating the unbiased Fisher information from finite differences using a limited number of cosmological simulations.
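    A generic sketch of the Fisher-from-finite-differences step (not the paper's pipeline or its correction factor): derivatives of the binned convergence PDF with respect to the parameters are estimated from pairs of simulations run at theta ± dtheta, then contracted with the inverse covariance. All numbers below are toy placeholders.

```python
import numpy as np

def fisher_from_sims(pdf_plus, pdf_minus, dtheta, cov):
    """Fisher matrix from central finite differences of a binned PDF.

    pdf_plus, pdf_minus: (n_params, n_bins) mean PDFs from simulations
    run at theta_i + dtheta_i and theta_i - dtheta_i.
    cov: (n_bins, n_bins) covariance of the binned PDF estimate.
    """
    dP = (pdf_plus - pdf_minus) / (2.0 * dtheta[:, None])
    return dP @ np.linalg.inv(cov) @ dP.T

# Toy model: a Gaussian-shaped "PDF" whose two parameters are its location
# and width (hypothetical stand-ins for Omega_m and sigma_8).
n_bin = 20
x = np.linspace(-3.0, 3.0, n_bin)

def model_pdf(mu, s):
    p = np.exp(-0.5 * ((x - mu) / s) ** 2)
    return p / p.sum()

d = np.array([0.01, 0.01])
pdf_p = np.stack([model_pdf(+d[0], 1.0), model_pdf(0.0, 1.0 + d[1])])
pdf_m = np.stack([model_pdf(-d[0], 1.0), model_pdf(0.0, 1.0 - d[1])])
cov = 1e-6 * np.eye(n_bin)

F = fisher_from_sims(pdf_p, pdf_m, d, cov)
# Marginalized 1-sigma errors are the sqrt of the diagonal of F^{-1}.
print(np.sqrt(np.diag(np.linalg.inv(F))))
```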

  11. Cosmological constraints from the convergence 1-point probability distribution

    NASA Astrophysics Data System (ADS)

    Patton, Kenneth; Blazek, Jonathan; Honscheid, Klaus; Huff, Eric; Melchior, Peter; Ross, Ashley J.; Suchyta, Eric

    2017-11-01

    We examine the cosmological information available from the 1-point probability density function (PDF) of the weak-lensing convergence field, utilizing fast L-PICOLA simulations and a Fisher analysis. We find competitive constraints in the Ωm-σ8 plane from the convergence PDF with 188 arcmin^2 pixels compared to the cosmic shear power spectrum with an equivalent number of modes (ℓ < 886). The convergence PDF also partially breaks the degeneracy cosmic shear exhibits in that parameter space. A joint analysis of the convergence PDF and shear 2-point function also reduces the impact of shape measurement systematics, to which the PDF is less susceptible, and improves the total figure of merit by a factor of 2-3, depending on the level of systematics. Finally, we present a correction factor necessary for calculating the unbiased Fisher information from finite differences using a limited number of cosmological simulations.

  12. Comments on the present state and future directions of PDF methods

    NASA Technical Reports Server (NTRS)

    O'Brien, E. E.

    1992-01-01

    The one point probability density function (PDF) method is examined in light of its use in actual engineering problems. The PDF method, although relatively complicated, appears to be the only format available to handle the nonlinear stochastic difficulties caused by typical reaction kinetics. Turbulence modeling, if it is to play a central role in combustion modeling, has to be integrated with the chemistry in a way which produces accurate numerical solutions to combustion problems. It is questionable whether the development of turbulence models in isolation from the peculiar statistics of reactant concentrations is a fruitful line of development as far as propulsion is concerned. There are three issues for which additional viewgraphs are prepared: the one point pdf method; the amplitude mapping closure; and a hybrid strategy for replacing a full two point pdf treatment of reacting flows by a single point pdf and correlation functions. An appeal is made for the establishment of an adequate data base for compressible flow with reactions for Mach numbers of unity or higher.

  13. Pressure algorithm for elliptic flow calculations with the PDF method

    NASA Technical Reports Server (NTRS)

    Anand, M. S.; Pope, S. B.; Mongia, H. C.

    1991-01-01

    An algorithm to determine the mean pressure field for elliptic flow calculations with the probability density function (PDF) method is developed and applied. The PDF method is a most promising approach for the computation of turbulent reacting flows. Previous computations of elliptic flows with the method were performed in conjunction with conventional finite-volume-based calculations that provided the mean pressure field. The algorithm developed and described here permits the mean pressure field to be determined within the PDF calculations. The PDF method incorporating the pressure algorithm is applied to the flow past a backward-facing step. The results are in good agreement with data for the reattachment length, mean velocities, and turbulence quantities including triple correlations.

  14. Testing the lognormality of the galaxy and weak lensing convergence distributions from Dark Energy Survey maps

    DOE PAGES

    Clerkin, L.; Kirk, D.; Manera, M.; ...

    2016-08-30

    It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (kappa_WL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the Counts in Cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey (DES) Science Verification data over 139 deg^2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modeled by a lognormal PDF convolved with Poisson noise at angular scales from 10-40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as kappa_WL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the kappa_WL distribution is well modeled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fit chi^2/DOF of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07 respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation.
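    A minimal sketch of the model comparison on synthetic data: a lognormal field plus Gaussian "shape noise" is histogrammed, and the chi^2/dof of a moment-matched Gaussian and a shifted-lognormal fit are compared. For brevity the noise convolution is ignored in the lognormal fit, which the paper treats properly; all sample parameters are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Synthetic field: 1 + delta is lognormal; Gaussian "shape noise" is added
# as a stand-in for the kappa_WL measurement noise.
delta = np.exp(rng.normal(-0.08, 0.4, 40000)) - 1.0
obs = delta + rng.normal(0.0, 0.05, delta.size)

counts, edges = np.histogram(obs, bins=60, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
raw = np.histogram(obs, bins=edges)[0]
err = np.sqrt(raw) / (obs.size * np.diff(edges))   # Poisson bin errors
good = raw > 0

def chi2_dof(model):
    r = (counts[good] - model[good]) / err[good]
    return r @ r / (good.sum() - 2)

gauss = stats.norm.pdf(mid, obs.mean(), obs.std())
# Shifted lognormal (free location), with starting guesses for the fit.
shape, loc, scale = stats.lognorm.fit(obs, loc=-1.1, scale=1.0)
logn = stats.lognorm.pdf(mid, shape, loc, scale)

print(f"chi2/dof Gaussian  : {chi2_dof(gauss):.2f}")
print(f"chi2/dof lognormal : {chi2_dof(logn):.2f}")
```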

  15. Testing the lognormality of the galaxy and weak lensing convergence distributions from Dark Energy Survey maps

    NASA Astrophysics Data System (ADS)

    Clerkin, L.; Kirk, D.; Manera, M.; Lahav, O.; Abdalla, F.; Amara, A.; Bacon, D.; Chang, C.; Gaztañaga, E.; Hawken, A.; Jain, B.; Joachimi, B.; Vikram, V.; Abbott, T.; Allam, S.; Armstrong, R.; Benoit-Lévy, A.; Bernstein, G. M.; Bernstein, R. A.; Bertin, E.; Brooks, D.; Burke, D. L.; Rosell, A. Carnero; Carrasco Kind, M.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lima, M.; Melchior, P.; Miquel, R.; Nord, B.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sanchez, E.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Walker, A. R.

    2017-04-01

    It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (κWL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the counts-in-cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey Science Verification data over 139 deg2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modelled by a lognormal PDF convolved with Poisson noise at angular scales from 10 to 40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as κWL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the κWL distribution is well modelled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fitting χ2/dof of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07, respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check, we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation.

  16. Under-sampling trajectory design for compressed sensing based DCE-MRI.

    PubMed

    Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting

    2013-01-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed sensing (CS) has the potential to satisfy both requirements. However, the randomness in a CS under-sampling trajectory designed using the traditional variable density (VD) scheme may translate into uncertainty in kinetic parameter estimation when high reduction factors are used. Therefore, accurate parameter estimation using the VD scheme usually needs multiple adjustments of the parameters of the probability density function (PDF), and multiple reconstructions even with a fixed PDF, which is impractical for DCE-MRI. In this paper, an under-sampling trajectory design which is robust both to changes in the PDF parameters and to the randomness under a fixed PDF is studied. The strategy is to adaptively segment k-space into low- and high-frequency domains, and to apply the VD scheme only in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared to the VD design.
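    A sketch of the kind of trajectory the paper argues for, with all fractions and exponents as hypothetical parameters: a deterministic, fully sampled low-frequency core plus variable-density random sampling restricted to the high-frequency part of k-space.

```python
import numpy as np

rng = np.random.default_rng(7)

def vd_mask(n, reduction=4, center_frac=0.08, power=4.0):
    """1-D phase-encode under-sampling mask: a fully sampled low-frequency
    core, with a polynomial variable-density PDF applied only outside it,
    mirroring the segmented low/high-frequency strategy."""
    k = np.abs(np.linspace(-1, 1, n))
    pdf = (1 - k) ** power                 # density falls off with |k|
    core = k <= center_frac                # deterministic center block
    n_keep = n // reduction
    n_random = max(n_keep - core.sum(), 0)
    p = pdf.copy()
    p[core] = 0.0
    p *= n_random / p.sum()                # match the expected sample count
    return core | (rng.uniform(size=n) < np.clip(p, 0.0, 1.0))

m = vd_mask(256)
print(f"sampled {m.sum()} / {m.size} lines "
      f"({m.size / m.sum():.1f}x acceleration)")
```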

  17. A compound scattering pdf for the ultrasonic echo envelope and its relationship to K and Nakagami distributions.

    PubMed

    Shankar, P Mohana

    2003-03-01

    A compound probability density function (pdf) is presented to describe the envelope of the backscattered echo from tissue. This pdf allows local and global variation in scattering cross sections in tissue. The ultrasonic backscattering cross sections are assumed to be gamma distributed. The gamma distribution also is used to model the randomness in the average cross sections. This gamma-gamma model results in the compound scattering pdf for the envelope. The relationship of this compound pdf to the Rayleigh, K, and Nakagami distributions is explored through an analysis of the signal-to-noise ratio of the envelopes and random number simulations. The three-parameter compound pdf appears to be flexible enough to represent envelope statistics giving rise to Rayleigh, K, and Nakagami distributions.

  18. A multi-scalar PDF approach for LES of turbulent spray combustion

    NASA Astrophysics Data System (ADS)

    Raman, Venkat; Heye, Colin

    2011-11-01

    A comprehensive joint-scalar probability density function (PDF) approach is proposed for large eddy simulation (LES) of turbulent spray combustion and tests are conducted to analyze the validity and modeling requirements. The PDF method has the advantage that the chemical source term appears closed but requires models for the small scale mixing process. A stable and consistent numerical algorithm for the LES/PDF approach is presented. To understand the modeling issues in the PDF method, direct numerical simulation of a spray flame at three different fuel droplet Stokes numbers and an equivalent gaseous flame are carried out. Assumptions in closing the subfilter conditional diffusion term in the filtered PDF transport equation are evaluated for various model forms. In addition, the validity of evaporation rate models in high Stokes number flows is analyzed.

  19. The study of PDF turbulence models in combustion

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    The accurate prediction of turbulent combustion is still beyond the reach of today's computational techniques. It is the consensus of the combustion profession that predictions of chemically reacting flows are poor when conventional turbulence models are used. The main difficulty lies in the fact that the reaction rate is highly nonlinear, and the use of averaged temperature, pressure, and density produces excessively large errors. The probability density function (PDF) method is the only alternative at the present time that uses local instantaneous values of the temperature, density, etc. in predicting chemical reaction rates, and thus it is the only viable approach for turbulent combustion calculations.

  20. Statistics of cosmic density profiles from perturbation theory

    NASA Astrophysics Data System (ADS)

    Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine

    2014-11-01

    The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular, the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probabilities of the slope, the density difference between adjacent cells, and its fluctuations are also computed from the two-cell joint PDF; they also compare very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids, as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.

  1. Non-Gaussian PDF Modeling of Turbulent Boundary Layer Fluctuating Pressure Excitation

    NASA Technical Reports Server (NTRS)

    Steinwolf, Alexander; Rizzi, Stephen A.

    2003-01-01

    The purpose of the study is to investigate properties of the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the exterior of a supersonic transport aircraft. It is shown that fluctuating pressure PDFs differ from the Gaussian distribution even for surface conditions having no significant discontinuities. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations upstream of forward-facing step discontinuities and downstream of aft-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. Various analytical PDF distributions are used and further developed to model this behavior.

  2. PDF approach for turbulent scalar field: Some recent developments

    NASA Technical Reports Server (NTRS)

    Gao, Feng

    1993-01-01

    The probability density function (PDF) method has been proven a very useful approach in turbulence research. It has been particularly effective in simulating turbulent reacting flows and in studying some detailed statistical properties generated by a turbulent field. There are, however, some important questions that have yet to be answered in PDF studies. Our efforts in the past year have been focused on two areas. First, a simple mixing model suitable for Monte Carlo simulations has been developed based on the mapping closure. Secondly, the mechanism of turbulent transport has been analyzed in order to understand the recently observed abnormal PDFs of turbulent temperature fields generated by linear heat sources.

  3. A time dependent mixing model to close PDF equations for transport in heterogeneous aquifers

    NASA Astrophysics Data System (ADS)

    Schüler, L.; Suciu, N.; Knabner, P.; Attinger, S.

    2016-10-01

    Probability density function (PDF) methods are a promising alternative for predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, used in moment methods. The mixing model, describing the transport of the PDF in concentration space, is essential for both methods. Finding a satisfactory mixing model is still an open question and, owing to the rather elaborate nature of PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling.
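    For orientation, the sketch below implements the classical IEM mixing model (the baseline that improved mixing models such as this one refine, not the model proposed in the paper) as a particle Monte Carlo: each notional particle relaxes toward the ensemble mean, so the mean is preserved and the variance decays exponentially.

```python
import numpy as np

rng = np.random.default_rng(8)

# Particle Monte Carlo of the IEM mixing model:
#   d(phi_p)/dt = -(phi_p - <phi>) / (2 * tau)
n_particles = 10000
phi = rng.normal(1.0, 0.5, n_particles)    # initial concentration ensemble
tau, dt, nsteps = 2.0, 0.01, 600

for _ in range(nsteps):
    phi += -(phi - phi.mean()) / (2.0 * tau) * dt

t_end = nsteps * dt
print(f"variance: simulated {phi.var():.4f}, "
      f"analytic {0.25 * np.exp(-t_end / tau):.4f}")
# IEM preserves the mean and decays the variance as exp(-t/tau), but it
# leaves the shape of the standardized PDF unchanged -- the well-known
# deficiency that motivates improved mixing models.
```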

  4. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square root PDF along the space direction, which leads to a function of time only that can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  5. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew

    2009-03-01

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square root PDF along the space direction, which leads to a function of time only that can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  6. Large deviation principle at work: Computation of the statistical properties of the exact one-point aperture mass

    NASA Astrophysics Data System (ADS)

    Reimberg, Paulo; Bernardeau, Francis

    2018-01-01

    We present a formalism based on the large deviation principle (LDP) applied to cosmological density fields, and more specifically to arbitrary functionals of density profiles, and we apply it to the derivation of the cumulant generating function and one-point probability distribution function (PDF) of the aperture mass (Map), a common observable for cosmic shear observations. We show that the LDP can indeed be used in practice for a much larger family of observables than previously envisioned, such as those built from continuous and nonlinear functionals of density profiles. Taking advantage of this formalism, we can extend previous results, which were based on crude definitions of the aperture mass, with top-hat windows and the use of the reduced shear approximation (replacing the reduced shear with the shear itself). We are able to quantify precisely how this latter approximation affects the Map statistical properties. In particular, we derive the corrective term for the skewness of the Map and reconstruct its one-point PDF.

  7. Probability density of aperture-averaged irradiance fluctuations for long range free space optical communication links.

    PubMed

    Lyke, Stephen D; Voelz, David G; Roggemann, Michael C

    2009-11-20

    The probability density function (PDF) of aperture-averaged irradiance fluctuations is calculated from wave-optics simulations of a laser after propagating through atmospheric turbulence to investigate the evolution of the distribution as the aperture diameter is increased. The simulation data distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes from weak to strong. Results show that under weak scintillation conditions both the gamma-gamma and lognormal PDF models provide a good fit to the simulation data for all aperture sizes studied. Our results indicate that in moderate scintillation the gamma-gamma PDF provides a better fit to the simulation data than the lognormal PDF for all aperture sizes studied. In the strong scintillation regime, the simulation data distribution is gamma-gamma for aperture sizes much smaller than the coherence radius rho0 and lognormal for aperture sizes on the order of rho0 and larger. Examples of how these results affect the bit-error rate of an on-off keyed free space optical communication link are presented.
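    A minimal simulation of the gamma-gamma model, assuming the standard construction as a product of unit-mean gamma variates for the large- and small-scale contributions (the alpha and beta values below are hypothetical): the sample scintillation index matches the closed form 1/α + 1/β + 1/(αβ), and a moment-matched lognormal visibly disagrees in the low-irradiance tail, the regime that drives bit-error rates.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)

def gamma_gamma_samples(alpha, beta, n):
    """Gamma-gamma irradiance: product of unit-mean gamma variates for the
    large- and small-scale turbulence contributions."""
    x = rng.gamma(alpha, 1.0 / alpha, n)
    y = rng.gamma(beta, 1.0 / beta, n)
    return x * y

alpha, beta = 4.0, 2.0               # hypothetical moderate scintillation
I = gamma_gamma_samples(alpha, beta, 200000)

si_theory = 1 / alpha + 1 / beta + 1 / (alpha * beta)
si_sample = I.var() / I.mean() ** 2
print(f"scintillation index: sample {si_sample:.3f}, theory {si_theory:.3f}")

# Moment-matched lognormal (same mean and scintillation index).
sigma2 = np.log(1.0 + si_sample)
mu = -0.5 * sigma2
p_ln = norm.cdf((np.log(0.1) - mu) / np.sqrt(sigma2))
print(f"P(I < 0.1): gamma-gamma {np.mean(I < 0.1):.4f}, lognormal {p_ln:.4f}")
```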

  8. Statistics of Sxy estimates

    NASA Technical Reports Server (NTRS)

    Freilich, M. H.; Pawka, S. S.

    1987-01-01

    The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posteriori estimation is outlined.

  9. Convolutionless Nakajima-Zwanzig equations for stochastic analysis in nonlinear dynamical systems.

    PubMed

    Venturi, D; Karniadakis, G E

    2014-06-08

    Determining the statistical properties of stochastic nonlinear systems is of major interest across many disciplines. Currently, there are no general efficient methods to deal with this challenging problem that involves high dimensionality, low regularity and random frequencies. We propose a framework for stochastic analysis in nonlinear dynamical systems based on goal-oriented probability density function (PDF) methods. The key idea stems from techniques of irreversible statistical mechanics, and it relies on deriving evolution equations for the PDF of quantities of interest, e.g. functionals of the solution to systems of stochastic ordinary and partial differential equations. Such quantities could be low-dimensional objects in infinite dimensional phase spaces. We develop the goal-oriented PDF method in the context of the time-convolutionless Nakajima-Zwanzig-Mori formalism. We address the question of approximation of reduced-order density equations by multi-level coarse graining, perturbation series and operator cumulant resummation. Numerical examples are presented for stochastic resonance and stochastic advection-reaction problems.

  10. On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models. [probability density function

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1992-01-01

    Turbulent combustion cannot be simulated adequately by conventional moment closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite differencing schemes. A grid dependent Monte Carlo scheme following J.Y. Chen and W. Kollmann has been studied in the present work. It was found that in order to conserve the mass fractions absolutely, one needs to add a further restriction to the scheme, namely α_j + γ_j = α_(j-1) + γ_(j+1). A new algorithm was devised that satisfies this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although for non-uniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.

  11. Convolutionless Nakajima–Zwanzig equations for stochastic analysis in nonlinear dynamical systems

    PubMed Central

    Venturi, D.; Karniadakis, G. E.

    2014-01-01

    Determining the statistical properties of stochastic nonlinear systems is of major interest across many disciplines. Currently, there are no general efficient methods to deal with this challenging problem that involves high dimensionality, low regularity and random frequencies. We propose a framework for stochastic analysis in nonlinear dynamical systems based on goal-oriented probability density function (PDF) methods. The key idea stems from techniques of irreversible statistical mechanics, and it relies on deriving evolution equations for the PDF of quantities of interest, e.g. functionals of the solution to systems of stochastic ordinary and partial differential equations. Such quantities could be low-dimensional objects in infinite dimensional phase spaces. We develop the goal-oriented PDF method in the context of the time-convolutionless Nakajima–Zwanzig–Mori formalism. We address the question of approximation of reduced-order density equations by multi-level coarse graining, perturbation series and operator cumulant resummation. Numerical examples are presented for stochastic resonance and stochastic advection–reaction problems. PMID:24910519

  12. PDF methods for combustion in high-speed turbulent flows

    NASA Technical Reports Server (NTRS)

    Pope, Stephen B.

    1995-01-01

    This report describes the research performed during the second year of this three-year project. The ultimate objective of the project is to extend the applicability of probability density function (pdf) methods from incompressible to compressible turbulent reactive flows. As described in subsequent sections, progress has been made on: (1) formulation and modelling of pdf equations for compressible turbulence, in both homogeneous and inhomogeneous inert flows; and (2) implementation of the compressible model in various flow configurations, namely decaying isotropic turbulence, homogeneous shear flow, and a plane mixing layer.

  13. Ionization compression impact on dense gas distribution and star formation. Probability density functions around H II regions as seen by Herschel

    NASA Astrophysics Data System (ADS)

    Tremblin, P.; Schneider, N.; Minier, V.; Didelon, P.; Hill, T.; Anderson, L. D.; Motte, F.; Zavagno, A.; André, Ph.; Arzoumanian, D.; Audit, E.; Benedettini, M.; Bontemps, S.; Csengeri, T.; Di Francesco, J.; Giannini, T.; Hennemann, M.; Nguyen Luong, Q.; Marston, A. P.; Peretto, N.; Rivera-Ingraham, A.; Russeil, D.; Rygl, K. L. J.; Spinoglio, L.; White, G. J.

    2014-04-01

    Aims: Ionization feedback should impact the probability distribution function (PDF) of the column density of cold dust around the ionized gas. We aim to quantify this effect and discuss its potential link to the core and initial mass function (CMF/IMF). Methods: We used Herschel column density maps of several regions observed within the HOBYS key program in a systematic way: M 16, the Rosette and Vela C molecular clouds, and the RCW 120 H ii region. We computed the PDFs in concentric disks around the main ionizing sources, determined their properties, and discussed the effect of ionization pressure on the distribution of the column density. Results: We fitted the column density PDFs of all clouds with two lognormal distributions, since they present a "double-peak" or an enlarged shape in the PDF. Our interpretation is that the lowest part of the column density distribution describes the turbulent molecular gas, while the second peak corresponds to a compression zone induced by the expansion of the ionized gas into the turbulent molecular cloud. Such a double peak is not visible for all clouds associated with ionization fronts, but it depends on the relative importance of ionization pressure and turbulent ram pressure. A power-law tail is present for higher column densities, which are generally ascribed to the effect of gravity. The condensations at the edge of the ionized gas have a steep compressed radial profile, sometimes recognizable in the flattening of the power-law tail. This could lead to an unambiguous criterion that is able to disentangle triggered star formation from pre-existing star formation. Conclusions: In the context of the gravo-turbulent scenario for the origin of the CMF/IMF, the double-peaked or enlarged shape of the PDF may affect the formation of objects at both the low-mass and the high-mass ends of the CMF/IMF. In particular, a broader PDF is required by the gravo-turbulent scenario to fit the IMF properly with a reasonable initial Mach number for the molecular cloud. Since other physical processes (e.g., the equation of state and the variations among the core properties) have already been said to broaden the PDF, the relative importance of the different effects remains an open question. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
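    Double-peaked PDFs of this kind are commonly quantified by fitting a two-lognormal model; the sketch below does this on synthetic column densities with scipy's curve_fit. The component weights, centers, and widths are invented for illustration and are not the HOBYS measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(10)

def lognormal(x, mu, sig):
    return (np.exp(-0.5 * ((np.log(x) - mu) / sig) ** 2)
            / (x * sig * np.sqrt(2.0 * np.pi)))

def double_lognormal(x, w, mu1, sig1, mu2, sig2):
    """Two-component column-density PDF: a turbulent peak plus a second,
    compression-induced peak."""
    return w * lognormal(x, mu1, sig1) + (1 - w) * lognormal(x, mu2, sig2)

# Synthetic column densities (hypothetical units of 1e21 cm^-2):
N_col = np.concatenate([
    rng.lognormal(0.0, 0.35, 60000),    # turbulent molecular gas
    rng.lognormal(1.0, 0.25, 20000),    # ionization-compressed layer
])

hist, edges = np.histogram(N_col, bins=np.geomspace(0.1, 30, 80), density=True)
mid = np.sqrt(edges[:-1] * edges[1:])

popt, _ = curve_fit(double_lognormal, mid, hist, p0=[0.7, 0.0, 0.3, 1.0, 0.3])
w, mu1, sig1, mu2, sig2 = popt
print(f"weights {w:.2f}/{1 - w:.2f}, peaks at N = {np.exp(mu1):.2f} "
      f"and {np.exp(mu2):.2f} (x 1e21 cm^-2)")
```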

  14. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.

    PubMed

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2014-12-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.

  15. Exact PDF equations and closure approximations for advective-reactive transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venturi, D.; Tartakovsky, Daniel M.; Tartakovsky, Alexandre M.

    2013-06-01

    Mathematical models of advection–reaction phenomena rely on advective flow velocity and (bio)chemical reaction rates that are notoriously random. By using functional integral methods, we derive exact evolution equations for the probability density function (PDF) of the state variables of the advection–reaction system in the presence of random transport velocity and random reaction rates with rather arbitrary distributions. These PDF equations are solved analytically for transport with deterministic flow velocity and a linear reaction rate represented mathematically by a heterogeneous and strongly-correlated random field. Our analytical solution is then used to investigate the accuracy and robustness of the recently proposed large-eddy diffusivity (LED) closure approximation [1]. We find that the solution to the LED-based PDF equation, which is exact for uncorrelated reaction rates, is accurate even in the presence of strong correlations and it provides an upper bound of predictive uncertainty.

  16. Joint constraints on galaxy bias and σ_8 through the N-pdf of the galaxy number density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnalte-Mur, Pablo; Martínez, Vicent J.; Vielva, Patricio

    We present a full description of the N-probability density function of the galaxy number density fluctuations. This N-pdf is given in terms, on the one hand, of the cold dark matter correlations and, on the other hand, of the galaxy bias parameter. The method relies on the assumption, commonly adopted, that the dark matter density fluctuations follow a local non-linear transformation of the initial energy density perturbations. The N-pdf of the galaxy number density fluctuations allows for an optimal estimation of the bias parameter (e.g., via maximum-likelihood estimation, or Bayesian inference if there exists any a priori information on the bias parameter), and of those parameters defining the dark matter correlations, in particular its amplitude (σ_8). It also provides the proper framework to perform model selection between two competitive hypotheses. The parameter estimation capabilities of the N-pdf are proved by SDSS-like simulations (both ideal log-normal simulations and mocks obtained from Las Damas simulations), showing that our estimator is unbiased. We apply our formalism to the 7th release of the SDSS main sample (for a volume-limited subset with absolute magnitudes M_r ≤ −20). We obtain b̂ = 1.193 ± 0.074 and σ̄_8 = 0.862 ± 0.080, for galaxy number density fluctuations in cells of size 30 h^-1 Mpc. Different model selection criteria show that galaxy biasing is clearly favoured.

  17. Investigations of turbulent scalar fields using probability density function approach

    NASA Technical Reports Server (NTRS)

    Gao, Feng

    1991-01-01

    Scalar fields undergoing random advection have attracted much attention from researchers in both the theoretical and practical sectors. Research interest spans from the study of the small scale structures of turbulent scalar fields to the modeling and simulation of turbulent reacting flows. The probability density function (PDF) method is an effective tool in the study of turbulent scalar fields, especially for those which involve chemical reactions. It has been argued that a one-point, joint PDF approach is the one to choose from among many simulation and closure methods for turbulent combustion and chemically reacting flows based on its practical feasibility in the foreseeable future for multiple reactants. Instead of the multi-point PDF, the joint PDF of a scalar and its gradient, which represents the roles of both the scalar and scalar diffusion, is introduced. A proper closure model for the molecular diffusion term in the PDF equation is investigated. Another direction in this research is to study the mapping closure method that has been recently proposed to deal with the PDFs in turbulent fields. This method seems to have captured the physics correctly when applied to diffusion problems. However, if turbulent stretching is included, the amplitude mapping has to be supplemented either by adjusting the parameters representing turbulent stretching at each time step or by introducing a coordinate mapping. This technique is still under development and seems to be quite promising. The final objective of this project is to understand some fundamental properties of turbulent scalar fields and to develop practical numerical schemes that are capable of handling turbulent reacting flows.

  18. Characteristic Structure of Star-forming Clouds

    NASA Astrophysics Data System (ADS)

    Myers, Philip C.

    2015-06-01

    This paper presents a new method to diagnose the star-forming potential of a molecular cloud region from the probability density function of its column density (N-pdf). This method provides expressions for the column density and mass profiles of a symmetric filament having the same N-pdf as a filamentary region. The central concentration of this characteristic filament can distinguish regions and can quantify their fertility for star formation. Profiles are calculated for N-pdfs which are pure lognormal, pure power law, or a combination. In relation to models of singular polytropic cylinders, characteristic filaments can be unbound, bound, or collapsing depending on their central concentration. Such filamentary models of the dynamical state of N-pdf gas are more relevant to star-forming regions than are spherical collapse models. The star formation fertility of a bound or collapsing filament is quantified by its mean mass accretion rate when in radial free fall. For a given mass per length, the fertility increases with the filament mean column density and with its initial concentration. In selected regions the fertility of their characteristic filaments increases with the level of star formation.

  19. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional performance indices of modeling, such as the mean square error (MSE) and root mean square error (RMSE), cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF, eventually. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has a higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method gives a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution whose shape is high and narrow.
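    A 1-D sketch of the PDF-shaping idea on synthetic residuals: the modeling-error PDF is estimated by KDE and scored by its quadratic deviation from a tall, narrow Gaussian target, the quantity the WNN parameters are trained to minimize (the paper uses a 2-D space-time version of this index; the numbers below are placeholders).

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(11)

# Residuals of some fitted dynamic model (synthetic, heavy-tailed).
errors = rng.standard_t(df=4, size=2000) * 0.3

grid = np.linspace(-3, 3, 601)
p_err = gaussian_kde(errors)(grid)      # data-driven KDE, as in the paper

# Target: a tall, narrow Gaussian error PDF.
p_target = norm.pdf(grid, 0.0, 0.1)

# Quadratic PDF-deviation index used as the shaping performance measure
# (a 1-D analogue of the paper's 2-D criterion).
J = np.trapz((p_err - p_target) ** 2, grid)
print(f"PDF-shaping index J = {J:.3f}  (smaller is better)")
# Training the network by gradient descent on J, rather than on MSE,
# drives the whole error distribution, not just its second moment,
# toward the narrow target.
```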

  20. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional performance indices of modeling, such as the mean square error (MSE) and root mean square error (RMSE), cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF, eventually. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has a higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method gives a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution whose shape is high and narrow.

  1. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE PAGES

    Zhou, Ping; Wang, Chenyu; Li, Mingjie; ...

    2018-01-31

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional performance indices of modeling, such as the mean square error (MSE) and root mean square error (RMSE), cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF, eventually. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has a higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method gives a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution whose shape is high and narrow.

  2. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
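    The L1-median (geometric median) at the heart of this estimator has no closed form, but it is classically computed with Weiszfeld's fixed-point iteration, sketched below; the contaminated Gaussian sample is synthetic.

```python
import numpy as np

def l1_median(X, tol=1e-8, max_iter=500):
    """Geometric (L1) median of the rows of X via Weiszfeld's algorithm:
    the point minimizing the sum of Euclidean distances to the samples."""
    m = X.mean(axis=0)                       # start at the sample mean
    for _ in range(max_iter):
        d = np.linalg.norm(X - m, axis=1)
        d = np.maximum(d, 1e-12)             # guard against zero distances
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            break
        m = m_new
    return m

rng = np.random.default_rng(12)
X = rng.normal(0, 1, size=(200, 2))
X[:20] += 25.0                               # 10% gross outliers

print("sample mean :", X.mean(axis=0))       # dragged toward the outliers
print("L1-median   :", l1_median(X))         # stays near the bulk
```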

  3. A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain

    DTIC Science & Technology

    2015-05-18

    …approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our… The project has two parts to it: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a…

  4. Variable Density Effects in Stochastic Lagrangian Models for Turbulent Combustion

    DTIC Science & Technology

    2016-07-20

    …PDF methods in dealing with chemical reaction and convection are preserved irrespective of density variation. Since the density variation in a typical… combustion process may be as large as a factor of seven, including variable-density effects in PDF methods is of significance. Conventionally, the… strategy of modelling variable-density flows in PDF methods is similar to that used for second-moment closure models (SMCM): models are developed based on…

  5. Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation

    NASA Astrophysics Data System (ADS)

    Demir, Uygar; Toker, Cenk; Çenet, Duygu

    2016-07-01

    Statistical analysis of the ionosphere, specifically the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as Gaussian, Exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase the estimation error, and all the information extracted from such a pdf will continue to contain this error. In such techniques, it is highly likely to observe some artificial characteristics in the estimated pdf which are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained as compared to the techniques mentioned above. KDE is particularly good at representing the tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey, where the TEC values are estimated from GNSS measurements from the TNPGN-Active (Turkish National Permanent GNSS Network) network. This study is supported by TUBITAK 115E915 and joint TUBITAK 114E092 and AS CR14/001 projects.
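    A minimal sketch of the KDE step with scipy, on placeholder "TEC" data (a gamma mixture standing in for quiet and disturbed conditions, not TNPGN measurements): no functional form is imposed on the pdf, and the usual summary statistics are read off the same sample.

```python
import numpy as np
from scipy.stats import gaussian_kde, kurtosis

rng = np.random.default_rng(13)
# Placeholder "TEC" sample in TEC units: a skewed, heavy-tailed mixture.
tec = np.concatenate([rng.gamma(9.0, 2.0, 4500),      # quiet background
                      rng.gamma(4.0, 9.0, 500)])      # disturbed tail

kde = gaussian_kde(tec)               # non-parametric: no fixed pdf shape
grid = np.linspace(0, tec.max(), 800)
pdf = kde(grid)                       # smooth pdf estimate on the grid

print(f"mean     = {tec.mean():.2f} TECU")
print(f"variance = {tec.var():.2f}")
print(f"kurtosis = {kurtosis(tec, fisher=False):.2f}")
# Unlike forcing a Gaussian or exponential fit, the KDE follows the tail
# produced by the disturbed component.
```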

  6. Stochastic-field cavitation model

    NASA Astrophysics Data System (ADS)

    Dumond, J.; Magagnato, F.; Class, A.

    2013-07-01

    Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  7. A cavitation model based on Eulerian stochastic fields

    NASA Astrophysics Data System (ADS)

    Magagnato, F.; Dumond, J.

    2013-12-01

    Non-linear phenomena can often be described using probability density functions (pdf) and pdf transport models. Traditionally the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and in particular to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. Firstly, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  8. Stochastic-field cavitation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumond, J. (AREVA GmbH, Erlangen, Paul-Gossen-Strasse 100, D-91052 Erlangen); Magagnato, F.

    2013-07-15

    Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian “particles” or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  9. Quantum diffusion during inflation and primordial black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pattison, Chris; Assadullahi, Hooshyar; Wands, David

    We calculate the full probability density function (PDF) of inflationary curvature perturbations, even in the presence of large quantum backreaction. Making use of the stochastic-δN formalism, two complementary methods are developed, one based on solving an ordinary differential equation for the characteristic function of the PDF, and the other based on solving a heat equation for the PDF directly. In the classical limit where quantum diffusion is small, we develop an expansion scheme that not only recovers the standard Gaussian PDF at leading order, but also allows us to calculate the first non-Gaussian corrections to the usual result. In the opposite limit where quantum diffusion is large, we find that the PDF is given by an elliptic theta function, which is fully characterised by the ratio between the squared width and height (in Planck mass units) of the region where stochastic effects dominate. We then apply these results to the calculation of the mass fraction of primordial black holes from inflation, and show that no more than ∼1 e-fold can be spent in regions of the potential dominated by quantum diffusion. We explain how this requirement constrains inflationary potentials with two examples.

  10. Quantum diffusion during inflation and primordial black holes

    NASA Astrophysics Data System (ADS)

    Pattison, Chris; Vennin, Vincent; Assadullahi, Hooshyar; Wands, David

    2017-10-01

    We calculate the full probability density function (PDF) of inflationary curvature perturbations, even in the presence of large quantum backreaction. Making use of the stochastic-δ N formalism, two complementary methods are developed, one based on solving an ordinary differential equation for the characteristic function of the PDF, and the other based on solving a heat equation for the PDF directly. In the classical limit where quantum diffusion is small, we develop an expansion scheme that not only recovers the standard Gaussian PDF at leading order, but also allows us to calculate the first non-Gaussian corrections to the usual result. In the opposite limit where quantum diffusion is large, we find that the PDF is given by an elliptic theta function, which is fully characterised by the ratio between the squared width and height (in Planck mass units) of the region where stochastic effects dominate. We then apply these results to the calculation of the mass fraction of primordial black holes from inflation, and show that no more than ~ 1 e-fold can be spent in regions of the potential dominated by quantum diffusion. We explain how this requirement constrains inflationary potentials with two examples.

  11. Broad distribution spectrum from Gaussian to power law appears in stochastic variations in RNA-seq data.

    PubMed

    Awazu, Akinori; Tanabe, Takahiro; Kamitani, Mari; Tezuka, Ayumi; Nagano, Atsushi J

    2018-05-29

    Gene expression levels exhibit stochastic variations among genetically identical organisms under the same environmental conditions. In many recent transcriptome analyses based on RNA sequencing (RNA-seq), variations in gene expression levels among replicates were assumed to follow a negative binomial distribution, although the physiological basis of this assumption remains unclear. In this study, RNA-seq data were obtained from Arabidopsis thaliana under eight conditions (21-27 replicates), and the characteristics of gene-dependent empirical probability density function (ePDF) profiles of gene expression levels were analyzed. For A. thaliana and Saccharomyces cerevisiae, various types of ePDF of gene expression levels were obtained and classified as Gaussian, power law-like with a long tail, or intermediate. These ePDF profiles were well fitted with a Gauss-power mixing distribution function derived from a simple model of a stochastic transcriptional network containing a feedback loop. The fitting function suggested that gene expression levels with long-tailed ePDFs would be strongly influenced by feedback regulation. Furthermore, the features of gene expression levels are correlated with their functions, with the levels of essential genes tending to follow a Gaussian-like ePDF while those of genes encoding nucleic acid-binding proteins and transcription factors exhibit long-tailed ePDFs.

  12. Monte Carlo PDF method for turbulent reacting flow in a jet-stirred reactor

    NASA Astrophysics Data System (ADS)

    Roekaerts, D.

    1992-01-01

    A stochastic algorithm for the solution of the modeled scalar probability density function (PDF) transport equation for single-phase turbulent reacting flow is described. Cylindrical symmetry is assumed. The PDF is represented by ensembles of N representative values of the thermochemical variables in each cell of a nonuniform finite-difference grid and operations on these elements representing convection, diffusion, mixing and reaction are derived. A simplified model and solution algorithm which neglects the influence of turbulent fluctuations on mean reaction rates is also described. Both algorithms are applied to a selectivity problem in a real reactor.
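
    The per-cell operations described here can be illustrated with a hedged sketch: an ensemble of N notional particles carries a thermochemical variable, mixing relaxes each particle toward the cell mean (an IEM-type model), and the reaction step advances every particle with its own rate. The first-order rate and mixing time are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N, dt, tau, k = 500, 1e-3, 5e-3, 40.0
c = rng.uniform(0.0, 1.0, N)      # reactant mass fraction on each particle

for _ in range(1000):
    c += -0.5 * (c - c.mean()) / tau * dt   # mixing: relax toward the cell mean
    c += -k * c * dt                        # reaction: first-order decay, per particle

print("cell mean:", c.mean(), " cell variance:", c.var())
```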

  13. Scale matters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margolin, L. G.

    The applicability of Navier–Stokes equations is limited to near-equilibrium flows in which the gradients of density, velocity and energy are small. Here I propose an extension of the Chapman–Enskog approximation in which the velocity probability distribution function (PDF) is averaged in the coordinate phase space as well as the velocity phase space. I derive a PDF that depends on the gradients and represents a first-order generalization of local thermodynamic equilibrium. I then integrate this PDF to derive a hydrodynamic model. Finally, I discuss the properties of that model and its relation to the discrete equations of computational fluid dynamics.

  14. Scale matters

    DOE PAGES

    Margolin, L. G.

    2018-03-19

    The applicability of Navier–Stokes equations is limited to near-equilibrium flows in which the gradients of density, velocity and energy are small. Here I propose an extension of the Chapman–Enskog approximation in which the velocity probability distribution function (PDF) is averaged in the coordinate phase space as well as the velocity phase space. I derive a PDF that depends on the gradients and represents a first-order generalization of local thermodynamic equilibrium. I then integrate this PDF to derive a hydrodynamic model. Finally, I discuss the properties of that model and its relation to the discrete equations of computational fluid dynamics.

  15. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  16. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803
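
    The scaled quantile residual diagnostic mentioned in these two records can be sketched directly: data passed through a candidate cumulative distribution should behave like sorted uniform order statistics, whose k-th element is Beta(k, n+1-k) distributed, so residuals scaled by the Beta mean and standard deviation should scatter around zero with unit spread. The Gaussian test data and candidate CDF below are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 400
sample = rng.normal(size=n)
u = np.sort(norm.cdf(sample))             # probability integral transform, sorted

k = np.arange(1, n + 1)
mu = k / (n + 1)                          # mean of the Beta(k, n+1-k) order statistic
sd = np.sqrt(mu * (1 - mu) / (n + 2))     # its standard deviation
sqr = (u - mu) / sd                       # scaled quantile residuals

print("max |SQR|:", np.abs(sqr).max())    # large excursions flag a poor candidate fit
```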

  17. Large eddy simulation of turbulent premixed combustion using tabulated detailed chemistry and presumed probability density function

    NASA Astrophysics Data System (ADS)

    Zhang, Hongda; Han, Chao; Ye, Taohong; Ren, Zhuyin

    2016-03-01

    A method of chemistry tabulation combined with a presumed probability density function (PDF) is applied to simulate piloted premixed jet burner flames with high Karlovitz number using large eddy simulation. Thermo-chemistry states are tabulated by the combination of an auto-ignition and an extended auto-ignition model. To evaluate the capability of the proposed tabulation method to represent the thermo-chemistry states under different fresh-gas temperatures, an a priori study is conducted by performing idealised transient one-dimensional premixed flame simulations. A presumed PDF is used to account for the interaction of turbulence and flame, with a beta PDF modelling the distribution of the reaction progress variable. Two presumed PDF models, a Dirichlet distribution and independent beta distributions, are applied to represent the interaction between the two mixture fractions that are associated with the three inlet streams. Comparisons of statistical results show that both presumed PDF models for the two mixture fractions are capable of predicting temperature and major species profiles; however, they are shown to have a significant effect on the predictions for intermediate species. An analysis of the thermo-chemical state-space representation of the sub-grid scale (SGS) combustion model is performed by comparing correlations between the carbon monoxide mass fraction and temperature. The SGS combustion model based on the proposed chemistry tabulation can reasonably capture the peak value and trend of intermediate species. Aspects regarding model extensions to adequately predict the peak location of intermediate species are discussed.
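
    A hedged sketch of the presumed-pdf step for a reaction progress variable: given a resolved mean and a sub-grid variance, the beta shape parameters follow from moment matching, and any tabulated quantity can then be filtered against that pdf. The numbers and the placeholder rate shape are illustrative, not taken from the simulations above.

```python
import numpy as np
from scipy.stats import beta

c_mean, c_var = 0.6, 0.05                  # must satisfy c_var < c_mean*(1 - c_mean)
g = c_mean * (1 - c_mean) / c_var - 1.0    # moment matching for the beta pdf
a, b = c_mean * g, (1 - c_mean) * g

c = np.linspace(1e-6, 1 - 1e-6, 2000)
pdf = beta.pdf(c, a, b)                    # presumed sub-grid pdf of the progress variable

rate = lambda cc: cc * (1 - cc)            # placeholder reaction-rate shape
filtered_rate = np.sum(rate(c) * pdf) * (c[1] - c[0])  # pdf-weighted (filtered) rate
print("alpha, beta:", a, b, " filtered rate:", filtered_rate)
```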

  18. On convergence of differential evolution over a class of continuous functions with unique global optimum.

    PubMed

    Ghosh, Sayan; Das, Swagatam; Vasilakos, Athanasios V; Suresh, Kaushik

    2012-02-01

    Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms of current interest. Since its inception in the mid 1990s, DE has been finding many successful applications in real-world optimization problems from diverse domains of science and engineering. This paper takes a first significant step toward the convergence analysis of a canonical DE (DE/rand/1/bin) algorithm. It first deduces a time-recursive relationship for the probability density function (PDF) of the trial solutions, taking into consideration the DE-type mutation, crossover, and selection mechanisms. Then, by applying the concepts of Lyapunov stability theorems, it shows that as time approaches infinity, the PDF of the trial solutions concentrates narrowly around the global optimum of the objective function, assuming the shape of a Dirac delta distribution. Asymptotic convergence behavior of the population PDF is established by constructing a Lyapunov functional based on the PDF and showing that it monotonically decreases with time. The analysis is applicable to a class of continuous and real-valued objective functions that possesses a unique global optimum (but may have multiple local optima). Theoretical results have been substantiated with relevant computer simulations.
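
    For reference, a minimal sketch of the canonical DE/rand/1/bin scheme analysed in this record: rand/1 mutation, binomial crossover, and greedy selection. The sphere objective and the control parameters F and CR are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
dim, NP, F, CR, gens = 10, 50, 0.8, 0.9, 300
sphere = lambda x: np.sum(x ** 2)          # continuous, unique global optimum at 0

pop = rng.uniform(-5, 5, (NP, dim))
fit = np.array([sphere(p) for p in pop])

for _ in range(gens):
    for i in range(NP):
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])      # rand/1 mutation
        mask = rng.random(dim) < CR
        mask[rng.integers(dim)] = True             # at least one component from v
        trial = np.where(mask, v, pop[i])          # binomial crossover
        f = sphere(trial)
        if f <= fit[i]:                            # greedy selection
            pop[i], fit[i] = trial, f

print("best objective value:", fit.min())          # concentrates near the optimum
```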

  19. Compressible cavitation with stochastic field method

    NASA Astrophysics Data System (ADS)

    Class, Andreas; Dumond, Julien

    2012-11-01

    Non-linear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally the simulation of pdf transport requires Monte-Carlo codes based on Lagrange particles or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic field method solving pdf transport based on Euler fields has been proposed which eliminates the necessity to mix Euler and Lagrange techniques or prescribed pdf assumptions. In the present work, part of the PhD project "Design and analysis of a Passive Outflow Reducer relying on cavitation", a first application of the stochastic field method to multi-phase flow and in particular to cavitating flow is presented. The application considered is a nozzle subjected to high velocity flow so that sheet cavitation is observed near the nozzle surface in the divergent section. It is demonstrated that the stochastic field formulation captures the wide range of pdf shapes present at different locations. The method is compatible with finite-volume codes, where all existing physical models available for Lagrange techniques, presumed pdf or binning methods can be easily extended to the stochastic field formulation.

  20. EUPDF: An Eulerian-Based Monte Carlo Probability Density Function (PDF) Solver. User's Manual

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    1998-01-01

    EUPDF is an Eulerian-based Monte Carlo PDF solver developed for application with sprays, combustion, parallel computing and unstructured grids. It is designed to be massively parallel and can easily be coupled with any existing gas-phase flow and spray solvers. The solver accommodates the use of an unstructured mesh with mixed elements of triangular, quadrilateral, and/or tetrahedral type. The manual provides the user with the coding required to couple the PDF code to any given flow code, a basic understanding of the EUPDF code structure, and the models involved in the PDF formulation. The source code of EUPDF will be available with the release of the National Combustion Code (NCC) as a complete package.

  1. Implementation of time-efficient adaptive sampling function design for improved undersampled MRI reconstruction

    NASA Astrophysics Data System (ADS)

    Choi, Jinhyeok; Kim, Hyeonjin

    2016-12-01

    To improve the efficacy of undersampled MRI, a method of designing adaptive sampling functions is proposed that is simple to implement on an MR scanner and yet effectively improves the performance of the sampling functions. An approximation of the energy distribution of an image (E-map) is estimated from highly undersampled k-space data acquired in a prescan and efficiently recycled in the main scan. An adaptive probability density function (PDF) is generated by combining the E-map with a modeled PDF. A set of candidate sampling functions are then prepared from the adaptive PDF, among which the one with maximum energy is selected as the final sampling function. To validate its computational efficiency, the proposed method was implemented on an MR scanner, and its robust performance in Fourier-transform (FT) MRI and compressed sensing (CS) MRI was tested by simulations and in a cherry tomato. The proposed method consistently outperforms the conventional modeled PDF approach for undersampling ratios of 0.2 or higher in both FT-MRI and CS-MRI. To fully benefit from undersampled MRI, it is preferable that the design of adaptive sampling functions be performed online immediately before the main scan. In this way, the proposed method may further improve the efficacy of the undersampled MRI.
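
    A hedged sketch of the adaptive sampling-function design described here: a prescan energy map is blended with a modelled variable-density pdf, several candidate masks are drawn from the combined pdf, and the candidate capturing the most k-space energy is kept. The 1D phase-encode setting, the blend weight, and the candidate count are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n, accel = 128, 0.25                           # phase-encode lines, sampling fraction

ky = np.abs(np.arange(n) - n // 2) / (n // 2)
modeled = np.exp(-4.0 * ky)                    # modelled pdf, dense near k-space centre
emap = np.exp(-6.0 * ky) * (1 + 0.3 * rng.random(n))  # stand-in prescan energy map (E-map)

pdf = 0.5 * modeled / modeled.sum() + 0.5 * emap / emap.sum()  # adaptive pdf
pdf = np.clip(pdf * accel * n, 0, 1)           # per-line sampling probabilities

masks = rng.random((32, n)) < pdf              # candidate sampling functions
best = masks[np.argmax((masks * emap).sum(axis=1))]  # keep the max-energy candidate

print("sampled lines:", int(best.sum()), "of", n)
```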

  2. Anisotropy of stress correlation in two-dimensional liquids and a pseudospin model

    DOE PAGES

    Wu, Bin; Iwashita, Takuya; Egami, Takeshi

    2015-11-04

    Liquids are condensed matter in which atoms are strongly correlated in position and momentum. The atomic pair density function (PDF) is often used in describing such correlation. However, elucidation of many properties requires higher degrees of correlation than the pair correlation. For instance, viscosity depends upon the stress correlations in space and time. We examine the cross correlation between the stress correlation at the atomic level and the PDF for two-dimensional liquids. We introduce the concept of the stress-resolved pair distribution function (SRPDF) that uses the sign of atomic-level stress as a selection rule to include particles from density correlations. The connection between SRPDFs and the stress correlation function is explained through an approximation in which the shear stress is replaced by a pseudospin. Lastly, we further assess the possibility of interpreting the long-range stress correlation as a consequence of short-range Ising-like pseudospin interactions.

  3. The large-scale gravitational bias from the quasi-linear regime.

    NASA Astrophysics Data System (ADS)

    Bernardeau, F.

    1996-08-01

    It is known that in gravitational instability scenarios the nonlinear dynamics induces non-Gaussian features in cosmological density fields that can be investigated with perturbation theory. Here, I derive the expression of the joint moments of cosmological density fields taken at two different locations. The results are valid when the density fields are filtered with a top-hat filter window function, and when the distance between the two cells is large compared to the smoothing length. In particular I show that it is possible to get the generating function of the coefficients C_{p,q} defined by ⟨δ^p(x₁) δ^q(x₂)⟩_c = C_{p,q} ⟨δ²(x)⟩^(p+q−2) ⟨δ(x₁)δ(x₂)⟩, where δ(x) is the local smoothed density field. It is then possible to reconstruct the joint density probability distribution function (PDF), generalizing for two points what has been obtained previously for the one-point density PDF. I discuss the validity of the large separation approximation in an explicit numerical Monte Carlo integration of the C_{2,1} parameter as a function of |x₁ − x₂|. A straightforward application is the calculation of the large-scale "bias" properties of the over-dense (or under-dense) regions. The properties and the shape of the bias function are presented in detail and successfully compared with numerical results obtained in an N-body simulation with CDM initial conditions.

  4. Stochastic Forecasting of Algae Blooms in Lakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Tartakovsky, Daniel M.; Tartakovsky, Alexandre M.

    We consider the development of harmful algae blooms (HABs) in a lake with uncertain nutrient inflow. Two general frameworks, the Fokker-Planck equation and the PDF method, are developed to quantify the resulting concentration uncertainty of various algae groups by deriving a deterministic equation for their joint probability density function (PDF). A computational example is examined to study the evolution of cyanobacteria (the blue-green algae) and the impacts of the initial concentration and the inflow-outflow ratio.

  5. Evaluation of a Consistent LES/PDF Method Using a Series of Experimental Spray Flames

    NASA Astrophysics Data System (ADS)

    Heye, Colin; Raman, Venkat

    2012-11-01

    A consistent method for the evolution of the joint-scalar probability density function (PDF) transport equation is proposed for application to large eddy simulation (LES) of turbulent reacting flows containing evaporating spray droplets. PDF transport equations provide the benefit of including the chemical source term in closed form; however, additional terms describing LES subfilter mixing must be modeled. Recently available detailed experimental measurements provide model validation data for a wide range of evaporation rates and combustion regimes, as are well known to occur in spray flames. In this work, the experimental data will be used to investigate the impact of droplet mass loading and evaporation rates on the subfilter scalar PDF shape in comparison with conventional flamelet models. In addition, existing model term closures in the PDF transport equations are evaluated with a focus on their validity in the presence of regime changes.

  6. Density PDFs of diffuse gas in the Milky Way

    NASA Astrophysics Data System (ADS)

    Berkhuijsen, E. M.; Fletcher, A.

    2012-09-01

    The probability distribution functions (PDFs) of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5∘ and |b|≥ 5∘ are considered separately. Our results provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of density structure in the diffuse gas.

  7. Adaptive non-linear control for cancer therapy through a Fokker-Planck observer.

    PubMed

    Shakeri, Ehsan; Latif-Shabgahi, Gholamreza; Esmaeili Abharian, Amir

    2018-04-01

    In recent years, many efforts have been made to present optimal strategies for cancer therapy through the mathematical modelling of tumour-cell population dynamics and optimal control theory. In many cases, the therapy effect is included in the drift term of the stochastic Gompertz model. By fitting the model to empirical data, the parameters of the therapy function are estimated. The reported research works have not presented any algorithm to determine the optimal parameters of the therapy function. In this study, a logarithmic therapy function is introduced into the drift term of the Gompertz model. Using the proposed control algorithm, the therapy function parameters are predicted and adaptively adjusted. To control the growth of the tumour-cell population, its moments must be manipulated. This study employs the probability density function (PDF) control approach because of its ability to control all the moments of the process. A Fokker-Planck-based non-linear stochastic observer is used to determine the PDF of the process. A cost function based on the difference between a predefined desired PDF and the PDF of the tumour-cell population is defined. Using the proposed algorithm, the therapy function parameters are adjusted in such a manner that the cost function is minimised. The existence of an optimal therapy function is also proved. Numerical results are finally given to demonstrate the effectiveness of the proposed method.
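
    An Euler-Maruyama sketch of a stochastic Gompertz tumour model with a therapy term in the drift, in the spirit of this record. The particular logarithmic form u(x) = theta*x*ln(x) and all parameter values are hypothetical placeholders, not those of the paper; the ensemble of paths stands in for the PDF that the Fokker-Planck observer tracks.

```python
import numpy as np

rng = np.random.default_rng(6)
a, b, sigma, theta = 1.0, 0.4, 0.15, 0.3   # growth, decay, noise, therapy strength
dt, steps, paths = 1e-3, 5000, 2000

x = np.full(paths, 0.5)                    # scaled initial tumour-cell population
for _ in range(steps):
    drift = x * (a - b * np.log(x)) - theta * x * np.log(x)  # hypothetical log therapy
    x += drift * dt + sigma * x * rng.normal(0.0, np.sqrt(dt), paths)
    x = np.maximum(x, 1e-8)                # keep the population positive

print("ensemble mean:", x.mean(), " variance:", x.var())
```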

  8. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    PubMed Central

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2015-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475

  9. Development of a standard soil-to-skin adherence probability density function for use in Monte Carlo analyses of dermal exposure.

    PubMed

    Finley, B L; Scott, P K; Mayhall, D A

    1994-08-01

    It has recently been suggested that "standard" data distributions for key exposure variables should be developed wherever appropriate for use in probabilistic or "Monte Carlo" exposure analyses. Soil-on-skin adherence estimates represent an ideal candidate for development of a standard data distribution: There are several readily available studies which offer a consistent pattern of reported results, and more importantly, soil adherence to skin is likely to vary little from site-to-site. In this paper, we thoroughly review each of the published soil adherence studies with respect to study design, sampling, and analytical methods, and level of confidence in the reported results. Based on these studies, probability density functions (PDF) of soil adherence values were examined for different age groups and different sampling techniques. The soil adherence PDF developed from adult data was found to resemble closely the soil adherence PDF based on child data in terms of both central tendency (mean = 0.49 and 0.63 mg-soil/cm2-skin, respectively) and 95th percentile values (1.6 and 2.4 mg-soil/cm2-skin, respectively). Accordingly, a single, "standard" PDF is presented based on all data collected for all age groups. This standard PDF is lognormally distributed; the arithmetic mean and standard deviation are 0.52 +/- 0.9 mg-soil/cm2-skin. Since our review of the literature indicates that soil adherence under environmental conditions will be minimally influenced by age, sex, soil type, or particle size, this PDF should be considered applicable to all settings. The 50th and 95th percentile values of the standard PDF (0.25 and 1.7 mg-soil/cm2-skin, respectively) are very similar to recent U.S. EPA estimates of "average" and "upper-bound" soil adherence (0.2 and 1.0 mg-soil/cm2-skin, respectively).
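
    Using this standard PDF in a Monte Carlo analysis takes a few lines: the reported arithmetic mean and standard deviation (0.52 +/- 0.9 mg-soil/cm2-skin) are converted to the underlying lognormal parameters by moment matching and then sampled. Only the conversion step is added here; the numbers come from the abstract.

```python
import numpy as np

m, s = 0.52, 0.9                          # arithmetic mean and sd (mg-soil/cm2-skin)
sigma2 = np.log(1.0 + (s / m) ** 2)       # lognormal shape parameter from moments
mu = np.log(m) - 0.5 * sigma2             # lognormal location parameter

rng = np.random.default_rng(7)
adherence = rng.lognormal(mu, np.sqrt(sigma2), 100_000)

print("50th percentile:", np.percentile(adherence, 50))  # ~0.25, as reported
print("95th percentile:", np.percentile(adherence, 95))  # ~1.7, as reported
```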

  10. Non-Gaussianity in a quasiclassical electronic circuit

    NASA Astrophysics Data System (ADS)

    Suzuki, Takafumi J.; Hayakawa, Hisao

    2017-05-01

    We study the non-Gaussian dynamics of a quasiclassical electronic circuit coupled to a mesoscopic conductor. Non-Gaussian noise accompanying the nonequilibrium transport through the conductor significantly modifies the stationary probability density function (PDF) of the flux in the dissipative circuit. We incorporate weak quantum fluctuation of the dissipative LC circuit with a stochastic method and evaluate the quantum correction of the stationary PDF. Furthermore, an inverse formula to infer the statistical properties of the non-Gaussian noise from the stationary PDF is derived in the classical-quantum crossover regime. The quantum correction is indispensable to correctly estimate the microscopic transfer events in the quantum point contact (QPC) with the quasiclassical inverse formula.

  11. A study of hydrogen diffusion flames using PDF turbulence model

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    The application of probability density function (pdf) turbulence models is addressed. For the purpose of accurate prediction of turbulent combustion, an algorithm that combines a conventional computational fluid dynamic (CFD) flow solver with the Monte Carlo simulation of the pdf evolution equation was developed. The algorithm was validated using experimental data for a heated turbulent plane jet. The study of H2-F2 diffusion flames was carried out using this algorithm. Numerical results compared favorably with experimental data. The computations show that the flame center shifts as the equivalence ratio changes, and that for the same equivalence ratio, similarity solutions for flames exist.

  12. A study of hydrogen diffusion flames using PDF turbulence model

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    The application of probability density function (pdf) turbulence models is addressed in this work. For the purpose of accurate prediction of turbulent combustion, an algorithm that combines a conventional CFD flow solver with the Monte Carlo simulation of the pdf evolution equation has been developed. The algorithm has been validated using experimental data for a heated turbulent plane jet. The study of H2-F2 diffusion flames has been carried out using this algorithm. Numerical results compared favorably with experimental data. The computations show that the flame center shifts as the equivalence ratio changes, and that for the same equivalence ratio, similarity solutions for flames exist.

  13. Scale matters

    NASA Astrophysics Data System (ADS)

    Margolin, L. G.

    2018-04-01

    The applicability of Navier-Stokes equations is limited to near-equilibrium flows in which the gradients of density, velocity and energy are small. Here I propose an extension of the Chapman-Enskog approximation in which the velocity probability distribution function (PDF) is averaged in the coordinate phase space as well as the velocity phase space. I derive a PDF that depends on the gradients and represents a first-order generalization of local thermodynamic equilibrium. I then integrate this PDF to derive a hydrodynamic model. I discuss the properties of that model and its relation to the discrete equations of computational fluid dynamics. This article is part of the theme issue "Hilbert's sixth problem".

  14. The Superstatistical Nature and Interoccurrence Time of Atmospheric Mercury Concentration Fluctuations

    EPA Science Inventory

    The probability density function (PDF) of the time intervals between subsequent extreme events in atmospheric Hg0 concentration data series from different latitudes has been investigated. The Hg0 dynamic possesses a long-term memory autocorrelation function. Above a fixed thresh...

  15. Continuous time anomalous diffusion in a composite medium.

    PubMed

    Stickler, B A; Schachinger, E

    2011-08-01

    The one-dimensional continuous time anomalous diffusion in composite media consisting of a finite number of layers in immediate contact is investigated. The diffusion process itself is described with the help of two probability density functions (PDFs), one of which is an arbitrary jump-length PDF, and the other is a long-tailed waiting-time PDF characterized by the waiting-time index β∈(0,1). The former is assumed to be a function of the space coordinate x and the time coordinate t while the latter is a function of x and the time interval. For such an environment a very general form of the diffusion equation is derived which describes the continuous time anomalous diffusion in a composite medium. This result is then specialized to two particular forms of the jump-length PDF, namely the continuous time Lévy flight PDF and the continuous time truncated Lévy flight PDF. In both cases the PDFs are characterized by the Lévy index α∈(0,2) which is regarded to be a function of x and t. It is possible to demonstrate that for particular choices of the indices α and β other equations for anomalous diffusion, well known from the literature, follow immediately. This demonstrates the very general applicability of the derivation and of the resulting fractional differential equation discussed here.
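
    A minimal continuous-time random walk sketch in the spirit of this record: Gaussian jump lengths combined with a long-tailed waiting-time pdf of index beta in (0,1), sampled here from a Pareto law. A homogeneous medium is assumed; the layered, space-dependent PDFs of the paper are beyond this sketch. The mean squared displacement grows like t^beta rather than t (subdiffusion).

```python
import numpy as np

rng = np.random.default_rng(8)
beta, walkers, t_max = 0.6, 20_000, 100.0

x = np.zeros(walkers)                     # positions
t = np.zeros(walkers)                     # internal clocks
active = np.ones(walkers, dtype=bool)

while active.any():
    idx = np.flatnonzero(active)
    wait = rng.pareto(beta, idx.size) + 1.0   # heavy-tailed waits, psi ~ t^-(1+beta)
    renew = t[idx] + wait
    jumps = renew < t_max                 # jump only if the renewal falls before t_max
    jidx = idx[jumps]
    t[jidx] = renew[jumps]
    x[jidx] += rng.normal(0.0, 1.0, jidx.size)  # Gaussian jump lengths
    active[idx[~jumps]] = False           # walkers past t_max freeze in place

print("MSD at t_max:", np.mean(x ** 2))   # sublinear in t_max for beta < 1
```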

  16. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    NASA Astrophysics Data System (ADS)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon a functional that is linked to the second-order derivative of the pdf. As we introduce an analytical approximation of this functional, the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
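
    A hedged sketch of the plug-in idea: the AMISE-optimal bandwidth needs the unknown functional R(f'') = ∫ f''(x)² dx, and replacing it with its Gaussian-reference value yields the closed-form bandwidth below. This rule-of-thumb stands in for the paper's own analytical approximation, which is not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(9)
data = rng.standard_t(df=5, size=1000)     # heavier-tailed test variable
n, sigma = data.size, data.std(ddof=1)

h = (4.0 / (3.0 * n)) ** 0.2 * sigma       # Gaussian-reference plug-in bandwidth

kde = gaussian_kde(data, bw_method=h / sigma)  # scipy's factor is bandwidth / data std
grid = np.linspace(data.min(), data.max(), 400)
print("peak of estimated pdf:", kde(grid).max())
```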

  17. Relationship between sea ice freeboard and draft in the Arctic Basin, and implications for ice thickness monitoring

    NASA Technical Reports Server (NTRS)

    Wadhams, P.; Tucker, W. B., III; Krabill, W. B.; Swift, R. N.; Comiso, J. C.; Davis, N. R.

    1992-01-01

    This study confirms the finding of Comiso et al. (1991) that the probability density function (pdf) of the ice freeboard in the Arctic Ocean can be converted to a pdf of ice draft by applying a simple coordinate factor. The coordinate factor, R, the ratio of mean draft to mean freeboard, is related to the mean material (ice plus snow) density, rho(m), and the near-surface water density, rho(w), by the relationship R = rho(m)/(rho(w) - rho(m)). The measured value of R was applied to each of six 50-km sections of a joint airborne laser and submarine sonar profile obtained along nearly coincident tracks in the Arctic Basin north of Greenland, and was found to be consistent over all sections tested, despite differences in the ice regime. This indicates that a single value of R might be used for measurements made in this season of the year. The mean value of R from all six sections was found to be 7.89.
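
    The conversion is a one-line change of variables once the densities are fixed. In the sketch below, the density values are illustrative (they give R near the reported mean of 7.89) and the freeboard sample is a placeholder.

```python
import numpy as np

rho_m, rho_w = 910.0, 1025.0          # mean material and water densities, kg/m^3 (assumed)
R = rho_m / (rho_w - rho_m)           # coordinate factor, ~7.9

rng = np.random.default_rng(10)
freeboard = rng.gamma(2.0, 0.15, 5000)   # placeholder freeboard sample, metres
draft = R * freeboard                    # the draft pdf is the freeboard pdf rescaled by R

print("R =", round(R, 2), " mean draft =", round(draft.mean(), 2), "m")
```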

  18. Two is better than one: joint statistics of density and velocity in concentric spheres as a cosmological probe

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Codis, S.; Hahn, O.; Pichon, C.; Bernardeau, F.

    2017-08-01

    The analytical formalism to obtain the probability distribution functions (PDFs) of spherically averaged cosmic densities and velocity divergences in the mildly non-linear regime is presented. A large-deviation principle is applied to those cosmic fields assuming their most likely dynamics in spheres is set by the spherical collapse model. We validate our analytical results using state-of-the-art dark matter simulations with a phase-space resolved velocity field, finding a 2 per cent level agreement for a wide range of velocity divergences and densities in the mildly non-linear regime (~10 Mpc h^-1 at redshift zero), usually inaccessible to perturbation theory. From the joint PDF of densities and velocity divergences measured in two concentric spheres, we extract with the same accuracy velocity profiles and conditional velocity PDFs subject to a given over/underdensity that are of interest to understand the non-linear evolution of velocity flows. Both PDFs are used to build a simple but accurate maximum likelihood estimator for the redshift evolution of the variance of both the density and velocity divergence fields, which have smaller relative errors than their sample variances when non-linearities appear. Given the dependence of the velocity divergence on the growth rate, there is a significant gain in using the full knowledge of both PDFs to derive constraints on the equation of state of dark energy. Thanks to the insensitivity of the velocity divergence to bias, its PDF can be used to obtain unbiased constraints on the growth of structures (σ8, f) or it can be combined with the galaxy density PDF to extract bias parameters.

  19. IGM CONSTRAINTS FROM THE SDSS-III/BOSS DR9 Lyα FOREST TRANSMISSION PROBABILITY DISTRIBUTION FUNCTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Khee-Gan; Hennawi, Joseph F.; Spergel, David N.

    2015-02-01

    The Lyα forest transmission probability distribution function (PDF) is an established probe of intergalactic medium (IGM) astrophysics, especially the temperature-density relationship of the IGM. We measure the transmission PDF from 3393 Baryon Oscillations Spectroscopic Survey (BOSS) quasars from Sloan Digital Sky Survey Data Release 9, and compare with mock spectra that include careful modeling of the noise, continuum, and astrophysical uncertainties. The BOSS transmission PDFs, measured at ⟨z⟩ = [2.3, 2.6, 3.0], are compared with PDFs created from mock spectra drawn from a suite of hydrodynamical simulations that sample the IGM temperature-density relationship, γ, and the temperature at mean density, T₀, where T(Δ) = T₀ Δ^(γ−1). We find that a significant population of partial Lyman-limit systems (LLSs) with a column-density distribution slope of β_pLLS ∼ −2 is required to explain the data at the low-transmission end of the transmission PDF, while uncertainties in the mean Lyα forest transmission affect the high-transmission end. After modeling the LLSs and marginalizing over mean transmission uncertainties, we find that γ = 1.6 best describes the data over our entire redshift range, although constraints on T₀ are affected by systematic uncertainties. Within our model framework, isothermal or inverted temperature-density relationships (γ ≤ 1) are disfavored at a significance of over 4σ, although this could be somewhat weakened by cosmological and astrophysical uncertainties that we did not model.

  20. Chemically reacting supersonic flow calculation using an assumed PDF model

    NASA Technical Reports Server (NTRS)

    Farshchi, M.

    1990-01-01

    This work is motivated by the need to develop accurate models for chemically reacting compressible turbulent flow fields that are present in a typical supersonic combustion ramjet (SCRAMJET) engine. In this paper the development of a new assumed probability density function (PDF) reaction model for supersonic turbulent diffusion flames and its implementation into an efficient Navier-Stokes solver are discussed. The application of this model to a supersonic hydrogen-air flame will be considered.

  1. Evolution of column density distributions within Orion A⋆

    NASA Astrophysics Data System (ADS)

    Stutz, A. M.; Kainulainen, J.

    2015-05-01

    We compare the structure of star-forming molecular clouds in different regions of Orion A to determine how the column density probability distribution function (N-PDF) varies with environmental conditions such as the fraction of young protostars. A correlation between the N-PDF slope and Class 0 protostar fraction has been previously observed in a low-mass star-formation region (Perseus); here we test whether a similar correlation is observed in a high-mass star-forming region. We used Herschel PACS and SPIRE cold dust emission observations to derive a column density map of Orion A. We used the Herschel Orion Protostar Survey catalog to accurately identify and classify the Orion A young stellar object content, including the cold and relatively short-lived Class 0 protostars (with a lifetime of ~0.14 Myr). We divided Orion A into eight independent regions of 0.25 square degrees (13.5 pc2); in each region we fit the N-PDF distribution with a power law, and we measured the fraction of Class 0 protostars. We used a maximum-likelihood method to measure the N-PDF power-law index without binning the column density data. We find that the Class 0 fraction is higher in regions with flatter column density distributions. We tested the effects of incompleteness, extinction-driven misclassification of Class 0 sources, resolution, and adopted pixel scales. We show that these effects cannot account for the observed trend. Our observations demonstrate an association between the slope of the power-law N-PDF and the Class 0 fractions within Orion A. Various interpretations are discussed, including timescales based on the Class 0 protostar fraction assuming a constant star-formation rate. The observed relation suggests that the N-PDF can be related to an evolutionary state of the gas. If universal, such a relation permits evaluating the evolutionary state from the N-PDF power-law index at much greater distances than those accessible with protostar counts. Appendices are available in electronic form at http://www.aanda.org. The N(H) map as a FITS file is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/577/L6

  2. Dynamic least-squares kernel density modeling of Fokker-Planck equations with application to neural population.

    PubMed

    Shotorban, Babak

    2010-04-01

    The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs) depending on what phase-space variables are approximated by Gaussian functions. Three sample problems of univariate double-well potential, bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD well predicts the stationary PDF for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both these problems least-squares approximation is made on all phase-space variables resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.

  3. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier- Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.

  4. Dynamic heterogeneity and conditional statistics of non-Gaussian temperature fluctuations in turbulent thermal convection

    NASA Astrophysics Data System (ADS)

    He, Xiaozhou; Wang, Yin; Tong, Penger

    2018-05-01

    Non-Gaussian fluctuations with an exponential tail in their probability density function (PDF) are often observed in nonequilibrium steady states (NESSs), and one does not understand why they appear so often. Turbulent Rayleigh-Bénard convection (RBC) is an example of such a NESS, in which the measured PDF P(δT) of temperature fluctuations δT in the central region of the flow has a long exponential tail. Here we show that because of the dynamic heterogeneity in RBC, the exponential PDF is generated by a convolution of a set of dynamic modes conditioned on a constant local thermal dissipation rate ε. The conditional PDF G(δT|ε) of δT under a constant ε is found to be of Gaussian form, and its variance σ_T² for different values of ε follows an exponential distribution. The convolution of the two distribution functions gives rise to the exponential PDF P(δT). This work thus provides a physical mechanism for the observed exponential distribution of δT in RBC and also sheds light on the origin of non-Gaussian fluctuations in other NESSs.
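
    The mechanism admits a quick numeric check: Gaussian conditional PDFs whose variance is exponentially distributed compound into a PDF with exponential tails (in fact a Laplace distribution). The unit variance scale below is an arbitrary choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 1_000_000
var = rng.exponential(1.0, n)             # sigma_T^2 drawn from an exponential law
dT = rng.normal(0.0, np.sqrt(var))        # conditional Gaussian samples G(dT | var)

hist, edges = np.histogram(dT, bins=200, range=(-5, 5), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
tail = centers > 2.0
slope = np.polyfit(centers[tail], np.log(hist[tail]), 1)[0]
print("log-pdf tail slope:", slope)       # near-constant slope => exponential tail
```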

  5. Modeling of turbulent chemical reaction

    NASA Technical Reports Server (NTRS)

    Chen, J.-Y.

    1995-01-01

    Viewgraphs are presented on modeling turbulent reacting flows, regimes of turbulent combustion, regimes of premixed and regimes of non-premixed turbulent combustion, chemical closure models, flamelet model, conditional moment closure (CMC), NO(x) emissions from turbulent H2 jet flames, probability density function (PDF), departures from chemical equilibrium, mixing models for PDF methods, comparison of predicted and measured H2O mass fractions in turbulent nonpremixed jet flames, experimental evidence of preferential diffusion in turbulent jet flames, and computation of turbulent reacting flows.

  6. Numerical simulation of turbulent combustion: Scientific challenges

    NASA Astrophysics Data System (ADS)

    Ren, ZhuYin; Lu, Zhen; Hou, LingYun; Lu, LiuYan

    2014-08-01

    Predictive simulation of engine combustion is key to understanding the underlying complicated physicochemical processes, improving engine performance, and reducing pollutant emissions. Critical issues such as turbulence modeling, turbulence-chemistry interaction, and the accommodation of detailed chemical kinetics in complex flows remain challenging and essential for high-fidelity combustion simulation. This paper reviews the current status of the state-of-the-art large eddy simulation (LES)/probability density function (PDF)/detailed chemistry approach that can address these three challenging modelling issues. The PDF as a subgrid model for LES is formulated and the hybrid mesh-particle method for LES/PDF simulations is described. Then the development needs in micro-mixing models for PDF simulations of turbulent premixed combustion are identified. Finally, the different acceleration methods for detailed chemistry are reviewed and a combined strategy is proposed for further development.

  7. PDF investigations of turbulent non-premixed jet flames with thin reaction zones

    NASA Astrophysics Data System (ADS)

    Wang, Haifeng; Pope, Stephen

    2012-11-01

    PDF (probability density function) modeling studies are carried out for the Sydney piloted jet flames. These Sydney flames feature much thinner reaction zones in the mixture fraction space compared to those in the well-studied Sandia piloted jet flames. The performance of the different turbulent combustion models in the Sydney flames with thin reaction zones has not been examined extensively before, and this work aims at evaluating the capability of the PDF method to represent the thin turbulent flame structures in the Sydney piloted flames. Parametric and sensitivity PDF studies are performed with respect to the different models and model parameters. A global error parameter is defined to quantify the departure of the simulation results from the experimental data, and is used to assess the performance of the different set of models and model parameters.

  8. Weak limit of the three-state quantum walk on the line

    NASA Astrophysics Data System (ADS)

    Falkner, Stefan; Boettcher, Stefan

    2014-07-01

    We revisit the one-dimensional discrete time quantum walk with three states and the Grover coin, the simplest model that exhibits localization in a quantum walk. We derive analytic expressions for the localization and a long-time approximation for the entire probability density function (PDF). We find the possibility for asymmetric localization to the extreme that it vanishes completely on one site of the initial conditions. We also connect the time-averaged approximation of the PDF found by Inui et al. [Phys. Rev. E 72, 056112 (2005), 10.1103/PhysRevE.72.056112] to a spatial average of the walk. We show that this smoothed approximation predicts moments of the real PDF accurately.
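
    A minimal simulation of the three-state Grover walk discussed here: each site carries a 3-component amplitude (move left / stay / move right), and one step applies the 3x3 Grover coin followed by the component shifts. The symmetric localized initial state is one common choice, assumed for illustration.

```python
import numpy as np

T = 200                                   # number of steps
n = 2 * T + 1                             # sites -T .. T, origin at index T
G = 2.0 / 3.0 * np.ones((3, 3)) - np.eye(3)   # Grover coin

psi = np.zeros((n, 3), dtype=complex)
psi[T, :] = 1.0 / np.sqrt(3.0)            # symmetric initial coin state at the origin

for _ in range(T):
    psi = psi @ G.T                       # coin step: psi[x] -> G psi[x]
    psi[:, 0] = np.roll(psi[:, 0], -1)    # 'left' component moves to x - 1
    psi[:, 2] = np.roll(psi[:, 2], +1)    # 'right' component moves to x + 1

pdf = (np.abs(psi) ** 2).sum(axis=1)      # probability density over sites
print("P(origin) after", T, "steps:", pdf[T].round(4))  # stays nonzero: localization
```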

  9. Multiresolution MAP despeckling of SAR images based on locally adaptive generalized Gaussian pdf modeling.

    PubMed

    Argenti, Fabrizio; Bianchi, Tiziano; Alparone, Luciano

    2006-11-01

    In this paper, a new despeckling method based on undecimated wavelet decomposition and maximum a posteriori (MAP) estimation is proposed. Such a method relies on the assumption that the probability density function (pdf) of each wavelet coefficient is generalized Gaussian (GG). The major novelty of the proposed approach is that the parameters of the GG pdf are taken to be space-varying within each wavelet frame. Thus, they may be adjusted to spatial image context, not only to scale and orientation. Since the MAP equation to be solved is a function of the parameters of the assumed pdf model, the variance and shape factor of the GG function are derived from the theoretical moments, which depend on the moments and joint moments of the observed noisy signal and on the statistics of speckle. The solution of the MAP equation yields the MAP estimate of the wavelet coefficients of the noise-free image. The restored SAR image is synthesized from such coefficients. Experimental results, carried out on both synthetic speckled images and true SAR images, demonstrate that MAP filtering can be successfully applied to SAR images represented in the shift-invariant wavelet domain, without resorting to a logarithmic transformation.

  10. Materials Data on PdF3 (SG:167) by Materials Project

    DOE Data Explorer

    Kristin Persson

    2015-03-07

    Computed materials data using density functional theory calculations. These calculations determine the electronic structure of bulk materials by solving approximations to the Schrodinger equation. For more information, see https://materialsproject.org/docs/calculations

  11. Materials Data on PdF2 (SG:136) by Materials Project

    DOE Data Explorer

    Kristin Persson

    2015-02-09

    Computed materials data using density functional theory calculations. These calculations determine the electronic structure of bulk materials by solving approximations to the Schrodinger equation. For more information, see https://materialsproject.org/docs/calculations

  12. Probability and Cumulative Density Function Methods for the Stochastic Advection-Reaction Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barajas-Solano, David A.; Tartakovsky, Alexandre M.

    We present a cumulative density function (CDF) method for the probabilistic analysis of $d$-dimensional advection-dominated reactive transport in heterogeneous media. We employ a probabilistic approach in which epistemic uncertainty on the spatial heterogeneity of Darcy-scale transport coefficients is modeled in terms of random fields with given correlation structures. Our proposed CDF method employs a modified Large-Eddy-Diffusivity (LED) approach to close and localize the nonlocal equations governing the one-point PDF and CDF of the concentration field, resulting in a $(d+1)$-dimensional PDE. Compared to the classical LED localization, the proposed modified LED localization explicitly accounts for the mean-field advective dynamics over the phase space of the PDF and CDF. To illustrate the accuracy of the proposed closure, we apply our CDF method to one-dimensional single-species reactive transport with uncertain, heterogeneous advection velocities and reaction rates modeled as random fields.

  13. Assessment of a Three-Dimensional Line-of-Response Probability Density Function System Matrix for PET

    PubMed Central

    Yao, Rutao; Ramachandra, Ranjith M.; Mahajan, Neeraj; Rathod, Vinay; Gunasekar, Noel; Panse, Ashish; Ma, Tianyu; Jian, Yiqiang; Yan, Jianhua; Carson, Richard E.

    2012-01-01

    To achieve optimal PET image reconstruction through better system modeling, we developed a system matrix that is based on the probability density function for each line of response (LOR-PDF). The LOR-PDFs are grouped by LOR-to-detector incident angles to form a highly compact system matrix. The system matrix was implemented in the MOLAR list mode reconstruction algorithm for a small animal PET scanner. The impact of LOR-PDF on reconstructed image quality was assessed qualitatively as well as quantitatively in terms of contrast recovery coefficient (CRC) and coefficient of variance (COV), and its performance was compared with a fixed Gaussian (iso-Gaussian) line spread function. The LOR-PDFs of 3 coincidence signal emitting sources, 1) ideal positron emitter that emits perfect back-to-back γ rays (γγ) in air; 2) fluorine-18 (18F) nuclide in water; and 3) oxygen-15 (15O) nuclide in water, were derived, and assessed with simulated and experimental phantom data. The derived LOR-PDFs showed anisotropic and asymmetric characteristics dependent on LOR-detector angle, coincidence emitting source, and the medium, consistent with common PET physical principles. The comparison of the iso-Gaussian function and LOR-PDF showed that: 1) without positron range and acolinearity effects, the LOR-PDF achieved better or similar trade-offs of contrast recovery and noise for objects of 4-mm radius or larger, and this advantage extended to smaller objects (e.g. 2-mm radius sphere, 0.6-mm radius hot-rods) at higher iteration numbers; and 2) with positron range and acolinearity effects, the iso-Gaussian achieved similar or better resolution recovery depending on the significance of positron range effect. We conclude that the 3-D LOR-PDF approach is an effective method to generate an accurate and compact system matrix. However, when used directly in expectation-maximization based list-mode iterative reconstruction algorithms such as MOLAR, its superiority is not clear. For this application, using an iso-Gaussian function in MOLAR is a simple but effective technique for PET reconstruction. PMID:23032702

  14. EMG Amplitude Estimators Based on Probability Distribution for Muscle-Computer Interface

    NASA Astrophysics Data System (ADS)

    Phinyomark, Angkoon; Quaine, Franck; Laurillau, Yann; Thongpanja, Sirinee; Limsakul, Chusak; Phukpattaranont, Pornchai

    To develop an advanced muscle-computer interface (MCI) based on the surface electromyography (EMG) signal, amplitude estimates of muscle activity, i.e., the root mean square (RMS) and mean absolute value (MAV), are widely used as convenient and accurate inputs for a recognition system. Their classification performance is comparable to that of advanced, computationally expensive time-scale methods, i.e., the wavelet transform. However, the signal-to-noise-ratio (SNR) performance of RMS and MAV depends on the probability density function (PDF) of the EMG signals, i.e., Gaussian or Laplacian. The PDF of upper-limb motions associated with EMG signals is still not clear, especially for dynamic muscle contraction. In this paper, the EMG PDF is investigated based on surface EMG recorded during finger, hand, wrist and forearm motions. The results show that on average the experimental EMG PDF is closer to a Laplacian density, particularly for male subjects and flexor muscles. For the amplitude estimation, MAV has a higher SNR, defined as the mean feature divided by its fluctuation, than RMS. Because RMS and MAV discriminate equally well in feature space, MAV is recommended as a suitable EMG amplitude estimator for EMG-based MCIs.
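
    As a rough illustration of the SNR comparison above (a sketch, not code from the paper; the Laplacian surrogate signal and window sizes are assumptions), the following Python fragment contrasts the RMS and MAV estimators:

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic stand-in for EMG: zero-mean Laplacian noise, 200 windows of 1000 samples.
        windows = rng.laplace(loc=0.0, scale=1.0, size=(200, 1000))

        rms = np.sqrt(np.mean(windows ** 2, axis=1))   # root mean square per window
        mav = np.mean(np.abs(windows), axis=1)         # mean absolute value per window

        # SNR of an amplitude estimator: mean feature value divided by its fluctuation.
        for name, feat in (("RMS", rms), ("MAV", mav)):
            print(f"{name}: SNR = {feat.mean() / feat.std(ddof=1):.1f}")

    For Laplacian samples the MAV feature fluctuates less relative to its mean, so its printed SNR comes out higher, consistent with the recommendation above.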

  15. Verification of Anderson Superexchange in MnO via Magnetic Pair Distribution Function Analysis and ab initio Theory

    NASA Astrophysics Data System (ADS)

    Frandsen, Benjamin A.; Brunelli, Michela; Page, Katharine; Uemura, Yasutomo J.; Staunton, Julie B.; Billinge, Simon J. L.

    2016-05-01

    We present a temperature-dependent atomic and magnetic pair distribution function (PDF) analysis of neutron total scattering measurements of antiferromagnetic MnO, an archetypal strongly correlated transition-metal oxide. The known antiferromagnetic ground-state structure fits the low-temperature data closely with refined parameters that agree with conventional techniques, confirming the reliability of the newly developed magnetic PDF method. The measurements performed in the paramagnetic phase reveal significant short-range magnetic correlations on a ˜1 nm length scale that differ substantially from the low-temperature long-range spin arrangement. Ab initio calculations using a self-interaction-corrected local spin density approximation of density functional theory predict magnetic interactions dominated by Anderson superexchange and reproduce the measured short-range magnetic correlations to a high degree of accuracy. Further calculations simulating an additional contribution from a direct exchange interaction show much worse agreement with the data. The Anderson superexchange model for MnO is thus verified by experimentation and confirmed by ab initio theory.

  16. Materials Data on K2PdF4 (SG:12) by Materials Project

    DOE Data Explorer

    Kristin Persson

    2014-11-02

    Computed materials data using density functional theory calculations. These calculations determine the electronic structure of bulk materials by solving approximations to the Schrödinger equation. For more information, see https://materialsproject.org/docs/calculations

  17. Probability density function evolution of power systems subject to stochastic variation of renewable energy

    NASA Astrophysics Data System (ADS)

    Wei, J. Q.; Cong, Y. C.; Xiao, M. Q.

    2018-05-01

    As renewable energies are increasingly integrated into power systems, there is increasing interest in the stochastic analysis of power systems. Better techniques should be developed to account for the uncertainty caused by the penetration of renewables and consequently to analyse its impact on the stochastic stability of power systems. In this paper, Stochastic Differential Equations (SDEs) are used to represent the evolutionary behaviour of the power systems. The stationary Probability Density Function (PDF) solution to SDEs modelling power systems excited by Gaussian white noise is analysed. Subject to such random excitation, the Joint Probability Density Function (JPDF) of the phase angle and angular velocity is governed by the generalized Fokker-Planck-Kolmogorov (FPK) equation. To solve this equation, a numerical method is adopted. Special measures are taken such that the generalized FPK equation is satisfied in the average sense of integration with the assumed PDF. Both weak and strong intensities of the stochastic excitations are considered in a single machine infinite bus power system. The numerical analysis agrees with the result given by Monte Carlo simulation. Potential studies on the stochastic behaviour of multi-machine power systems with random excitations are discussed at the end.
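
    A minimal Monte Carlo sketch of this kind of check, assuming a single-machine infinite-bus swing equation with additive white noise (all parameter values below are illustrative, not taken from the paper):

        import numpy as np

        rng = np.random.default_rng(1)

        # Swing equation with white-noise power fluctuations (assumed parameters):
        #   d(delta) = w dt,   M dw = (Pm - Pmax*sin(delta) - D*w) dt + sigma dW
        M, D, Pm, Pmax, sigma = 0.2, 0.1, 0.8, 1.2, 0.05
        dt, n_steps, n_paths = 1e-3, 100_000, 100

        delta = np.full(n_paths, np.arcsin(Pm / Pmax))  # start at the stable equilibrium
        w = np.zeros(n_paths)

        for _ in range(n_steps):                        # Euler-Maruyama integration
            dW = rng.normal(0.0, np.sqrt(dt), n_paths)
            delta, w = (delta + w * dt,
                        w + ((Pm - Pmax * np.sin(delta) - D * w) / M) * dt + (sigma / M) * dW)

        # Histogram estimate of the stationary JPDF of (phase angle, angular velocity).
        jpdf, d_edges, w_edges = np.histogram2d(delta, w, bins=40, density=True)
        print(f"JPDF peak ~ {jpdf.max():.2f} near delta = {np.arcsin(Pm / Pmax):.2f} rad")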

  18. Progress Toward Affordable High Fidelity Combustion Simulations Using Filtered Density Functions for Hypersonic Flows in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Drozda, Tomasz G.; Quinlan, Jesse R.; Pisciuneri, Patrick H.; Yilmaz, S. Levent

    2012-01-01

    Significant progress has been made in the development of subgrid scale (SGS) closures based on a filtered density function (FDF) for large eddy simulations (LES) of turbulent reacting flows. The FDF is the counterpart of the probability density function (PDF) method, which has proven effective in Reynolds averaged simulations (RAS). However, while systematic progress is being made advancing the FDF models for relatively simple flows and lab-scale flames, the application of these methods in complex geometries and high speed, wall-bounded flows with shocks remains a challenge. The key difficulties are the significant computational cost associated with solving the FDF transport equation and the numerically stiff finite rate chemistry. For LES/FDF methods to make a more significant impact in practical applications, a pragmatic approach must be taken that significantly reduces the computational cost while maintaining high modeling fidelity. An example of one such ongoing effort is at the NASA Langley Research Center, where the first generation of FDF models, namely the scalar filtered mass density function (SFMDF), is being implemented into VULCAN, a production-quality RAS and LES solver widely used for the design of high speed propulsion flowpaths. This effort leverages internal and external collaborations to reduce the overall computational cost of high fidelity simulations in VULCAN by: implementing high order methods that allow a reduction in the total number of computational cells without loss in accuracy; implementing the first generation of high fidelity scalar PDF/FDF models applicable to high-speed compressible flows; coupling RAS/PDF and LES/FDF into a hybrid framework to efficiently and accurately model the effects of combustion in the vicinity of the walls; developing efficient Lagrangian particle tracking algorithms to support robust solutions of the FDF equations for high speed flows; and utilizing finite rate chemistry parametrizations, such as flamelet models, to reduce the number of transported reactive species and remove numerical stiffness. This paper briefly introduces the SFMDF model (highlighting key benefits and challenges), and discusses particle tracking for flows with shocks, the hybrid coupled RAS/PDF and LES/FDF model, the flamelet generated manifolds (FGM) model, and the Irregularly Portioned Lagrangian Monte Carlo Finite Difference (IPLMCFD) methodology for scalable simulation of high-speed reacting compressible flows.

  19. Regional statistics in confined two-dimensional decaying turbulence.

    PubMed

    Házi, Gábor; Tóth, Gábor

    2011-06-28

    Two-dimensional decaying turbulence in a square container has been simulated using the lattice Boltzmann method. The probability density function (PDF) of the vorticity and the particle distribution functions have been determined in various regions of the domain. It is shown that, after the initial stage of decay, the regional area-averaged enstrophy fluctuates strongly in time around a mean value. The ratio of the regional mean enstrophy to the overall enstrophy increases monotonically with increasing distance from the wall. This function has a shape similar to the axial mean velocity profile of turbulent channel flows. The PDF of the vorticity peaks at zero and is nearly symmetric when the statistics are taken over the whole domain. Approaching the wall, the PDFs become skewed owing to the boundary layer.

  20. Galaxy clustering with photometric surveys using PDF redshift information

    DOE PAGES

    Asorey, J.; Carrasco Kind, M.; Sevilla-Noarbe, I.; ...

    2016-03-28

    Here, photometric surveys produce large-area maps of the galaxy distribution, but with less accurate redshift information than is obtained from spectroscopic methods. Modern photometric redshift (photo-z) algorithms use galaxy magnitudes, or colors, that are obtained through multi-band imaging to produce a probability density function (PDF) for each galaxy in the map. We used simulated data to study the effect of using different photo-z estimators to assign galaxies to redshift bins in order to compare their effects on angular clustering and galaxy bias measurements. We found that if we use the entire PDF, rather than a single-point (mean or mode) estimate, the deviations are less biased, especially when using narrow redshift bins. When the redshift bin widths are $\Delta z = 0.1$, the use of the entire PDF reduces the typical measurement bias from 5%, when using single point estimates, to 3%.
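
    The bin-assignment comparison can be sketched as follows, assuming Gaussian per-galaxy photo-z PDFs on a toy catalogue (all numbers are illustrative):

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(2)

        # Toy catalogue with photo-z scatter 0.05(1+z) (assumed).
        n_gal = 100_000
        z_true = rng.uniform(0.2, 1.0, n_gal)
        sigma = 0.05 * (1.0 + z_true)
        z_point = z_true + rng.normal(0.0, sigma)     # single-point (mean) estimate

        edges = np.arange(0.2, 1.01, 0.1)             # Delta z = 0.1 tomographic bins

        # (a) Point estimates: each galaxy lands in exactly one bin.
        n_point, _ = np.histogram(z_point, bins=edges)

        # (b) Full PDFs: each galaxy deposits its PDF mass into every bin.
        cdf = norm.cdf(edges[:, None], loc=z_point, scale=sigma)
        n_pdf = (cdf[1:] - cdf[:-1]).sum(axis=1)

        print(n_point)
        print(n_pdf.round(0).astype(int))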

  1. Photon time-interval statistics applied to the analysis of laser heterodyne signal with photon counter

    NASA Astrophysics Data System (ADS)

    Liu, Lisheng; Zhang, Heyong; Guo, Jin; Zhao, Shuai; Wang, Tingfeng

    2012-08-01

    In this paper, we report a mathematical derivation of the probability density function (PDF) of the time interval between two successive photoelectrons of a laser heterodyne signal, and confirm the theoretical result by both numerical simulation and experiment. The PDF curve of the beat signal displays a series of fluctuations whose period and amplitude are determined by the beat frequency and the mixing efficiency, respectively. The beat frequency is accordingly derived from the frequency of the fluctuations once the PDF curve is measured. This frequency measurement method still works when the traditional Fast Fourier Transform (FFT) algorithm can hardly recover the correct peak value of the beat frequency, e.g., when detecting an 80 MHz beat signal at a photon count rate of 8 Mcps (counts per second), and this indicates an advantage of the PDF method.

  2. Probability density function modeling of scalar mixing from concentrated sources in turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Bakosi, J.; Franzese, P.; Boybeyi, Z.

    2007-11-01

    Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth and Pope [Phys. Fluids 29, 387 (1986)] with Durbin's [J. Fluid Mech. 249, 465 (1993)] method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous transport with a nonlocal representation of the near-wall Reynolds stress anisotropy. The presence of walls is incorporated through the imposition of no-slip and impermeability conditions on particles without the use of damping or wall-functions. Information on the turbulent time scale is supplied by the gamma-distribution model of van Slooten et al. [Phys. Fluids 10, 246 (1998)]. Two different micromixing models are compared that incorporate the effect of small scale mixing on the transported scalar: the widely used interaction by exchange with the mean and the interaction by exchange with the conditional mean model. Single-point velocity and concentration statistics are compared to direct numerical simulation and experimental data at Reτ=1080 based on the friction velocity and the channel half width. The joint model accurately reproduces a wide variety of conditional and unconditional statistics in both physical and composition space.

  3. Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.

    PubMed

    Herzallah, Randa

    2015-03-01

    Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper a generalised probabilistic controller design for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed loop control system and an ideal joint pdf is presented, emphasising how the uncertainty can be systematically incorporated in the absence of reliable system models. To achieve this objective all probabilistic models of the system are estimated from process data using mixture density networks (MDNs), where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations for the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm and encouraging results are obtained. Copyright © 2014 Elsevier Ltd. All rights reserved.
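
    For orientation, the design objective referred to above is, schematically (notation assumed here rather than copied from the paper), the Kullback-Leibler functional

        D_{\mathrm{KL}}\left(f \,\middle\|\, f^{I}\right) = \int f(x,u)\,\ln\frac{f(x,u)}{f^{I}(x,u)}\,\mathrm{d}x\,\mathrm{d}u ,

    where f is the actual joint pdf of the closed-loop states x and control inputs u, f^I is the ideal joint pdf, and the controller is the randomised control law minimising this divergence.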

  4. Inverse estimation of the spheroidal particle size distribution using Ant Colony Optimization algorithms in multispectral extinction technique

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming

    2014-10-01

    Four improved Ant Colony Optimization (ACO) algorithms, i.e., the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e., the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows reasonable agreement between the original distribution function and the general distribution function when only the variation of the length of the rotational semi-axis is considered.
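
    The three monomodal candidates can be written down compactly; the Python sketch below (parameterizations and numerical values assumed for illustration) checks that each integrates to unity on a size grid:

        import numpy as np

        def rosin_rammler(D, D0, n):
            # Weibull-form R-R density with scale D0 and spread exponent n (assumed form)
            return (n / D0) * (D / D0) ** (n - 1) * np.exp(-((D / D0) ** n))

        def normal_pdf(D, mu, s):
            return np.exp(-0.5 * ((D - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

        def lognormal_pdf(D, mu, s):
            return np.exp(-0.5 * ((np.log(D) - mu) / s) ** 2) / (D * s * np.sqrt(2.0 * np.pi))

        D = np.linspace(0.05, 10.0, 4000)     # particle size grid (arbitrary units)
        dD = D[1] - D[0]
        for f, args in ((rosin_rammler, (3.0, 2.5)),
                        (normal_pdf, (3.0, 1.0)),
                        (lognormal_pdf, (1.0, 0.4))):
            print(f"{f.__name__}: integral = {np.sum(f(D, *args)) * dD:.3f}")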

  5. An LES-PBE-PDF approach for modeling particle formation in turbulent reacting flows

    NASA Astrophysics Data System (ADS)

    Sewerin, Fabian; Rigopoulos, Stelios

    2017-10-01

    Many chemical and environmental processes involve the formation of a polydispersed particulate phase in a turbulent carrier flow. Frequently, the immersed particles are characterized by an intrinsic property such as the particle size, and the distribution of this property across a sample population is taken as an indicator for the quality of the particulate product or its environmental impact. In the present article, we propose a comprehensive model and an efficient numerical solution scheme for predicting the evolution of the property distribution associated with a polydispersed particulate phase forming in a turbulent reacting flow. Here, the particulate phase is described in terms of the particle number density whose evolution in both physical and particle property space is governed by the population balance equation (PBE). Based on the concept of large eddy simulation (LES), we augment the existing LES-transported probability density function (PDF) approach for fluid phase scalars by the particle number density and obtain a modeled evolution equation for the filtered PDF associated with the instantaneous fluid composition and particle property distribution. This LES-PBE-PDF approach allows us to predict the LES-filtered fluid composition and particle property distribution at each spatial location and point in time without any restriction on the chemical or particle formation kinetics. In view of a numerical solution, we apply the method of Eulerian stochastic fields, invoking an explicit adaptive grid technique in order to discretize the stochastic field equation for the number density in particle property space. In this way, sharp moving features of the particle property distribution can be accurately resolved at a significantly reduced computational cost. As a test case, we consider the condensation of an aerosol in a developed turbulent mixing layer. Our investigation not only demonstrates the predictive capabilities of the LES-PBE-PDF model but also indicates the computational efficiency of the numerical solution scheme.

  6. The orbital PDF: general inference of the gravitational potential from steady-state tracers

    NASA Astrophysics Data System (ADS)

    Han, Jiaxin; Wang, Wenting; Cole, Shaun; Frenk, Carlos S.

    2016-02-01

    We develop two general methods to infer the gravitational potential of a system using steady-state tracers, i.e. tracers with a time-independent phase-space distribution. Combined with the phase-space continuity equation, the time independence implies a universal orbital probability density function (oPDF) dP(λ|orbit) ∝ dt, where λ is the coordinate of the particle along the orbit. The oPDF is equivalent to Jeans' theorem and is the key physical ingredient behind most dynamical modelling of steady-state tracers. In the case of a spherical potential, we develop a likelihood estimator that fits analytical potentials to the system and a non-parametric method ('phase-mark') that reconstructs the potential profile, both assuming only the oPDF. The methods involve no extra assumptions about the tracer distribution function and can be applied to tracers with any arbitrary distribution of orbits, with possible extension to non-spherical potentials. The methods are tested on Monte Carlo samples of steady-state tracers in dark matter haloes and shown to be unbiased as well as efficient. A fully documented C/PYTHON code implementing our method is freely available at a GitHub repository linked from http://icc.dur.ac.uk/data/#oPDF.

  7. Measuring droplet size distributions from overlapping interferometric particle images.

    PubMed

    Bocanegra Evans, Humberto; Dam, Nico; van der Voort, Dennis; Bertens, Guus; van de Water, Willem

    2015-02-01

    Interferometric particle imaging provides a simple way to measure the probability density function (PDF) of droplet sizes from out-focus images. The optical setup is straightforward, but the interpretation of the data is a problem when particle images overlap. We propose a new way to analyze the images. The emphasis is not on a precise identification of droplets, but on obtaining a good estimate of the PDF of droplet sizes in the case of overlapping particle images. The algorithm is tested using synthetic and experimental data. We next use these methods to measure the PDF of droplet sizes produced by spinning disk aerosol generators. The mean primary droplet diameter agrees with predictions from the literature, but we find a broad distribution of satellite droplet sizes.

  8. A comprehensive model to determine the effects of temperature and species fluctuations on reaction rates in turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Foy, E.; Ronan, G.; Chinitz, W.

    1982-01-01

    A principal element to be derived from modeling turbulent reacting flows is an expression for the reaction rates of the various species involved in the particular combustion process under consideration. A temperature-derived most-likely probability density function (pdf) was used to describe the effects of temperature fluctuations on the Arrhenius reaction rate constant. A most-likely bivariate pdf described the combined effects of temperature and species concentration fluctuations on the reaction rate. A criterion is developed for the use of an "appropriate" temperature pdf. The formulation of models to calculate the mean turbulent Arrhenius reaction rate constant and the mean turbulent reaction rate is considered, and the results of calculations using these models are presented.
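
    The central point, that the mean turbulent rate constant differs from the rate constant evaluated at the mean temperature, can be illustrated with a pdf-weighted Arrhenius average; the rate parameters and the clipped-Gaussian temperature pdf below are assumptions for illustration, not the paper's most-likely pdf:

        import numpy as np

        # Arrhenius rate constant k(T) = A * exp(-Ta / T); A and Ta are illustrative.
        A, Ta = 1.0e10, 15_000.0
        k = lambda T: A * np.exp(-Ta / T)

        # Assumed temperature pdf: Gaussian clipped to [800 K, 2400 K], renormalized.
        Tbar, Trms = 1500.0, 200.0
        T = np.linspace(800.0, 2400.0, 4001)
        dT = T[1] - T[0]
        p = np.exp(-0.5 * ((T - Tbar) / Trms) ** 2)
        p /= np.sum(p) * dT

        k_at_mean = k(Tbar)                  # rate constant at the mean temperature
        k_mean = np.sum(k(T) * p) * dT       # pdf-weighted mean turbulent rate constant
        print(f"k(Tbar) = {k_at_mean:.3e},  <k> = {k_mean:.3e},  ratio = {k_mean / k_at_mean:.1f}")

    Because k(T) is a strongly convex function of temperature, the pdf-weighted mean exceeds the rate at the mean temperature, which is why the choice of temperature pdf matters for the mean reaction rate.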

  9. Modulation Based on Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
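
    A minimal sketch of the sampling-and-histogram step (an illustration, not the proposed modulator): sampling a unit sinusoid uniformly over one full cycle produces a histogram that approaches the textbook arcsine density 1/(pi*sqrt(1 - x^2)).

        import numpy as np

        # Uniformly sample one full cycle of a unit-amplitude sinusoid.
        t = np.linspace(0.0, 2.0 * np.pi, 200_000, endpoint=False)
        samples = np.sin(t)

        # Empirical PDF (histogram) of the sample values.
        hist, edges = np.histogram(samples, bins=50, range=(-1.0, 1.0), density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])

        # Compare with the arcsine density away from the endpoint singularities.
        arcsine = 1.0 / (np.pi * np.sqrt(1.0 - centers ** 2))
        print(f"max interior deviation: {np.abs(hist - arcsine)[5:-5].max():.4f}")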

  10. Estimation of Characteristics of Echo Envelope Using RF Echo Signal from the Liver

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Tadashi; Hachiya, Hiroyuki; Kamiyama, Naohisa; Ikeda, Kazuki; Moriyasu, Norifumi

    2001-05-01

    To realize quantitative diagnosis of liver cirrhosis, we have been analyzing the probability density function (PDF) of echo amplitude using B-mode images. However, the B-mode image is affected by the various signal and image processing techniques used in the diagnostic equipment, so a detailed and quantitative analysis is very difficult. In this paper, we analyze the PDF of echo amplitude using the RF echo signal and B-mode images of normal and cirrhotic livers, and compare both results to examine the validity of using the RF echo signal.

  11. Theory of earthquakes interevent times applied to financial markets

    NASA Astrophysics Data System (ADS)

    Jagielski, Maciej; Kutner, Ryszard; Sornette, Didier

    2017-10-01

    We analyze the probability density function (PDF) of waiting times between financial loss exceedances. The empirical PDFs are fitted with the self-excited Hawkes conditional Poisson process with a long power law memory kernel. The Hawkes process is the simplest extension of the Poisson process that takes into account how past events influence the occurrence of future events. By analyzing the empirical data for 15 different financial assets, we show that the formalism of the Hawkes process used for earthquakes can successfully model the PDF of interevent times between successive market losses.
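
    A sketch of such a process, simulated by Ogata-style thinning with an assumed power-law kernel (the parameter values are illustrative, not the fitted values from the paper):

        import numpy as np

        rng = np.random.default_rng(3)

        # Hawkes intensity: lambda(t) = mu + sum_{t_i < t} K / (t - t_i + c)^(1 + theta).
        # The branching ratio K / (theta * c^theta) is kept below one (subcritical).
        mu, K, c, theta, T = 0.1, 0.05, 0.01, 0.3, 5_000.0

        def lam(t, ev):
            return mu + np.sum(K / (t - ev + c) ** (1.0 + theta))

        ev, t = np.empty(0), 0.0
        while True:
            lam_bar = lam(t, ev)             # valid upper bound: the kernel only decays
            t += rng.exponential(1.0 / lam_bar)
            if t >= T:
                break
            if rng.uniform() < lam(t, ev) / lam_bar:
                ev = np.append(ev, t)        # accepted event

        waits = np.diff(ev)                  # interevent times, cf. the empirical PDFs
        print(f"{ev.size} events; mean wait {waits.mean():.1f}, max wait {waits.max():.1f}")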

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, D.O.

    In a previous paper, Smallwood and Paez (1991) showed how to generate realizations of partially coherent stationary normal time histories with a specified cross-spectral density matrix. This procedure is generalized for the case of multiple inputs with a specified cross-spectral density function and a specified marginal probability density function (pdf) for each of the inputs. The specified pdfs are not required to be Gaussian. A zero memory nonlinear (ZMNL) function is developed for each input to transform a Gaussian or normal time history into a time history with a specified non-Gaussian distribution. The transformation functions have the property that a transformed time history will have nearly the same auto spectral density as the original time history. A vector of Gaussian time histories is then generated with the specified cross-spectral density matrix. These waveforms are then transformed into the required time history realizations using the ZMNL function.
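
    A generic version of such a ZMNL map is the probability integral transform u = F^{-1}(Phi(g)), with Phi the standard normal CDF and F the target marginal CDF. The sketch below (an illustration under assumed targets, not Smallwood's exact construction) transforms a coloured Gaussian series to a Laplacian marginal and checks that the correlation structure is nearly preserved:

        import numpy as np
        from scipy import stats, signal

        rng = np.random.default_rng(4)

        # Coloured Gaussian input: white noise through a one-pole low-pass filter.
        g = signal.lfilter([1.0], [1.0, -0.9], rng.normal(size=200_000))
        g /= g.std()

        # ZMNL map via the probability integral transform u = F^{-1}(Phi(g)).
        target = stats.laplace(loc=0.0, scale=1.0)   # assumed non-Gaussian marginal
        u = target.ppf(stats.norm.cdf(g))

        def lag1(x):                                 # lag-one autocorrelation
            return np.corrcoef(x[:-1], x[1:])[0, 1]

        print(f"lag-1 autocorrelation: {lag1(g):.3f} (Gaussian) vs {lag1(u):.3f} (Laplace)")
        print(f"excess kurtosis of u: {stats.kurtosis(u):.2f} (Laplace value: 3)")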

  13. An electroencephalographic Peak Density Function to detect memorization during the observation of TV commercials.

    PubMed

    Vecchiato, G; Di Flumeri, G; Maglione, A G; Cherubino, P; Kong, W; Trettel, A; Babiloni, F

    2014-01-01

    Nowadays, there is growing interest in measuring the impact of advertisements through the estimation of cerebral reactions. Several techniques and methods are used and discussed in consumer neuroscience. In this context, the present paper provides a novel method to estimate the level of memorization occurring in subjects during the observation of TV commercials. In particular, the present work introduces the Peak Density Function (PDF) as an electroencephalographic (EEG) time-varying variable that is correlated with cerebral events of memorization of TV commercials. The analysis was performed on the EEG activity recorded from twenty healthy subjects during exposure to several advertisements. After the EEG recordings, an interview was performed to obtain information about the memorized scenes for all the video clips watched by the subjects. This information was correlated with the occurrence of transient peaks of EEG synchronization in the theta band by computing the PDF. The present results show that the increase of the PDF is positively correlated, scene by scene (R=0.46, p<0.01), with the spontaneous recall of the subjects. This technology could help marketers overcome the drawbacks of standard marketing tools (e.g., interviews, focus groups) when analyzing the impact of advertisements.

  14. Modeling of Dissipation Element Statistics in Turbulent Non-Premixed Jet Flames

    NASA Astrophysics Data System (ADS)

    Denker, Dominik; Attili, Antonio; Boschung, Jonas; Hennig, Fabian; Pitsch, Heinz

    2017-11-01

    The dissipation element (DE) analysis is a method for analyzing and compartmentalizing turbulent scalar fields. DEs can be described by two parameters, namely the Euclidean distance l between their extremal points and the scalar difference Δϕ between these points. The joint probability density function (jPDF) of these two parameters, P(Δϕ, l), is expected to suffice for a statistical reconstruction of the scalar field. In addition, reacting scalars show a strong correlation with these DE parameters in both premixed and non-premixed flames. Normalized DE statistics show a remarkable invariance towards changes in Reynolds number. This feature of DE statistics was exploited in a model, based on a Boltzmann-type evolution equation, for the probability density function (PDF) of the distance between the extremal points, P(l), in isotropic turbulence. Later, this model was extended to the jPDF P(Δϕ, l) and then adapted for use in free shear flows. The effect of heat release on the scalar scales and DE statistics is investigated, and an extended model for non-premixed jet flames is introduced, which accounts for the presence of chemical reactions. This new model is validated against a series of DNS of temporally evolving jet flames. European Research Council Project "Milestone".

  15. Representation of Probability Density Functions from Orbit Determination using the Particle Filter

    NASA Technical Reports Server (NTRS)

    Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell

    2012-01-01

    Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy depends on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using Independent Component Analysis (ICA) as a non-Gaussian dimensionality reduction method that is capable of maintaining the higher order statistical information obtained using the PF. Methods such as Principal Component Analysis (PCA) utilize statistics only up to second order and hence will not suffice to maintain maximum information content. Both PCA and ICA are applied to two scenarios, one involving a highly eccentric orbit with a lower a priori uncertainty covariance and the other a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of ICA in relation to PCA.

  16. Understanding star formation in molecular clouds. I. Effects of line-of-sight contamination on the column density structure

    NASA Astrophysics Data System (ADS)

    Schneider, N.; Ossenkopf, V.; Csengeri, T.; Klessen, R. S.; Federrath, C.; Tremblin, P.; Girichidis, P.; Bontemps, S.; André, Ph.

    2015-03-01

    Column-density maps of molecular clouds are one of the most important observables in the context of molecular cloud and star-formation (SF) studies. With the Herschel satellite it is now possible to precisely determine the column density from dust emission, which is the best tracer of the bulk of material in molecular clouds. However, line-of-sight (LOS) contamination from fore- or background clouds can lead to overestimating the dust emission of molecular clouds, in particular for distant clouds. This implies values that are too high for the column density and mass, which can potentially lead to an incorrect physical interpretation of the column density probability distribution function (PDF). In this paper, we use observations and simulations to demonstrate how LOS contamination affects the PDF. We apply a first-order approximation (removing a constant level) to the molecular clouds of Auriga and Maddalena (low-mass star-forming) and Carina and NGC 3603 (both high-mass SF regions). In perfect agreement with the simulations, we find that the PDFs become broader, the peak shifts to lower column densities, and the power-law tail of the PDF at higher column densities flattens after correction. All corrected PDFs have a lognormal part for low column densities with a peak at Av ~ 2 mag, a deviation point (DP) from the lognormal at Av(DP) ~ 4-5 mag, and a power-law tail for higher column densities. Assuming an equivalent spherical density distribution ρ ∝ r^-α, the slopes of the power-law tails correspond to α_PDF = 1.8, 1.75, and 2.5 for Auriga, Carina, and NGC 3603. These numbers agree within the uncertainties with the values of α ≈ 1.5, 1.8, and 2.5 determined from the slope γ (with α = 1-γ) obtained from the radial column density profiles (N ∝ r^γ). While α ~ 1.5-2 is consistent with a structure dominated by collapse (local free-fall collapse of individual cores and clumps and global collapse), the higher value of α > 2 for NGC 3603 requires a physical process that leads to additional compression (e.g., expanding ionization fronts). From the small sample of our study, we find that clouds forming only low-mass stars and those also forming high-mass stars have slightly different values for their average column density (1.8 × 10^21 cm^-2 vs. 3.0 × 10^21 cm^-2), and they display differences in the overall column density structure. Massive clouds assemble more gas in smaller cloud volumes than low-mass SF ones. However, for both cloud types, the transition of the PDF from lognormal shape into power-law tail is found at the same column density (at Av ~ 4-5 mag). Low-mass and high-mass SF clouds then have the same low column density distribution, most likely dominated by supersonic turbulence. At higher column densities, collapse and external pressure can form the power-law tail. The relative importance of the two processes can vary between clouds and thus lead to the observed differences in PDF and column density structure. Appendices are available in electronic form at http://www.aanda.org. Herschel maps as FITS files are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/575/A79

  17. Enhancement factor statistics of surface enhanced Raman scattering in multiscale heterostructures of nanoparticles.

    PubMed

    Zito, Gianluigi; Rusciano, Giulia; Sasso, Antonio

    2016-08-07

    Suitable metal nanostructures may induce surface-enhanced Raman scattering (SERS) enhancement factors (EFs) large enough to reach single-molecule sensitivity. However, the gap hot-spot EF probability density function (PDF) has the character of a long-tail distribution, which dramatically undermines the reproducibility of SERS experiments. Herein, we carry out electrodynamic calculations based on a 3D finite element method for two plasmonic nanostructures, combined with Monte Carlo simulations of the EF statistics under different external conditions. We compare the PDF produced by a homodimer of nanoparticles with that provided by a self-similar trimer. We show that the PDF is sensitive to the spatial distribution of near-field enhancement specifically supported by the nanostructure geometry. Breaking the symmetry of the plasmonic system induces particular modulations of the PDF tail resembling a multiple Poisson distribution. We also study the influence that molecular diffusion towards the hottest hot-spot, or selective hot-spot targeting, might have on the EF PDF. Our results quantitatively assess the possibility of designing the response of a SERS substrate so as to contain the intrinsic EF PDF variance and, in principle, significantly improve the reproducibility of SERS experiments.

  18. Average symbol error rate for M-ary quadrature amplitude modulation in generalized atmospheric turbulence and misalignment errors

    NASA Astrophysics Data System (ADS)

    Sharma, Prabhat Kumar

    2016-11-01

    A framework is presented for the analysis of the average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering zero-boresight misalignment errors at the receiver. The analysis presented here assumes a unified expression for the PDF of the channel coefficient which incorporates M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using the Q-function approximation. Further, the presented results are supported by Monte Carlo simulations.

  19. Stratified turbulent Bunsen flames: flame surface analysis and flame surface density modelling

    NASA Astrophysics Data System (ADS)

    Ramaekers, W. J. S.; van Oijen, J. A.; de Goey, L. P. H.

    2012-12-01

    In this paper it is investigated whether the Flame Surface Density (FSD) model, developed for turbulent premixed combustion, is also applicable to stratified flames. Direct Numerical Simulations (DNS) of turbulent stratified Bunsen flames have been carried out, using the Flamelet Generated Manifold (FGM) reduction method for the reaction kinetics. Before examining the suitability of the FSD model, flame surfaces are characterized in terms of thickness, curvature and stratification. All flames are in the Thin Reaction Zones regime, and the maximum equivalence ratio range covers 0.1⩽φ⩽1.3. For all flames, local flame thicknesses correspond very well to those observed in stretchless, steady premixed flamelets. Extracted curvature radii and mixing length scales are significantly larger than the flame thickness, implying that the stratified flames all burn in a premixed mode. The remaining challenge is accounting for the large variation in the (subfilter) mass burning rate. In this contribution, the FSD model is shown to be applicable to Large Eddy Simulations (LES) of stratified flames for the equivalence ratio range 0.1⩽φ⩽1.3. Subfilter mass burning rate variations are taken into account by a subfilter Probability Density Function (PDF) for the mixture fraction, on which the mass burning rate directly depends. A priori analysis points out that for small stratification (0.4⩽φ⩽1.0), replacing the subfilter PDF (obtained from DNS data) by the corresponding Dirac function is appropriate: integration of the Dirac function with the mass burning rate m=m(φ) then adequately models the filtered mass burning rate obtained from filtered DNS data. For a larger stratification (0.1⩽φ⩽1.3), and filter widths up to ten flame thicknesses, a β-function for the subfilter PDF yields substantially better predictions than a Dirac function. Finally, inclusion of a simple algebraic model for the FSD resulted in only small additional deviations from the DNS data, rendering this approach promising for application in LES.
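
    The β-PDF closure of the filtered mass burning rate can be sketched as follows; the burning-rate curve m(φ), the subfilter mean and variance, and the mapping of φ onto [0, 1] are all assumptions for illustration:

        import numpy as np
        from scipy import stats

        # Illustrative mass burning rate versus equivalence ratio, peaking near
        # stoichiometry (not a fitted flamelet result).
        def m(phi):
            return np.exp(-8.0 * (phi - 1.0) ** 2)

        # Assumed subfilter statistics of phi, rescaled to the range [0.1, 1.3].
        phi_lo, phi_hi = 0.1, 1.3
        mean, var = 0.7, 0.02

        x_mean = (mean - phi_lo) / (phi_hi - phi_lo)           # map to [0, 1]
        x_var = var / (phi_hi - phi_lo) ** 2
        a = x_mean * (x_mean * (1.0 - x_mean) / x_var - 1.0)   # beta shape parameters
        b = (1.0 - x_mean) * (x_mean * (1.0 - x_mean) / x_var - 1.0)

        x = np.linspace(1e-4, 1.0 - 1e-4, 2001)
        phi = phi_lo + (phi_hi - phi_lo) * x
        dphi = phi[1] - phi[0]
        pdf = stats.beta.pdf(x, a, b) / (phi_hi - phi_lo)

        m_beta = np.sum(m(phi) * pdf) * dphi   # beta-PDF-filtered burning rate
        m_dirac = m(mean)                      # Dirac (no subfilter variance) closure
        print(f"beta closure: {m_beta:.3f}  vs  Dirac closure: {m_dirac:.3f}")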

  20. On the link between column density distribution and density scaling relation in star formation regions

    NASA Astrophysics Data System (ADS)

    Veltchev, Todor; Donkov, Sava; Stanchev, Orlin

    2017-07-01

    We present a method to derive the density scaling relation ⟨ρ⟩ ∝ L^{-α} in regions of star formation or in their turbulent vicinities from a straightforward binning of the column-density distribution (N-pdf). The outcome of the method is studied for three types of N-pdf: power law (7/5≤α≤5/3), lognormal (0.7≲α≲1.4) and a combination of lognormals. In the last case, the method of Stanchev et al. (2015) was also applied for comparison and a very weak (or close to zero) correlation was found. We conclude that the considered 'binning approach' reflects rather the local morphology of the N-pdf, with no reference to the physical conditions in the considered region. The rough consistency of the derived slopes with the widely adopted Larson (1981) value α ≈ 1.1 is suggested to support claims that the density-size relation in molecular clouds is indeed an artifact of the observed N-pdf.

  1. The role of presumed probability density functions in the simulation of nonpremixed turbulent combustion

    NASA Astrophysics Data System (ADS)

    Coclite, A.; Pascazio, G.; De Palma, P.; Cutrone, L.

    2016-07-01

    Flamelet-Progress-Variable (FPV) combustion models allow the evaluation of all thermochemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e.g., Favre averages) of chemical quantities. The choice of the PDF is a compromise between computational cost and accuracy. The aim of this paper is to investigate the influence of the PDF choice and its modeling aspects on the prediction of turbulent combustion. Three different models are considered: the standard one, based on the choice of a β-distribution for Z and a Dirac distribution for C; a model employing a β-distribution for both Z and C; and a third model obtained using a β-distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, takes into account neither the interaction between turbulence and chemical kinetics nor the dependence of the progress variable on its variance in addition to its mean. The SMLD approach establishes a systematic framework to incorporate information from an arbitrary number of moments, thus providing an improvement over conventionally employed presumed PDF closure models. The rationale behind the choice of the three PDFs is described in some detail, and the prediction capability of the corresponding models is tested against well-known test cases, namely the Sandia flames and H2-air supersonic combustion.

  2. The density structure and star formation rate of non-isothermal polytropic turbulence

    NASA Astrophysics Data System (ADS)

    Federrath, Christoph; Banerjee, Supratik

    2015-04-01

    The interstellar medium of galaxies is governed by supersonic turbulence, which likely controls the star formation rate (SFR) and the initial mass function (IMF). Interstellar turbulence is non-universal, with a wide range of Mach numbers, magnetic field strengths and driving mechanisms. Although some of these parameters have been explored, most previous works assumed that the gas is isothermal. However, we know that cold molecular clouds form out of the warm atomic medium, with the gas passing through chemical and thermodynamic phases that are not isothermal. Here we determine the role of temperature variations by modelling non-isothermal turbulence with a polytropic equation of state (EOS), where pressure and temperature are functions of gas density, P ∝ ρ^Γ, T ∝ ρ^(Γ-1). We use grid resolutions of 2048^3 cells and compare polytropic exponents Γ = 0.7 (soft EOS), Γ = 1 (isothermal EOS) and Γ = 5/3 (stiff EOS). We find a complex network of non-isothermal filaments with more small-scale fragmentation occurring for Γ < 1, while Γ > 1 smoothes out density contrasts. The density probability distribution function (PDF) is significantly affected by temperature variations, with a power-law tail developing at low densities for Γ > 1. In contrast, the PDF becomes closer to a lognormal distribution for Γ ≲ 1. We derive and test a new density variance-Mach number relation that takes Γ into account. This new relation is relevant for theoretical models of the SFR and IMF, because it determines the dense gas mass fraction of a cloud, from which stars form. We derive the SFR as a function of Γ and find that it decreases by a factor of ~5 from Γ = 0.7 to 5/3.

  3. Verification of Anderson Superexchange in MnO via Magnetic Pair Distribution Function Analysis and ab initio Theory.

    PubMed

    Frandsen, Benjamin A; Brunelli, Michela; Page, Katharine; Uemura, Yasutomo J; Staunton, Julie B; Billinge, Simon J L

    2016-05-13

    We present a temperature-dependent atomic and magnetic pair distribution function (PDF) analysis of neutron total scattering measurements of antiferromagnetic MnO, an archetypal strongly correlated transition-metal oxide. The known antiferromagnetic ground-state structure fits the low-temperature data closely with refined parameters that agree with conventional techniques, confirming the reliability of the newly developed magnetic PDF method. The measurements performed in the paramagnetic phase reveal significant short-range magnetic correlations on a ∼1 nm length scale that differ substantially from the low-temperature long-range spin arrangement. Ab initio calculations using a self-interaction-corrected local spin density approximation of density functional theory predict magnetic interactions dominated by Anderson superexchange and reproduce the measured short-range magnetic correlations to a high degree of accuracy. Further calculations simulating an additional contribution from a direct exchange interaction show much worse agreement with the data. The Anderson superexchange model for MnO is thus verified by experimentation and confirmed by ab initio theory.

  4. Verification of Anderson superexchange in MnO via magnetic pair distribution function analysis and ab initio theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benjamin A. Frandsen; Brunelli, Michela; Page, Katharine

    Here, we present a temperature-dependent atomic and magnetic pair distribution function (PDF) analysis of neutron total scattering measurements of antiferromagnetic MnO, an archetypal strongly correlated transition-metal oxide. The known antiferromagnetic ground-state structure fits the low-temperature data closely with refined parameters that agree with conventional techniques, confirming the reliability of the newly developed magnetic PDF method. The measurements performed in the paramagnetic phase reveal significant short-range magnetic correlations on a ~1 nm length scale that differ substantially from the low-temperature long-range spin arrangement. Ab initio calculations using a self-interaction-corrected local spin density approximation of density functional theory predict magnetic interactions dominated by Anderson superexchange and reproduce the measured short-range magnetic correlations to a high degree of accuracy. Further calculations simulating an additional contribution from a direct exchange interaction show much worse agreement with the data. The Anderson superexchange model for MnO is thus verified by experimentation and confirmed by ab initio theory.

  5. Verification of Anderson superexchange in MnO via magnetic pair distribution function analysis and ab initio theory

    DOE PAGES

    Benjamin A. Frandsen; Brunelli, Michela; Page, Katharine; ...

    2016-05-11

    Here, we present a temperature-dependent atomic and magnetic pair distribution function (PDF) analysis of neutron total scattering measurements of antiferromagnetic MnO, an archetypal strongly correlated transition-metal oxide. The known antiferromagnetic ground-state structure fits the low-temperature data closely with refined parameters that agree with conventional techniques, confirming the reliability of the newly developed magnetic PDF method. The measurements performed in the paramagnetic phase reveal significant short-range magnetic correlations on a ~1 nm length scale that differ substantially from the low-temperature long-range spin arrangement. Ab initio calculations using a self-interaction-corrected local spin density approximation of density functional theory predict magnetic interactions dominated by Anderson superexchange and reproduce the measured short-range magnetic correlations to a high degree of accuracy. Further calculations simulating an additional contribution from a direct exchange interaction show much worse agreement with the data. The Anderson superexchange model for MnO is thus verified by experimentation and confirmed by ab initio theory.

  6. Probability density function learning by unsupervised neurons.

    PubMed

    Fiori, S

    2001-10-01

    In a recent work, we introduced the concept of the pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for this structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis on the universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as the results of several experiments performed on real-world problems and signals.

  7. A Discrete Probability Function Method for the Equation of Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A discrete probability function (DPF) method for the equation of radiative transfer is derived. The DPF is defined as the integral of the probability density function (PDF) over a discrete interval. The derivation allows the evaluation of the PDF of intensities leaving desired radiation paths, including turbulence-radiation interactions, without the use of computer-intensive stochastic methods. The DPF method has a distinct advantage over conventional PDF methods, since the creation of a partial differential equation from the equation of transfer is avoided. Further, convergence of all moments of intensity is guaranteed at the basic level of simulation, unlike the stochastic method, where the number of realizations needed for convergence of higher order moments increases rapidly. The DPF method is described for a representative path with a spatial discretization of approximately the integral length scale. The results show good agreement with measurements in a propylene/air flame, except for the effects of intermittency resulting from highly correlated realizations. The method can be extended to the treatment of spatial correlations as described in the Appendix. However, information regarding spatial correlations in turbulent flames is needed prior to the execution of this extension.

  8. EUPDF-II: An Eulerian Joint Scalar Monte Carlo PDF Module : User's Manual

    NASA Technical Reports Server (NTRS)

    Raju, M. S.; Liu, Nan-Suey (Technical Monitor)

    2004-01-01

    EUPDF-II provides the solution for the species and temperature fields based on an evolution equation for the PDF (Probability Density Function), and it is developed mainly for application to sprays, combustion, parallel computing, and unstructured grids. It is designed to be massively parallel and can easily be coupled with any existing gas-phase CFD and spray solvers. The solver accommodates the use of an unstructured mesh with mixed elements of triangular, quadrilateral, and/or tetrahedral type. The manual provides the user with an understanding of the various models involved in the PDF formulation, the code structure and solution algorithm, and various other issues related to parallelization and coupling with other solvers. The source code of EUPDF-II will be available with the National Combustion Code (NCC) as a complete package.

  9. A comprehensive model to determine the effects of temperature and species fluctuations on reaction rates in turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Goldstein, D.; Magnotti, F.; Chinitz, W.

    1983-01-01

    Reaction rates in turbulent, reacting flows are reviewed. Assumed probability density function (pdf) modeling of reaction rates is investigated in relation to a three-variable pdf employing a 'most likely pdf' model. Chemical kinetic mechanisms treating hydrogen-air combustion are studied. Perfectly stirred reactor modeling of flame-stabilizing recirculation regions was used to investigate the stable flame regions for silane, hydrogen, methane, and propane, and for certain mixtures thereof. It is concluded that, in general, silane can be counted upon to stabilize flames only when the overall fuel-air ratio is close to or greater than unity. For lean flames, silane may tend to destabilize the flame. Other factors favoring stable flames are high initial reactant temperatures and system pressure.

  10. Limit theorems for Lévy walks in d dimensions: rare and bulk fluctuations

    NASA Astrophysics Data System (ADS)

    Fouxon, Itzhak; Denisov, Sergey; Zaburdaev, Vasily; Barkai, Eli

    2017-04-01

    We consider super-diffusive Lévy walks in d ≥ 2 dimensions when the duration of a single step, i.e. a ballistic motion performed by a walker, is governed by a power-law tailed distribution of infinite variance and finite mean. We demonstrate that the probability density function (PDF) of the coordinate of the random walker has two different scaling limits at large times. One limit describes the bulk of the PDF. It is the d-dimensional generalization of the one-dimensional Lévy distribution and is the counterpart of the central limit theorem (CLT) for random walks with finite dispersion. In contrast with the one-dimensional Lévy distribution and the CLT, this distribution does not have a universal shape. The PDF reflects the anisotropy of the single-step statistics however large the time is. The other scaling limit, the so-called 'infinite density', describes the tail of the PDF, which determines the second (dispersion) and higher moments of the PDF. This limit repeats the angular structure of the PDF of velocity in a single step. A typical realization of the walk consists of anomalous diffusive motion (described by the anisotropic d-dimensional Lévy distribution) interspersed with long ballistic flights (described by the infinite density). The long flights are rare, but due to them the coordinate increases so much that their contribution determines the dispersion. We illustrate the concept by considering two types of Lévy walks, with isotropic and anisotropic distributions of velocities. Furthermore, we show that for isotropic but otherwise arbitrary velocity distributions the d-dimensional process can be reduced to a one-dimensional Lévy walk. We briefly discuss the consequences of non-universality for the d > 1 dimensional fractional diffusion equation, in particular the non-uniqueness of the fractional Laplacian.
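
    The statement that rare long flights dominate the dispersion is straightforward to reproduce numerically; the sketch below simulates an isotropic two-dimensional Lévy walk with Pareto-distributed flight durations (the speed, tail index and time horizon are assumed values):

        import numpy as np

        rng = np.random.default_rng(5)

        # 2-D Levy walk: constant speed, isotropic directions, flight durations from
        # a Pareto law with tail index 1 < alpha < 2 (infinite variance, finite mean).
        v, alpha, t_end, n_walk = 1.0, 1.5, 1_000.0, 20_000

        x = np.zeros((n_walk, 2))
        t = np.zeros(n_walk)
        active = np.ones(n_walk, dtype=bool)

        while active.any():
            n = int(active.sum())
            tau = (1.0 - rng.uniform(size=n)) ** (-1.0 / alpha)   # Pareto, tau >= 1
            tau = np.minimum(tau, t_end - t[active])              # truncate last flight
            ang = rng.uniform(0.0, 2.0 * np.pi, n)
            x[active] += (v * tau)[:, None] * np.column_stack((np.cos(ang), np.sin(ang)))
            t[active] += tau
            active[active] = t[active] < t_end

        # For 1 < alpha < 2 the dispersion is superdiffusive, <r^2> ~ t^(3 - alpha).
        r2 = (x ** 2).sum(axis=1)
        print(f"<r^2> = {r2.mean():.3g} at t = {t_end:.0f} (ballistic bound {(v * t_end) ** 2:.3g})")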

  11. A Validation Summary of the NCC Turbulent Reacting/non-reacting Spray Computations

    NASA Technical Reports Server (NTRS)

    Raju, M. S.; Liu, N.-S. (Technical Monitor)

    2000-01-01

    This paper provides a validation summary of the spray computations performed as part of the NCC (National Combustion Code) development activity. NCC is being developed with the aim of advancing the current prediction tools used in the design of advanced technology combustors based on multidimensional computational methods. The solution procedure combines the novelty of the application of the scalar Monte Carlo PDF (Probability Density Function) method to the modeling of turbulent spray flames with the ability to perform the computations on unstructured grids with parallel computing. The calculation procedure was applied to predict the flow properties of three different spray cases. One is a nonswirling unconfined reacting spray, the second is a nonswirling unconfined nonreacting spray, and the third is a confined swirl-stabilized spray flame. The comparisons involving both gas-phase and droplet velocities, droplet size distributions, and gas-phase temperatures show reasonable agreement with the available experimental data. The comparisons involve both the results obtained from the use of the Monte Carlo PDF method and those obtained from the conventional computational fluid dynamics (CFD) solution. Detailed comparisons in the case of a reacting nonswirling spray clearly highlight the importance of chemistry/turbulence interactions in the modeling of reacting sprays. The results from the PDF and non-PDF methods were found to be markedly different, and the PDF solution is closer to the reported experimental data. The PDF computations predict that most of the combustion occurs in a predominantly diffusion-flame environment. However, the non-PDF solution predicts incorrectly that the combustion occurs in a predominantly vaporization-controlled regime. The Monte Carlo temperature distribution shows that the functional form of the PDF for the temperature fluctuations varies substantially from point to point. The results also bring to the fore some of the deficiencies associated with the use of assumed-shape PDF methods in spray computations.

  12. An improved numerical method for the kernel density functional estimation of disperse flow

    NASA Astrophysics Data System (ADS)

    Smith, Timothy; Ranjan, Reetesh; Pantano, Carlos

    2014-11-01

    We present an improved numerical method to solve the transport equation for the one-point particle density function (pdf), which can be used to model disperse flows. The transport equation, a hyperbolic partial differential equation (PDE) with a source term, is derived from the Lagrangian equations for a dilute particle system by treating position and velocity as state-space variables. The method approximates the pdf by a discrete mixture of kernel density functions (KDFs) with space- and time-varying parameters and performs a global Rayleigh-Ritz-like least-squares minimization on the state space of velocity. Such an approximation leads to a hyperbolic system of PDEs for the KDF parameters that cannot be written completely in conservation form. This system is solved using a numerical method that is path-consistent, according to the theory of non-conservative hyperbolic equations. The resulting formulation is a Roe-like update that utilizes the local eigensystem information of the linearized system of PDEs. We present the formulation of the base method, its higher-order extension and a further regularization to demonstrate that the method can predict statistics of disperse flows in an accurate, consistent and efficient manner. This project was funded by NSF Project NSF-DMS 1318161.

  13. Non-Gaussian behavior in jamming / unjamming transition in dense granular materials

    NASA Astrophysics Data System (ADS)

    Atman, A. P. F.; Kolb, E.; Combe, G.; Paiva, H. A.; Martins, G. H. B.

    2013-06-01

    Experiments on the penetration of a cylindrical intruder into a bidimensional dense, disordered granular medium were reported recently, showing the jamming/unjamming transition. In the present work, we perform molecular dynamics simulations with the same geometry in order to assess both kinematic and static features of the jamming/unjamming transition. We study the statistics of the particle velocities in the neighborhood of the intruder to show that both experiments and simulations present the same qualitative behavior. We observe that the probability density functions (PDFs) of the velocities deviate from Gaussian depending on the packing fraction of the granular assembly. In order to quantify these deviations, we fit the PDFs with a q-Gaussian (Tsallis) function. The q-value can be an indication of the presence of long-range correlations in the system. We compare the fitted PDFs with those obtained using a stretched exponential, and sketch some conclusions concerning the nature of the correlations in a confined granular flow.
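
    As a rough illustration of the q-Gaussian fitting step, the sketch below fits the Tsallis form to a histogram of synthetic heavy-tailed velocity data with SciPy's curve_fit; the data set, bin count and initial guesses are illustrative assumptions, not values from the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def q_gaussian(v, c, q, beta):
          # Tsallis q-Gaussian; reduces to a Gaussian as q -> 1,
          # with heavier-than-Gaussian tails for q > 1.
          base = np.maximum(1.0 - (1.0 - q) * beta * v**2, 0.0)
          return c * base ** (1.0 / (1.0 - q))

      # Synthetic stand-in for measured particle velocities (heavy-tailed).
      rng = np.random.default_rng(0)
      v = 0.1 * rng.standard_t(df=3, size=200_000)

      # Empirical PDF from a histogram, then a least-squares fit of the q-Gaussian.
      hist, edges = np.histogram(v, bins=200, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      (c, q, beta), _ = curve_fit(q_gaussian, centers, hist, p0=(1.0, 1.2, 50.0))
      print(f"fitted q = {q:.3f}")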

  14. On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The difficulty lies in the fact that the reaction rate is in general an exponential function of the temperature, and the higher-order correlations in the conventional moment closure of the chemical source term cannot be neglected, making the application of such models impractical. The probability density function (pdf) method offers an attractive alternative: in a pdf model, the chemical source terms are closed and do not require additional models. A grid-dependent Monte Carlo scheme was studied, since it is a logical alternative: the number of computer operations increases only linearly with the number of independent variables, as compared to the exponential increase in a conventional finite difference scheme. A new algorithm was devised that satisfies a restriction in the case of pure diffusion or uniform flow problems. Although for nonuniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.

  15. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus, estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
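
    As a small self-contained sketch of parameter estimation from the empirical characteristic function, the snippet below fits the characteristic function of a Gamma distribution (the stationary distribution of the basic shot-noise model with exponential pulses and exponentially distributed amplitudes) to synthetic samples; the sample size, frequency grid and starting values are assumptions for illustration only.

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(1)
      x = rng.gamma(shape=2.5, scale=1.2, size=20_000)   # synthetic process samples

      u = np.linspace(0.05, 3.0, 60)
      ecf = np.exp(1j * np.outer(u, x)).mean(axis=1)     # empirical characteristic function

      def model_cf(params, u):
          # Characteristic function of a Gamma(shape, scale) distribution.
          shape, scale = params
          return (1.0 - 1j * u * scale) ** (-shape)

      def residuals(params):
          r = model_cf(params, u) - ecf
          return np.concatenate([r.real, r.imag])        # fit real and imaginary parts

      fit = least_squares(residuals, x0=[1.0, 1.0])
      print("estimated (shape, scale):", fit.x)          # close to (2.5, 1.2)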

  16. Statistical properties of two sine waves in Gaussian noise.

    NASA Technical Reports Server (NTRS)

    Esposito, R.; Wilson, L. R.

    1973-01-01

    A detailed study is presented of some statistical properties of a stochastic process that consists of the sum of two sine waves of unknown relative phase and a normal process. Since none of the statistics investigated seems to yield a closed-form expression, all the derivations are cast in a form that is particularly suitable for machine computation. Specifically, results are presented for the probability density function (pdf) of the envelope and the instantaneous value, the moments of these distributions, and the relative cumulative distribution function (cdf).
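
    Since the envelope pdf has no closed form, a Monte Carlo estimate is the natural machine computation; the sketch below histograms the envelope of two phasors with random relative phase plus complex Gaussian noise (amplitudes and noise level are assumed, illustrative values).

      import numpy as np

      rng = np.random.default_rng(2)
      n = 1_000_000
      a1, a2, sigma = 1.0, 0.7, 0.5           # tone amplitudes and per-quadrature noise std

      # Two tones with unknown relative phase plus narrowband Gaussian noise,
      # represented by their complex baseband phasors.
      phi = rng.uniform(0.0, 2.0 * np.pi, n)
      z = a1 + a2 * np.exp(1j * phi) + sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
      r = np.abs(z)                           # envelope samples

      pdf, edges = np.histogram(r, bins=300, density=True)  # numerical envelope pdf
      print("second moment:", (r**2).mean())  # analytically a1**2 + a2**2 + 2*sigma**2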

  17. Probability density function shape sensitivity in the statistical modeling of turbulent particle dispersion

    NASA Technical Reports Server (NTRS)

    Litchford, Ron J.; Jeng, San-Mou

    1992-01-01

    The performance of a recently introduced statistical transport model for turbulent particle dispersion is studied here for rigid particles injected into a round turbulent jet. Both uniform and isosceles triangle pdfs are used. The statistical sensitivity to parcel pdf shape is demonstrated.

  18. Conformational statistics of stiff macromolecules as solutions to partial differential equations on the rotation and motion groups

    PubMed

    Chirikjian; Wang

    2000-07-01

    Partial differential equations (PDEs) for the probability density function (PDF) of the position and orientation of the distal end of a stiff macromolecule relative to its proximal end are derived and solved. The Kratky-Porod wormlike chain, the Yamakawa helical wormlike chain, and the original and revised Marko-Siggia models are examples of stiffness models to which the present formulation is applied. The solution technique uses harmonic analysis on the rotation and motion groups to convert the PDEs governing the PDFs of interest into linear algebraic equations which have mathematically elegant solutions.

  19. A k-omega multivariate beta PDF for supersonic turbulent combustion

    NASA Technical Reports Server (NTRS)

    Alexopoulos, G. A.; Baurle, R. A.; Hassan, H. A.

    1993-01-01

    In a recent attempt by the authors at predicting measurements in coaxial supersonic turbulent reacting mixing layers involving H2 and air, a number of discrepancies involving the concentrations and their variances were noted. The turbulence model employed was a one-equation model based on the turbulent kinetic energy, which required the specification of a length scale. To identify the cause of the discrepancy, a coupled k-omega joint probability density function (PDF) model is employed in conjunction with a Navier-Stokes solver. The results show that improvements resulting from a k-omega model are quite modest.

  20. Noise-induced transitions in a double-well oscillator with nonlinear dissipation.

    PubMed

    Semenov, Vladimir V; Neiman, Alexander B; Vadivasova, Tatyana E; Anishchenko, Vadim S

    2016-05-01

    We develop a model of a bistable oscillator with nonlinear dissipation. Using numerical simulation and an electronic circuit realization of this system, we study its response to additive noise excitation. We show that, depending on the noise intensity, the system undergoes multiple qualitative changes in the structure of its steady-state probability density function (PDF). In particular, the PDF exhibits two pitchfork bifurcations versus noise intensity, which we describe using an effective potential and the corresponding normal form of the bifurcation. These stochastic effects are explained by the partition of the phase space by the nullclines of the deterministic oscillator.
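
    A minimal numerical sketch of this kind of study is given below: an Euler-Maruyama ensemble simulation of a double-well oscillator with an assumed generic nonlinear damping term (eps + eta*x**2), not the authors' exact circuit equations, whose steady-state position PDF is estimated by histogram for two noise intensities.

      import numpy as np

      rng = np.random.default_rng(3)

      def steady_state_hist(D, eps=0.2, eta=1.0, dt=1e-3, steps=50_000, n_ens=2_000):
          # Euler-Maruyama for x'' = x - x**3 - (eps + eta*x**2)*x' + sqrt(2D)*xi(t).
          x = 0.1 * rng.standard_normal(n_ens)
          v = np.zeros(n_ens)
          for _ in range(steps):
              noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(n_ens)
              x, v = x + v * dt, v + (x - x**3 - (eps + eta * x**2) * v) * dt + noise
          return np.histogram(x, bins=100, range=(-2.5, 2.5), density=True)

      for D in (0.05, 0.5):                   # weak vs strong noise
          pdf, edges = steady_state_hist(D)
          centers = 0.5 * (edges[:-1] + edges[1:])
          print(f"D={D}: PDF peaks near x = {centers[pdf.argmax()]:.2f}")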

  1. USING THE HERMITE POLYNOMIALS IN RADIOLOGICAL MONITORING NETWORKS.

    PubMed

    Benito, G; Sáez, J C; Blázquez, J B; Quiñones, J

    2018-03-15

    The most interesting events in a radiological monitoring network correspond to higher values of H*(10). The higher doses cause skewness in the probability density function (PDF) of the records, which are then no longer Gaussian. In this work, the probability of having a dose more than 2 standard deviations above the mean is proposed as a surveillance statistic for higher doses. This probability is estimated by reconstructing the PDF with Hermite polynomials. The result is that the probability is ~6 ± 1%, much greater than the 2.5% corresponding to a Gaussian PDF, which may be of interest in the design of alarm levels for higher doses.
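
    The Hermite-polynomial reconstruction amounts to a Gram-Charlier correction of the Gaussian PDF; a minimal sketch, with an assumed illustrative skewness rather than measured H*(10) records, is:

      import numpy as np

      def gram_charlier_pdf(z, skew, exkurt=0.0):
          # Gaussian PDF corrected with probabilists' Hermite polynomials He3, He4.
          he3 = z**3 - 3.0 * z
          he4 = z**4 - 6.0 * z**2 + 3.0
          phi = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
          return phi * (1.0 + skew / 6.0 * he3 + exkurt / 24.0 * he4)

      # Probability of a record exceeding the mean by 2 standard deviations.
      z = np.linspace(2.0, 10.0, 4001)
      for skew in (0.0, 0.8):                 # 0.8 is an assumed skewness
          p = np.trapz(gram_charlier_pdf(z, skew), z)
          print(f"skew={skew}: P(z > 2) = {100 * p:.1f}%")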

  2. Work statistics of charged noninteracting fermions in slowly changing magnetic fields.

    PubMed

    Yi, Juyeon; Talkner, Peter

    2011-04-01

    We consider N fermionic particles in a harmonic trap initially prepared in a thermal equilibrium state at temperature β⁻¹ and examine the probability density function (pdf) of the work done by a magnetic field slowly varying in time. The behavior of the pdf crucially depends on the number of particles N but also on the temperature. At high temperatures (β≪1) the pdf is given by an asymmetric Laplace distribution for a single particle, and for many particles it approaches a Gaussian distribution with variance proportional to N/β². At low temperatures the pdf becomes strongly peaked at the center with a variance that still increases linearly with N but decreases exponentially with the temperature. We point out the consequences of these findings for the experimental confirmation of the Jarzynski equality, such as the low-probability issue at high temperatures and its solution at low temperatures, together with a discussion of the crossover behavior between the two temperature regimes.

  3. Work statistics of charged noninteracting fermions in slowly changing magnetic fields

    NASA Astrophysics Data System (ADS)

    Yi, Juyeon; Talkner, Peter

    2011-04-01

    We consider N fermionic particles in a harmonic trap initially prepared in a thermal equilibrium state at temperature β⁻¹ and examine the probability density function (pdf) of the work done by a magnetic field slowly varying in time. The behavior of the pdf crucially depends on the number of particles N but also on the temperature. At high temperatures (β≪1) the pdf is given by an asymmetric Laplace distribution for a single particle, and for many particles it approaches a Gaussian distribution with variance proportional to N/β². At low temperatures the pdf becomes strongly peaked at the center with a variance that still linearly increases with N but exponentially decreases with the temperature. We point out the consequences of these findings for the experimental confirmation of the Jarzynski equality such as the low probability issue at high temperatures and its solution at low temperatures, together with a discussion of the crossover behavior between the two temperature regimes.

  4. A new subgrid-scale representation of hydrometeor fields using a multivariate PDF

    DOE PAGES

    Griffin, Brian M.; Larson, Vincent E.

    2016-06-03

    The subgrid-scale representation of hydrometeor fields is important for calculating microphysical process rates. In order to represent subgrid-scale variability, the Cloud Layers Unified By Binormals (CLUBB) parameterization uses a multivariate probability density function (PDF). In addition to vertical velocity, temperature, and moisture fields, the PDF includes hydrometeor fields. Previously, hydrometeor fields were assumed to follow a multivariate single lognormal distribution. Now, in order to better represent the distribution of hydrometeors, two new multivariate PDFs are formulated and introduced. The new PDFs represent hydrometeors using either a delta-lognormal or a delta-double-lognormal shape. The two new PDF distributions, plus the previous single lognormal shape, are compared to histograms of data taken from large-eddy simulations (LESs) of a precipitating cumulus case, a drizzling stratocumulus case, and a deep convective case. In conclusion, the warm microphysical process rates produced by the different hydrometeor PDFs are compared to the same process rates produced by the LES.

  5. Efficient reliability analysis of structures with the rotational quasi-symmetric point- and the maximum entropy methods

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Dang, Chao; Kong, Fan

    2017-10-01

    This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
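
    A compact sketch of the maximum-entropy step, assuming the fractional moments have already been estimated (here from synthetic lognormal samples standing in for RQ-SPM output) and assuming illustrative fractional orders, is:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(4)
      g = rng.lognormal(mean=0.3, sigma=0.4, size=5_000)   # stand-in performance data

      alphas = np.array([0.5, 1.0, 1.5, 2.0])              # assumed fractional orders
      mu = np.array([np.mean(g**a) for a in alphas])       # fractional moments

      x = np.linspace(1e-6, 2.0 * g.max(), 4_000)          # integration grid
      powers = x[None, :] ** alphas[:, None]

      def dual(lam):
          # Convex dual of the MaxEnt problem: ln Z(lam) + lam . mu
          expo = -(lam @ powers)
          m = expo.max()
          return m + np.log(np.trapz(np.exp(expo - m), x)) + lam @ mu

      lam = minimize(dual, np.zeros(alphas.size), method="Nelder-Mead",
                     options={"maxiter": 20_000}).x
      expo = -(lam @ powers)
      f = np.exp(expo - expo.max())
      f /= np.trapz(f, x)                                  # MaxEnt PDF estimate
      print("E_f[x] =", np.trapz(x * f, x), "vs sample mean", g.mean())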

  6. Bayes classification of terrain cover using normalized polarimetric data

    NASA Technical Reports Server (NTRS)

    Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.

    1988-01-01

    The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.

  7. Analysis of scattering statistics and governing distribution functions in optical coherence tomography.

    PubMed

    Sugita, Mitsuro; Weatherbee, Andrew; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-07-01

    The probability density function (PDF) of light scattering intensity can be used to characterize the scattering medium. We have recently shown that in optical coherence tomography (OCT), a PDF formalism can be sensitive to the number of scatterers in the probed scattering volume and can be represented by the K-distribution, a functional descriptor for non-Gaussian scattering statistics. Expanding on this initial finding, here we examine polystyrene microsphere phantoms with different sphere sizes and concentrations, and also human skin and fingernail in vivo. It is demonstrated that the K-distribution offers an accurate representation for the measured OCT PDFs. The behavior of the shape parameter of K-distribution that best fits the OCT scattering results is investigated in detail, and the applicability of this methodology for biological tissue characterization is demonstrated and discussed.
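
    For the K-distribution, a simple method-of-moments shape estimate follows from the normalized second intensity moment, <I**2>/<I>**2 = 2*(1 + 1/alpha); the sketch below applies it to synthetic gamma-modulated speckle (the sample size and true alpha are illustrative assumptions).

      import numpy as np

      def k_shape_from_moments(intensity):
          # Method-of-moments estimate of the K-distribution shape parameter alpha;
          # alpha -> inf recovers fully developed (exponential-intensity) speckle.
          m1 = intensity.mean()
          m2 = (intensity**2).mean()
          ratio = m2 / m1**2
          return np.inf if ratio <= 2.0 else 2.0 / (ratio - 2.0)

      rng = np.random.default_rng(5)
      alpha_true = 3.0
      # K-distributed intensity: exponential speckle modulated by a unit-mean gamma texture.
      I = rng.exponential(size=500_000) * rng.gamma(alpha_true, 1.0 / alpha_true, size=500_000)
      print("estimated alpha:", k_shape_from_moments(I))   # close to 3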

  8. Relationship between Two Types of Coil Packing Densities Relative to Aneurysm Size.

    PubMed

    Park, Keun Young; Kim, Byung Moon; Ihm, Eun Hyun; Baek, Jang Hyun; Kim, Dong Joon; Kim, Dong Ik; Huh, Seung Kon; Lee, Jae Whan

    2015-01-01

    Coil packing density (PD) can be calculated via a formula (PDF) or software (PDS). The two types of PD can differ for the same aneurysm. This study aimed to evaluate the interobserver agreement and the relationship between the two types of PD relative to aneurysm size. A total of 420 consecutive saccular aneurysms were treated with coiling. PD (PDF = [coil volume]/[volume calculated by formula]; PDS = [coil volume]/[volume measured by software]) was calculated and prospectively recorded. Interobserver agreement was evaluated for PDF and PDS. Additionally, the relationship between PDF and PDS relative to aneurysm size was analyzed. Interobserver agreement for PDF and PDS was excellent (intraclass correlation coefficient: PDF, 0.967; PDS, 0.998). The ratio of PDF to PDS was greater for smaller aneurysms and converged toward 1.0 as the maximum dimension (DM) of the aneurysm increased. Compared with PDS, PDF was overestimated by a mean of 28% for DM < 5 mm, by 17% for 5 mm ≤ DM < 10 mm, and by 9% for DM ≥ 10 mm (P < 0.01). Thus, PDF was overestimated in smaller aneurysms and converged to PDS as aneurysm size increased.

  9. A Model of Self-Monitoring Blood Glucose Measurement Error.

    PubMed

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, such as testing insulin therapies in silico. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error; zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology makes it possible to derive realistic models of the SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials comparing SMBG-based with nonadjunctive CGM-based insulin treatments.
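
    The zone-wise fitting step can be sketched directly with SciPy's skew-normal distribution; the synthetic error sample below (parameters in mg/dL) is an assumption for illustration, not data from the cited meters.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      # Synthetic SMBG errors for one constant-SD zone.
      errors = stats.skewnorm.rvs(a=3.0, loc=-4.0, scale=9.0, size=2_000, random_state=rng)

      # Maximum-likelihood fit of the zone PDF model.
      a, loc, scale = stats.skewnorm.fit(errors)

      # Goodness-of-fit check (Kolmogorov-Smirnov test).
      stat, pvalue = stats.kstest(errors, "skewnorm", args=(a, loc, scale))
      print(f"shape={a:.2f} loc={loc:.2f} scale={scale:.2f}  KS p-value={pvalue:.3f}")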

  10. Weak lensing shear and aperture mass from linear to non-linear scales

    NASA Astrophysics Data System (ADS)

    Munshi, Dipak; Valageas, Patrick; Barber, Andrew J.

    2004-05-01

    We describe the predictions for the smoothed weak lensing shear, γs, and aperture mass, Map, of two simple analytical models of the density field: the minimal tree model and the stellar model. Both models give identical results for the statistics of the three-dimensional density contrast smoothed over spherical cells and only differ in the detailed angular dependence of the many-body density correlations. We have shown in previous work that they also yield almost identical results for the probability distribution function (PDF) of the smoothed convergence, κs. We find that the two models give rather close results for both the shear and the positive tail of the aperture mass. However, we note that at small angular scales (θs ≲ 2 arcmin) the tail of the PDF for negative Map shows a strong variation between the two models, and the stellar model actually breaks down for θs ≲ 0.4 arcmin and Map < 0. This shows that the statistics of the aperture mass provide a very precise probe of the detailed structure of the density field, as they are sensitive to both the amplitude and the detailed angular behaviour of the many-body correlations. On the other hand, the minimal tree model shows good agreement with numerical simulations over all the scales and redshifts of interest, while both models provide a good description of the PDF of the smoothed shear components. Therefore, the shear and the aperture mass provide robust and complementary tools to measure the cosmological parameters as well as the detailed statistical properties of the density field.

  11. The Mass Surface Density Distribution of a High-Mass Protocluster forming from an IRDC and GMC

    NASA Astrophysics Data System (ADS)

    Lim, Wanggi; Tan, Jonathan C.; Kainulainen, Jouni; Ma, Bo; Butler, Michael

    2016-01-01

    We study the probability distribution function (PDF) of mass surface densities of the infrared dark cloud (IRDC) G028.36+00.07 and its surrounding giant molecular cloud (GMC). Such PDF analysis has the potential to probe the physical processes that control cloud structure and star formation activity. The chosen IRDC is of particular interest since it has almost 100,000 solar masses within a radius of 8 parsecs, making it one of the most massive, dense molecular structures known and thus a potential site for the formation of a high-mass, "super star cluster". We study mass surface densities in two ways. First, we use a combination of NIR, MIR and FIR extinction maps that are able to probe the bulk of the cloud structure that is not yet forming stars. This analysis also shows evidence for a flattening of the IR extinction law as mass surface density increases, consistent with increasing grain size and/or growth of ice mantles. Second, we study the FIR and sub-mm dust continuum emission from the cloud, especially utilizing Herschel PACS and SPIRE images. We first subtract the contribution of the foreground diffuse emission that contaminates these images. Next we examine the effects of background subtraction and the choice of dust opacities on the derived mass surface density PDF. The final derived PDFs from both methods are compared, including with other published studies of this cloud. The implications for theoretical models and simulations of cloud structure, including the role of turbulence and magnetic fields, are discussed.

  12. Study of sea-surface slope distribution and its effect on radar backscatter based on Global Precipitation Measurement Ku-band precipitation radar measurements

    NASA Astrophysics Data System (ADS)

    Yan, Qiushuang; Zhang, Jie; Fan, Chenqing; Wang, Jing; Meng, Junmin

    2018-01-01

    The collocated normalized radar backscattering cross-section measurements from the Global Precipitation Measurement (GPM) Ku-band precipitation radar (KuPR) and the winds from the moored buoys are used to study the effect of different sea-surface slope probability density functions (PDFs), including the Gaussian PDF, the Gram-Charlier PDF, and the Liu PDF, on the geometrical optics (GO) model predictions of the radar backscatter at low incidence angles (0 deg to 18 deg) at different sea states. First, the peakedness coefficient in the Liu distribution is determined using the collocations at the normal incidence angle, and the results indicate that the peakedness coefficient is a nonlinear function of the wind speed. Then, the performance of the modified Liu distribution, i.e., Liu distribution using the obtained peakedness coefficient estimate; the Gaussian distribution; and the Gram-Charlier distribution is analyzed. The results show that the GO model predictions with the modified Liu distribution agree best with the KuPR measurements, followed by the predictions with the Gaussian distribution, while the predictions with the Gram-Charlier distribution have larger differences as the total or the slick filtered, not the radar filtered, probability density is included in the distribution. The best-performing distribution changes with incidence angle and changes with wind speed.

  13. Criticality and Phase Transition in Stock-Price Fluctuations

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken; Struzik, Zbigniew R.; Yamamoto, Yoshiharu

    2006-02-01

    We analyze the behavior of the U.S. S&P 500 index from 1984 to 1995, and characterize the non-Gaussian probability density functions (PDFs) of the log returns. The temporal dependence of fat tails in the PDF of ten-minute log returns shows a gradual, systematic increase in the probability of the appearance of large increments on approaching Black Monday in October 1987, reminiscent of parameter tuning towards criticality. On the occurrence of the Black Monday crash, this culminates in an abrupt transition of the scale dependence of the non-Gaussian PDF towards the scale invariance characteristic of critical behavior. These facts suggest the need for revisiting the turbulent cascade paradigm recently proposed for modeling the underlying dynamics of the financial index, to account for time-varying, phase-transition-like and scale-invariant, critical-like behavior.

  14. Microstructure and Dynamic Failure Properties of Freeze-Cast Materials for Thermobaric Warhead Cases

    DTIC Science & Technology

    2012-12-01

    [Fragmentary record; recoverable abbreviation list: LLNL, Lawrence Livermore National Laboratory; PDF, probability density function; PMMA, poly(methyl methacrylate); RM, reactive materials; SEM, scanning electron microscopy.] Freeze-casting technology combines compounds such as aluminum oxide and poly(methyl methacrylate) (PMMA) to develop a ... Subsequently, the porous structure can be infiltrated with a variety of materials, such as a standard polymer like PMMA. This hybrid material is believed ...

  15. Unfolding the laws of star formation: the density distribution of molecular clouds.

    PubMed

    Kainulainen, Jouni; Federrath, Christoph; Henning, Thomas

    2014-04-11

    The formation of stars shapes the structure and evolution of entire galaxies. The rate and efficiency of this process are affected substantially by the density structure of the individual molecular clouds in which stars form. The most fundamental measure of this structure is the probability density function of volume densities (ρ-PDF), which determines the star formation rates predicted with analytical models. This function has remained unconstrained by observations. We have developed an approach to quantify ρ-PDFs and establish their relation to star formation. The ρ-PDFs establish a density threshold for star formation and allow us to quantify the star formation efficiency above it. The ρ-PDFs provide new constraints for star formation theories and correctly predict several key properties of the star-forming interstellar medium.

  16. Density Weighted FDF Equations for Simulations of Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2011-01-01

    In this report, we briefly revisit the formulation of the density-weighted filtered density function (DW-FDF) for large eddy simulation (LES) of turbulent reacting flows, which was proposed by Jaberi et al. (Jaberi, F.A., Colucci, P.J., James, S., Givi, P. and Pope, S.B., Filtered mass density function for large-eddy simulation of turbulent reacting flows, J. Fluid Mech., vol. 401, pp. 85-121, 1999). First, we follow the traditional derivation of the DW-FDF equations using the fine-grained probability density function (FG-PDF); then we explore another way of constructing the DW-FDF equations by starting directly from the compressible Navier-Stokes equations. We observe that the terms which are unclosed in the traditional DW-FDF equations are closed in the newly constructed DW-FDF equations. This significant difference and its practical impact on computational simulations may deserve further study.

  17. Assessment of PDF Micromixing Models Using DNS Data for a Two-Step Reaction

    NASA Astrophysics Data System (ADS)

    Tsai, Kuochen; Chakrabarti, Mitali; Fox, Rodney O.; Hill, James C.

    1996-11-01

    Although the probability density function (PDF) method is known to treat the chemical reaction terms exactly, its application to turbulent reacting flows has been held back by the difficulty of modeling the molecular mixing terms satisfactorily. In this study, two PDF molecular mixing models, the linear-mean-square-estimation (LMSE or IEM) model and the generalized interaction-by-exchange-with-the-mean (GIEM) model, are compared with DNS data in decaying turbulence with a two-step parallel-consecutive reaction and two segregated initial conditions: "slabs" and "blobs". Since the molecular mixing model is expected to have a strong effect on the mean values of the chemical species under such initial conditions, the model evaluation is intended to answer the following questions: (1) Can the PDF models predict the mean values of the chemical species correctly with completely segregated initial conditions? (2) Is a single molecular mixing timescale sufficient for the PDF models to predict the mean values with different initial conditions? (3) Will the chemical reactions change the molecular mixing timescales of the reacting species enough to affect the accuracy of the model's prediction of the mean values?
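
    For reference, the IEM/LMSE model relaxes each notional particle's composition toward the ensemble mean at rate (C_phi/2)*omega; a minimal particle implementation for a "blobs"-like fully segregated scalar, with assumed mixing frequency and time step, is:

      import numpy as np

      def iem_step(phi, omega, dt, c_phi=2.0):
          # IEM (LMSE) micromixing: d(phi)/dt = -(c_phi/2)*omega*(phi - <phi>),
          # integrated exactly over one time step.
          decay = np.exp(-0.5 * c_phi * omega * dt)
          return phi.mean() + (phi - phi.mean()) * decay

      # Fully segregated initial condition: half the particles at 0, half at 1.
      phi = np.repeat([0.0, 1.0], 5_000)
      omega, dt = 10.0, 1e-3                  # assumed mixing frequency and time step
      for _ in range(500):
          phi = iem_step(phi, omega, dt)
      # The mean is conserved; the variance decays as exp(-c_phi*omega*t).
      print(phi.mean(), phi.var())

    Note that IEM relaxes all particles toward the mean without changing the shape of the scalar PDF, which is one of the deficiencies that comparisons of the kind described above are designed to expose.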

  18. A Lagrangian mixing frequency model for transported PDF modeling

    NASA Astrophysics Data System (ADS)

    Turkeri, Hasret; Zhao, Xinyu

    2017-11-01

    In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing the optimal model constant parameters when using conventional mixing frequency models. The model is implemented in combination with the Interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver which is a LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.

  19. Soot and Spectral Radiation Modeling for a High-Pressure Turbulent Spray Flame

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreryo-Fernandez, Sebastian; Paul, Chandan; Sircar, Arpan

    Simulations are performed of a transient high-pressure turbulent n-dodecane spray flame under engine-relevant conditions. An unsteady RANS formulation is used, with detailed chemistry, a semi-empirical two-equation soot model, and a particle-based transported composition probability density function (PDF) method to account for unresolved turbulent fluctuations in composition and temperature. Results from the PDF model are compared with those from a locally well-stirred reactor (WSR) model to quantify the effects of turbulence-chemistry-soot interactions. Computed liquid and vapor penetration versus time, ignition delay, and flame lift-off height are in good agreement with experiment, and relatively small differences are seen between the WSR and PDF models for these global quantities. Computed soot levels and spatial soot distributions from the WSR and PDF models show large differences, with PDF results being in better agreement with experimental measurements. An uncoupled photon Monte Carlo method with line-by-line spectral resolution is used to compute the spectral intensity distribution of the radiation leaving the flame. This provides new insight into the relative importance of molecular gas radiation versus soot radiation, and the importance of turbulent fluctuations on radiative heat transfer.

  20. Modelling emission turbulence-radiation interaction by using a hybrid flamelet/stochastic Eulerian field method

    NASA Astrophysics Data System (ADS)

    Consalvi, Jean-Louis

    2017-01-01

    The time-averaged Radiative Transfer Equation (RTE) introduces two unclosed terms, known as 'absorption Turbulence-Radiation Interaction (TRI)' and 'emission TRI'. Emission TRI is related to the non-linear coupling between fluctuations of the absorption coefficient and fluctuations of the Planck function, and can be described without introducing any approximation by using a transported PDF method. In this study, a hybrid flamelet/Stochastic Eulerian Field model is used to solve the transport equation of the one-point, one-time PDF. In this formulation, the steady laminar flamelet model (SLF) is coupled to a joint Probability Density Function (PDF) of mixture fraction, enthalpy defect, scalar dissipation rate, and soot quantities, and the PDF transport equation is solved by using a Stochastic Eulerian Field (SEF) method. Soot production is modeled by a semi-empirical model, and the spectral dependence of the radiatively participating species, namely combustion products and soot, is computed by using a Narrow Band Correlated-k (NBCK) model. The model is applied to simulate an ethylene/methane turbulent jet flame burning in an oxygen-enriched environment. Model results are compared with the experiments, and the effects of taking emission TRI into account on flame structure, soot production and radiative loss are discussed.

  1. Uncertainty quantification of voice signal production mechanical model and experimental updating

    NASA Astrophysics Data System (ADS)

    Cataldo, E.; Soize, C.; Sampaio, R.

    2013-11-01

    The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for changing the fundamental frequency of a voice signal generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform, and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated, and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency, and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important mainly for two reasons. The first is the possibility of updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly; the second is the construction of the likelihood function. In general, it is predefined using a known pdf; here, it is constructed in a new and different manner, using the system itself.
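
    The Bayes update itself can be sketched on a grid; everything below (the forward map from tension to fundamental frequency, the noise level, and the data) is hypothetical and stands in for the paper's mechanical model and experiments.

      import numpy as np

      def f0_model(q):
          # Hypothetical forward map: tension parameter q -> fundamental frequency (Hz).
          return 100.0 + 80.0 * q

      q = np.linspace(0.0, 1.0, 1001)
      prior = np.ones_like(q)                    # uniform prior on the tension parameter
      prior /= np.trapz(prior, q)

      f0_data = np.array([152.0, 149.5, 151.2])  # assumed measured F0 values (Hz)
      sigma = 3.0                                # assumed spread of F0 about the model

      # Gaussian stand-in for the likelihood (the paper builds it from the system itself).
      log_like = sum(-0.5 * ((f - f0_model(q)) / sigma) ** 2 for f in f0_data)
      post = prior * np.exp(log_like - log_like.max())
      post /= np.trapz(post, q)                  # posterior pdf of the tension parameter
      print("posterior mean tension:", np.trapz(q * post, q))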

  2. Probability density function of the intensity of a laser beam propagating in the maritime environment.

    PubMed

    Korotkova, Olga; Avramov-Zamurovic, Svetlana; Malek-Madani, Reza; Nelson, Charles

    2011-10-10

    A number of field experiments measuring the fluctuating intensity of a laser beam propagating along horizontal paths in the maritime environment are performed over sub-kilometer distances at the United States Naval Academy. Both above-ground and over-water links are explored. Two different detection schemes, one photographing the beam on a white board and the other capturing the beam directly with a CCD sensor, gave consistent results. The probability density function (pdf) of the fluctuating intensity is reconstructed with the help of two theoretical models, the Gamma-Gamma and the Gamma-Laguerre, and compared with the intensity histograms. It is found that the on-ground experimental results are in good agreement with theoretical predictions. The results obtained over water paths show appreciable discrepancies, especially in the case of the Gamma-Gamma model. These discrepancies are attributed to the presence of various scatterers along the path of the beam, such as water droplets, aerosols and other airborne particles. Our paper's main contribution is a methodology for computing the pdf of the laser beam intensity in the maritime environment from field measurements.
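
    For reference, the Gamma-Gamma irradiance model used in the comparison has a closed-form pdf; a direct numerical evaluation for unit-mean irradiance, with assumed turbulence parameters, is:

      import numpy as np
      from scipy.special import gamma as G, kv

      def gamma_gamma_pdf(i, alpha, beta):
          # Gamma-Gamma pdf of normalized irradiance I (mean 1); alpha and beta
          # parameterize large- and small-scale scintillation.
          i = np.asarray(i, dtype=float)
          coef = 2.0 * (alpha * beta) ** (0.5 * (alpha + beta)) / (G(alpha) * G(beta))
          return coef * i ** (0.5 * (alpha + beta) - 1.0) * kv(alpha - beta, 2.0 * np.sqrt(alpha * beta * i))

      i = np.linspace(0.01, 8.0, 800)
      pdf = gamma_gamma_pdf(i, alpha=4.0, beta=2.0)    # assumed values
      print("normalization check:", np.trapz(pdf, i))  # close to 1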

  3. AzTEC Survey of the Central Molecular Zone: Modeling Dust SEDs and N-PDF with Hierarchical Bayesian Analysis

    NASA Astrophysics Data System (ADS)

    Tang, Yuping; Wang, Daniel; Wilson, Grant; Gutermuth, Robert; Heyer, Mark

    2018-01-01

    We present the AzTEC/LMT survey of dust continuum at 1.1 mm of the central ~200 pc (CMZ) of our Galaxy. A joint SED analysis of all existing dust continuum surveys of the CMZ is performed, from 160 µm to 1.1 mm. Our analysis follows an MCMC sampling strategy incorporating knowledge of the PSFs in the different maps, which provides unprecedented spatial resolution on the distributions of dust temperature, column density and emissivity index. The dense clumps in the CMZ typically show low dust temperatures (~20 K), with no significant sign of buried star formation, and a weak trend of higher emissivity index toward dense peaks. A new model is proposed, allowing for varying dust temperature inside a cloud and self-shielding of dust emission, which leads to similar conclusions on dust temperature and grain properties. We further apply a hierarchical Bayesian analysis to infer the column density probability distribution function (N-PDF), while simultaneously removing the Galactic foreground and background emission. The N-PDF shows a steep power-law profile with α > 3, indicating that the formation of dense structures is suppressed.

  4. The probability density function (PDF) of Lagrangian Turbulence

    NASA Astrophysics Data System (ADS)

    Birnir, B.

    2012-12-01

    The statistical theory of Lagrangian turbulence is derived from the stochastic Navier-Stokes equation. Assuming that the noise in fully developed turbulence is a generic noise determined by the general theorems in probability, the central limit theorem and the large deviation principle, we are able to formulate and solve the Kolmogorov-Hopf equation for the invariant measure of the stochastic Navier-Stokes equations. The intermittency corrections to the scaling exponents of the structure functions require a multiplicative (multiplying the fluid velocity) noise in the stochastic Navier-Stokes equation. We let this multiplicative noise in the equation consist of a simple (Poisson) jump process and then show how the Feynman-Kac formula produces the log-Poissonian processes found by She and Leveque, Waymire and Dubrulle. These log-Poissonian processes give the intermittency corrections that agree with modern direct Navier-Stokes simulations (DNS) and experiments. The probability density function (PDF) plays a key role when direct Navier-Stokes simulations or experimental results are compared to theory. The statistical theory of turbulence, including the scaling of the structure functions of turbulence, is determined by the invariant measure of the Navier-Stokes equation, and the PDFs for the various statistics (one-point, two-point, N-point) can be obtained by taking the trace of the corresponding invariant measures. Hopf derived in 1952 a functional equation for the characteristic function (Fourier transform) of the invariant measure. In distinction to the nonlinear Navier-Stokes equation, this is a linear functional differential equation. The PDFs obtained from the invariant measures for the velocity differences (two-point statistics) are shown to be the four-parameter generalized hyperbolic distributions found by Barndorff-Nielsen. These PDFs have heavy tails and a convex peak at the origin. A suitable projection of the Kolmogorov-Hopf equation is the differential equation determining the generalized hyperbolic distributions. We then compare these PDFs with DNS results and experimental data.
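
    For comparison with data, the generalized hyperbolic density can be evaluated directly from modified Bessel functions; the sketch below uses one common (lambda, alpha, beta, delta, mu) parameterization with assumed illustrative parameter values.

      import numpy as np
      from scipy.special import kv

      def gh_pdf(x, lam, alpha, beta, delta, mu):
          # Generalized hyperbolic density (Barndorff-Nielsen): semi-heavy tails
          # and a convex peak at the origin.
          gam = np.sqrt(alpha**2 - beta**2)
          q = np.sqrt(delta**2 + (x - mu) ** 2)
          c = (gam / delta) ** lam / (np.sqrt(2.0 * np.pi) * kv(lam, delta * gam))
          return c * kv(lam - 0.5, alpha * q) * (q / alpha) ** (lam - 0.5) * np.exp(beta * (x - mu))

      x = np.linspace(-5.0, 5.0, 2001)
      pdf = gh_pdf(x, lam=1.0, alpha=2.0, beta=0.0, delta=1.0, mu=0.0)  # symmetric case
      print("normalization check:", np.trapz(pdf, x))  # close to 1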

  5. Cauchy flights in confining potentials

    NASA Astrophysics Data System (ADS)

    Garbaczewski, Piotr

    2010-03-01

    We analyze confining mechanisms for Lévy flights evolving under an influence of external potentials. Given a stationary probability density function (pdf), we address the reverse engineering problem: design a jump-type stochastic process whose target pdf (eventually asymptotic) equals the preselected one. To this end, dynamically distinct jump-type processes can be employed. We demonstrate that one “targeted stochasticity” scenario involves Langevin systems with a symmetric stable noise. Another derives from the Lévy-Schrödinger semigroup dynamics (closely linked with topologically induced super-diffusions), which has no standard Langevin representation. For computational and visualization purposes, the Cauchy driver is employed to exemplify our considerations.

  6. The present state and future directions of PDF methods

    NASA Technical Reports Server (NTRS)

    Pope, S. B.

    1992-01-01

    The objectives of the workshop are presented in viewgraph format, as is this entire article. The objectives are to discuss the present status and future directions of various levels of engineering turbulence modeling related to Computational Fluid Dynamics (CFD) computations for propulsion; to emphasize that combustion is an essential part of propulsion; and to discuss Probability Density Function (PDF) methods for turbulent combustion. Essential to the integration of turbulent combustion models is the joint development of the turbulence model, the chemical kinetics, and the numerical method. Some turbulent combustion models typically used in industry are the k-epsilon turbulence model, equilibrium/mixing-limited combustion, and finite-volume codes.

  7. Can Sgr A* flares reveal the molecular gas density PDF?

    NASA Astrophysics Data System (ADS)

    Churazov, E.; Khabibullin, I.; Sunyaev, R.; Ponti, G.

    2017-11-01

    Illumination of dense gas in the Central Molecular Zone by powerful X-ray flares from Sgr A* leads to prominent structures in the reflected emission that can be observed long after the end of the flare. By studying this emission, we learn about the past activity of the supermassive black hole in our Galactic Center and, at the same time, obtain unique information on the structure of molecular clouds that is essentially impossible to get by other means. Here we discuss how X-ray data can improve our knowledge of both sides of the problem. Existing data already provide (I) an estimate of the flare age, (II) a model-independent lower limit on the luminosity of Sgr A* during the flare and (III) an estimate of the total energy emitted during the Sgr A* flare. On the molecular cloud side, the data clearly show a voids-and-walls structure of the clouds and can provide an almost unbiased probe of the mass/density distribution of the molecular gas with hydrogen column densities lower than a few × 10²³ cm⁻². For instance, the probability distribution function of the gas density PDF(ρ) can be measured this way. Future high-energy-resolution X-ray missions will provide information on the gas velocities, allowing, for example, a reconstruction of the velocity field structure functions and cross-matching of the X-ray and molecular data based on positions and velocities.

  8. Statistical process control for residential treated wood

    Treesearch

    Patricia K. Lebow; Timothy M. Young; Stan Lebow

    2017-01-01

    This paper is the first stage of a study that attempts to improve the process of manufacturing treated lumber through the use of statistical process control (SPC). Analysis of industrial and auditing agency data sets revealed there are differences between the industry and agency probability density functions (pdf) for normalized retention data. Resampling of batches of...

  9. PDF-based heterogeneous multiscale filtration model.

    PubMed

    Gong, Jian; Rutland, Christopher J

    2015-04-21

    Motivated by modeling of gasoline particulate filters (GPFs), a probability density function (PDF) based heterogeneous multiscale filtration (HMF) model is developed to calculate filtration efficiency of clean particulate filters. A new methodology based on statistical theory and classic filtration theory is developed in the HMF model. Based on the analysis of experimental porosimetry data, a pore size probability density function is introduced to represent heterogeneity and multiscale characteristics of the porous wall. The filtration efficiency of a filter can be calculated as the sum of the contributions of individual collectors. The resulting HMF model overcomes the limitations of classic mean filtration models which rely on tuning of the mean collector size. Sensitivity analysis shows that the HMF model recovers the classical mean model when the pore size variance is very small. The HMF model is validated by fundamental filtration experimental data from different scales of filter samples. The model shows a good agreement with experimental data at various operating conditions. The effects of the microstructure of filters on filtration efficiency as well as the most penetrating particle size are correctly predicted by the model.

  10. Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2006-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
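
    The core PDF-truncation step has a one-line realization for a scalar state with SciPy's truncated normal; the constraint bounds and numbers below are illustrative assumptions.

      import numpy as np
      from scipy import stats

      def constrained_estimate(x_hat, p, lo, hi):
          # Truncate the Gaussian PDF N(x_hat, p) of the Kalman estimate at the
          # known constraints [lo, hi]; the constrained estimate is the mean of
          # the truncated PDF (scalar-state sketch of the paper's idea).
          sigma = np.sqrt(p)
          a, b = (lo - x_hat) / sigma, (hi - x_hat) / sigma
          d = stats.truncnorm(a, b, loc=x_hat, scale=sigma)
          return d.mean(), d.var()

      # Unconstrained estimate 1.08 of a health parameter known to lie in [0, 1].
      x_c, p_c = constrained_estimate(1.08, 0.04, 0.0, 1.0)
      print(f"constrained estimate {x_c:.3f}, reduced variance {p_c:.4f}")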

  11. Effects of combined dimension reduction and tabulation on the simulations of a turbulent premixed flame using a large-eddy simulation/probability density function method

    NASA Astrophysics Data System (ADS)

    Kim, Jeonglae; Pope, Stephen B.

    2014-05-01

    A turbulent lean-premixed propane-air flame stabilised by a triangular cylinder as a flame-holder is simulated to assess the accuracy and computational efficiency of combined dimension reduction and tabulation of chemistry. The computational condition matches the Volvo rig experiments. For the reactive simulation, the Lagrangian Large-Eddy Simulation/Probability Density Function (LES/PDF) formulation is used. A novel two-way coupling approach between LES and PDF is applied to obtain resolved density to reduce its statistical fluctuations. Composition mixing is evaluated by the modified Interaction-by-Exchange with the Mean (IEM) model. A baseline case uses In Situ Adaptive Tabulation (ISAT) to calculate chemical reactions efficiently. Its results demonstrate good agreement with the experimental measurements in turbulence statistics, temperature, and minor species mass fractions. For dimension reduction, 11 and 16 represented species are chosen and a variant of Rate Controlled Constrained Equilibrium (RCCE) is applied in conjunction with ISAT to each case. All the quantities in the comparison are indistinguishable from the baseline results using ISAT only. The combined use of RCCE/ISAT reduces the computational time for chemical reaction by more than 50%. However, for the current turbulent premixed flame, chemical reaction takes only a minor portion of the overall computational cost, in contrast to non-premixed flame simulations using LES/PDF, presumably due to the restricted manifold of purely premixed flame in the composition space. Instead, composition mixing is the major contributor to cost reduction since the mean-drift term, which is computationally expensive, is computed for the reduced representation. Overall, a reduction of more than 15% in the computational cost is obtained.

  12. Modeling Interactions Among Turbulence, Gas-Phase Chemistry, Soot and Radiation Using Transported PDF Methods

    NASA Astrophysics Data System (ADS)

    Haworth, Daniel

    2013-11-01

    The importance of explicitly accounting for the effects of unresolved turbulent fluctuations in Reynolds-averaged and large-eddy simulations of chemically reacting turbulent flows is increasingly recognized. Transported probability density function (PDF) methods have emerged as one of the most promising modeling approaches for this purpose. In particular, PDF methods provide an elegant and effective resolution to the closure problems that arise from averaging or filtering terms that correspond to nonlinear point processes, including chemical reaction source terms and radiative emission. PDF methods traditionally have been associated with studies of turbulence-chemistry interactions in laboratory-scale, atmospheric-pressure, nonluminous, statistically stationary nonpremixed turbulent flames; and Lagrangian particle-based Monte Carlo numerical algorithms have been the predominant method for solving modeled PDF transport equations. Recent advances and trends in PDF methods are reviewed and discussed. These include advances in particle-based algorithms, alternatives to particle-based algorithms (e.g., Eulerian field methods), treatment of combustion regimes beyond low-to-moderate-Damköhler-number nonpremixed systems (e.g., premixed flamelets), extensions to include radiation heat transfer and multiphase systems (e.g., soot and fuel sprays), and the use of PDF methods as the basis for subfilter-scale modeling in large-eddy simulation. Examples are provided that illustrate the utility and effectiveness of PDF methods for physics discovery and for applications to practical combustion systems. These include comparisons of results obtained using the PDF method with those from models that neglect unresolved turbulent fluctuations in composition and temperature in the averaged or filtered chemical source terms and/or the radiation heat transfer source terms. In this way, the effects of turbulence-chemistry-radiation interactions can be isolated and quantified.

  13. A consistent transported PDF model for treating differential molecular diffusion

    NASA Astrophysics Data System (ADS)

    Wang, Haifeng; Zhang, Pei

    2016-11-01

    Differential molecular diffusion is a fundamentally significant phenomenon in all multi-component turbulent reacting or non-reacting flows caused by the different rates of molecular diffusion of energy and species concentrations. In the transported probability density function (PDF) method, the differential molecular diffusion can be treated by using a mean drift model developed by McDermott and Pope. This model correctly accounts for the differential molecular diffusion in the scalar mean transport and yields a correct DNS limit of the scalar variance production. The model, however, misses the molecular diffusion term in the scalar variance transport equation, which yields an inconsistent prediction of the scalar variance in the transported PDF method. In this work, a new model is introduced to remedy this problem that can yield a consistent scalar variance prediction. The model formulation along with its numerical implementation is discussed, and the model validation is conducted in a turbulent mixing layer problem.

  14. An Investigation of a Hybrid Mixing Model for PDF Simulations of Turbulent Premixed Flames

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Li, Shan; Wang, Hu; Ren, Zhuyin

    2015-11-01

    Predictive simulations of turbulent premixed flames over a wide range of Damköhler numbers in the framework of the Probability Density Function (PDF) method still remain challenging due to deficiencies in current micro-mixing models. In this work, a hybrid micro-mixing model, valid in both the flamelet regime and the broken reaction zone regime, is proposed. A priori testing of this model is first performed by examining the conditional scalar dissipation rate and conditional scalar diffusion in a 3-D direct numerical simulation dataset of a temporally evolving turbulent slot jet flame of lean premixed H2-air in the thin reaction zone regime. Then, this new model is applied to PDF simulations of the Piloted Premixed Jet Burner (PPJB) flames, which are a set of highly sheared turbulent premixed flames featuring strong turbulence-chemistry interaction at high Reynolds and Karlovitz numbers. Supported by NSFC 51476087 and NSFC 91441202.

  15. On the self-organizing process of large scale shear flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newton, Andrew P. L.; Kim, Eun-jin; Liu, Han-Li

    2013-09-15

    Self-organization is invoked as a paradigm to explore the processes governing the evolution of shear flows. By examining the probability density function (PDF) of the local flow gradient (shear), we show that shear flows reach a quasi-equilibrium state as the growth of shear is balanced by shear relaxation. Specifically, the PDFs of the local shear are calculated numerically and analytically in reduced 1D and 0D models, where the PDFs are shown to converge to a bimodal distribution in the case of finite correlated temporal forcing. This bimodal PDF is then shown to be reproduced in nonlinear simulations of 2D hydrodynamic turbulence. Furthermore, the bimodal PDF is demonstrated to result from a self-organizing shear flow with a linear profile. A similar bimodal structure and linear profile of the shear flow are observed in the Gulf Stream, suggesting self-organization.

  16. Langevin equation with time dependent linear force and periodic load force: stochastic resonance

    NASA Astrophysics Data System (ADS)

    Sau Fa, Kwok

    2017-11-01

    The motion of a particle described by the Langevin equation with constant diffusion coefficient, time-dependent linear force ω(1 + α cos(ω1t))x and periodic load force A0cos(Ωt) is investigated. Analytical solutions for the probability density function (PDF) and the n-moment are obtained and analysed. For ω1 ≫ αω the influence of the periodic term α cos(ω1t) on the PDF and n-moment is negligible for all times; this result shows that statistical averages such as the n-moments and the PDF have no access to some information of the system. For small and intermediate values of ω1 the influence of the periodic term α cos(ω1t) on the system is also analysed; in particular, the system may present multiresonance. The solutions are obtained in a direct and pedagogical manner readily understandable by graduate students.

  17. Bayesian model averaging using particle filtering and Gaussian mixture modeling: Theory, concepts, and simulation experiments

    NASA Astrophysics Data System (ADS)

    Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry

    2012-05-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to the posterior probabilities of the models generating the forecasts and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described by a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
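
    A minimal sketch of the core BMA construction: the predictive pdf as a weighted mixture of per-model conditional pdfs. Gaussian conditionals are used for brevity; the forecasts, weights and spreads below are invented stand-ins (in practice the weights and variances come from estimation over a training period).

    ```python
    import numpy as np
    from scipy.stats import norm

    # BMA predictive density: weighted mixture of per-model conditional pdfs.
    # All numerical values below are invented for illustration.
    forecasts = np.array([2.1, 2.6, 1.8])   # bias-corrected model forecasts (assumed)
    weights   = np.array([0.5, 0.3, 0.2])   # posterior model probabilities (assumed)
    sigmas    = np.array([0.4, 0.6, 0.5])   # per-model spread (assumed)

    def bma_pdf(y):
        """Evaluate the BMA predictive pdf at y."""
        return np.sum(weights * norm.pdf(y, loc=forecasts, scale=sigmas))

    ys = np.linspace(0.0, 5.0, 501)
    pdf = np.array([bma_pdf(y) for y in ys])
    print("integral ~", np.trapz(pdf, ys))   # should be close to 1
    ```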

  18. External intermittency prediction using AMR solutions of RANS turbulence and transported PDF models

    NASA Astrophysics Data System (ADS)

    Olivieri, D. A.; Fairweather, M.; Falle, S. A. E. G.

    2011-12-01

    External intermittency in turbulent round jets is predicted using a Reynolds-averaged Navier-Stokes modelling approach coupled to solutions of the transported probability density function (pdf) equation for scalar variables. Solutions to the descriptive equations are obtained using a finite-volume method, combined with an adaptive mesh refinement algorithm, applied in both physical and compositional space. This method contrasts with conventional approaches to solving the transported pdf equation which generally employ Monte Carlo techniques. Intermittency-modified eddy viscosity and second-moment turbulence closures are used to accommodate the effects of intermittency on the flow field, with the influence of intermittency also included, through modifications to the mixing model, in the transported pdf equation. Predictions of the overall model are compared with experimental data on the velocity and scalar fields in a round jet, as well as against measurements of intermittency profiles and scalar pdfs in a number of flows, with good agreement obtained. For the cases considered, predictions based on the second-moment turbulence closure are clearly superior, although both turbulence models give realistic predictions of the bimodal scalar pdfs observed experimentally.

  19. Verification and Improvement of Flamelet Approach for Non-Premixed Flames

    NASA Technical Reports Server (NTRS)

    Zaitsev, S.; Buriko, Yu.; Guskov, O.; Kopchenov, V.; Lubimov, D.; Tshepin, S.; Volkov, D.

    1997-01-01

    Studies in the mathematical modeling of high-speed turbulent combustion have received renewed attention in recent years. A review of the fundamentals and approaches, with an extensive bibliography, was presented by Bray, Libby and Williams. In order to obtain accurate predictions for turbulent combustible flows, the effects of turbulent fluctuations on the chemical source terms should be taken into account. The averaging of chemical source terms requires the use of a probability density function (PDF) model. Two main approaches currently dominate high-speed combustion modeling. In the first approach, the PDF form is assumed based on the intuition of the modellers (see, for example, Spiegler et al.; Girimaji; Baurle et al.). The second way is much more elaborate and is based on the solution of an evolution equation for the PDF. This approach was proposed by S. Pope for incompressible flames. Recently, it was modified for the modeling of compressible flames in studies by Farschi; Hsu; Hsu, Raju, Norris; and Eifer, Kollman. But its realization in CFD is extremely expensive computationally due to the high dimensionality of the PDF evolution equation (Baurle, Hsu, Hassan).

  20. An Overview of the NCC Spray/Monte-Carlo-PDF Computations

    NASA Technical Reports Server (NTRS)

    Raju, M. S.; Liu, Nan-Suey (Technical Monitor)

    2000-01-01

    This paper advances the state-of-the-art in spray computations with some of our recent contributions involving scalar Monte Carlo PDF (Probability Density Function) methods, unstructured grids and parallel computing. It provides a complete overview of the scalar Monte Carlo PDF and Lagrangian spray computer codes developed for application with unstructured grids and parallel computing. Detailed comparisons for the case of a reacting non-swirling spray clearly highlight the important role that chemistry/turbulence interactions play in the modeling of reacting sprays. The results from the PDF and non-PDF methods were found to be markedly different, and the PDF solution is closer to the reported experimental data. The PDF computations predict that some of the combustion occurs in a predominantly premixed-flame environment and the rest in a predominantly diffusion-flame environment, whereas the non-PDF solution wrongly predicts that the combustion occurs in a vaporization-controlled regime. Near the premixed flame, the Monte Carlo particle temperature distribution shows two distinct peaks: one centered around the flame temperature and the other around the surrounding-gas temperature. Near the diffusion flame, the Monte Carlo particle temperature distribution shows a single peak. In both cases, the shape and strength of the computed PDFs are found to vary substantially depending on the proximity to the flame surface. The results bring to the fore some of the deficiencies associated with the use of assumed-shape PDF methods in spray computations. Finally, we demonstrate the computational viability of the present solution procedure for use in 3D combustor calculations by summarizing the results of a 3D test case with periodic boundary conditions. For the 3D case, the parallel performance of all three solvers (CFD, PDF, and spray) was found to be good when the computations were performed on a 24-processor SGI Origin workstation.

  1. Nonisotropic turbulence: A turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Liu, Kunlun

    2005-11-01

    The probability density function (PDF) and the two-point correlations of a flat-plate turbulent boundary layer subjected to a zero pressure gradient have been calculated by direct numerical simulation. It is known that the strong shear near the wall deforms the vortices and develops stretched coherent structures such as streaks and hairpins, which eventually cause the nonisotropy of wall shear flows. The PDF and the two-point correlations of isotropic flows have been studied for a long time; however, our knowledge of the influence of shear on the PDF and the two-point correlations is still very limited. This study investigates that influence using numerical simulation. Results are presented for a case with a Mach number of M=0.1 and a Reynolds number of 2000 based on displacement thickness. The results indicate that the PDF of the streamwise velocity is log-normal, the PDF of the wall-normal velocity is approximately Cauchy, and the PDF of the spanwise velocity is nearly Gaussian. The mean and variance of these PDFs vary with the distance from the wall. The two-point correlations are homogeneous in the spanwise direction, vary slightly in the streamwise direction, and change strongly in the wall-normal direction. Rww and Rvv can be represented as elliptic balls, and a well-chosen normalization renders them self-similar.

  2. Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-07-01

    Although the surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework in which the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdfs. The semi-analytical formula involves the product of a Gaussian with an integral term that can be evaluated using recent developments in TMVN probability calculations. The posterior mean and covariance can also be efficiently derived. I show that the maximum posterior (MAP) can be obtained using a non-negative least-squares algorithm for the single truncated case, or the bounded-variable least-squares algorithm for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. Numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is greatly reduced. Second, unlike the Bayesian MCMC-based approach, the marginal pdf, mean, variance or covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
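
    A small sketch of the MAP computation mentioned above, using SciPy's non-negative and bounded-variable least-squares solvers. The Green's function matrix, data and upper bound are synthetic stand-ins, and the Gaussian prior term is omitted for brevity (it could be appended as extra rows of an augmented system).

    ```python
    import numpy as np
    from scipy.optimize import nnls, lsq_linear

    # MAP of a linear Gaussian inverse problem with positivity (or bound)
    # constraints on slip. G, d and s_max are synthetic stand-ins.
    rng = np.random.default_rng(1)
    n_obs, n_slip = 40, 10
    G = rng.normal(size=(n_obs, n_slip))          # Green's functions (synthetic)
    true_slip = np.abs(rng.normal(size=n_slip))   # non-negative "true" slip
    d = G @ true_slip + 0.05 * rng.normal(size=n_obs)

    # Single truncation (slip >= 0): non-negative least squares
    slip_nnls, _ = nnls(G, d)

    # Double truncation (0 <= slip <= s_max): bounded-variable least squares
    s_max = 2.0                                   # assumed upper bound
    res = lsq_linear(G, d, bounds=(0.0, s_max))
    slip_bvls = res.x

    print(np.round(slip_nnls, 3))
    print(np.round(slip_bvls, 3))
    ```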

  3. A PDF projection method: A pressure algorithm for stand-alone transported PDFs

    NASA Astrophysics Data System (ADS)

    Ghorbani, Asghar; Steinhilber, Gerd; Markus, Detlev; Maas, Ulrich

    2015-03-01

    In this paper, a new formulation of the projection approach is introduced for stand-alone probability density function (PDF) methods. The method is suitable for applications in low-Mach number transient turbulent reacting flows. The method is based on a fractional step method in which first the advection-diffusion-reaction equations are modelled and solved within a particle-based PDF method to predict an intermediate velocity field. Then the mean velocity field is projected onto a space where the continuity for the mean velocity is satisfied. In this approach, a Poisson equation is solved on the Eulerian grid to obtain the mean pressure field. Then the mean pressure is interpolated at the location of each stochastic Lagrangian particle. The formulation of the Poisson equation avoids the time derivatives of the density (due to convection) as well as second-order spatial derivatives. This in turn eliminates the major sources of instability in the presence of stochastic noise that are inherent in particle-based PDF methods. The convergence of the algorithm (in the non-turbulent case) is investigated first by the method of manufactured solutions. Then the algorithm is applied to a one-dimensional turbulent premixed flame in order to assess the accuracy and convergence of the method in the case of turbulent combustion. As a part of this work, we also apply the algorithm to a more realistic flow, namely a transient turbulent reacting jet, in order to assess the performance of the method.

  4. The study of PDF turbulence models in combustion

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    In combustion computations, it is known that the predictions of chemical reaction rates are poor if conventional turbulence models are used. The probability density function (pdf) method seems to be the only alternative that uses local instantaneous values of the temperature, density, etc., in predicting chemical reaction rates, and thus is the only viable approach for more accurate turbulent combustion calculations. The fact that the pdf equation has a very large dimensionality renders finite difference schemes extremely demanding on computer memory and thus impractical. A logical alternative is the Monte Carlo scheme. Since CFD has a certain maturity as well as acceptance, a combined CFD and Monte Carlo scheme is more beneficial. Therefore, a scheme is chosen that uses a conventional CFD flow solver to calculate flow field properties such as velocity, pressure, etc., while the chemical reaction part is solved using a Monte Carlo scheme. The discharge of a heated turbulent plane jet into quiescent air was studied. Experimental data for this problem show that when the temperature difference between the jet and the surrounding air is small, the buoyancy effect can be neglected and the temperature can be treated as a passive scalar. The fact that jet flows have a self-similar solution lends convenience to the modeling study. Furthermore, the existence of experimental data for turbulent shear stress and temperature variance makes the case ideal for testing pdf models wherein these values can be directly evaluated.

  5. An Integrated Nonlinear Analysis library - (INA) for solar system plasma turbulence

    NASA Astrophysics Data System (ADS)

    Munteanu, Costel; Kovacs, Peter; Echim, Marius; Koppan, Andras

    2014-05-01

    We present an integrated software library dedicated to the analysis of time series recorded in space and adapted to investigate turbulence, intermittency and multifractals. The library is written in MATLAB and provides a graphical user interface (GUI) customized for the analysis of space physics data available online, such as: Coordinated Data Analysis Web (CDAWeb), Automated Multi Dataset Analysis system (AMDA), Planetary Science Archive (PSA), World Data Center Kyoto (WDC), Ulysses Final Archive (UFA) and Cluster Active Archive (CAA). Three main modules are already implemented in INA: the Power Spectral Density (PSD) Analysis, the Wavelet and Intermittency Analysis, and the Probability Density Function (PDF) Analysis. The layered structure of the software allows the user to easily switch between different modules/methods while retaining the same time interval for the analysis. The wavelet analysis module includes algorithms to compute and analyse the PSD, the Scalogram, the Local Intermittency Measure (LIM) and the Flatness parameter. The PDF analysis module includes algorithms for computing the PDFs over a range of scales and parameters fully customizable by the user; it also computes the Flatness parameter and enables fast comparison with standard PDF profiles such as, for instance, the Gaussian PDF. The library has already been tested on Cluster and Venus Express data, and we will show relevant examples. Research supported by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 313038/STORM, and a grant of the Romanian Ministry of National Education, CNCS UEFISCDI, project number PN-II-ID PCE-2012-4-0418.

  6. On the Five-Moment Hamburger Maximum Entropy Reconstruction

    NASA Astrophysics Data System (ADS)

    Summy, D. P.; Pullin, D. I.

    2018-05-01

    We consider the Maximum Entropy Reconstruction (MER) as a solution to the five-moment truncated Hamburger moment problem in one dimension. In the case of five monomial moment constraints, the probability density function (PDF) of the MER takes the form of the exponential of a quartic polynomial. This implies a possible bimodal structure in regions of moment space. An analytical model is developed for the MER PDF applicable near a known singular line in a centered, two-component, third- and fourth-order moment $(\mu_3, \mu_4)$ space, consistent with the general problem of five moments. The model consists of the superposition of a perturbed, centered Gaussian PDF and a small-amplitude packet of PDF-density, called the outlying moment packet (OMP), sitting far from the mean. Asymptotic solutions are obtained which predict the shape of the perturbed Gaussian and both the amplitude and position on the real line of the OMP. The asymptotic solutions show that the presence of the OMP gives rise to an MER solution that is singular along a line in $(\mu_3, \mu_4)$ space emanating from, but not including, the point representing a standard normal distribution, or thermodynamic equilibrium. We use this analysis of the OMP to develop a numerical regularization of the MER, creating a procedure we call the Hybrid MER (HMER). Compared with the MER, the HMER is a significant improvement in terms of robustness and efficiency while preserving accuracy in its prediction of other important distribution features, such as higher order moments.
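
    A short numerical sketch of the MER functional form: a pdf given by the exponential of a quartic polynomial, normalized by quadrature, with its first five moments recovered by integration. The polynomial coefficients below are illustrative and do not solve any particular moment problem.

    ```python
    import numpy as np

    # Maximum-entropy pdf under five monomial moment constraints:
    #   p(x) = exp(lambda0 + lambda1*x + ... + lambda4*x^4),  lambda4 < 0.
    # Coefficients are illustrative assumptions only.
    lam = np.array([0.0, 0.1, -0.5, 0.05, -0.02])   # lambda0 .. lambda4

    x = np.linspace(-10.0, 10.0, 20001)
    unnorm = np.exp(np.polyval(lam[::-1], x))       # polyval wants highest power first
    Z = np.trapz(unnorm, x)
    p = unnorm / Z

    moments = [np.trapz(x**k * p, x) for k in range(5)]
    print("moments mu_0..mu_4:", np.round(moments, 4))
    ```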

  7. Ensemble Kalman filtering in presence of inequality constraints

    NASA Astrophysics Data System (ADS)

    van Leeuwen, P. J.

    2009-04-01

    Kalman filtering in the presence of constraints is an active area of research. With the Gaussian assumption for the probability density functions, it is hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to, e.g., the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variable range is simply distributed equally over the pdf where the variables are allowed, as proposed by Shimada et al. (1998). However, a problem with this method is that the probability that, e.g., the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead, it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman filter is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
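
    A minimal sketch of this kind of prior for a non-negative variable: the Gaussian mass below zero is lumped into a delta at the truncation point, which is exactly what clipping Gaussian draws at zero produces. The mean and spread below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Prior for a non-negative variable: Gaussian mass below zero is collected
    # into a delta at the truncation point instead of being spread over x >= 0.
    mu, sigma = 0.2, 0.5            # illustrative prior mean / spread (assumed)
    w0 = norm.cdf(0.0, mu, sigma)   # probability lumped at x = 0 (delta weight)

    def prior_density(x):
        """Continuous part of the prior for x > 0 (the delta at 0 carries w0)."""
        return norm.pdf(x, mu, sigma)

    # Sampling from this prior: draw Gaussian, clip negative draws to zero.
    rng = np.random.default_rng(2)
    samples = np.maximum(rng.normal(mu, sigma, 100000), 0.0)
    print("P(x = 0):", w0, " empirical:", np.mean(samples == 0.0))
    ```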

  8. Optimal nonlinear codes for the perception of natural colours.

    PubMed

    von der Twer, T; MacLeod, D I

    2001-08-01

    We discuss how visual nonlinearity can be optimized for the precise representation of environmental inputs. Such optimization leads to neural signals with a compressively nonlinear input-output function whose gradient is matched to the cube root of the probability density function (PDF) of the environmental input values (and not to the PDF directly, as in histogram equalization). Comparisons between theory and psychophysical and electrophysiological data are roughly consistent with the idea that parvocellular (P) cells are optimized for the precise representation of colour: their contrast-response functions span a range appropriately matched to the environmental distribution of natural colours along each dimension of colour space. Thus P cell codes for colour may have been selected to minimize error in the perceptual estimation of stimulus parameters for natural colours. But magnocellular (M) cells have a much stronger than expected saturating nonlinearity; this supports the view that the function of M cells is mainly to detect boundaries rather than to specify contrast or lightness.
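
    A small numerical sketch of the stated rule: build the response function whose gradient is proportional to the cube root of the input PDF, and compare it with histogram equalization (gradient proportional to the PDF itself). A Gaussian stimulus distribution stands in here for natural colour statistics.

    ```python
    import numpy as np

    # Optimal compressive response: gradient proportional to p(s)**(1/3),
    # i.e. r(s) = c * integral of p**(1/3) ds. Gaussian input pdf assumed.
    s = np.linspace(-4.0, 4.0, 4001)
    p = np.exp(-0.5 * s**2) / np.sqrt(2.0 * np.pi)

    g = p ** (1.0 / 3.0)                  # response gradient (unnormalized)
    r = np.cumsum(g) * (s[1] - s[0])      # response function by integration
    r /= r[-1]                            # normalize output range to [0, 1]

    # Histogram equalization for comparison: gradient proportional to p itself
    r_eq = np.cumsum(p) * (s[1] - s[0])

    i1 = np.searchsorted(s, 1.0)
    print("response at s=1: cube-root code:", r[i1], " hist-eq:", r_eq[i1])
    ```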

  9. Improved Modeling of Finite-Rate Turbulent Combustion Processes in Research Combustors

    NASA Technical Reports Server (NTRS)

    VanOverbeke, Thomas J.

    1998-01-01

    The objective of this thesis is to further develop and test a stochastic model of turbulent combustion in recirculating flows. There is a requirement to increase the accuracy of multi-dimensional combustion predictions. As turbulence affects reaction rates, this interaction must be more accurately evaluated. In this work, a more physically correct way of handling the interaction of turbulence with combustion is further developed and tested. As turbulence involves randomness, stochastic modeling is used. Averaged values such as temperature and species concentration are found by integrating the probability density function (pdf) over the range of the scalar. The model in this work does not assume the pdf type, but solves for the evolution of the pdf using the Monte Carlo solution technique. The model is further developed by including a more robust reaction solver, using accurate thermodynamics, and using more accurate transport elements. The stochastic method is used with the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE). The SIMPLE method is used to solve for velocity, pressure, turbulent kinetic energy and dissipation, while the pdf solver solves for temperature and species concentration. Thus, the method is partially familiar to combustor engineers. The method is compared to benchmark experimental data and baseline calculations. The baseline method was tested on isothermal flows, evaporating sprays and combusting sprays. Pdf and baseline predictions were performed for three diffusion flames and one premixed flame. The pdf method predicted lower combustion rates than the baseline method, in agreement with the data, except for the premixed flame, where the baseline and stochastic predictions bounded the experimental data. The use of a continuous mixing model or a relax-to-mean mixing model had little effect on the prediction of average temperature. Two grids were used in a hydrogen diffusion flame simulation; grid density did not affect the predictions except for peak temperature and tangential velocity. The hybrid pdf method did take longer and required more memory, but it has a theoretical basis for extension to many reaction steps, which cannot be said of current turbulent combustion models.

  10. Light enpolarization by disordered media under partial polarized illumination: the role of cross-scattering coefficients.

    PubMed

    Zerrad, M; Soriano, G; Ghabbach, A; Amra, C

    2013-02-11

    We show how disordered media make it possible to increase the local degree of polarization (DOP) of an arbitrary partially polarized incident beam. The role of cross-scattering coefficients is emphasized, together with the probability density functions (PDF) of the scattering DOP. The average DOP of scattering is calculated versus the DOP of the incident illumination.

  11. Comparison of PDF and Moment Closure Methods in the Modeling of Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Norris, Andrew T.; Hsu, Andrew T.

    1994-01-01

    In modeling turbulent reactive flows, Probability Density Function (PDF) methods have an advantage over the more traditional moment closure schemes in that the PDF formulation treats the chemical reaction source terms exactly, while moment closure methods are required to model the mean reaction rate. The common model used is the laminar chemistry approximation, where the effects of turbulence on the reaction are assumed negligible. For flows with low turbulence levels and fast chemistry, the difference between the two methods can be expected to be small, but for flows with finite-rate chemistry and high turbulence levels, significant errors can be expected in the moment closure method. In this paper, the ability of the PDF method and the moment closure scheme to accurately model a turbulent reacting flow is tested. To accomplish this, both schemes were used to model a CO/H2/N2-air piloted diffusion flame near extinction. Identical thermochemistry, turbulence models, initial conditions and boundary conditions are employed to ensure that a consistent comparison can be made. The results of the two methods are compared to experimental data as well as to each other. The comparison reveals that the PDF method provides good agreement with the experimental data, while the moment closure scheme incorrectly shows a broad, laminar-like flame structure.

  12. Statistics of partially-polarized fields: beyond the Stokes vector and coherence matrix

    NASA Astrophysics Data System (ADS)

    Charnotskii, Mikhail

    2017-08-01

    Traditionally, partially polarized light is characterized by the four Stokes parameters. An equivalent description is also provided by the correlation tensor of the optical field. These statistics specify only the second moments of the complex amplitudes of the narrow-band two-dimensional electric field of the optical wave. The electric field vector of a random quasi-monochromatic wave is a nonstationary, oscillating, two-dimensional real random variable. We introduce a novel statistical description of these partially polarized waves: the Period-Averaged Probability Density Function (PA-PDF) of the field. The PA-PDF contains more information on the polarization state of the field than the Stokes vector. In particular, in addition to the conventional distinction between the polarized and depolarized components of the field, the PA-PDF makes it possible to separate the coherent and fluctuating components of the field. We present several model examples of fields with identical Stokes vectors and very distinct shapes of the PA-PDF. In the simplest case of a nonstationary, oscillating normal 2-D probability distribution of the real electric field and a stationary 4-D probability distribution of the complex amplitudes, the newly introduced PA-PDF is determined by 13 parameters that include the first moments and the covariance matrix of the quadrature components of the oscillating vector field.

  13. Persistent Deterioration of Functioning (PDF) and change in well-being in older persons.

    PubMed

    Jonker, Angèle A; Comijs, Hannie C; Knipscheer, Kees C; Deeg, Dorly J

    2008-10-01

    It is often assumed that aging is accompanied by diverse and constant functional and cognitive decline, and it is therefore surprising that the well-being of older persons does not appear to decline in the same way. This study investigates longitudinally whether well-being in older persons changes due to Persistent Deterioration of Functioning (PDF). Data were collected in the context of the Longitudinal Aging Study Amsterdam (LASA). The conditions of PDF are persistent decline in cognitive functioning, persistent decline in physical functioning, and an increase in chronic diseases. Measurements of well-being included life satisfaction, positive affect, and valuation of life. T-tests were used to analyse mean difference scores for well-being, and univariate and multivariate regression analyses were performed to examine changes in the three well-being outcomes in relation to PDF. Cross-sectional analyses showed significant differences and associations between the two PDF subgroups and non-PDF for well-being at T3. In longitudinal analyses, we found significant decreases in, and associations with, well-being over time in respondents fulfilling one PDF condition (mild PDF). For respondents fulfilling two or more PDF conditions (severe PDF), no significant longitudinal associations were found. Cognitive aspects of well-being (life satisfaction and valuation of life) and the affective element (positive affect) appear to be influenced negatively by mild PDF, whereas well-being does not seem to be diminished in persons with more severe PDF. This may be due to the ability to finally accept the inevitable situation of severe PDF.

  14. A hybrid probabilistic/spectral model of scalar mixing

    NASA Astrophysics Data System (ADS)

    Vaithianathan, T.; Collins, Lance

    2002-11-01

    In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentrations are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling, hence the reliability of the model in predicting even the closed chemical source terms rests heavily on the mixing model. We present a new closure for mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent ``transfer'' while scalar exchanges between particles represent ``mixing.'' The equations governing the scalar concentrations at each wavenumber are derived from the eddy damped quasi-normal Markovian (EDQNM) theory. The model correctly predicts the evolution of an initial double-delta-function PDF into a Gaussian, as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts that the scalar gradient distribution (which is available in this representation) approaches log-normal at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.
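
    For intuition about the double-delta-to-bell-shape relaxation mentioned above, here is a particle Monte Carlo sketch of a classical pairwise mixing model (a modified Curl-type model), not the spectral EDQNM closure of the abstract itself; all counts are illustrative.

    ```python
    import numpy as np

    # Pairwise mixing sketch (modified Curl): random particle pairs move
    # partway toward their common mean, relaxing an initial double-delta
    # scalar PDF toward a bell-shaped one while conserving the mean.
    rng = np.random.default_rng(7)
    n, n_events = 1000, 50000
    phi = np.where(rng.random(n) < 0.5, 0.0, 1.0)   # double-delta initial PDF
    print("initial variance:", phi.var())

    for _ in range(n_events):
        i, j = rng.integers(0, n, 2)                # random pair (i == j is a no-op)
        a = rng.random()                            # random mixing extent in [0, 1)
        m = 0.5 * (phi[i] + phi[j])
        phi[i] += a * (m - phi[i])
        phi[j] += a * (m - phi[j])

    hist, edges = np.histogram(phi, bins=40, range=(0, 1), density=True)
    print("final variance:", phi.var())             # decays as mixing proceeds
    ```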

  15. Statistics of multi-look AIRSAR imagery: A comparison of theory with measurements

    NASA Technical Reports Server (NTRS)

    Lee, J. S.; Hoppel, K. W.; Mango, S. A.

    1993-01-01

    The intensity and amplitude statistics of SAR images, such as L-band HH for SEASAT and SIR-B, and C-band VV for ERS-1, have been extensively investigated for various terrain, ground cover and ocean surfaces. Less well known are the statistics between multiple channels of polarimetric or interferometric SARs, especially for multi-look processed data. In this paper, we investigate the probability density functions (PDFs) of the phase differences, the magnitudes of complex products and the amplitude ratios between polarization channels (i.e. HH, HV, and VV) using 1-look and 4-look AIRSAR polarimetric data. Measured histograms are compared with theoretical PDFs which were recently derived based on a complex Gaussian model.

  16. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.

    PubMed

    Shalymov, Dmitry S; Fradkov, Alexander L

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originated in the control theory. The maximum of the Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.
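
    A minimal numerical sketch of the quantity at the heart of this work: the Rényi entropy of a pdf, H_q = ln(∫ p(x)^q dx)/(1 - q), evaluated here for a Gaussian on a grid.

    ```python
    import numpy as np

    # Renyi entropy on a grid: H_q = log( integral p(x)^q dx ) / (1 - q), q != 1.
    def renyi_entropy(p, x, q):
        """Renyi entropy of order q for a density p sampled on grid x."""
        return np.log(np.trapz(p ** q, x)) / (1.0 - q)

    x = np.linspace(-8.0, 8.0, 8001)
    gauss = np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)   # standard normal pdf
    for q in (0.5, 2.0, 3.0):
        print("q =", q, " H_q =", renyi_entropy(gauss, x, q))
    # As q -> 1 this approaches the Shannon entropy, 0.5*log(2*pi*e) ~ 1.4189.
    ```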

  17. Reactions and Transport: Diffusion, Inertia, and Subdiffusion

    NASA Astrophysics Data System (ADS)

    Méndez, Vicenç; Fedotov, Sergei; Horsthemke, Werner

    Particles, such as molecules, atoms, or ions, and individuals, such as cells or animals, move in space driven by various forces or cues. In particular, particles or individuals can move randomly, undergo velocity jump processes or spatial jump processes [333]. The steps of the random walk can be independent or correlated, unbiased or biased. The probability density function (PDF) for the jump length can decay rapidly or exhibit a heavy tail. Similarly, the PDF for the waiting time between successive jumps can decay rapidly or exhibit a heavy tail. We will discuss these various possibilities in detail in Chap. 3. Below we provide an introduction to three transport processes: standard diffusion, transport with inertia, and anomalous diffusion.

  18. Eulerian Mapping Closure Approach for Probability Density Function of Concentration in Shear Flows

    NASA Technical Reports Server (NTRS)

    He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The Eulerian mapping closure approach is developed for uncertainty propagation in computational fluid mechanics. The approach is used to study the Probability Density Function (PDF) for the concentration of species advected by a random shear flow. An analytical argument shows that fluctuation of the concentration field at one point in space is non-Gaussian and exhibits stretched exponential form. An Eulerian mapping approach provides an appropriate approximation to both convection and diffusion terms and leads to a closed mapping equation. The results obtained describe the evolution of the initial Gaussian field, which is in agreement with direct numerical simulations.

  19. Application of PDF methods to compressible turbulent flows

    NASA Astrophysics Data System (ADS)

    Delarue, B. J.; Pope, S. B.

    1997-09-01

    A particle method applying the probability density function (PDF) approach to turbulent compressible flows is presented. The method is applied to several turbulent flows, including the compressible mixing layer, and good agreement is obtained with experimental data. The PDF equation is solved using a Lagrangian/Monte Carlo method. To accurately account for the effects of compressibility on the flow, the velocity PDF formulation is extended to include thermodynamic variables such as the pressure and the internal energy. The mean pressure, the determination of which has been the object of active research over the last few years, is obtained directly from the particle properties, so it is not necessary to link the PDF solver with a finite-volume type solver. The stochastic differential equations (SDE) which model the evolution of particle properties are based on existing second-order closures for compressible turbulence, limited in application to low turbulent Mach number flows. Tests are conducted in decaying isotropic turbulence to compare the performance of the PDF method with the Reynolds-stress closures from which it is derived, and in homogeneous shear flows, where comparisons with direct numerical simulation (DNS) data are made. The model is then applied to the plane compressible mixing layer, reproducing the well-known decrease in the spreading rate with increasing compressibility. It must be emphasized that the goal of this paper is not so much to assess the performance of models of compressibility effects as to present an innovative and consistent PDF formulation designed for turbulent inhomogeneous compressible flows, with the aim of extending it further to deal with supersonic reacting flows.

  1. A model for AGN variability on multiple time-scales

    NASA Astrophysics Data System (ADS)

    Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.

    2018-05-01

    We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/LEdd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/LEdd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.
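
    A minimal sketch of generating a light curve with a prescribed PSD and (approximate) marginal PDF: spectral shaping of random phases followed by a rank remapping onto draws from the target distribution. The 1/f^beta PSD and lognormal PDF are illustrative choices; a single remapping pass slightly distorts the PSD, and iterating (as in Schreiber & Schmitz 1996) tightens both constraints.

    ```python
    import numpy as np

    # Light curve with PSD ~ 1/f^beta and an (approximately) lognormal PDF:
    # spectral shaping, then rank remapping onto target-distribution draws.
    rng = np.random.default_rng(3)
    n, beta = 4096, 2.0                      # length and PSD slope (assumed)

    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)     # |X(f)| ~ f^(-beta/2) => PSD ~ f^-beta
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    x = np.fft.irfft(amp * np.exp(1j * phases), n=n)   # series with target PSD

    target = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # target marginal draws
    x_ranked = np.empty(n)
    x_ranked[np.argsort(x)] = np.sort(target)            # rank remap onto target PDF
    print("min:", x_ranked.min(), " mean:", x_ranked.mean())
    ```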

  2. Offline modeling for product quality prediction of mineral processing using modeling error PDF shaping and entropy minimization.

    PubMed

    Ding, Jinliang; Chai, Tianyou; Wang, Hong

    2011-03-01

    This paper presents a novel offline modeling method for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, model parameter selection is transformed into shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea is used to deal with system modeling, where the key idea is to tune the model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. Experimental results using real plant data and a comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.

  3. Natural analogue study of CO2 storage monitoring using probability statistics of CO2-rich groundwater chemistry

    NASA Astrophysics Data System (ADS)

    Kim, K. K.; Hamm, S. Y.; Kim, S. O.; Yun, S. T.

    2016-12-01

    For confronting global climate change, carbon capture and storage (CCS), in which greenhouse gases such as CO2 emitted from stacks are captured and then isolated in underground geologic storage, is one of several useful strategies. CO2-rich groundwater could be produced by CO2 dissolution into fresh groundwater around a CO2 storage site. As a consequence, natural analogue studies related to geologic storage provide insights into future geologic CO2 storage sites, as well as crucial information on the safety and security of geologic sequestration, the long-term impact of CO2 storage on the environment, and the field operation and monitoring that could be implemented for geologic sequestration. In this study, we developed a CO2 leakage monitoring method using probability density functions (PDF) by characterizing naturally occurring CO2-rich groundwater. We used existing data on CO2-rich groundwaters in different geological regions (Gangwondo, Gyeongsangdo, and Choongchungdo provinces) of South Korea. Using the PDF method and QI (quantitative index), we performed qualitative and quantitative comparisons among local areas and chemical constituents. Geochemical properties of groundwater with/without CO2 in the form of PDFs proved that pH, EC, TDS, HCO3-, Ca2+, Mg2+, and SiO2 are effective monitoring parameters for carbonated groundwater in the case of CO2 leakage from an underground storage site. KEY WORDS: CO2-rich groundwater, CO2 storage site, monitoring parameter, natural analogue, probability density function (PDF), QI (quantitative index). Acknowledgement: This study was supported by the "Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2013R1A1A2058186)" and the "R&D Project on Environmental Management of Geologic CO2 Storage" from KEITI (Project number: 2014001810003).

  4. Modeling service time reliability in urban ferry system

    NASA Astrophysics Data System (ADS)

    Chen, Yifan; Luo, Sida; Zhang, Mengke; Shen, Hanxia; Xin, Feifei; Luo, Yujie

    2017-09-01

    The urban ferry system can carry large numbers of travelers, which may alleviate the pressure on road traffic. As an indicator of its service quality, service time reliability (STR) plays an essential part in attracting travelers to the ferry system. A wide array of studies have been conducted to analyze the STR of land transportation, but the STR of ferry systems has received little attention in the transportation literature. In this study, a model was established to obtain the STR of urban ferry systems. First, the probability density function (PDF) of the service time provided by ferry systems was constructed. Considering the deficiencies of queuing theory, this PDF was determined by Bayes' theorem. Then, to validate the function, the results of the proposed model were compared with those of a Monte Carlo simulation. With the PDF, the reliability could be determined mathematically by integration. Results showed how factors including the frequency, capacity, time schedule and ferry waiting time affect the STR under different degrees of congestion in ferry systems. Based on these results, some strategies for improving the STR were proposed. These findings are of great significance for increasing the share of ferries among urban transport modes.
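
    A minimal sketch of the STR computation once a service-time pdf is in hand: the reliability is the integral of the pdf up to an acceptable-time threshold. A gamma pdf with invented parameters stands in for the Bayes-derived pdf of the paper.

    ```python
    import numpy as np
    from scipy.stats import gamma
    from scipy.integrate import quad

    # Service time reliability: probability that the realized service time
    # does not exceed a tolerance threshold, STR = integral_0^T f(t) dt.
    shape_k, scale_theta = 4.0, 3.0   # assumed pdf parameters (minutes)
    T = 20.0                          # acceptable service-time threshold (assumed)

    str_quad, _ = quad(lambda t: gamma.pdf(t, shape_k, scale=scale_theta), 0.0, T)
    print("STR =", str_quad,
          "(cdf check:", gamma.cdf(T, shape_k, scale=scale_theta), ")")
    ```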

  5. A Sampling-Based Bayesian Approach for Cooperative Multiagent Online Search With Resource Constraints.

    PubMed

    Xiao, Hu; Cui, Rongxin; Xu, Demin

    2018-06-01

    This paper presents a cooperative multiagent search algorithm to solve the problem of searching for a target on a 2-D plane under multiple constraints. A Bayesian framework is used to update the local probability density functions (PDFs) of the target when the agents obtain observation information. To obtain the global PDF used for decision making, a sampling-based logarithmic opinion pool algorithm is proposed to fuse the local PDFs, and a particle sampling approach is used to represent the continuous PDF. Then the Gaussian mixture model (GMM) is applied to reconstitute the global PDF from the particles, and a weighted expectation-maximization algorithm is presented to estimate the parameters of the GMM. Furthermore, we propose an optimization objective which aims to guide agents to find the target with less resource consumption while simultaneously keeping the resource consumption of the agents balanced. To this end, a utility-function-based optimization problem is put forward and solved by a gradient-based approach. Several contrastive simulations demonstrate that, compared with other existing approaches, the proposed one uses fewer overall resources and shows better performance in balancing resource consumption.
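
    A small sketch of the fusion step: a logarithmic opinion pool (weighted geometric mean of local pdfs) evaluated on particles, followed by a GMM fit to reconstitute the global PDF. The local Gaussians, pool weights and particle counts are invented, and the paper's weighted EM step is replaced here by scikit-learn's standard GMM fit.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal as mvn
    from sklearn.mixture import GaussianMixture

    # Logarithmic opinion pool: fused density proportional to the weighted
    # geometric mean of the local agent densities, represented on particles.
    rng = np.random.default_rng(4)
    locals_ = [mvn(mean=[0.0, 0.0], cov=np.eye(2)),          # agent 1 (assumed)
               mvn(mean=[1.0, 0.5], cov=0.5 * np.eye(2))]    # agent 2 (assumed)
    alphas = [0.6, 0.4]                                      # pool weights, sum to 1

    particles = rng.uniform(-4.0, 5.0, size=(20000, 2))
    log_w = sum(a * p.logpdf(particles) for a, p in zip(alphas, locals_))
    w = np.exp(log_w - log_w.max()); w /= w.sum()

    # Resample and reconstitute the global pdf with a Gaussian mixture model
    idx = rng.choice(len(particles), size=5000, p=w)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(particles[idx])
    print(gmm.means_)
    ```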

  6. Probability density function of a puff dispersing from the wall of a turbulent channel

    NASA Astrophysics Data System (ADS)

    Nguyen, Quoc; Papavassiliou, Dimitrios

    2015-11-01

    The study of the dispersion of passive contaminants in turbulence has proved helpful in understanding fundamental heat and mass transfer phenomena. Many simulation and experimental studies have been carried out to locate and track the motion of scalar markers in a flow. One method is to combine Direct Numerical Simulation (DNS) and Lagrangian Scalar Tracking (LST) to record the locations of markers. While this has proved useful, the high computational cost remains a concern. In this study, we develop a model that can reproduce the results obtained by DNS and LST for turbulent flow. Puffs of markers with different Schmidt numbers were released into a flow field at a friction Reynolds number of 150. The point of release was at the channel wall, so that both diffusion and convection contribute to the puff dispersion pattern, defining different stages of dispersion. Based on outputs from DNS and LST, we seek the most suitable and feasible probability density function (PDF) to represent the distribution of markers in the flow field. Such a PDF would play a significant role in predicting heat and mass transfer in wall turbulence, and would prove helpful where DNS and LST are not available.

  7. Understanding star formation in molecular clouds. II. Signatures of gravitational collapse of IRDCs

    NASA Astrophysics Data System (ADS)

    Schneider, N.; Csengeri, T.; Klessen, R. S.; Tremblin, P.; Ossenkopf, V.; Peretto, N.; Simon, R.; Bontemps, S.; Federrath, C.

    2015-06-01

    We analyse column density and temperature maps derived from Herschel dust continuum observations of a sample of prominent, massive infrared dark clouds (IRDCs), i.e. G11.11-0.12, G18.82-0.28, G28.37+0.07, and G28.53-0.25. We disentangle the velocity structure of the clouds using 13CO 1→0 and 12CO 3→2 data, showing that these IRDCs are the densest regions in massive giant molecular clouds (GMCs) and not isolated features. The probability distribution function (PDF) of column densities for all clouds has a power-law form over all (high) column densities, regardless of the evolutionary stage of the cloud: G11.11-0.12, G18.82-0.28, and G28.37+0.07 contain (proto)-stars, while G28.53-0.25 shows no signs of star formation. This is in contrast to the purely log-normal PDFs reported for near- and/or mid-IR extinction maps. We only find a log-normal distribution for lower column densities if we construct PDFs from column density maps of the whole GMC in which each IRDC is embedded. By comparing the PDF slope and the radial column density profile of three of our clouds, we attribute the power law to the effect of large-scale gravitational collapse and to local free-fall collapse of pre- and protostellar cores at the highest column densities. A significant impact on the cloud properties from radiative feedback is unlikely because the clouds are mostly devoid of star formation. Independently of the PDF analysis, we find infall signatures in the spectral profiles of 12CO for G28.37+0.07 and G11.11-0.12, supporting the scenario of gravitational collapse. Our results are in line with earlier interpretations that see massive IRDCs as the densest regions within GMCs, which may be the progenitors of massive stars or clusters. At least some of the IRDCs are probably the same features as ridges (high column density regions with N > 10^23 cm-2 over small areas), which were defined for nearby IR-bright GMCs. Because IRDCs are confined to only the densest (gravity-dominated) cloud regions, the PDF constructed from this kind of clipped image does not represent the (turbulence-dominated) low column density regime of the cloud. The column density maps (FITS files) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/578/A29
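
    A minimal sketch of the N-PDF analysis described above: histogram the logarithmic column density η = ln(N/⟨N⟩) and fit the slope of the high-density power-law tail. The data are synthetic (a lognormal body plus a Pareto tail), and the tail cut is an arbitrary assumption.

    ```python
    import numpy as np

    # Column-density PDF: histogram of eta = ln(N/<N>) and a least-squares
    # slope for the high-density tail. Synthetic lognormal + Pareto data.
    rng = np.random.default_rng(5)
    N_cols = np.concatenate([rng.lognormal(0.0, 0.4, 90000),        # turbulent body
                             (rng.pareto(2.0, 10000) + 1.0) * 2.0]) # collapse tail

    eta = np.log(N_cols / N_cols.mean())
    hist, edges = np.histogram(eta, bins=80, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])

    tail = (centers > 1.0) & (hist > 0)          # fit only the high-eta tail
    slope, _ = np.polyfit(centers[tail], np.log(hist[tail]), 1)
    print("power-law slope s in p(eta) ~ exp(s*eta):", slope)
    ```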

  8. Distinguishing dark matter from unresolved point sources in the Inner Galaxy with photon statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Samuel K.; Lisanti, Mariangela; Safdi, Benjamin R., E-mail: samuelkl@princeton.edu, E-mail: mlisanti@princeton.edu, E-mail: bsafdi@princeton.edu

    2015-05-01

    Data from the Fermi Large Area Telescope suggest that there is an extended excess of GeV gamma-ray photons in the Inner Galaxy. Identifying potential astrophysical sources that contribute to this excess is an important step in verifying whether the signal originates from annihilating dark matter. In this paper, we focus on the potential contribution of unresolved point sources, such as millisecond pulsars (MSPs). We propose that the statistics of the photons—in particular, the flux probability density function (PDF) of the photon counts below the point-source detection threshold—can potentially distinguish between the dark-matter and point-source interpretations. We calculate the flux PDF via the method of generating functions for these two models of the excess. Working in the framework of Bayesian model comparison, we then demonstrate that the flux PDF can potentially provide evidence for an unresolved MSP-like point-source population.

  9. Efficient and robust computation of PDF features from diffusion MR signal.

    PubMed

    Assemlal, Haz-Edine; Tschumperlé, David; Brun, Luc

    2009-10-01

    We present a method for the estimation of various features of the tissue micro-architecture using diffusion magnetic resonance imaging. The considered features are derived from the displacement probability density function (PDF). The estimation is based on two steps: first, the approximation of the signal by a series expansion made of Gaussian-Laguerre and spherical harmonic functions, followed by a projection onto a finite-dimensional space. In addition, we tackle the problem of robustness to the Rician noise that corrupts in vivo acquisitions. Our feature estimation is expressed as a variational minimization process, leading to a variational framework which is robust to noise. This approach is very flexible regarding the number of samples and enables the computation of a large set of features of the local tissue structure. We demonstrate the effectiveness of the method with results on both a synthetic phantom and real MR datasets acquired in a clinical time frame.

  10. Principle of maximum entropy for reliability analysis in the design of machine components

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.

  11. The Superstatistical Nature and Interoccurrence Time of Atmospheric Mercury Concentration Fluctuations

    NASA Astrophysics Data System (ADS)

    Carbone, F.; Bruno, A. G.; Naccarato, A.; De Simone, F.; Gencarelli, C. N.; Sprovieri, F.; Hedgecock, I. M.; Landis, M. S.; Skov, H.; Pfaffhuber, K. A.; Read, K. A.; Martin, L.; Angot, H.; Dommergue, A.; Magand, O.; Pirrone, N.

    2018-01-01

    The probability density function (PDF) of the time intervals between subsequent extreme events in atmospheric Hg0 concentration data series from different latitudes has been investigated. The Hg0 dynamics possess a long-term-memory autocorrelation function. Above a fixed threshold Q in the data, the PDFs of the interoccurrence times of the Hg0 data are well described by a Tsallis q-exponential function. This PDF behavior has been explained in the framework of superstatistics, where the competition between multiple mesoscopic processes affects the macroscopic dynamics. An extensive parameter μ, encompassing all possible fluctuations related to mesoscopic phenomena, has been identified. It follows a χ2 distribution, indicative of the superstatistical nature of the overall process. Shuffling the data series destroys the long-term memory, the distributions become independent of Q, and the PDFs collapse onto the same exponential distribution. The possible central role of atmospheric turbulence in extreme events in the Hg0 data is highlighted.
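
    A small sketch of fitting a Tsallis q-exponential, P(τ) = A[1 + (q-1)τ/τ0]^{1/(1-q)}, to a histogram of interoccurrence times. The data here are synthetic heavy-tailed draws, not Hg0 measurements, and the bin choices are arbitrary.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Tsallis q-exponential for interoccurrence times:
    #   P(tau) = A * (1 + (q - 1) * tau / tau0) ** (1 / (1 - q)),  q > 1.
    def q_exp(tau, A, q, tau0):
        return A * (1.0 + (q - 1.0) * tau / tau0) ** (1.0 / (1.0 - q))

    # Synthetic heavy-tailed interoccurrence times (Pareto stand-in)
    rng = np.random.default_rng(6)
    taus = rng.pareto(1.5, 5000) + 1.0
    hist, edges = np.histogram(taus, bins=np.logspace(0.0, 2.0, 30), density=True)
    centers = np.sqrt(edges[1:] * edges[:-1])     # geometric bin centers
    mask = hist > 0

    popt, _ = curve_fit(q_exp, centers[mask], hist[mask], p0=(1.0, 1.5, 1.0),
                        bounds=([0.0, 1.01, 0.01], [np.inf, 3.0, 100.0]))
    A_fit, q_fit, tau0_fit = popt
    print("fitted A, q, tau0:", A_fit, q_fit, tau0_fit)
    ```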

  12. Q-Space Truncation and Sampling in Diffusion Spectrum Imaging

    PubMed Central

    Tian, Qiyuan; Rokem, Ariel; Folkerth, Rebecca D.; Nummenmaa, Aapo; Fan, Qiuyun; Edlow, Brian L.; McNab, Jennifer A.

    2015-01-01

    Purpose: To characterize the effects of q-space truncation and sampling on the spin-displacement probability density function (PDF) in diffusion spectrum imaging (DSI). Methods: DSI data were acquired using the MGH-USC connectome scanner (Gmax = 300 mT/m) with bmax = 30,000 s/mm2 on 17×17×17, 15×15×15 and 11×11×11 grids in ex vivo human brains, and with bmax = 10,000 s/mm2 on an 11×11×11 grid in vivo. An additional in vivo scan using bmax = 7,000 s/mm2 and an 11×11×11 grid was performed with a derated gradient strength of 40 mT/m. PDFs and orientation distribution functions (ODFs) were reconstructed with different q-space filtering and PDF integration lengths, and from data down-sampled by factors of two and three. Results: Both ex vivo and in vivo data showed Gibbs ringing in the PDFs, which becomes the main source of artifact in the subsequently reconstructed ODFs. For down-sampled data, the PDFs interfere with their first replicas or the ringing thereof, leading to obscured orientations in the ODFs. Conclusion: The minimum required q-space sampling density corresponds to a field of view approximately equal to twice the mean displacement distance (MDD) of the tissue. The 11×11×11 grid is suitable for both ex vivo and in vivo DSI experiments. To minimize the effects of Gibbs ringing, ODFs should be reconstructed from unfiltered q-space data with the integration length over the PDF constrained to around the MDD. PMID:26762670
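
    A 1-D numerical sketch of the truncation effect discussed above: reconstructing the displacement PDF as a Fourier sum over a finite set of q-space samples produces negative Gibbs side lobes. The grid spacing and signal width are illustrative assumptions.

    ```python
    import numpy as np

    # q-space truncation in 1-D: the displacement PDF is the Fourier transform
    # of the diffusion signal, so a hard cutoff at q_max convolves the true
    # PDF with a sinc and produces Gibbs ringing.
    dq = 0.1                                   # q-space sample spacing (assumed)
    q = dq * np.arange(-8, 9)                  # 17 samples, cf. the 17^3 grid
    signal = np.exp(-(q / 0.6) ** 2)           # Gaussian diffusion signal (assumed width)

    r = np.linspace(-5.0, 5.0, 1001)           # displacement axis
    # Direct Fourier sum over the truncated samples (real, even signal)
    pdf = np.array([np.sum(signal * np.cos(2.0 * np.pi * q * ri)) for ri in r])
    pdf /= np.trapz(pdf, r)

    # Negative side lobes are the Gibbs ringing caused by the truncation
    print("most negative lobe:", pdf.min())
    ```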

  13. A novel material detection algorithm based on 2D GMM-based power density function and image detail addition scheme in dual energy X-ray images.

    PubMed

    Pourghassem, Hossein

    2012-01-01

    Material detection is a vital need in dual energy X-ray luggage inspection systems used for security screening at airports and other strategic sites. In this paper, a novel material detection algorithm based on trainable statistical models using a two-dimensional power density function (PDF) of three material categories in dual energy X-ray images is proposed. In this algorithm, the PDF of each material category, as a statistical model, is estimated from the transmission measurement values of the low- and high-energy X-ray images by Gaussian Mixture Models (GMM). The material label of each object pixel is determined from the probability of its low- and high-energy transmission values under the PDFs of the three material categories (metallic, organic, and mixed materials). The performance of the material detection algorithm is improved by a maximum voting scheme in an image neighborhood as a post-processing stage. As pre-processing, the high- and low-energy X-ray images are enhanced by background-removal and denoising stages. To improve the discrimination capability of the proposed material detection algorithm, the details of the low- and high-energy X-ray images are added to a constructed color image that uses three colors (orange, blue, and green) to represent the organic, metallic, and mixed materials. The proposed algorithm is evaluated on real images captured from a commercial dual energy X-ray luggage inspection system. The obtained results show that the proposed algorithm is effective in detecting metallic, organic, and mixed materials with acceptable accuracy.
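
    A minimal sketch of the per-pixel classification step, assuming scikit-learn is acceptable: one GMM is fitted per material class on (low, high) transmission pairs, and each pixel takes the label of the class with the highest mixture likelihood. The training clusters and component count are invented placeholders, and the paper's voting and enhancement stages are omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical training data: (low, high) energy transmission pairs per class.
rng = np.random.default_rng(0)
train = {
    "organic":  rng.normal([0.7, 0.5], 0.05, (500, 2)),
    "metallic": rng.normal([0.2, 0.3], 0.05, (500, 2)),
    "mixed":    rng.normal([0.45, 0.4], 0.05, (500, 2)),
}
# One GMM per material category; the component count is a guess.
models = {name: GaussianMixture(n_components=3, random_state=0).fit(x)
          for name, x in train.items()}

pixels = rng.uniform(0.1, 0.8, (1000, 2))        # (low, high) values per pixel
scores = np.column_stack([m.score_samples(pixels) for m in models.values()])
labels = np.array(list(models))[scores.argmax(axis=1)]  # most likely material
```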

  14. A multivariate quadrature based moment method for LES based modeling of supersonic combustion

    NASA Astrophysics Data System (ADS)

    Donde, Pratik; Koo, Heeseok; Raman, Venkat

    2012-07-01

    The transported probability density function (PDF) approach is a powerful technique for large eddy simulation (LES) based modeling of scramjet combustors. In this approach, a high-dimensional transport equation for the joint composition-enthalpy PDF needs to be solved. Quadrature based approaches provide deterministic Eulerian methods for solving the joint-PDF transport equation. In this work, it is first demonstrated that the numerical errors associated with LES require special care in the development of PDF solution algorithms. The direct quadrature method of moments (DQMOM) is one quadrature-based approach developed for supersonic combustion modeling. This approach is shown to generate inconsistent evolution of the scalar moments. Further, gradient-based source terms that appear in the DQMOM transport equations are severely underpredicted in LES leading to artificial mixing of fuel and oxidizer. To overcome these numerical issues, a semi-discrete quadrature method of moments (SeQMOM) is formulated. The performance of the new technique is compared with the DQMOM approach in canonical flow configurations as well as a three-dimensional supersonic cavity stabilized flame configuration. The SeQMOM approach is shown to predict subfilter statistics accurately compared to the DQMOM approach.
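
    The core operation shared by quadrature-based closures such as DQMOM and SeQMOM is inverting a short moment set into quadrature weights and abscissas. A minimal two-node version of that inversion (not the authors' full transported-PDF scheme) can be written as follows.

```python
import numpy as np

def two_node_quadrature(m):
    """Invert raw moments m0..m3 into a 2-node quadrature {(w_i, x_i)}.

    The abscissas are the roots of the degree-2 monic polynomial orthogonal
    to 1 and x with respect to the moment set; the weights then follow from
    a Vandermonde solve.  This is the inversion step underlying QMOM-type
    closures.
    """
    m0, m1, m2, m3 = m
    A = np.array([[m0, m1], [m1, m2]])
    c0, c1 = np.linalg.solve(A, -np.array([m2, m3]))   # x^2 + c1 x + c0
    abscissas = np.roots([1.0, c1, c0])
    weights = np.linalg.solve(np.vander(abscissas, 2, increasing=True).T,
                              [m0, m1])
    return weights, abscissas

# Moments of an equal-weight mixture of points at x = 1 and x = 3:
w, x = two_node_quadrature([1.0, 2.0, 5.0, 14.0])
print(w, x)   # -> weights ~ [0.5, 0.5], abscissas ~ [3, 1]
```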

  15. Bayesian assessment of uncertainty in aerosol size distributions and index of refraction retrieved from multiwavelength lidar measurements.

    PubMed

    Herman, Benjamin R; Gross, Barry; Moshary, Fred; Ahmed, Samir

    2008-04-01

    We investigate the assessment of uncertainty in the inference of aerosol size distributions from backscatter and extinction measurements that can be obtained from a modern elastic/Raman lidar system with a Nd:YAG laser transmitter. To calculate the uncertainty, an analytic formula for the correlated probability density function (PDF) describing the error for an optical coefficient ratio is derived based on a normally distributed fractional error in the optical coefficients. Assuming a monomodal lognormal particle size distribution of spherical, homogeneous particles with a known index of refraction, we compare the assessment of uncertainty using a more conventional forward Monte Carlo method with that obtained from a Bayesian posterior PDF assuming a uniform prior PDF and show that substantial differences between the two methods exist. In addition, we use the posterior PDF formalism, which was extended to include an unknown refractive index, to find credible sets for a variety of optical measurement scenarios. We find the uncertainty is greatly reduced with the addition of suitable extinction measurements in contrast to the inclusion of extra backscatter coefficients, which we show to have a minimal effect and strengthens similar observations based on numerical regularization methods.

  16. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, P.; Frankel, S. H.; Adumitroaie, V.; Sabini, G.; Madnia, C. K.

    1993-01-01

    The primary objective of this research is to extend current capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows. Our efforts in the first two years of this research have been concentrated on a priori investigations of single-point Probability Density Function (PDF) methods for providing subgrid closures in reacting turbulent flows. In the efforts initiated in the third year, our primary focus has been on performing actual LES by means of PDF methods. The approach is based on assumed PDF methods and we have performed extensive analysis of turbulent reacting flows by means of LES. This includes simulations of both three-dimensional (3D) isotropic compressible flows and two-dimensional reacting planar mixing layers. In addition to these LES analyses, some work is in progress to assess the extent of validity of our assumed PDF methods. This assessment is done by making detailed comparisons with recent laboratory data in predicting the rate of reactant conversion in parallel reacting shear flows. This report provides a summary of our achievements for the first six months of the third year of this program.

  17. Comparisons of Lagrangian and Eulerian PDF methods in simulations of non-premixed turbulent jet flames with moderate-to-strong turbulence-chemistry interactions

    NASA Astrophysics Data System (ADS)

    Jaishree, J.; Haworth, D. C.

    2012-06-01

    Transported probability density function (PDF) methods have been applied widely and effectively for modelling turbulent reacting flows. In most applications of PDF methods to date, Lagrangian particle Monte Carlo algorithms have been used to solve a modelled PDF transport equation. However, Lagrangian particle PDF methods are computationally intensive and are not readily integrated into conventional Eulerian computational fluid dynamics (CFD) codes. Eulerian field PDF methods have been proposed as an alternative. Here a systematic comparison is performed among three methods for solving the same underlying modelled composition PDF transport equation: a consistent hybrid Lagrangian particle/Eulerian mesh (LPEM) method, a stochastic Eulerian field (SEF) method and a deterministic Eulerian field method with a direct-quadrature-method-of-moments closure (a multi-environment PDF-MEPDF method). The comparisons have been made in simulations of a series of three non-premixed, piloted methane-air turbulent jet flames that exhibit progressively increasing levels of local extinction and turbulence-chemistry interactions: Sandia/TUD flames D, E and F. The three PDF methods have been implemented using the same underlying CFD solver, and results obtained using the three methods have been compared using (to the extent possible) equivalent physical models and numerical parameters. Reasonably converged mean and rms scalar profiles are obtained using 40 particles per cell for the LPEM method or 40 Eulerian fields for the SEF method. Results from these stochastic methods are compared with results obtained using two- and three-environment MEPDF methods. The relative advantages and disadvantages of each method in terms of accuracy and computational requirements are explored and identified. In general, the results obtained from the two stochastic methods (LPEM and SEF) are very similar, and are in closer agreement with experimental measurements than those obtained using the MEPDF method, while MEPDF is the most computationally efficient of the three methods. These and other findings are discussed in detail.

  18. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, P.; Madnia, C. K.; Steinberger, C. J.; Frankel, S. H.

    1992-01-01

    The basic objective of this research is to extend the capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows. In the efforts related to LES, we were primarily involved with assessing the performance of the various modern methods based on the Probability Density Function (PDF) methods for providing closures for treating the subgrid fluctuation correlations of scalar quantities in reacting turbulent flows. In the work on DNS, we concentrated on understanding some of the relevant physics of compressible reacting flows by means of statistical analysis of the data generated by DNS of such flows. In the research conducted in the second year of this program, our efforts focused on the modeling of homogeneous compressible turbulent flows by PDF methods, and on DNS of non-equilibrium reacting high speed mixing layers. Some preliminary work is also in progress on PDF modeling of shear flows, and also on LES of such flows.

  19. Towards understanding turbulent scalar mixing

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.

    1992-01-01

    In an effort towards understanding turbulent scalar mixing, we study the effect of molecular mixing, first in isolation and then by accounting for the effects of the velocity field. The chief motivation for this approach stems from the strong resemblance between the scalar probability density function (PDF) obtained from a scalar field evolving under the heat conduction equation and that of a scalar field evolving in a turbulent velocity field. However, the evolution of the scalar dissipation is different for the two cases. We attempt to account for these differences, which are due to the velocity field, using a Lagrangian frame analysis. After establishing the usefulness of this approach, we use the heat-conduction simulations (HCS), in lieu of the more expensive direct numerical simulations (DNS), to study many of the less understood aspects of turbulent mixing. Comparisons between the HCS data and available models are made whenever possible. It is established that the beta PDF characterizes the evolution of the scalar PDF during mixing from all types of non-premixed initial conditions.
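
    The assumed-beta closure mentioned in the last sentence is fully specified by the scalar mean and variance. The sketch below uses the method-of-moments parameterisation a = μν, b = (1−μ)ν with ν = μ(1−μ)/σ² − 1; the variance values are arbitrary illustrations of the relaxation from a bimodal, unmixed state toward a peaked, well-mixed state.

```python
import numpy as np
from scipy.stats import beta

def beta_pdf_params(mean, var):
    # Method-of-moments beta parameters for a scalar in [0, 1];
    # requires var < mean * (1 - mean).
    nu = mean * (1.0 - mean) / var - 1.0
    return mean * nu, (1.0 - mean) * nu

# As mixing proceeds the variance decays while the mean is conserved,
# so the assumed-beta PDF relaxes from U-shaped to single-peaked.
z = np.linspace(0.001, 0.999, 999)
for var in (0.20, 0.05, 0.01):          # decreasing scalar variance
    a, b = beta_pdf_params(0.5, var)
    print(f"var={var:.2f}: a=b={a:.2f}, peak pdf={beta.pdf(z, a, b).max():.2f}")
```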

  20. Pressure-strain-rate events in homogeneous turbulent shear flow

    NASA Technical Reports Server (NTRS)

    Brasseur, James G.; Lee, Moon J.

    1988-01-01

    A detailed study of the intercomponent energy transfer processes by the pressure-strain-rate in homogeneous turbulent shear flow is presented. Probability density functions (pdf's) and contour plots of the rapid and slow pressure-strain-rate show that the energy transfer processes are extremely peaky, with high-magnitude events dominating low-magnitude fluctuations, as reflected by very high flatness factors of the pressure-strain-rate. A concept of the energy transfer class was applied to investigate details of the direction as well as the magnitude of the energy transfer processes. In incompressible flow, six disjoint energy transfer classes exist. Examination of contours in instantaneous fields, pdf's and weighted pdf's of the pressure-strain-rate indicates that in the low-magnitude regions all six classes play an important role, but in the high-magnitude regions four classes of transfer processes dominate. The contribution to the average slow pressure-strain-rate from the high-magnitude fluctuations is only 50 percent or less. The relative significance of high and low magnitude transfer events is discussed.

  1. A Novel Strategy for Numerical Simulation of High-speed Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Sheikhi, M. R. H.; Drozda, T. G.; Givi, P.

    2003-01-01

    The objective of this research is to improve and implement the filtered mass density function (FDF) methodology for large eddy simulation (LES) of high-speed reacting turbulent flows. We have just completed Year 1 of this research. This is the Final Report on our activities during the period January 1, 2003 to December 31, 2003. In the efforts during the past year, LES is conducted of the Sandia Flame D, which is a turbulent piloted nonpremixed methane jet flame. The subgrid scale (SGS) closure is based on the scalar filtered mass density function (SFMDF) methodology. The SFMDF is essentially the mass-weighted probability density function (PDF) of the SGS scalar quantities. For this flame (which exhibits little local extinction), a simple flamelet model is used to relate the instantaneous composition to the mixture fraction. The modelled SFMDF transport equation is solved by a hybrid finite-difference/Monte Carlo scheme.

  2. A composition joint PDF method for the modeling of spray flames

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    1995-01-01

    This viewgraph presentation discusses an extension of the probability density function (PDF) method to the modeling of spray flames, in order to evaluate the limitations and capabilities of this method in the modeling of gas-turbine combustor flows. The comparisons show that the general features of the flowfield are correctly predicted by the present solution procedure. The present solution appears to provide a better representation of the temperature field, particularly in the reverse-velocity zone. The overpredictions in the centerline velocity could be attributed to the following reasons: (1) the k-epsilon turbulence model is known to be less precise in highly swirling flows, and (2) the swirl number used here is reported to be estimated rather than measured.

  3. Decaying two-dimensional turbulence in a circular container.

    PubMed

    Schneider, Kai; Farge, Marie

    2005-12-09

    We present direct numerical simulations of two-dimensional decaying turbulence at initial Reynolds number 5 x 10(4) in a circular container with no-slip boundary conditions. Starting with random initial conditions, the flow rapidly exhibits self-organization into coherent vortices. We study their formation and the role of the viscous boundary layer on the production and decay of integral quantities. The no-slip wall produces vortices which are injected into the bulk flow and tend to compensate the enstrophy dissipation. The self-organization of the flow is reflected by the transition of the initially Gaussian vorticity probability density function (PDF) towards a distribution with exponential tails. Because of the presence of coherent vortices, the pressure PDF becomes strongly skewed with exponential tails for negative values.

  4. A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Wu, Keyi; Li, Jinglai

    2016-09-01

    In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithm, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitude of speedup over standard Monte Carlo methods.

  5. The neuropeptide PDF acts directly on evening pacemaker neurons to regulate multiple features of circadian behavior.

    PubMed

    Lear, Bridget C; Zhang, Luoying; Allada, Ravi

    2009-07-01

    Discrete clusters of circadian clock neurons temporally organize daily behaviors such as sleep and wake. In Drosophila, a network of just 150 neurons drives two peaks of timed activity in the morning and evening. A subset of these neurons expresses the neuropeptide pigment dispersing factor (PDF), which is important for promoting morning behavior as well as maintaining robust free-running rhythmicity in constant conditions. Yet how PDF acts on downstream circuits to mediate rhythmic behavior is unknown. Using circuit-directed rescue of PDF receptor mutants, we show that PDF targeting of just approximately 30 non-PDF evening circadian neurons is sufficient to drive morning behavior. This function is not accompanied by large changes in core molecular oscillators in light-dark, indicating that PDF RECEPTOR likely regulates the output of these cells under these conditions. We find that PDF also acts on this focused set of non-PDF neurons to regulate both evening activity phase and period length, consistent with modest resetting effects on core oscillators. PDF likely acts on more distributed pacemaker neuron targets, including the PDF neurons themselves, to regulate rhythmic strength. Here we reveal defining features of the circuit diagram for PDF peptide function in circadian behavior, revealing the direct neuronal targets of PDF as well as its behavioral functions at those sites. These studies define a key direct output circuit sufficient for multiple PDF-dependent behaviors.

  6. A biomechanical model for fibril recruitment: Evaluation in tendons and arteries.

    PubMed

    Bevan, Tim; Merabet, Nadege; Hornsby, Jack; Watton, Paul N; Thompson, Mark S

    2018-06-06

    Simulations of soft tissue mechanobiological behaviour are increasingly important for clinical prediction of aneurysm, tendinopathy and other disorders. Mechanical behaviour at low stretches is governed by fibril straightening, transitioning into load-bearing at the recruitment stretch, resulting in a tissue stiffening effect. Previous investigations have suggested theoretical relationships between stress-stretch measurements and the recruitment probability density function (PDF) but have neither derived these rigorously nor evaluated them experimentally. Other work has proposed image-based methods for measurement of recruitment but made use of arbitrary fibril critical straightness parameters. The aim of this work was to provide a sound theoretical basis for estimating the recruitment PDF from stress-stretch measurements and to evaluate this relationship using image-based methods, clearly motivating the choice of fibril critical straightness parameter in rat tail tendon and porcine artery. Rigorous derivation showed that the recruitment PDF may be estimated from the second stretch derivative of the first Piola-Kirchhoff tissue stress. Image-based fibril recruitment identified the fibril straightness parameter that maximised Pearson correlation coefficients (PCC) with the estimated PDFs. Using these critical straightness parameters, the new method for estimating the recruitment PDF showed a PCC with image-based measures of 0.915 and 0.933 for tendons and arteries respectively. This method may be used for accurate estimation of the fibril recruitment PDF in mechanobiological simulation where fibril-level mechanical parameters are important for predicting cell behaviour. Copyright © 2018 Elsevier Ltd. All rights reserved.
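
    The derived relation (the recruitment PDF is proportional to the second stretch-derivative of the first Piola-Kirchhoff stress) is straightforward to apply to measured stress-stretch curves. The sketch below uses a synthetic toe-region stress, assumes linear fibril behaviour after recruitment, and simply differentiates twice and normalises.

```python
import numpy as np

# Synthetic stress-stretch data; real curves would come from experiment.
lam = np.linspace(1.00, 1.10, 201)              # tissue stretch
P = 50.0 * np.clip(lam - 1.02, 0, None) ** 3    # toy toe-region stress [MPa]

d2P = np.gradient(np.gradient(P, lam), lam)     # second stretch-derivative
pdf = np.clip(d2P, 0, None)
pdf /= np.trapz(pdf, lam)                        # normalised recruitment PDF
print(f"mean recruitment stretch ~ {np.trapz(lam * pdf, lam):.4f}")
```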

  7. On time-dependent diffusion coefficients arising from stochastic processes with memory

    NASA Astrophysics Data System (ADS)

    Carpio-Bernido, M. Victoria; Barredo, Wilson I.; Bernido, Christopher C.

    2017-08-01

    Time-dependent diffusion coefficients arise from anomalous diffusion encountered in many physical systems such as protein transport in cells. We compare these coefficients with those arising from analysis of stochastic processes with memory that go beyond fractional Brownian motion. Facilitated by the Hida white noise functional integral approach, diffusion propagators or probability density functions (pdf) are obtained and shown to be solutions of modified diffusion equations with time-dependent diffusion coefficients. This should be useful in the study of complex transport processes.

  8. Of pacemakers and statistics: the actuarial method extended.

    PubMed

    Dussel, J; Wolbarst, A B; Scott-Millar, R N; Obel, I W

    1980-01-01

    Pacemakers cease functioning because of either natural battery exhaustion (nbe) or component failure (cf). A study of four series of pacemakers shows that a simple extension of the actuarial method, so as to incorporate Normal statistics, makes possible a quantitative differentiation between the two modes of failure. This involves the separation of the overall failure probability density function PDF(t) into constituent parts pdf_nbe(t) and pdf_cf(t). The approach should allow a meaningful comparison of the characteristics of different pacemaker types.

  9. On statistical properties of traded volume in financial markets

    NASA Astrophysics Data System (ADS)

    de Souza, J.; Moyano, L. G.; Duarte Queirós, S. M.

    2006-03-01

    In this article we study the dependence degree of the traded volume of the Dow Jones 30 constituent equities by using a nonextensive generalised form of the Kullback-Leibler information measure. Our results show a slow decay of the dependence degree as a function of the lag. This feature is compatible with the existence of non-linearities in this type of time series. In addition, we introduce a dynamical mechanism whose associated stationary probability density function (PDF) presents good agreement with the empirical results.

  10. Evaluation of the reproducibility of lung motion probability distribution function (PDF) using dynamic MRI.

    PubMed

    Cai, Jing; Read, Paul W; Altes, Talissa A; Molloy, Janelle A; Brookeman, James R; Sheng, Ke

    2007-01-21

    Treatment planning based on the probability distribution function (PDF) of patient geometries has been shown to be a potential off-line strategy to incorporate organ motion, but the applicability of such an approach depends strongly on the reproducibility of the PDF. In this paper, we investigated the dependence of the PDF reproducibility on the imaging acquisition parameters, specifically the scan time and the frame rate. Three healthy subjects underwent a continuous 5 min magnetic resonance (MR) scan in the sagittal plane with a frame rate of approximately 10 frames per second, and the experiments were repeated at an interval of 2 to 3 weeks. A total of nine pulmonary vessels from different lung regions (upper, middle and lower) were tracked, and the reproducibility of their displacement PDFs was evaluated as a function of scan time and frame rate. As a result, the PDF reproducibility error decreased with prolonged scans and appeared to approach an equilibrium state in subjects 2 and 3 within the 5 min scan. The PDF accuracy increased as a power function of the frame rate; however, the PDF reproducibility showed less sensitivity to frame rate, presumably because the randomness of breathing dominates the effects. As the key component of PDF-based treatment planning, the reproducibility of the PDF affects the dosimetric accuracy substantially. This study provides a reference for acquiring MR-based PDFs of structures in the lung.

  11. A transformed path integral approach for solution of the Fokker-Planck equation

    NASA Astrophysics Data System (ADS)

    Subramaniam, Gnana M.; Vedula, Prakash

    2017-10-01

    A novel path integral (PI) based method for solution of the Fokker-Planck equation is presented. The proposed method, termed the transformed path integral (TPI) method, utilizes a new formulation for the underlying short-time propagator to perform the evolution of the probability density function (PDF) in a transformed computational domain where a more accurate representation of the PDF can be ensured. The new formulation, based on a dynamic transformation of the original state space with the statistics of the PDF as parameters, preserves the non-negativity of the PDF and incorporates short-time properties of the underlying stochastic process. New update equations for the state PDF in a transformed space and the parameters of the transformation (including mean and covariance) that better accommodate nonlinearities in drift and non-Gaussian behavior in distributions are proposed (based on properties of the SDE). Owing to the choice of transformation considered, the proposed method maps a fixed grid in transformed space to a dynamically adaptive grid in the original state space. The TPI method, in contrast to conventional methods such as Monte Carlo simulations and fixed grid approaches, is able to better represent the distributions (especially the tail information) and better address challenges in processes with large diffusion, large drift and large concentration of PDF. Additionally, in the proposed TPI method, error bounds on the probability in the computational domain can be obtained using the Chebyshev's inequality. The benefits of the TPI method over conventional methods are illustrated through simulations of linear and nonlinear drift processes in one-dimensional and multidimensional state spaces. The effects of spatial and temporal grid resolutions as well as that of the diffusion coefficient on the error in the PDF are also characterized.
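
    For orientation, the sketch below implements the conventional fixed-grid short-time path-integral evolution that the TPI method builds on, applied to an Ornstein-Uhlenbeck process; the paper's dynamic transformation of the state space is deliberately omitted, and the drift, diffusion coefficient and grid are arbitrary choices.

```python
import numpy as np

# Fixed-grid path-integral evolution of dp/dt = -d(f p)/dx + D d^2p/dx^2
# using the Gaussian short-time propagator.
f = lambda x: -x                              # OU drift
D, dt = 0.5, 0.01
x = np.linspace(-5, 5, 401)
dx = x[1] - x[0]

p = np.exp(-((x - 2.0) ** 2) / 0.1)           # initial PDF (unnormalised)
p /= np.trapz(p, x)

# K[i, j] = density of moving from x[j] to x[i] in one time step dt
K = np.exp(-(x[:, None] - x[None, :] - f(x[None, :]) * dt) ** 2 / (4 * D * dt))
K /= np.sqrt(4 * np.pi * D * dt)

for _ in range(500):                          # evolve to t = 5 (>> relaxation time)
    p = K @ p * dx
print(f"stationary variance ~ {np.trapz(x**2 * p, x):.3f}  (exact: {D:.3f})")
```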

  12. Monte Carlo Method for Determining Earthquake Recurrence Parameters from Short Paleoseismic Catalogs: Example Calculations for California

    USGS Publications Warehouse

    Parsons, Tom

    2008-01-01

    Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
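
    A stripped-down version of the parameter scan can be written directly, assuming a lognormal recurrence PDF: draw many candidate (mean, coefficient-of-variation) pairs, score each against the observed intervals by log-likelihood, and keep a ranked subset. The five intervals below are invented, not a published paleoseismic series.

```python
import numpy as np
from scipy.stats import lognorm

intervals = np.array([120.0, 95.0, 210.0, 140.0, 80.0])   # made-up, in years

rng = np.random.default_rng(42)
means = rng.uniform(50, 400, 100_000)
covs = rng.uniform(0.2, 1.5, 100_000)       # coefficient of variation

# Lognormal parameterised so that it has mean m and COV c:
sigma = np.sqrt(np.log(1 + covs**2))
scale = means / np.sqrt(1 + covs**2)
loglike = sum(lognorm.logpdf(t, sigma, scale=scale) for t in intervals)

best = np.argsort(loglike)[::-1][:1000]     # ranked list of top candidates
print(f"most likely mean ~ {means[best].mean():.0f} yr, "
      f"COV ~ {covs[best].mean():.2f}")
```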

  13. Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-04-01

    Although the surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional or n-dimensional marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). Posterior mean and covariance can also be efficiently derived. I show that the Maximum Posterior (MAP) can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the single truncated case, or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the double truncated case. I show that the case of independent uniform priors can be approximated using the TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computer power is largely reduced. Second, unlike MCMC-based Bayesian approaches, the marginal pdf, mean, variance or covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the Maximum Posterior (MAP) is extremely fast.
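
    The positivity-constrained MAP computation referenced here reduces, for a Gaussian likelihood truncated at zero, to a non-negative least-squares problem. A minimal sketch with a made-up Green's function matrix follows; upper bounds on slip could be handled analogously with scipy.optimize.lsq_linear and box bounds.

```python
import numpy as np
from scipy.optimize import nnls

# Toy problem: G maps slip on 10 patches to 40 surface displacements.
rng = np.random.default_rng(3)
G = rng.normal(size=(40, 10))                 # stand-in Green's functions
m_true = np.clip(rng.normal(1.0, 1.0, 10), 0, None)
d = G @ m_true + 0.05 * rng.normal(size=40)   # noisy surface displacements

# MAP under ||G m - d||^2 with m >= 0 (Lawson & Hanson NNLS):
m_map, resid = nnls(G, d)
print("MAP slip:", np.round(m_map, 2))
```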

  14. Pdf - Transport equations for chemically reacting flows

    NASA Technical Reports Server (NTRS)

    Kollmann, W.

    1989-01-01

    The closure problem for the transport equations for pdf and the characteristic functions of turbulent, chemically reacting flows is addressed. The properties of the linear and closed equations for the characteristic functional for Eulerian and Lagrangian variables are established, and the closure problem for the finite-dimensional case is discussed for pdf and characteristic functions. It is shown that the closure for the scalar dissipation term in the pdf equation developed by Dopazo (1979) and Kollmann et al. (1982) results in a single integral, in contrast to the pdf, where double integration is required. Some recent results using pdf methods obtained for turbulent flows with combustion, including effects of chemical nonequilibrium, are discussed.

  15. Coupled Monte Carlo Probability Density Function/ SPRAY/CFD Code Developed for Modeling Gas-Turbine Combustor Flows

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The success of any solution methodology for studying gas-turbine combustor flows depends a great deal on how well it can model various complex, rate-controlling processes associated with turbulent transport, mixing, chemical kinetics, evaporation and spreading rates of the spray, convective and radiative heat transfer, and other phenomena. These phenomena often strongly interact with each other at disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. Turbulence manifests its influence in a diffusion flame in several forms depending on how turbulence interacts with various flame scales. These forms range from the so-called wrinkled, or stretched, flamelets regime, to the distributed combustion regime. Conventional turbulence closure models have difficulty in treating highly nonlinear reaction rates. A solution procedure based on the joint composition probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices such as extinction, blowoff limits, and emissions predictions because it can handle the nonlinear chemical reaction rates without any approximation. In this approach, mean and turbulence gas-phase velocity fields are determined from a standard turbulence model; the joint composition field of species and enthalpy are determined from the solution of a modeled PDF transport equation; and a Lagrangian-based dilute spray model is used for the liquid-phase representation with appropriate consideration of the exchanges of mass, momentum, and energy between the two phases. The PDF transport equation is solved by a Monte Carlo method, and existing state-of-the-art numerical representations are used to solve the mean gas-phase velocity and turbulence fields together with the liquid-phase equations. The joint composition PDF approach was extended in our previous work to the study of compressible reacting flows. The application of this method to several supersonic diffusion flames associated with scramjet combustor flow fields provided favorable comparisons with the available experimental data. A further extension of this approach to spray flames, three-dimensional computations, and parallel computing was reported in a recent paper. The recently developed PDF/SPRAY/computational fluid dynamics (CFD) module combines the novelty of the joint composition PDF approach with the ability to run on parallel architectures. This algorithm was implemented on the NASA Lewis Research Center's Cray T3D, a massively parallel computer with an aggregate of 64 processor elements. The calculation procedure was applied to predict the flow properties of both open and confined swirl-stabilized spray flames.

  16. Conditional probability distribution function of "energy transfer rate" (PDF(ɛ|PVI)) as compared with its counterpart of temperature (PDF(T|PVI)) at the same condition of fluctuation

    NASA Astrophysics Data System (ADS)

    He, Jiansen; Wang, Yin; Pei, Zhongtian; Zhang, Lei; Tu, Chuanyi

    2017-04-01

    The energy transfer rate of turbulence is not uniform in space but is suggested to follow a certain distribution, e.g., a lognormal distribution (Kolmogorov 1962). The inhomogeneous transfer rate leads to the emergence of intermittency, which may be identified with some parameter, e.g., normalized partial variance increments (PVI) (Greco et al., 2009). Intervals with large PVI of magnetic field fluctuations are found to have a temperature distribution with median and mean values higher than those for small PVI levels (Osman et al., 2012). However, there is a large proportion of overlap between the temperature distributions associated with the smaller and larger PVIs. So it is recognized that PVI alone cannot determine the temperature, since no one-to-one mapping exists. One may be curious about the reason for the considerable overlap of the conditional temperature distributions for different levels of PVI. Hotter plasma is usually speculated to have been heated more, through greater dissipation of turbulence energy corresponding to a larger energy cascade rate, if the temperature fluctuation of the eigen wave mode is not taken into account. To explore the statistical relationship between turbulence cascading and the plasma thermal state, we aim to study and reveal, for the first time, the conditional probability function of the "energy transfer rate" under different levels of PVI (PDF(ɛ|PVI)), and compare it with the conditional probability function of temperature. The conditional probability distribution function PDF(ɛ|PVI) is derived from PDF(PVI|ɛ)·PDF(ɛ)/PDF(PVI) according to the Bayesian theorem. PDF(PVI) can be obtained directly from the data. PDF(ɛ) is derived from the conjugate-gradient inversion of PDF(PVI) by assuming, reasonably, that PDF(δB|σ) is a Gaussian distribution, where PVI = |δB|/σ and σ ∝ (ɛℓ)^{1/3}. PDF(ɛ) can also be acquired by fitting PDF(δB) with the integral function ∫PDF(δB|σ)PDF(σ)dσ. As a result, PDF(ɛ|PVI) is found to shift to a higher median value of ɛ with increasing PVI, but with a significant overlap of the PDFs for different PVIs. Therefore, PDF(ɛ|PVI) is similar to PDF(T|PVI) in the sense of slow migration with increasing PVI. A detailed comparison between these two conditional PDFs is also performed.
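
    With samples of both quantities in hand, the Bayes relation above is equivalent to normalising a joint histogram column by column. The sketch below does this for synthetic ɛ and PVI series with a toy PVI ∝ ɛ^{1/3}·|N(0,1)| dependence, so both the upward drift of the conditional median with PVI class and the broad overlap become visible.

```python
import numpy as np

rng = np.random.default_rng(7)
eps = rng.lognormal(0.0, 1.0, 200_000)                   # "energy transfer rate"
pvi = np.abs(rng.normal(0, 1, eps.size)) * eps ** (1/3)  # toy PVI ~ |dB|/sigma

eps_bins = np.geomspace(eps.min(), eps.max(), 41)
pvi_bins = np.array([0, 1, 2, 3, np.inf])                # PVI classes

# Column-normalised joint histogram = PDF(eps | PVI class), i.e. the
# Bayes formula PDF(PVI|eps) PDF(eps) / PDF(PVI) evaluated with counts.
joint, _, _ = np.histogram2d(eps, pvi, bins=(eps_bins, pvi_bins))
p_eps_given_pvi = joint / joint.sum(axis=0)

centres = np.sqrt(eps_bins[:-1] * eps_bins[1:])
for k in range(4):
    med = centres[np.searchsorted(np.cumsum(p_eps_given_pvi[:, k]), 0.5)]
    print(f"PVI class {k}: median eps ~ {med:.2f}")      # shifts up with PVI
```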

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, D.O.

    It is recognized that some dynamic and noise environments are characterized by time histories which are not Gaussian. An example is high intensity acoustic noise. Another example is some transportation vibration. A better simulation of these environments can be generated if a zero mean non-Gaussian time history can be reproduced with a specified auto (or power) spectral density (ASD or PSD) and a specified probability density function (pdf). After the required time history is synthesized, the waveform can be used for simulation purposes. For example, modern waveform reproduction techniques can be used to reproduce the waveform on electrodynamic or electrohydraulic shakers. Or the waveforms can be used in digital simulations. A method is presented for the generation of realizations of zero mean non-Gaussian random time histories with a specified ASD and pdf. First a Gaussian time history with the specified auto (or power) spectral density (ASD) is generated. A monotonic nonlinear function relating the Gaussian waveform to the desired realization is then established based on the Cumulative Distribution Function (CDF) of the desired waveform and the known CDF of a Gaussian waveform. The established function is used to transform the Gaussian waveform to a realization of the desired waveform. Since the transformation preserves the zero-crossings and peaks of the original Gaussian waveform, and does not introduce any substantial discontinuities, the ASD is not substantially changed. Several methods are available to generate a realization of a Gaussian distributed waveform with a known ASD. The method of Smallwood and Paez (1993) is an example. However, the generation of random noise with a specified ASD but with a non-Gaussian distribution is less well known.
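
    The two-step recipe in this abstract (shape the spectrum first, then warp the amplitude distribution) is compact enough to sketch directly. Below, a band-limited Gaussian history is synthesised by inverse FFT with random phases and then mapped through F⁻¹(Φ(·)); the band edges and the Gumbel target pdf are arbitrary stand-ins for a specified ASD and pdf.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, fs = 2**16, 1024.0
freq = np.fft.rfftfreq(n, 1 / fs)

asd = np.where((freq > 20) & (freq < 200), 1.0, 1e-6)  # target band-limited ASD
phase = rng.uniform(0, 2 * np.pi, freq.size)
spec = np.sqrt(asd) * np.exp(1j * phase)
g = np.fft.irfft(spec, n)                              # Gaussian realisation
g /= g.std()

# Monotonic CDF mapping: Gaussian -> uniform -> desired (here Gumbel) pdf.
u = stats.norm.cdf(g)
x = stats.gumbel_r.ppf(u)
print(f"skewness: gaussian {stats.skew(g):+.2f} -> gumbel {stats.skew(x):+.2f}")
```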

  18. Back in the saddle: large-deviation statistics of the cosmic log-density field

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Codis, S.; Pichon, C.; Bernardeau, F.; Reimberg, P.

    2016-08-01

    We present a first-principles approach to obtain analytical predictions for spherically averaged cosmic densities in the mildly non-linear regime that go well beyond what is usually achieved by standard perturbation theory. A large deviation principle allows us to compute the leading order cumulants of average densities in concentric cells. In this symmetry, the spherical collapse model leads to cumulant generating functions that are robust for finite variances and free of critical points when logarithmic density transformations are implemented. They yield in turn accurate density probability distribution functions (PDFs) from a straightforward saddle-point approximation valid for all density values. Based on this easy-to-implement modification, explicit analytic formulas for the evaluation of the one- and two-cell PDF are provided. The theoretical predictions obtained for the PDFs are accurate to a few per cent compared to the numerical integration, regardless of the density under consideration and in excellent agreement with N-body simulations for a wide range of densities. This formalism should prove valuable for accurately probing the quasi-linear scales of low-redshift surveys for arbitrary primordial power spectra.

  19. Local Renyi entropic profiles of DNA sequences.

    PubMed

    Vinga, Susana; Almeida, Jonas S

    2007-10-16

    In a recent report the authors presented a new measure of continuous entropy for DNA sequences, which allows the estimation of their randomness level. The definition explored therein was based on the Rényi entropy of the probability density function (pdf) estimated with Parzen's window method and applied to Chaos Game Representation/Universal Sequence Maps (CGR/USM). Subsequent work proposed a fractal pdf kernel as a more exact solution for the iterated map representation. This report extends the concepts of continuous entropy by defining DNA sequence entropic profiles using the new pdf estimations to refine the density estimation of motifs. The new methodology enables two results. On the one hand, it shows that the entropic profiles are directly related to the statistical significance of motifs, allowing the study of under- and over-representation of segments. On the other hand, by spanning the parameters of the kernel function it is possible to extract important information about the scale of each conserved DNA region. The computational applications, developed in Matlab m-code, the corresponding binary executables and additional material and examples are made publicly available at http://kdbio.inesc-id.pt/~svinga/ep/. The ability to detect local conservation from a scale-independent representation of symbolic sequences is particularly relevant for biological applications where conserved motifs occur in multiple, overlapping scales, with significant future applications in the recognition of foreign genomic material and inference of motif structures.

  20. Local Renyi entropic profiles of DNA sequences

    PubMed Central

    Vinga, Susana; Almeida, Jonas S

    2007-01-01

    Background In a recent report the authors presented a new measure of continuous entropy for DNA sequences, which allows the estimation of their randomness level. The definition explored therein was based on the Rényi entropy of the probability density function (pdf) estimated with Parzen's window method and applied to Chaos Game Representation/Universal Sequence Maps (CGR/USM). Subsequent work proposed a fractal pdf kernel as a more exact solution for the iterated map representation. This report extends the concepts of continuous entropy by defining DNA sequence entropic profiles using the new pdf estimations to refine the density estimation of motifs. Results The new methodology enables two results. On the one hand, it shows that the entropic profiles are directly related to the statistical significance of motifs, allowing the study of under- and over-representation of segments. On the other hand, by spanning the parameters of the kernel function it is possible to extract important information about the scale of each conserved DNA region. The computational applications, developed in Matlab m-code, the corresponding binary executables and additional material and examples are made publicly available at http://kdbio.inesc-id.pt/~svinga/ep/. Conclusion The ability to detect local conservation from a scale-independent representation of symbolic sequences is particularly relevant for biological applications where conserved motifs occur in multiple, overlapping scales, with significant future applications in the recognition of foreign genomic material and inference of motif structures. PMID:17939871

  1. Dual PDF signaling pathways reset clocks via TIMELESS and acutely excite target neurons to control circadian behavior.

    PubMed

    Seluzicki, Adam; Flourakis, Matthieu; Kula-Eversole, Elzbieta; Zhang, Luoying; Kilman, Valerie; Allada, Ravi

    2014-03-01

    Molecular circadian clocks are interconnected via neural networks. In Drosophila, PIGMENT-DISPERSING FACTOR (PDF) acts as a master network regulator with dual functions in synchronizing molecular oscillations between disparate PDF(+) and PDF(-) circadian pacemaker neurons and controlling pacemaker neuron output. Yet the mechanisms by which PDF functions are not clear. We demonstrate that genetic inhibition of protein kinase A (PKA) in PDF(-) clock neurons can phenocopy PDF mutants while activated PKA can partially rescue PDF receptor mutants. PKA subunit transcripts are also under clock control in non-PDF DN1p neurons. To address the core clock target of PDF, we rescued per in PDF neurons of arrhythmic per⁰¹ mutants. PDF neuron rescue induced high amplitude rhythms in the clock component TIMELESS (TIM) in per-less DN1p neurons. Complete loss of PDF or PKA inhibition also results in reduced TIM levels in non-PDF neurons of per⁰¹ flies. To address how PDF impacts pacemaker neuron output, we focally applied PDF to DN1p neurons and found that it acutely depolarizes and increases firing rates of DN1p neurons. Surprisingly, these effects are reduced in the presence of an adenylate cyclase inhibitor, yet persist in the presence of PKA inhibition. We have provided evidence for a signaling mechanism (PKA) and a molecular target (TIM) by which PDF resets and synchronizes clocks, and have demonstrated an acute direct excitatory effect of PDF on target neurons to control neuronal output. The identification of TIM as a target of PDF signaling suggests it is a multimodal integrator of cell autonomous clock, environmental light, and neural network signaling. Moreover, these data reveal a bifurcation of PKA-dependent clock effects and PKA-independent output effects. Taken together, our results provide a molecular and cellular basis for the dual functions of PDF in clock resetting and pacemaker output.

  2. Dual PDF Signaling Pathways Reset Clocks Via TIMELESS and Acutely Excite Target Neurons to Control Circadian Behavior

    PubMed Central

    Seluzicki, Adam; Flourakis, Matthieu; Kula-Eversole, Elzbieta; Zhang, Luoying; Kilman, Valerie; Allada, Ravi

    2014-01-01

    Molecular circadian clocks are interconnected via neural networks. In Drosophila, PIGMENT-DISPERSING FACTOR (PDF) acts as a master network regulator with dual functions in synchronizing molecular oscillations between disparate PDF(+) and PDF(−) circadian pacemaker neurons and controlling pacemaker neuron output. Yet the mechanisms by which PDF functions are not clear. We demonstrate that genetic inhibition of protein kinase A (PKA) in PDF(−) clock neurons can phenocopy PDF mutants while activated PKA can partially rescue PDF receptor mutants. PKA subunit transcripts are also under clock control in non-PDF DN1p neurons. To address the core clock target of PDF, we rescued per in PDF neurons of arrhythmic per01 mutants. PDF neuron rescue induced high amplitude rhythms in the clock component TIMELESS (TIM) in per-less DN1p neurons. Complete loss of PDF or PKA inhibition also results in reduced TIM levels in non-PDF neurons of per01 flies. To address how PDF impacts pacemaker neuron output, we focally applied PDF to DN1p neurons and found that it acutely depolarizes and increases firing rates of DN1p neurons. Surprisingly, these effects are reduced in the presence of an adenylate cyclase inhibitor, yet persist in the presence of PKA inhibition. We have provided evidence for a signaling mechanism (PKA) and a molecular target (TIM) by which PDF resets and synchronizes clocks, and have demonstrated an acute direct excitatory effect of PDF on target neurons to control neuronal output. The identification of TIM as a target of PDF signaling suggests it is a multimodal integrator of cell autonomous clock, environmental light, and neural network signaling. Moreover, these data reveal a bifurcation of PKA-dependent clock effects and PKA-independent output effects. Taken together, our results provide a molecular and cellular basis for the dual functions of PDF in clock resetting and pacemaker output. PMID:24643294

  3. Cylinders out of a top hat: counts-in-cells for projected densities

    NASA Astrophysics Data System (ADS)

    Uhlemann, Cora; Pichon, Christophe; Codis, Sandrine; L'Huillier, Benjamin; Kim, Juhan; Bernardeau, Francis; Park, Changbom; Prunet, Simon

    2018-06-01

    Large deviation statistics is implemented to predict the statistics of cosmic densities in cylinders applicable to photometric surveys. It yields few per cent accurate analytical predictions for the one-point probability distribution function (PDF) of densities in concentric or compensated cylinders; and also captures the density dependence of their angular clustering (cylinder bias). All predictions are found to be in excellent agreement with the cosmological simulation Horizon Run 4 in the quasi-linear regime where standard perturbation theory normally breaks down. These results are combined with a simple local bias model that relates dark matter and tracer densities in cylinders and validated on simulated halo catalogues. This formalism can be used to probe cosmology with existing and upcoming photometric surveys like DES, Euclid or WFIRST containing billions of galaxies.

  4. Local structure and lattice dynamics study of low dimensional materials using atomic pair distribution function and high energy resolution inelastic x-ray scattering

    NASA Astrophysics Data System (ADS)

    Shi, Chenyang

    Structure and dynamics lie at the heart of materials science. A detailed knowledge of both subjects would be foundational in understanding materials' properties and predicting their potential applications. However, the task becomes increasingly difficult as the particle size is reduced to the nanometer scale. For nanostructured materials, the laboratory x-ray scattering patterns are overlapped and broadened, making structure determination impossible. The atomic pair distribution function (PDF) technique, based on either synchrotron x-ray or neutron scattering data, is known as the tool of choice for probing local structures. However, solving the "structure problem" in low-dimensional materials with the PDF is still challenging. For example, for the 2D materials of interest in this thesis, the crystallographic modeling approach often yields unphysical thermal factors along the stacking direction, where new chemical intuition about their actual structures and new modeling methodology/programs are needed. Beyond this, lattice dynamical investigations of nanosized particles are extremely difficult. Laboratory tools such as Raman and infra-red spectroscopy only probe phonons at the Brillouin zone center. Although in the literature there are a great number of theoretical studies of their vibrational properties based on either empirical force fields or density functional theory, the various approximations made in the theories make the theoretical predictions less reliable. There is also a lack of direct experimental results to validate the theory against. In this thesis, we studied the structure and dynamics of a wide variety of technologically relevant low-dimensional materials through synchrotron-based x-ray PDF and high energy resolution inelastic x-ray scattering (HERIX) techniques. By collecting PDF data and employing advanced modeling programs such as DiffPy-CMI, we successfully determined the atomic structures of (i) emerging Ti3C2 and Nb4C3 MXenes (transition metal carbides and/or nitrides) that are promising for energy storage applications, and of (ii) zirconium phenylphosphonate ion exchange materials that are proposed to separate lanthanide ions from actinide ions in nuclear waste. Both material systems have a two-dimensional layered nanocrystalline structure in which we observed that the stacking of layers is not in good registry, also known as "turbostratic" disorder. Consequently the signals from a single layer of atoms dominate the experimental PDF; thus building up a single-slab model and simulating the PDF using Debye function analysis was sufficient to capture the main structural features in the measured PDF data. The information on the correlation length of layers along the stacking direction, however, is contained in the low-Q diffraction peaks in either laboratory x-ray or synchrotron x-ray scattering patterns. On the lattice dynamics side, we first investigated the trend of atomic bonding strength in size-dependent platinum nanoparticles based on temperature-dependent PDF data and measured Debye temperatures. An anomalous bond softening was observed at particle sizes less than 2 nm. Since the Debye model gives a simple quadratic phonon density of states (PDOS) curve, which is a simplified version of the real lattice dynamics, we were motivated to measure full PDOS curves of three CdSe nanoclusters by using the non-resonant inelastic x-ray scattering technique. We observed an overall blue-shift of the PDOS curves with decreasing size.
Our current exemplary studies will open the door to a large number of future structural and lattice dynamical studies on a much broader range of low-dimensional material systems.
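
    The single-slab Debye-function strategy mentioned above rests on the Debye scattering equation, I(Q) = Σ_i Σ_j sin(Q r_ij)/(Q r_ij) for identical atoms. The sketch below evaluates it for a toy 6×6 square "layer" of atoms; the spacing and Q grid are arbitrary, and atomic form factors and thermal damping are omitted.

```python
import numpy as np

# Toy single slab: a 6 x 6 square grid of identical atoms, 2.5 A apart.
xy = np.stack(np.meshgrid(np.arange(6), np.arange(6)), -1).reshape(-1, 2) * 2.5
r = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
rij = r[r > 0]                          # all interatomic distances (A)

Q = np.linspace(0.5, 10, 500)           # momentum transfer (1/A)
# Debye equation: N self-terms plus the double sum over distinct pairs.
I = len(xy) + np.sum(np.sin(Q[:, None] * rij) / (Q[:, None] * rij), axis=1)
print(f"I(Q) at {Q.size} points; large-Q limit ~ N = {len(xy)} atoms")
```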

  5. Joint resonant CMB power spectrum and bispectrum estimation

    NASA Astrophysics Data System (ADS)

    Meerburg, P. Daniel; Münchmeyer, Moritz; Wandelt, Benjamin

    2016-02-01

    We develop the tools necessary to assess the statistical significance of resonant features in the CMB correlation functions, combining power spectrum and bispectrum measurements. This significance is typically addressed by running a large number of simulations to derive the probability density function (PDF) of the feature-amplitude in the Gaussian case. Although these simulations are tractable for the power spectrum, for the bispectrum they require significant computational resources. We show that, by assuming that the PDF is given by a multivariate Gaussian where the covariance is determined by the Fisher matrix of the sine and cosine terms, we can efficiently produce spectra that are statistically close to those derived from full simulations. By drawing a large number of spectra from this PDF, both for the power spectrum and the bispectrum, we can quickly determine the statistical significance of candidate signatures in the CMB, considering both single frequency and multifrequency estimators. We show that for resonance models, cosmology and foreground parameters have little influence on the estimated amplitude, which allows us to simplify the analysis considerably. A more precise likelihood treatment can then be applied to candidate signatures only. We also discuss a modal expansion approach for the power spectrum, aimed at quickly scanning through large families of oscillating models.
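
    A minimal version of the fast amplitude-PDF construction described here: draw the sine and cosine feature amplitudes from a zero-mean multivariate Gaussian whose covariance is the inverse of their Fisher matrix, then histogram the combined amplitude. The 2×2 Fisher matrix below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
F = np.array([[4.0, 0.6], [0.6, 3.0]])            # toy Fisher matrix (sin, cos)
samples = rng.multivariate_normal([0, 0], np.linalg.inv(F), 100_000)

amp = np.hypot(samples[:, 0], samples[:, 1])      # total feature amplitude
print(f"99.7% amplitude threshold ~ {np.quantile(amp, 0.997):.3f}")
```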

  6. Generating log-normal mock catalog of galaxies in redshift space

    NASA Astrophysics Data System (ADS)

    Agrawal, Aniket; Makiya, Ryu; Chiang, Chi-Ting; Jeong, Donghui; Saito, Shun; Komatsu, Eiichiro

    2017-10-01

    We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check the fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
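
    A one-dimensional analogue of the catalogue generation step is sketched below, assuming the usual convention that the log-density Gaussian field is shifted by −σ²/2 so the density contrast averages to zero: generate a Gaussian field with a toy power spectrum, exponentiate, and Poisson-sample. The velocity assignment and the 3-D machinery of the public code are omitted, and the field normalisation is only approximate.

```python
import numpy as np

rng = np.random.default_rng(5)
n, boxsize, nbar = 4096, 1000.0, 0.05            # cells, Mpc/h, mean count/cell

k = np.fft.rfftfreq(n, boxsize / n) * 2 * np.pi
pk = np.where(k > 0, 200.0 * k / (1 + (k / 0.05) ** 2.5), 0.0)   # toy P(k)

spec = (rng.normal(size=k.size) + 1j * rng.normal(size=k.size)) * np.sqrt(pk / 2)
delta_g = np.fft.irfft(spec, n) * np.sqrt(n / boxsize)           # Gaussian field

field = np.exp(delta_g - delta_g.var() / 2) - 1.0   # log-normal density contrast
counts = rng.poisson(nbar * (1.0 + field))          # Poisson-sampled galaxies
print(f"mean count {counts.mean():.3f}, target {nbar}")
```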

  7. Intermittency in generalized NLS equation with focusing six-wave interactions

    NASA Astrophysics Data System (ADS)

    Agafontsev, D. S.; Zakharov, V. E.

    2015-10-01

We study numerically the statistics of waves for the generalized one-dimensional nonlinear Schrödinger (NLS) equation that takes into account focusing six-wave interactions, damping and pumping terms. We demonstrate the universal behavior of this system in the region of parameters where the six-wave interaction term significantly affects only the largest waves. In particular, in the statistically steady state of this system the probability density function (PDF) of wave amplitudes turns out to be strongly non-Rayleigh for large waves, with a characteristic "fat tail" decaying with amplitude | Ψ | close to ∝ exp ⁡ (- γ | Ψ |), where γ > 0 is a constant. The corresponding non-Rayleigh addition to the PDF indicates strong intermittency; it vanishes in the absence of six-wave interactions and increases with the six-wave coupling coefficient.
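
    For a PDF with an exponential tail P(|Ψ|) ∝ exp(-γ|Ψ|), the decay constant can be estimated directly from samples: excesses over a high threshold are themselves exponentially distributed, so the maximum-likelihood estimate of γ is the reciprocal mean excess. A small sketch under that assumption (the threshold choice is left to the user):

        import numpy as np

        def tail_decay_rate(amplitudes, threshold):
            # MLE of gamma in P ~ exp(-gamma * a) from the excesses a - threshold.
            excess = amplitudes[amplitudes > threshold] - threshold
            return 1.0 / excess.mean()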

  8. Radar cross section models for limited aspect angle windows

    NASA Astrophysics Data System (ADS)

    Robinson, Mark C.

    1992-12-01

This thesis presents a method for building Radar Cross Section (RCS) models of aircraft based on static data taken from limited aspect angle windows. These models statistically characterize static RCS. This is done to show that a limited number of samples can be used to effectively characterize static aircraft RCS. The optimum models are determined by performing both a Kolmogorov and a chi-square goodness-of-fit test comparing the static RCS data with a variety of probability density functions (pdf) that are known to be effective at approximating the static RCS of aircraft. The optimum parameter estimator is also determined by the goodness-of-fit tests if there is a difference in pdf parameters obtained by the Maximum Likelihood Estimator (MLE) and the Method of Moments (MoM) estimators.
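
    A sketch of the model-selection step as described: fit each candidate pdf by maximum likelihood and rank by the Kolmogorov-Smirnov statistic. The candidate list here is illustrative, not the thesis's exact set, and scipy's generic fit stands in for the MLE/MoM comparison.

        import numpy as np
        from scipy import stats

        def best_fit_pdf(rcs_samples, candidates=("lognorm", "weibull_min", "gamma")):
            results = {}
            for name in candidates:
                dist = getattr(stats, name)
                params = dist.fit(rcs_samples)            # maximum likelihood fit
                ks_stat, p_value = stats.kstest(rcs_samples, name, args=params)
                results[name] = (ks_stat, p_value)
            # Smallest KS statistic = best agreement with the empirical cdf.
            return min(results.items(), key=lambda kv: kv[1][0])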

  9. Statistical properties of business firms structure and growth

    NASA Astrophysics Data System (ADS)

    Matia, K.; Fu, Dongfeng; Buldyrev, S. V.; Pammolli, F.; Riccaboni, M.; Stanley, H. E.

    2004-08-01

We analyze a database comprising quarterly sales of 55624 pharmaceutical products commercialized by 3939 pharmaceutical firms in the period 1992-2001. We study the probability density function (PDF) of growth in firm and product sales and find that the width of the PDF of growth decays with sales as a power law with exponent β = 0.20 ± 0.01. We also find that the average sales of products scale with firm sales as a power law with exponent α = 0.57 ± 0.02, and that the average number of products of a firm scales with firm sales as a power law with exponent γ = 0.42 ± 0.02. We compare these findings with the predictions of models of business firm growth proposed to date.

  10. Dynamical Epidemic Suppression Using Stochastic Prediction and Control

    DTIC Science & Technology

    2004-10-28

initial probability density function (PDF), p: D ⊂ R² → R, is defined by the stochastic Frobenius–Perron ... For deterministic systems, normal methods of ... induced chaos. To analyze the qualitative change, we apply the technique of the stochastic Frobenius–Perron operator [L. Billings et al., Phys. Rev. Lett. ...] transition matrix describing the probability of transport from one region of phase space to another, which approximates the stochastic Frobenius–Perron

  11. Design of a High Intensity Turbulent Combustion System

    DTIC Science & Technology

    2015-05-01

Figure 2.3: Velocity measurement on the nth repetition of a turbulent-flow experiment. u(t) = U + u'(t) ... an event such as P ≈ [U < N m/s]. The random variable U can be characterized by its probability density function (PDF). The probability of an event

  12. Estimation of laser beam pointing parameters in the presence of atmospheric turbulence.

    PubMed

    Borah, Deva K; Voelz, David G

    2007-08-10

    The problem of estimating mechanical boresight and jitter performance of a laser pointing system in the presence of atmospheric turbulence is considered. A novel estimator based on maximizing an average probability density function (pdf) of the received signal is presented. The proposed estimator uses a Gaussian far-field mean irradiance profile, and the irradiance pdf is assumed to be lognormal. The estimates are obtained using a sequence of return signal values from the intended target. Alternatively, one can think of the estimates being made by a cooperative target using the received signal samples directly. The estimator does not require sample-to-sample atmospheric turbulence parameter information. The approach is evaluated using wave optics simulation for both weak and strong turbulence conditions. Our results show that very good boresight and jitter estimation performance can be obtained under the weak turbulence regime. We also propose a novel technique to include the effect of very low received intensity values that cannot be measured well by the receiving device. The proposed technique provides significant improvement over a conventional approach where such samples are simply ignored. Since our method is derived from the lognormal irradiance pdf, the performance under strong turbulence is degraded. However, the ideas can be extended with appropriate pdf models to obtain more accurate results under strong turbulence conditions.
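
    A toy version of the likelihood idea, assuming (as above) a Gaussian far-field mean irradiance and log-normal irradiance fluctuations. The jitter averaging that defines the actual estimator is omitted for brevity, the samples are taken as normalized to the on-axis level, and every input name is hypothetical.

        import numpy as np

        def estimate_boresight(samples, candidate_offsets, w, sigma_ln):
            # Negative log-likelihood of log-normal samples whose log-median
            # follows the Gaussian far-field profile ln I(r) = -2 r^2 / w^2
            # (constant terms dropped); grid search over boresight offsets.
            def nll(r):
                mu = -2.0 * r ** 2 / w ** 2
                z = np.log(samples) - mu
                return np.sum(z ** 2) / (2.0 * sigma_ln ** 2)
            scores = [nll(r) for r in candidate_offsets]
            return candidate_offsets[int(np.argmin(scores))]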

  13. Uncertainty propagation for statistical impact prediction of space debris

    NASA Astrophysics Data System (ADS)

    Hoogendoorn, R.; Mooij, E.; Geul, J.

    2018-01-01

    Predictions of the impact time and location of space debris in a decaying trajectory are highly influenced by uncertainties. The traditional Monte Carlo (MC) method can be used to perform accurate statistical impact predictions, but requires a large computational effort. A method is investigated that directly propagates a Probability Density Function (PDF) in time, which has the potential to obtain more accurate results with less computational effort. The decaying trajectory of Delta-K rocket stages was used to test the methods using a six degrees-of-freedom state model. The PDF of the state of the body was propagated in time to obtain impact-time distributions. This Direct PDF Propagation (DPP) method results in a multi-dimensional scattered dataset of the PDF of the state, which is highly challenging to process. No accurate results could be obtained, because of the structure of the DPP data and the high dimensionality. Therefore, the DPP method is less suitable for practical uncontrolled entry problems and the traditional MC method remains superior. Additionally, the MC method was used with two improved uncertainty models to obtain impact-time distributions, which were validated using observations of true impacts. For one of the two uncertainty models, statistically more valid impact-time distributions were obtained than in previous research.

  14. Computations of steady-state and transient premixed turbulent flames using pdf methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hulek, T.; Lindstedt, R.P.

    1996-03-01

Premixed propagating turbulent flames are modeled using a one-point, single-time, joint velocity-composition probability density function (pdf) closure. The pdf evolution equation is solved using a Monte Carlo method. The unclosed terms in the pdf equation are modeled using a modified version of the binomial Langevin model for scalar mixing of Valino and Dopazo, and the Haworth and Pope (HP) and Lagrangian Speziale-Sarkar-Gatski (LSSG) models for the viscous dissipation of velocity and the fluctuating pressure gradient. The source terms for the presumed one-step chemical reaction are extracted from the rate of fuel consumption in laminar premixed hydrocarbon flames, computed using a detailed chemical kinetic mechanism. Steady-state and transient solutions are obtained for planar turbulent methane-air and propane-air flames. The transient solution method features a coupling with a Finite Volume (FV) code to obtain the mean pressure field. The results are compared with the burning velocity measurements of Abdel-Gayed et al. and with velocity measurements obtained in freely propagating propane-air flames by Videto and Santavicca. The effects of different upstream turbulence fields, chemical source terms (different fuels and strained/unstrained laminar flames) and the influence of the velocity statistics models (HP and LSSG) are assessed.

  15. An effective inversion algorithm for retrieving bimodal aerosol particle size distribution from spectral extinction data

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming

    2014-12-01

The Ant Colony Optimization algorithm based on the probability density function (PDF-ACO) is applied to estimate the bimodal aerosol particle size distribution (PSD). The direct problem is solved by the modified Anomalous Diffraction Approximation (ADA, an approximation for optically large and soft spheres, i.e., χ ≫ 1 and |m-1| ≪ 1) and the Beer-Lambert law. First, a popular bimodal aerosol PSD and three other bimodal PSDs are retrieved in the dependent model by the multi-wavelength extinction technique. All the results reveal that the PDF-ACO algorithm can be used as an effective technique to investigate the bimodal PSD. Then, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the bimodal PSDs under the independent model. Finally, the J-SB and M-β functions are applied to recover actual measured aerosol PSDs over Beijing and Shanghai obtained from the Aerosol Robotic Network (AERONET). The numerical simulation and experimental results demonstrate that these two general functions, especially the J-SB function, can be used as versatile distribution functions to retrieve the bimodal aerosol PSD when no a priori information about the PSD is available.

  16. Buffer-dependent regulation of aquaporin-1 expression and function in human peritoneal mesothelial cells.

    PubMed

    Zhai, Yihui; Bloch, Jacek; Hömme, Meike; Schaefer, Julia; Hackert, Thilo; Philippin, Bärbel; Schwenger, Vedat; Schaefer, Franz; Schmitt, Claus P

    2012-07-01

Biocompatible peritoneal dialysis fluids (PDF) are buffered with lactate and/or bicarbonate. We hypothesized that the reduced toxicity of the biocompatible solutions might unmask specific effects of the buffer type on mesothelial cell functions. Human peritoneal mesothelial cells (HPMC) were incubated with bicarbonate-buffered (B-)PDF or lactate-buffered (L-)PDF, followed by messenger RNA (mRNA) and protein analysis. Gene silencing was achieved using small interfering RNA (siRNA), and functional studies used Transwell culture systems and monolayer wound-healing assays. Incubation with B-PDF increased HPMC migration in the Transwell and monolayer wound-healing assays to 245 ± 99 and 137 ± 11% compared with L-PDF. Gene silencing showed this effect to be entirely dependent on the expression of aquaporin-1 (AQP-1) and independent of AQP-3. Exposure of HPMC to B-PDF increased AQP-1 mRNA and protein abundance to 209 ± 80 and 197 ± 60% of medium control; the effect was pH dependent. L-PDF reduced AQP-1 mRNA. Addition of bicarbonate to L-PDF increased AQP-1 abundance threefold; mRNA half-life remained unchanged. Immunocytochemistry confirmed opposite changes of AQP-1 cell-membrane abundance with B-PDF and L-PDF. Peritoneal mesothelial AQP-1 abundance and migration capacity are regulated by pH and the buffer agents used in PD solutions. In vivo studies are required to delineate the impact on long-term peritoneal membrane integrity and function.

  17. On the use of Bayesian Monte-Carlo in evaluation of nuclear data

    NASA Astrophysics Data System (ADS)

    De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles

    2017-09-01

As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) by Bayesian statistical inference, comparing theory to experiment. The formal rule of this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can be seen as an estimation of the posterior probability density of a set of parameters (referred to as the parameter vector x) knowing prior information on these parameters and a likelihood which gives the probability density function of observing a data set knowing x. To solve this problem, two major paths can be taken: add approximations and hypotheses and obtain an equation to be solved numerically (minimum of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid approximations (present in the traditional adjustment procedure based on chi-square minimization) and propose alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal, resonance, and continuum ranges, for all nuclear reaction models at these energies. Algorithms based on Monte Carlo sampling and Markov chains are presented. The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of probability density distributions, and to provide a framework for finding the global minimum if several local minima exist. Applications to resolved resonance, unresolved resonance, and continuum evaluation, as well as multigroup cross section data assimilation, are presented.
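
    The rule pdf(posterior) ∝ pdf(prior) × likelihood can be sampled with a random-walk Metropolis algorithm, the simplest of the Markov chain Monte Carlo schemes alluded to above. A minimal, generic sketch (log densities are user-supplied callables; all names are illustrative, and nuclear-model specifics are abstracted away):

        import numpy as np

        def metropolis(log_prior, log_like, x0, step, n_steps, seed=0):
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            logp = log_prior(x) + log_like(x)
            chain = [x.copy()]
            for _ in range(n_steps):
                # Gaussian random-walk proposal around the current state.
                prop = x + step * rng.standard_normal(x.shape)
                logp_prop = log_prior(prop) + log_like(prop)
                # Accept with probability min(1, posterior ratio).
                if np.log(rng.uniform()) < logp_prop - logp:
                    x, logp = prop, logp_prop
                chain.append(x.copy())
            return np.array(chain)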

  18. Methylation of zebularine: a quantum mechanical study incorporating interactive 3D pdf graphs.

    PubMed

    Selvam, Lalitha; Vasilyev, Vladislav; Wang, Feng

    2009-08-20

Methylation of a cytidine deaminase inhibitor, 1-(beta-D-ribofuranosyl)-2-pyrimidone (i.e., zebularine (zeb)), which produces 1-(beta-D-ribofuranosyl)-5-methyl-2-pyrimidinone (d5), has been investigated using density functional theory models. The optimized structures of zeb and d5 and the valence orbitals primarily responsible for the methylation in d5 are presented using state-of-the-art interactive (on a computer or online) three-dimensional (3D) graphics in a portable document format (pdf) file, 3D-PDF (http://www.web3d.org/x3d/vrml/). The facility to embed 3D molecular structures into pdf documents has been developed jointly at Swinburne University of Technology and the National Computational Infrastructure, the Australian National University. The methyl fragment in the base moiety shows little effect on the sugar puckering but apparently affects anisotropic properties, such as condensed Fukui functions. Binding energy spectra, in both valence space and core space, are noticeably affected, in particular in the outer-valence space (e.g., IP < 20 eV). The methyl fragment delocalizes and diffuses into almost all valence space, but orbitals 8 (57a, IP = 12.57 eV), 18 (47a, IP = 14.70 eV), and 37 (28a, IP = 22.15 eV) are identified as fingerprints of the methyl fragment. In the inner shell, however, the impact of the methyl can be localized and identified by chemical shift. A small, global red shift is found for the O-K, N-K and sugar C-K spectra, whereas the base C-K spectrum exhibits apparent methyl-related changes.

  19. Multiple Streaming and the Probability Distribution of Density in Redshift Space

    NASA Astrophysics Data System (ADS)

    Hui, Lam; Kofman, Lev; Shandarin, Sergei F.

    2000-07-01

We examine several aspects of redshift distortions by expressing the redshift-space density in terms of the eigenvalues and orientation of the local Lagrangian deformation tensor. We explore the importance of multiple streaming using the Zeldovich approximation (ZA), and compute the average number of streams in both real and redshift space. We find that multiple streaming can be significant in redshift space but negligible in real space, even at moderate values of the linear fluctuation amplitude (σ_l ≲ 1). Moreover, unlike their real-space counterparts, redshift-space multiple streams can flow past each other with minimal interactions. Such nonlinear redshift-space effects, which are physically distinct from the fingers-of-God due to small-scale virialized motions, might in part explain the well-known departure of redshift distortions from the classic linear prediction by Kaiser, even at relatively large scales where the corresponding density field in real space is well described by linear perturbation theory. We also compute, using the ZA, the probability distribution function (PDF) of the density, as well as S3, in real and redshift space, and compare it with the PDF measured from N-body simulations. The role of caustics in defining the character of the high-density tail is examined. We find that (non-Lagrangian) smoothing, due to both finite resolution or discreteness and small-scale velocity dispersions, is very effective in erasing caustic structures, unless the initial power spectrum is sufficiently truncated.

  20. The Lambert Way to Gaussianize Heavy-Tailed Data with the Inverse of Tukey's h Transformation as a Special Case

    PubMed Central

    Goerg, Georg M.

    2015-01-01

I present a parametric, bijective transformation to generate heavy-tailed versions of arbitrary random variables. The tail behavior of this heavy-tailed Lambert W × F_X random variable depends on a tail parameter δ ≥ 0: for δ = 0, Y ≡ X; for δ > 0, Y has heavier tails than X. For Gaussian X it reduces to Tukey's h distribution. The Lambert W function provides an explicit inverse transformation, which can thus remove heavy tails from observed data. It also provides closed-form expressions for the cumulative distribution function (cdf) and probability density function (pdf). As a special case, these yield analytic expressions for Tukey's h pdf and cdf. Parameters can be estimated by maximum likelihood, and applications to S&P 500 log-returns demonstrate the usefulness of the presented methodology. The R package LambertW implements most of the introduced methodology and is publicly available on CRAN. PMID:26380372
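
    The Gaussian-input case has compact forward and inverse maps. A minimal Python sketch following the paper's construction (the reference implementation is the R package LambertW; δ > 0 is assumed, and scipy's principal branch of the Lambert W function is used):

        import numpy as np
        from scipy.special import lambertw

        def heavy_tail(u, delta):
            # Forward transform: z = u * exp(delta * u^2 / 2) (Tukey's h for
            # Gaussian u), which fattens the tails for delta > 0.
            return u * np.exp(0.5 * delta * u ** 2)

        def gaussianize(z, delta):
            # Inverse: u = sign(z) * sqrt(W(delta * z^2) / delta), since
            # delta * z^2 = (delta * u^2) * exp(delta * u^2).
            w = lambertw(delta * z ** 2).real
            return np.sign(z) * np.sqrt(w / delta)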

  1. Mesoscopic fluctuations and intermittency in aging dynamics

    NASA Astrophysics Data System (ADS)

    Sibani, P.

    2006-01-01

Mesoscopic aging systems are characterized by large intermittent noise fluctuations. In a record-dynamics scenario (Sibani P. and Dall J., Europhys. Lett., 64 (2003) 8) these events, quakes, are treated as a Poisson process with average α ln(1 + t/t_w), where t is the observation time, t_w is the age and α is a parameter. Assuming for simplicity that quakes constitute the only source of de-correlation, we present a model for the probability density function (PDF) of the configuration autocorrelation function. Besides α, the model has the average quake size 1/q as a parameter. The model autocorrelation PDF has a Gumbel-like shape, which approaches a Gaussian for large t/t_w and becomes sharply peaked in the thermodynamic limit. Its average and variance, which are given analytically, depend on t/t_w as a power law and a power law with a logarithmic correction, respectively. Most predictions are in good agreement with data from the literature and with simulations of the Edwards-Anderson spin glass carried out as a test.
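
    The quake statistics above are straightforward to sample: the number of quakes observed during a time t for a system of age t_w is Poisson with mean α ln(1 + t/t_w). A minimal sketch (names illustrative):

        import numpy as np

        def sample_quake_counts(alpha, t_w, t_obs, n_runs=10_000, seed=0):
            rng = np.random.default_rng(seed)
            mean = alpha * np.log1p(t_obs / t_w)   # alpha * ln(1 + t/t_w)
            return rng.poisson(mean, size=n_runs)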

  2. Modelling the structure of Zr-rich Pb(Zr1-xTix)O3, x = 0.4 by a multiphase approach.

    PubMed

    Bogdanov, Alexander; Mysovsky, Andrey; Pickard, Chris J; Kimmel, Anna V

    2016-10-12

Solid solution perovskite Pb(Zr1-xTix)O3 (PZT) is an industrially important material. Despite the long history of experimental and theoretical studies, the structure of this material is still under intensive discussion. In this work, we have applied structure searching coupled with density functional theory methods to provide a multiphase description of this material at x = 0.4. We demonstrate that the permutational freedom of B-site cations leads to the stabilisation of a variety of local phases reflecting a relatively flat energy landscape of PZT. Using a set of predicted local phases we reproduce the experimental pair distribution function (PDF) profile with high accuracy. We introduce a complex multiphase picture of the structure of PZT and show that additional monoclinic and rhombohedral phases account for a better description of the experimental PDF profile. We propose that such a multiphase picture reflects the entropy reached in the sample during the preparation process.

  3. Bayesian analysis of the flutter margin method in aeroelasticity

    DOE PAGES

    Khalil, Mohammad; Poirel, Dominique; Sarkar, Abhijit

    2016-08-27

A Bayesian statistical framework is presented for the Zimmerman and Weissenburger flutter margin method which considers the uncertainties in aeroelastic modal parameters. The proposed methodology overcomes the limitations of the previously developed least-squares based estimation technique which relies on the Gaussian approximation of the flutter margin probability density function (pdf). Using the measured free-decay responses at subcritical (preflutter) airspeeds, the joint non-Gaussian posterior pdf of the modal parameters is sampled using the Metropolis–Hastings (MH) Markov chain Monte Carlo (MCMC) algorithm. The posterior MCMC samples of the modal parameters are then used to obtain the flutter margin pdfs and finally the flutter speed pdf. The usefulness of the Bayesian flutter margin method is demonstrated using synthetic data generated from a two-degree-of-freedom pitch-plunge aeroelastic model. The robustness of the statistical framework is demonstrated using different sets of measurement data. In conclusion, the probabilistic (Bayesian) approach is shown to reduce the number of test points required to provide a flutter speed estimate for a given accuracy and precision.

  4. Scaling analysis of Anderson localizing optical fibers

    NASA Astrophysics Data System (ADS)

    Abaie, Behnam; Mafi, Arash

    2017-02-01

Anderson localizing optical fibers (ALOF) enable a novel optical waveguiding mechanism; if a narrow beam is scanned across the input facet of the disordered fiber, the output beam follows the transverse position of the incoming wave. Strong transverse disorder induces several localized modes uniformly spread across the transverse structure of the fiber. Each localized mode acts like a transmission channel which carries a narrow input beam along the fiber without transverse expansion. Here, we investigate the scaling of the transverse size of the localized modes of ALOF with respect to the transverse dimensions of the fiber. The probability density function (PDF) of the mode area is analyzed, and it is shown that the PDF converges to a terminal shape at transverse dimensions considerably smaller than in previous experimental implementations. Our analysis turns the formidable numerical task of ALOF simulation into a much simpler problem, because the convergence of the mode-area PDF to a terminal shape indicates that a much smaller disordered fiber, compared with previous numerical and experimental implementations, provides all the statistical information required for a precise analysis of the fiber.

  5. Random variable transformation for generalized stochastic radiative transfer in finite participating slab media

    NASA Astrophysics Data System (ADS)

    El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.

    2015-10-01

The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specularly and diffusely reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to get the complete average of the solution functions, which are represented by the probability density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation is applied to the input stochastic process (the extinction function of the medium). This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). The transport equation is then solved deterministically to get a closed form for the solution as a function of x and L. This solution is used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to get complete analytical averages for some interesting physical quantities, namely reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the averages of the partial heat fluxes for the generalized problem with an internal source of radiation are obtained and represented graphically.
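
    At the core of the RVT technique is the change-of-variables identity for densities; in its simplest one-dimensional, monotone form, for an output Y = g(X) of an input random variable X:

        p_Y(y) = p_X\bigl(g^{-1}(y)\bigr)\,\left|\frac{\mathrm{d}}{\mathrm{d}y}\,g^{-1}(y)\right|

    The paper applies this idea with the optical thickness L as the input random variable and the transfer solution as the output process.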

  6. Local structure analysis on (La,Ba)(Ga,Mg)O3-δ by the pair distribution function method using a neutron source and density functional theory calculations

    NASA Astrophysics Data System (ADS)

    Kitamura, Naoto; Vogel, Sven C.; Idemoto, Yasushi

    2013-06-01

In this work, we focused on La0.95Ba0.05Ga0.8Mg0.2O3-δ with the perovskite structure, and investigated the local structure around the oxygen vacancy by the pair distribution function (PDF) method and density functional theory (DFT) calculations. By comparing the G(r) simulated from the DFT calculation with the experimentally observed G(r), it is suggested that the oxygen vacancy is trapped by Ba2+ at the La3+ site, at least at room temperature. Such a defect association may be one of the reasons why La0.95Ba0.05Ga0.8Mg0.2O3-δ shows lower oxide-ion conductivity than (La,Sr)(Ga,Mg)O3-δ, which is widely used as an electrolyte in solid oxide fuel cells.

  7. Hybrid finite-volume/transported PDF method for the simulation of turbulent reactive flows

    NASA Astrophysics Data System (ADS)

    Raman, Venkatramanan

A novel computational scheme is formulated for simulating turbulent reactive flows in complex geometries with detailed chemical kinetics. A Probability Density Function (PDF) based method that handles the scalar transport equation is coupled with an existing Finite Volume (FV) Reynolds-Averaged Navier-Stokes (RANS) flow solver. The PDF formulation leads to closed chemical source terms and facilitates the use of detailed chemical mechanisms without approximations. The particle-based PDF scheme is modified to handle complex geometries and grid structures. Grid-independent particle evolution schemes that scale linearly with the problem size are implemented in the Monte-Carlo PDF solver. A novel algorithm, in situ adaptive tabulation (ISAT), is employed to ensure tractability of complex chemistry involving a multitude of species. Several non-reacting test cases are performed to ascertain the efficiency and accuracy of the method. Simulation results from a turbulent jet-diffusion flame case are compared against experimental data. The effects of the micromixing model, turbulence model, and reaction scheme on flame predictions are discussed extensively. Finally, the method is used to analyze the Dow Chlorination Reactor. Detailed kinetics involving 37 species and 158 reactions, as well as a reduced form with 16 species and 21 reactions, are used. The effect of inlet configuration on reactor behavior and product distribution is analyzed. Plant-scale reactors exhibit quenching phenomena that cannot be reproduced by conventional simulation methods. The FV-PDF method predicts quenching accurately and provides insight into the dynamics of the reactor near extinction. The accuracy of the fractional time-stepping technique is discussed in the context of apparent multiple steady states observed in a non-premixed feed configuration of the chlorination reactor.

  8. On Bayesian Rules for Selecting 3PL Binary Items for Criterion-Referenced Interpretations and Creating Booklets for Bookmark Standard Setting.

    ERIC Educational Resources Information Center

    Huynh, Huynh

    By noting that a Rasch or two parameter logistic (2PL) item belongs to the exponential family of random variables and that the probability density function (pdf) of the correct response (X=1) and the incorrect response (X=0) are symmetric with respect to the vertical line at the item location, it is shown that the conjugate prior for ability is…

  9. Functional PDF Signaling in the Drosophila Circadian Neural Circuit Is Gated by Ral A-Dependent Modulation.

    PubMed

    Klose, Markus; Duvall, Laura; Li, Weihua; Liang, Xitong; Ren, Chi; Steinbach, Joe Henry; Taghert, Paul H

    2016-05-18

    The neuropeptide PDF promotes the normal sequencing of circadian behavioral rhythms in Drosophila, but its signaling mechanisms are not well understood. We report daily rhythmicity in responsiveness to PDF in critical pacemakers called small LNvs. There is a daily change in potency, as great as 10-fold higher, around dawn. The rhythm persists in constant darkness and does not require endogenous ligand (PDF) signaling or rhythmic receptor gene transcription. Furthermore, rhythmic responsiveness reflects the properties of the pacemaker cell type, not the receptor. Dopamine responsiveness also cycles, in phase with that of PDF, in the same pacemakers, but does not cycle in large LNv. The activity of RalA GTPase in s-LNv regulates PDF responsiveness and behavioral locomotor rhythms. Additionally, cell-autonomous PDF signaling reversed the circadian behavioral effects of lowered RalA activity. Thus, RalA activity confers high PDF responsiveness, providing a daily gate around the dawn hours to promote functional PDF signaling. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Functional PDF Signaling in the Drosophila Circadian Neural Circuit is Gated by Ral A-Dependent Modulation

    PubMed Central

    Liang, Xitong; Ren, Chi; Steinbach, Joe Henry; Taghert, Paul H.

    2016-01-01

The neuropeptide PDF promotes the normal sequencing of circadian behavioral rhythms in Drosophila, but its signaling mechanisms are not well understood. We report daily rhythmicity in responsiveness to PDF in critical pacemakers called small LNvs. There is a daily change in potency, as great as 10-fold higher, around dawn. The rhythm persists in constant darkness and does not require endogenous ligand (PDF) signaling or rhythmic receptor gene transcription. Furthermore, rhythmic responsiveness reflects the properties of the pacemaker cell type, not the receptor. Dopamine responsiveness also cycles, in phase with that of PDF, in the same pacemakers, but does not cycle in large LNv. The activity of RalA GTPase in s-LNv regulates PDF responsiveness and behavioral locomotor rhythms. Additionally, cell-autonomous PDF signaling reversed the circadian behavioral effects of lowered RalA activity. Thus, RalA activity confers high PDF responsiveness, providing a daily gate around the dawn hours to promote functional PDF signaling. PMID:27161526

  11. A statistical analysis of the elastic distortion and dislocation density fields in deformed crystals

    DOE PAGES

    Mohamed, Mamdouh S.; Larson, Bennett C.; Tischler, Jonathan Z.; ...

    2015-05-18

The statistical properties of the elastic distortion fields of dislocations in deforming crystals are investigated using the method of discrete dislocation dynamics to simulate dislocation structures and dislocation density evolution under tensile loading. Probability distribution functions (PDF) and pair correlation functions (PCF) of the simulated internal elastic strains and lattice rotations are generated for tensile strain levels up to 0.85%. The PDFs of simulated lattice rotation are compared with sub-micrometer resolution three-dimensional X-ray microscopy measurements of rotation magnitudes and deformation length scales in 1.0% and 2.3% compression-strained Cu single crystals to explore the linkage between experiment and the theoretical analysis. The statistical properties of the deformation simulations are analyzed through determinations of the Nye and Kröner dislocation density tensors. The significance of the magnitudes and the length scales of the elastic strain and the rotation parts of the dislocation density tensors are demonstrated, and their relevance to understanding the fundamental aspects of deformation is discussed.

  12. SU-F-T-191: 4D Dose Reconstruction of Intensity Modulated Proton Therapy (IMPT) Based On Breathing Probability Density Function (PDF) From 4D Cone Beam Projection Images: A Study for Lung Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, J; Ding, X; Liang, J

    2016-06-15

Purpose: With energy repainting in lung IMPT, the dose delivered approximates the convolution of the dose in each phase with the corresponding breathing PDF. This study computes the breathing-PDF-weighted 4D dose in lung IMPT treatment and compares it to the initial robust plan. Methods: Six lung patients were evaluated in this study. Amsterdam shroud images were generated from pre-treatment 4D cone-beam projections. The diaphragm motion curve was extracted from the shroud image and the breathing PDF was generated. Each patient was planned to 60 Gy (12 Gy × 5). In the initial plans, the ITV density on the average CT was overridden with its maximum value for planning, using two IMPT beams with robust optimization (5 mm uncertainty in patient position and 3.5% range uncertainty). The plan was applied to all 4D CT phases. The dose in each phase was deformed to a reference phase. The 4D dose was reconstructed by summing all these doses based on the corresponding weighting from the PDF. Plan parameters, including maximum dose (Dmax), ITV V100, homogeneity index (HI=D2/D98), R50 (50%IDL/ITV), and the lung-GTV's V12.5 and V5, were compared between the reconstructed 4D dose and the initial plans. Results: Dmax is significantly lower in the reconstructed 4D dose: 68.12±3.5 Gy vs. 70.1±4.3 Gy in the initial plans (p=0.015). No significant difference is found for ITV V100, HI, and R50: 92.2%±15.4% vs. 96.3%±2.5% (p=0.565), 1.033±0.016 vs. 1.038±0.017 (p=0.548), and 19.2±12.1 vs. 18.1±11.6 (p=0.265), for the 4D dose and initial plans, respectively. The lung-GTV V12.5 and V5 are significantly higher in the 4D dose: 13.9%±4.8% vs. 13.0%±4.6% (p=0.021) and 17.6%±5.4% vs. 16.9%±5.2% (p=0.011), respectively. Conclusion: 4D dose reconstruction based on the phase PDF can be used to evaluate the dose received by the patient. A robust optimization based on the phase PDF may further improve patient care.
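
    The reconstruction step described above reduces to a PDF-weighted sum of the per-phase doses after deformation to the reference phase. A minimal sketch (array shapes and names are illustrative):

        import numpy as np

        def reconstruct_4d_dose(phase_doses, breathing_pdf):
            # phase_doses: (n_phases, nx, ny, nz) doses already deformed to
            # the reference phase; breathing_pdf: (n_phases,) phase weights.
            w = np.asarray(breathing_pdf, dtype=float)
            w = w / w.sum()                      # normalize the breathing PDF
            return np.tensordot(w, np.asarray(phase_doses), axes=(0, 0))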

  13. Pigment-dispersing factor (PDF) has different effects on Drosophila's circadian clocks in the accessory medulla and in the dorsal brain.

    PubMed

    Wülbeck, Corinna; Grieshaber, Eva; Helfrich-Förster, Charlotte

    2008-10-01

The neuropeptide pigment-dispersing factor (PDF) is a key transmitter in the circadian clock of Drosophila melanogaster. Here we studied the rhythmic behavior of neural mutants with modified arborizations of the large PDF neurons. In sine oculis(1) (so(1)) mutants we found a higher density of PDF fibers in the fly's pacemaker center, the accessory medulla. These flies exhibited a significantly longer period (24.6 h) than control flies. When PDF levels were elevated to very high levels in the dorsal brain, as true for so(mda) mutants and small optic lobes;so(1) double mutants (sol(1);so(1)), a short-period component split off the long period in behavioral rhythmicity. The short period became shorter the higher the amount of PDF in this brain region, reaching a value of approximately 21 h. The period alterations were clearly dependent on PDF, because so(1);Pdf(01) and so(mda);Pdf(01) double mutants showed a single free-running component with a period similar to Pdf(01) mutants (approximately 22.5 h) and significantly longer than the short period of so(mda) mutants. These observations indicate that PDF feeds back on the clock neurons and changes their period. Evidently, PDF lengthens the period of some clock neurons and shortens that of others.

  14. Explanation of power law behavior of autoregressive conditional duration processes based on the random multiplicative process

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2004-04-01

Autoregressive conditional duration (ACD) processes, which have the potential to be applied to power-law distributions of complex systems found in natural science, life science, and social science, are analyzed both numerically and theoretically. An ACD(1) process exhibits a singular second-order moment, which suggests that its probability density function (PDF) has a power-law tail. It is verified that the PDF of the ACD(1) process has a power-law tail with an arbitrary exponent depending on a model parameter. On the basis of the theory of random multiplicative processes, a relation between the model parameter and the power-law exponent is derived theoretically and confirmed to be valid by numerical simulations. An application of the ACD(1) process to intervals between two successive transactions in a foreign currency market is shown.
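
    A minimal simulation sketch of one common ACD(1) parameterization, x_n = eps_n (a + b x_{n-1}) with unit-mean exponential innovations (a hypothetical choice; the paper's exact setup may differ). The multiplicative term b eps_n is what generates the power-law tail in the PDF of x.

        import numpy as np

        def simulate_acd1(a, b, n, seed=0):
            rng = np.random.default_rng(seed)
            eps = rng.exponential(1.0, size=n)   # unit-mean innovations
            x = np.empty(n)
            x[0] = a
            for i in range(1, n):
                x[i] = eps[i] * (a + b * x[i - 1])
            return x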

  15. Evaluation of higher order statistics parameters for multi channel sEMG using different force levels.

    PubMed

    Naik, Ganesh R; Kumar, Dinesh K

    2011-01-01

The electromyography (EMG) signal provides information about the performance of muscles and nerves. The shape of the muscle signal and motor unit action potential (MUAP) varies due to movement of the electrode position or to changes in contraction level. This research evaluates the non-Gaussianity of the surface electromyogram (sEMG) signal using higher-order statistics (HOS) parameters. To achieve this, experiments were conducted for four different finger and wrist actions at different levels of maximum voluntary contraction (MVC). Our experimental analysis shows that at constant force and for non-fatiguing contractions, the probability density functions (PDF) of sEMG signals are non-Gaussian. For lower contraction levels (below 30% of MVC), the PDF tends toward a Gaussian process. These measures were verified by computing kurtosis values for different MVCs.
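
    The kurtosis check mentioned above is a one-liner per analysis frame. A small sketch with an arbitrary frame length (excess kurtosis near zero suggests a Gaussian PDF, as reported for low contraction levels):

        import numpy as np
        from scipy.stats import kurtosis

        def framewise_excess_kurtosis(semg, frame=1024):
            n = len(semg) // frame
            frames = np.reshape(semg[: n * frame], (n, frame))
            # Fisher's definition: 0 for a Gaussian signal.
            return kurtosis(frames, axis=1, fisher=True)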

  16. Explanation of power law behavior of autoregressive conditional duration processes based on the random multiplicative process.

    PubMed

    Sato, Aki-Hiro

    2004-04-01

    Autoregressive conditional duration (ACD) processes, which have the potential to be applied to power law distributions of complex systems found in natural science, life science, and social science, are analyzed both numerically and theoretically. An ACD(1) process exhibits the singular second order moment, which suggests that its probability density function (PDF) has a power law tail. It is verified that the PDF of the ACD(1) has a power law tail with an arbitrary exponent depending on a model parameter. On the basis of theory of the random multiplicative process a relation between the model parameter and the power law exponent is theoretically derived. It is confirmed that the relation is valid from numerical simulations. An application of the ACD(1) to intervals between two successive transactions in a foreign currency market is shown.

  17. LES/PDF studies of joint statistics of mixture fraction and progress variable in piloted methane jet flames with inhomogeneous inlet flows

    NASA Astrophysics Data System (ADS)

    Zhang, Pei; Barlow, Robert; Masri, Assaad; Wang, Haifeng

    2016-11-01

    The mixture fraction and progress variable are often used as independent variables for describing turbulent premixed and non-premixed flames. There is a growing interest in using these two variables for describing partially premixed flames. The joint statistical distribution of the mixture fraction and progress variable is of great interest in developing models for partially premixed flames. In this work, we conduct predictive studies of the joint statistics of mixture fraction and progress variable in a series of piloted methane jet flames with inhomogeneous inlet flows. The employed models combine large eddy simulations with the Monte Carlo probability density function (PDF) method. The joint PDFs and marginal PDFs are examined in detail by comparing the model predictions and the measurements. Different presumed shapes of the joint PDFs are also evaluated.

  18. A pdf-Free Change Detection Test Based on Density Difference Estimation.

    PubMed

    Bu, Li; Alippi, Cesare; Zhao, Dongbin

    2018-02-01

The ability to detect online changes in stationarity or time variance in a data stream is a hot research topic with striking implications. In this paper, we propose a novel probability density function-free change detection test, which is based on the least-squares density-difference estimation method and operates online on multidimensional inputs. The test does not require any assumption about the underlying data distribution and is able to operate immediately after being configured, by adopting a reservoir sampling mechanism. The thresholds required to detect a change are derived automatically once a false positive rate is set by the application designer. Comprehensive experiments validate the effectiveness of the proposed method in terms of both detection promptness and accuracy.
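
    A minimal sketch of a least-squares density-difference estimate, the estimator family named above: the difference p - q is modeled as a sum of Gaussian kernels and fitted by a closed-form ridge solution. Kernel width and regularizer are fixed here, whereas the paper selects such quantities automatically; all names are illustrative.

        import numpy as np

        def lsdd(x_p, x_q, sigma=1.0, lam=1e-3):
            # x_p, x_q: samples of shape (n, d) from the two distributions.
            centers = np.vstack([x_p, x_q])
            d = centers.shape[1]

            def kernel(a, b):
                sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
                return np.exp(-sq / (2.0 * sigma ** 2))

            # Closed-form Gram matrix of the kernel model: integrals of
            # products of Gaussians over R^d.
            csq = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            H = (np.pi * sigma ** 2) ** (d / 2) * np.exp(-csq / (4.0 * sigma ** 2))
            h = kernel(x_p, centers).mean(0) - kernel(x_q, centers).mean(0)
            theta = np.linalg.solve(H + lam * np.eye(len(h)), h)
            return float(theta @ h)   # plug-in estimate of the squared L2 distance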

  19. Influence of turbulent fluctuations on non-equilibrium chemical reactions in the flow

    NASA Astrophysics Data System (ADS)

    Molchanov, A. M.; Yanyshev, D. S.; Bykov, L. V.

    2017-11-01

In chemically nonequilibrium flows, the calculation of the source terms (formation rates) in the species equations is of utmost importance. The formation rate of each component is a non-linear function of the mixture density, temperature, and species concentrations, so the assumption that the mean rate may be determined from mean values of the flow parameters can lead to significant errors. One of the most accurate approaches here is the use of a probability density function (PDF). In this paper, a method for constructing such PDFs is developed. The model was verified by comparison with experimental data. Using the example of supersonic combustion, it was shown that while the overall effect on the averaged flow field is often negligible, the point of ignition can be shifted considerably upstream.

  20. The Synergy Between Total Scattering and Advanced Simulation Techniques: Quantifying Geopolymer Gel Evolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Claire; Bloomer, Breaunnah E.; Provis, John L.

    2012-05-16

With the ever-increasing demand for technologically advanced structural materials, together with emerging environmental consciousness due to climate change, geopolymer cement is fast becoming a viable alternative to traditional cements due to its proven mechanical engineering characteristics and the reduction in CO2 emitted (approximately 80% less CO2 than ordinary Portland cement). Nevertheless, much remains unknown regarding the kinetics of the molecular changes responsible for nanostructural evolution during the geopolymerization process. Here, in-situ total scattering measurements in the form of X-ray pair distribution function (PDF) analysis are used to quantify the extent of reaction of metakaolin/slag alkali-activated geopolymer binders, including the effects of various activators (alkali hydroxide/silicate) on the kinetics of the geopolymerization reaction. Restricting quantification of the kinetics to the initial ten hours of reaction does not enable elucidation of the true extent of the reaction, but using X-ray PDF data obtained after 128 days of reaction enables more accurate determination of the initial extent of reaction. The synergies between the in-situ X-ray PDF data and simulations conducted by multiscale density functional theory-based coarse-grained Monte Carlo analysis are outlined, particularly with regard to the potential for the X-ray data to provide a time scale for kinetic analysis of the extent of reaction obtained from the multiscale simulation methodology.

  1. Magnetic discontinuities in magnetohydrodynamic turbulence and in the solar wind.

    PubMed

    Zhdankin, Vladimir; Boldyrev, Stanislav; Mason, Joanne; Perez, Jean Carlos

    2012-04-27

Recent measurements of solar wind turbulence report the presence of intermittent, exponentially distributed angular discontinuities in the magnetic field. In this Letter, we study whether such discontinuities can be produced by magnetohydrodynamic (MHD) turbulence. We detect the discontinuities by measuring the fluctuations of the magnetic field direction, Δθ, across fixed spatial increments Δx in direct numerical simulations of MHD turbulence with an imposed uniform guide field B_0. A large region of the probability density function (pdf) for Δθ is found to follow an exponential decay, proportional to exp(-Δθ/θ_*), with characteristic angle θ_* ≈ 14°(b_rms/B_0)^0.65 for a broad range of guide-field strengths. We find that discontinuities observed in the solar wind can be reproduced by MHD turbulence with reasonable ratios of b_rms/B_0. We also observe an excess of small angular discontinuities when Δx becomes small, possibly indicating an increasing statistical significance of dissipation-scale structures. The structure of the pdf in this case closely resembles the two-population pdf seen in the solar wind. We thus propose that strong discontinuities are associated with inertial-range MHD turbulence, while weak discontinuities emerge from dissipation-range turbulence. In addition, we find that the structure functions of the magnetic field direction exhibit anomalous scaling exponents, which indicates the existence of intermittent structures.

  2. A General Formulation of the Source Confusion Statistics and Application to Infrared Galaxy Surveys

    NASA Astrophysics Data System (ADS)

    Takeuchi, Tsutomu T.; Ishii, Takako T.

    2004-03-01

Source confusion has been a long-standing problem in astronomy. In previous formulations of the confusion problem, sources were assumed to be distributed homogeneously on the sky. This fundamental assumption is, however, not realistic in many applications. In this work, by making use of point field theory, we derive general analytic formulae for confusion problems with arbitrary distribution and correlation functions. As a typical example, we apply these new formulae to the source confusion of infrared galaxies. We first calculate the confusion statistics for power-law galaxy number counts as a test case. When the slope of the differential number counts, γ, is steep, the confusion limits become much brighter and the probability distribution function (PDF) of the fluctuation field is strongly distorted. We then estimate the PDF and confusion limits based on a realistic number count model for infrared galaxies. The gradual flattening of the slope of the source counts makes the clustering effect rather mild. Clustering effects result in an increase of the limiting flux density by ~10%. In this case, the peak probability of the PDF decreases by up to ~15% and its tail becomes heavier. Although the effects are relatively small, they will be strong enough to affect the estimation of galaxy evolution from number counts or fluctuation statistics. We also comment on future submillimeter observations.

  3. Statistical Decoupling of a Lagrangian Fluid Parcel in Newtonian Cosmology

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Szalay, Alex

    2016-03-01

    The Lagrangian dynamics of a single fluid element within a self-gravitational matter field is intrinsically non-local due to the presence of the tidal force. This complicates the theoretical investigation of the nonlinear evolution of various cosmic objects, e.g., dark matter halos, in the context of Lagrangian fluid dynamics, since fluid parcels with given initial density and shape may evolve differently depending on their environments. In this paper, we provide a statistical solution that could decouple this environmental dependence. After deriving the evolution equation for the probability distribution of the matter field, our method produces a set of closed ordinary differential equations whose solution is uniquely determined by the initial condition of the fluid element. Mathematically, it corresponds to the projected characteristic curve of the transport equation of the density-weighted probability density function (ρPDF). Consequently it is guaranteed that the one-point ρPDF would be preserved by evolving these local, yet nonlinear, curves with the same set of initial data as the real system. Physically, these trajectories describe the mean evolution averaged over all environments by substituting the tidal tensor with its conditional average. For Gaussian distributed dynamical variables, this mean tidal tensor is simply proportional to the velocity shear tensor, and the dynamical system would recover the prediction of the Zel’dovich approximation (ZA) with the further assumption of the linearized continuity equation. For a weakly non-Gaussian field, the averaged tidal tensor could be expanded perturbatively as a function of all relevant dynamical variables whose coefficients are determined by the statistics of the field.

  4. STATISTICAL DECOUPLING OF A LAGRANGIAN FLUID PARCEL IN NEWTONIAN COSMOLOGY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xin; Szalay, Alex, E-mail: xwang@cita.utoronto.ca

The Lagrangian dynamics of a single fluid element within a self-gravitational matter field is intrinsically non-local due to the presence of the tidal force. This complicates the theoretical investigation of the nonlinear evolution of various cosmic objects, e.g., dark matter halos, in the context of Lagrangian fluid dynamics, since fluid parcels with given initial density and shape may evolve differently depending on their environments. In this paper, we provide a statistical solution that could decouple this environmental dependence. After deriving the evolution equation for the probability distribution of the matter field, our method produces a set of closed ordinary differential equations whose solution is uniquely determined by the initial condition of the fluid element. Mathematically, it corresponds to the projected characteristic curve of the transport equation of the density-weighted probability density function (ρPDF). Consequently it is guaranteed that the one-point ρPDF would be preserved by evolving these local, yet nonlinear, curves with the same set of initial data as the real system. Physically, these trajectories describe the mean evolution averaged over all environments by substituting the tidal tensor with its conditional average. For Gaussian distributed dynamical variables, this mean tidal tensor is simply proportional to the velocity shear tensor, and the dynamical system would recover the prediction of the Zel'dovich approximation (ZA) with the further assumption of the linearized continuity equation. For a weakly non-Gaussian field, the averaged tidal tensor could be expanded perturbatively as a function of all relevant dynamical variables whose coefficients are determined by the statistics of the field.

  5. Generating log-normal mock catalog of galaxies in redshift space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agrawal, Aniket; Makiya, Ryu; Saito, Shun

We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check the fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.

  6. A large eddy simulation scheme for turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Gao, Feng

    1993-01-01

The recent development of the dynamic subgrid-scale (SGS) model has provided a consistent method for generating localized turbulent mixing models and has opened up great possibilities for applying the large eddy simulation (LES) technique to real-world problems. Given that direct numerical simulation (DNS) cannot solve engineering flow problems in the foreseeable future (Reynolds 1989), LES is certainly an attractive alternative. It seems only natural to bring this new development in SGS modeling to bear on reacting flows. The major stumbling block in introducing LES to reacting flow problems has been the proper modeling of the reaction source terms. Various models have been proposed, but none of them has a wide range of applicability. For example, some of the models in combustion have been based on the flamelet assumption, which is only valid for relatively fast reactions. Some other models have neglected the effects of chemical reactions on the turbulent mixing time scale, which is certainly not valid for fast and non-isothermal reactions. The probability density function (PDF) method can be usefully employed to deal with the modeling of the reaction source terms. In order to fit into the framework of LES, a new PDF, the large eddy PDF (LEPDF), is introduced. This PDF provides an accurate representation of the filtered chemical source terms and can be readily calculated in the simulations. The details of this scheme are described.

  7. Transported PDF Modeling of Nonpremixed Turbulent CO/H-2/N-2 Jet Flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, xinyu; Haworth, D. C.; Huckaby, E. David

    2012-01-01

Turbulent CO/H2/N2 (“syngas”) flames are simulated using a transported composition probability density function (PDF) method. A consistent hybrid Lagrangian particle/Eulerian mesh algorithm is used to solve the modeled PDF transport equation. The model includes standard k–ϵ turbulence, gradient transport for scalars, and Euclidean minimum spanning tree (EMST) mixing. Sensitivities of model results to variations in the turbulence model, the treatment of radiation heat transfer, the choice of chemical mechanism, and the PDF mixing model are explored. A baseline model reproduces the measured mean and rms temperature, major species, and minor species profiles reasonably well, and captures the scaling that is observed in the experiments. Both our results and the literature suggest that further improvements can be realized with adjustments in the turbulence model, the radiation heat transfer model, and the chemical mechanism. Although radiation effects are relatively small in these flames, consideration of radiation is important for accurate NO prediction. Chemical mechanisms that have been developed specifically for fuels with high concentrations of CO and H2 perform better than a methane mechanism that was not designed for this purpose. It is important to account explicitly for turbulence–chemistry interactions, although the details of the mixing model do not make a large difference in the results, within reasonable limits.

  8. Stochastic analysis of concentration field in a wake region.

    PubMed

    Yassin, Mohamed F; Elmi, Abdirashid A

    2011-02-01

    Identifying the geographic locations in urban areas from which air pollutants enter the atmosphere is among the most important information needed to develop effective mitigation strategies for pollution control. Stochastic analysis is a powerful tool for estimating concentration fluctuations in plume dispersion in the wake region around buildings. Only a few studies have evaluated applications of stochastic analysis to pollutant dispersion in urban areas. This study investigated the concentration fields in the wake region using an obstacle model, namely an isolated building model. We measured concentration fluctuations at the centerline at various downwind distances from the source and at different heights, with a sampling frequency of 1 kHz. Concentration fields were analyzed stochastically using probability density functions (pdf). Stochastic analysis was performed on the concentration fluctuation and on the pdf of mean concentration, fluctuation intensity, and crosswind mean-plume dispersion. The pdfs of the concentration fluctuation data showed significant non-Gaussian behavior. The lognormal distribution appeared to be the best fit to the shape of the concentration measured in the boundary layer. We observed that the plume dispersion pdf near the source was shorter than that far from the source. Our findings suggest that the use of stochastic techniques in complex building environments can be a powerful tool for understanding the distribution and location of air pollutants.
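
    The lognormal fit reported above can be reproduced on surrogate data with standard tools; a brief sketch, with all numbers illustrative rather than the paper's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Surrogate concentration time series (the study sampled at 1 kHz).
c = rng.lognormal(mean=0.0, sigma=0.8, size=20000)

# Fit a log-normal PDF to the sampled concentrations (floc=0 pins the origin).
shape, loc, scale = stats.lognorm.fit(c, floc=0)
# Kolmogorov-Smirnov check of the fitted shape against the data.
ks = stats.kstest(c, "lognorm", args=(shape, loc, scale))
print(f"sigma={shape:.3f}, median={scale:.3f}, KS p-value={ks.pvalue:.3f}")
```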

  9. Multiplicative Process in Turbulent Velocity Statistics: A Simplified Analysis

    NASA Astrophysics Data System (ADS)

    Chillà, F.; Peinke, J.; Castaing, B.

    1996-04-01

    Many turbulence models link the energy cascade process to intermittency, whose characteristic signature is the shape evolution of the probability density functions (pdf) of longitudinal velocity increments. Using recent models and experimental results, we show that the flatness factor of these pdfs gives a simple and direct estimate of what is called the deepness of the cascade. We analyse in this way the published data of a direct numerical simulation and show that the deepness of the cascade presents the same Reynolds number dependence as in laboratory experiments.
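
    A short sketch of the flatness (kurtosis) estimator for velocity-increment pdfs; the surrogate signal below has Gaussian increments, so its flatness stays near the Gaussian value of 3, whereas intermittent turbulence data would show flatness growing at small separations:

```python
import numpy as np

def flatness(u, r):
    """Flatness (kurtosis) of longitudinal velocity increments at lag r."""
    du = u[r:] - u[:-r]
    return np.mean(du**4) / np.mean(du**2) ** 2

rng = np.random.default_rng(0)
u = np.cumsum(rng.normal(size=100_000))  # surrogate signal, Gaussian increments
for r in (1, 4, 16, 64):
    print(r, flatness(u, r))  # stays near 3: no intermittency in this surrogate
```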

  10. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Semi-Disk Structure

    DTIC Science & Technology

    2018-01-01

    [Fragmentary DTIC record: ARL-TR-8271, US Army Research Laboratory, January 2018.] The recoverable abstract text notes the use of statistical moments of order 2, 3, and 4, and that the probability density function (PDF) of the vibrational time series of a good bearing has a Gaussian shape.

  11. Understanding the Influence of Turbulence in Imaging Fourier-Transform Spectrometry of Smokestack Plumes

    DTIC Science & Technology

    2011-03-01

    [Fragmentary DTIC record.] The recoverable abstract text describes the capability of FTS to estimate plume effluent concentrations by comparing intrusive measurements of aircraft engine exhaust with those from an FTS applied to a turbojet engine. Temporal averaging was used to reduce SCAs in the spectra, and spatial maps of temperature and concentration were generated. The fragment also defines the probability density function (PDF) as the derivative of the CDF, describing the probability of obtaining a given value of X.

  12. Sensor Fusion Based on an Integrated Neural Network and Probability Density Function (PDF) Dual Kalman Filter for On-Line Estimation of Vehicle Parameters and States.

    PubMed

    Vargas-Melendez, Leandro; Boada, Beatriz L; Boada, Maria Jesus L; Gauchia, Antonio; Diaz, Vicente

    2017-04-29

    Vehicles with a high center of gravity (COG), such as light trucks and heavy vehicles, are prone to rollover. This kind of accident causes nearly 33% of all deaths from passenger vehicle crashes. Nowadays, these vehicles are incorporating roll stability control (RSC) systems to improve their safety. Most of the RSC systems require the vehicle roll angle as a known input variable to predict the lateral load transfer. The vehicle roll angle can be directly measured by a dual antenna global positioning system (GPS), but this is expensive. For this reason, it is important to estimate the vehicle roll angle from sensors installed onboard in current vehicles. On the other hand, knowledge of the vehicle's parameter values is essential to obtain an accurate vehicle response. Some vehicle parameters cannot be easily obtained, and they can vary over time. In this paper, an algorithm for the simultaneous on-line estimation of the vehicle's roll angle and parameters is proposed. This algorithm uses a probability density function (PDF)-based truncation method in combination with a dual Kalman filter (DKF) to guarantee that both the vehicle's states and parameters remain within bounds that have a physical meaning, using the information obtained from sensors mounted on vehicles. Experimental results show the effectiveness of the proposed algorithm.
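
    A minimal sketch of the PDF-truncation idea, not the authors' full DKF implementation: after a Kalman update, a Gaussian state estimate is replaced by the moments of the same Gaussian truncated to a physically meaningful interval. The bounds and numbers are hypothetical.

```python
import numpy as np
from scipy import stats

def truncate_estimate(mean, var, lower, upper):
    """Replace a Gaussian state estimate by the moments of its truncation
    to the physically meaningful interval [lower, upper]."""
    sd = np.sqrt(var)
    a, b = (lower - mean) / sd, (upper - mean) / sd
    d = stats.truncnorm(a, b, loc=mean, scale=sd)
    return d.mean(), d.var()

# e.g. a roll-angle estimate that must stay within +/- 0.5 rad
m, v = truncate_estimate(0.45, 0.04, -0.5, 0.5)
print(m, v)  # mean pulled inside the bound, variance reduced
```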

  13. Sensor Fusion Based on an Integrated Neural Network and Probability Density Function (PDF) Dual Kalman Filter for On-Line Estimation of Vehicle Parameters and States

    PubMed Central

    Vargas-Melendez, Leandro; Boada, Beatriz L.; Boada, Maria Jesus L.; Gauchia, Antonio; Diaz, Vicente

    2017-01-01

    Vehicles with a high center of gravity (COG), such as light trucks and heavy vehicles, are prone to rollover. This kind of accident causes nearly 33% of all deaths from passenger vehicle crashes. Nowadays, these vehicles are incorporating roll stability control (RSC) systems to improve their safety. Most of the RSC systems require the vehicle roll angle as a known input variable to predict the lateral load transfer. The vehicle roll angle can be directly measured by a dual antenna global positioning system (GPS), but this is expensive. For this reason, it is important to estimate the vehicle roll angle from sensors installed onboard in current vehicles. On the other hand, knowledge of the vehicle's parameter values is essential to obtain an accurate vehicle response. Some vehicle parameters cannot be easily obtained, and they can vary over time. In this paper, an algorithm for the simultaneous on-line estimation of the vehicle's roll angle and parameters is proposed. This algorithm uses a probability density function (PDF)-based truncation method in combination with a dual Kalman filter (DKF) to guarantee that both the vehicle's states and parameters remain within bounds that have a physical meaning, using the information obtained from sensors mounted on vehicles. Experimental results show the effectiveness of the proposed algorithm. PMID:28468252

  14. PDF cycling in the dorsal protocerebrum of the Drosophila brain is not necessary for circadian clock function.

    PubMed

    Kula, Elzbieta; Levitan, Edwin S; Pyza, Elzbieta; Rosbash, Michael

    2006-04-01

    In Drosophila, the neuropeptide pigment-dispersing factor (PDF) is a likely circadian molecule, secreted by central pacemaker neurons (LNvs). PDF is expressed in both small and large LNvs (sLNvs and lLNvs), and there are striking circadian oscillations of PDF staining intensity in the small cell termini, which require a functional molecular clock. This cycling may be relevant to the proposed role of PDF as a synchronizer of the clock system or as an output signal connecting pacemaker cells to locomotor activity centers. In this study, the authors use a generic neuropeptide fusion protein (atrial natriuretic factor-green fluorescent protein [ANF-GFP]) and show that it can be expressed in the same neurons as PDF itself. Yet, ANF-GFP as well as PDF itself does not manifest any cyclical accumulation in sLNv termini in adult transgenic flies. Surprisingly, the absence of detectable PDF cycling is not accompanied by any detectable behavioral phenotype, since these transgenic flies have normal morning and evening anticipation in a light-dark cycle (LD) and are fully rhythmic in constant darkness (DD). The molecular clock is also not compromised. The results suggest that robust PDF cycling in sLNv termini plays no more than a minor role in the Drosophila circadian system and is apparently not even necessary for clock output function.

  15. Multiple Streaming and the Probability Distribution of Density in Redshift Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hui, Lam; Kofman, Lev; Shandarin, Sergei F.

    2000-07-01

    We examine several aspects of redshift distortions by expressing the redshift-space density in terms of the eigenvalues and orientation of the local Lagrangian deformation tensor. We explore the importance of multiple streaming using the Zeldovich approximation (ZA), and compute the average number of streams in both real and redshift space. We find that multiple streaming can be significant in redshift space but negligible in real space, even at moderate values of the linear fluctuation amplitude (σ_l ≲ 1). Moreover, unlike their real-space counterparts, redshift-space multiple streams can flow past each other with minimal interactions. Such nonlinear redshift-space effects, which are physically distinct from the fingers-of-God due to small-scale virialized motions, might in part explain the well-known departure of redshift distortions from the classic linear prediction by Kaiser, even at relatively large scales where the corresponding density field in real space is well described by linear perturbation theory. We also compute, using the ZA, the probability distribution function (PDF) of the density, as well as S3, in real and redshift space, and compare it with the PDF measured from N-body simulations. The role of caustics in defining the character of the high-density tail is examined. We find that (non-Lagrangian) smoothing, due to both finite resolution or discreteness and small-scale velocity dispersions, is very effective in erasing caustic structures, unless the initial power spectrum is sufficiently truncated. (c) 2000 The American Astronomical Society.
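
    Multiple streaming in redshift space can be illustrated with a one-dimensional Zeldovich toy model (single-mode displacement field; parameters illustrative). In real space the mapping below stays single-valued, while the extra velocity term makes the redshift-space mapping multi-valued over part of the domain:

```python
import numpy as np

# 1-D Zeldovich toy: Lagrangian q -> real space x = q + D*psi(q), and
# redshift space s = x + f*D*psi(q) (line-of-sight velocity term).
q = np.linspace(0.0, 2.0 * np.pi, 4096)
psi = np.sin(q)                  # hypothetical single-mode displacement
D, f = 0.9, 0.5                  # growth factor / growth rate (illustrative)
x = q + D * psi
s = x + f * D * psi

def mean_streams(pos, grid):
    """Average number of Lagrangian streams arriving at each grid point."""
    return np.mean([np.sum(np.abs(np.diff(np.sign(pos - p))) > 0) for p in grid])

grid = np.linspace(1.0, 2.0 * np.pi - 1.0, 200)
print("streams, real space:    ", mean_streams(x, grid))   # stays at 1
print("streams, redshift space:", mean_streams(s, grid))   # exceeds 1
```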

  16. Design for cyclic loading endurance of composites

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Murthy, Pappu L. N.; Chamis, Christos C.; Liaw, Leslie D. G.

    1993-01-01

    The application of the computer code IPACS (Integrated Probabilistic Assessment of Composite Structures) to aircraft wing type structures is described. The code performs a complete probabilistic analysis for composites, taking into account the uncertainties in geometry, boundary conditions, material properties, laminate lay-ups, and loads. Results of the analysis are presented in terms of the cumulative distribution function (CDF) and probability density function (PDF) of the fatigue life of a wing type composite structure under different hygrothermal environments and subjected to random pressure loading. The sensitivity of the fatigue life to a number of critical structural/material variables is also computed from the analysis.
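
    A schematic Monte Carlo version of a probabilistic fatigue-life analysis, not IPACS itself: uncertain strength and load variables are sampled, propagated through a hypothetical Basquin-type life law, and summarized as an empirical CDF; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical Basquin-type life law N_f = (sigma_a / A)**(1/b), with random
# strength coefficient A (material scatter) and stress amplitude sigma_a
# (random pressure loading).
A = rng.lognormal(np.log(900.0), 0.08, n)    # MPa
sigma_a = rng.normal(180.0, 20.0, n)         # MPa
b = -0.1                                     # fatigue strength exponent
life = np.sort((sigma_a / A) ** (1.0 / b))   # cycles to failure

cdf = np.arange(1, n + 1) / n                # empirical CDF of fatigue life
print("1% life quantile [cycles]:", life[np.searchsorted(cdf, 0.01)])
```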

  17. The GABA(A) receptor RDL acts in peptidergic PDF neurons to promote sleep in Drosophila.

    PubMed

    Chung, Brian Y; Kilman, Valerie L; Keath, J Russel; Pitman, Jena L; Allada, Ravi

    2009-03-10

    Sleep is regulated by a circadian clock that times sleep and wake to specific times of day and a homeostat that drives sleep as a function of prior wakefulness. To analyze the role of the circadian clock, we have used the fruit fly Drosophila. Flies display the core behavioral features of sleep, including relative immobility, elevated arousal thresholds, and homeostatic regulation. We assessed sleep-wake modulation by a core set of circadian pacemaker neurons that express the neuropeptide PDF. We find that disruption of PDF function increases sleep during the late night in light:dark and the first subjective day of constant darkness. Flies deploy genetic and neurotransmitter pathways to regulate sleep that are similar to those of their mammalian counterparts, including GABA. We find that RNA interference-mediated knockdown of the GABA(A) receptor gene, Resistant to dieldrin (Rdl), in PDF neurons reduces sleep, consistent with a role for GABA in inhibiting PDF neuron function. Patch-clamp electrophysiology reveals GABA-activated picrotoxin-sensitive chloride currents on PDF+ neurons. In addition, RDL is detectable most strongly on the large subset of PDF+ pacemaker neurons. These results suggest that GABAergic inhibition of arousal-promoting PDF neurons is an important mode of sleep-wake regulation in vivo.

  18. A statistical study of gyro-averaging effects in a reduced model of drift-wave transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca, Julio; Del-Castillo-Negrete, Diego B.; Sokolov, Igor M.

    2016-08-25

    Here, a statistical study of finite Larmor radius (FLR) effects on transport driven by electrostatic drift waves is presented. The study is based on a reduced discrete Hamiltonian dynamical system known as the gyro-averaged standard map (GSM). In this system, FLR effects are incorporated through the gyro-averaging of a simplified weak-turbulence model of electrostatic fluctuations. Formally, the GSM is a modified version of the standard map in which the perturbation amplitude, K0, becomes K0 J0($\hat{p}$), where J0 is the zeroth-order Bessel function and $\hat{p}$ is the Larmor radius. Assuming a Maxwellian probability density function (pdf) for $\hat{p}$, we compute analytically and numerically the pdf and the cumulative distribution function of the effective drift-wave perturbation amplitude K0 J0($\hat{p}$). Using these results, we compute the probability of loss of confinement (i.e., global chaos), Pc, and the probability of trapping, Pt. It is shown that Pc provides an upper bound for the escape rate and that Pt provides a good estimate of the particle trapping rate. Lastly, the analytical results are compared with direct numerical Monte Carlo simulations of particle transport.
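
    A sketch of the GSM iteration and the induced distribution of effective amplitudes; for simplicity the Larmor radius is held fixed along each orbit, and the half-Gaussian spread of radii is an illustrative stand-in for the paper's Maxwellian parameterization:

```python
import numpy as np
from scipy.special import j0

def gsm_orbit(x, p, K0, rho, n_steps=1000):
    """Iterate the gyro-averaged standard map: the standard-map kick K0 is
    replaced by the FLR-averaged amplitude K0 * J0(rho), with the Larmor
    radius rho held fixed along the orbit for simplicity."""
    K_eff = K0 * j0(rho)
    for _ in range(n_steps):
        p = p + K_eff * np.sin(x)
        x = (x + p) % (2.0 * np.pi)
    return x, p

print(gsm_orbit(0.5, 0.2, K0=1.2, rho=1.0))

# A thermal spread of Larmor radii induces a PDF of effective amplitudes
# K0*J0(rho); sampling it gives, e.g., the chance of exceeding the standard
# map's global-chaos threshold K ~ 0.9716.
rng = np.random.default_rng(3)
rho = np.abs(rng.normal(0.0, 1.0, 100_000))   # illustrative rho distribution
print("P(|K_eff| > 0.9716):", np.mean(np.abs(1.2 * j0(rho)) > 0.9716))
```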

  19. Evolution and statistics of non-sphericity of dark matter halos from cosmological N-body simulation

    NASA Astrophysics Data System (ADS)

    Suto, Daichi; Kitayama, Tetsu; Nishimichi, Takahiro; Sasaki, Shin; Suto, Yasushi

    2016-12-01

    We revisit the non-sphericity of cluster-mass-scale halos from cosmological N-body simulation on the basis of triaxial modeling. In order to understand the difference between the simulation results and the conventional ellipsoidal collapse model (EC), we first consider the evolution of individual simulated halos. The major difference between EC and the simulation becomes appreciable after the turnaround epoch. Moreover, it is sensitive to the individual evolution history of each halo. Despite such strong dependence on individual halos, the resulting non-sphericity of halos exhibits weak but robust mass dependence in a statistical fashion; massive halos are more spherical up to the turnaround, but gradually become less spherical by z = 0. This is clearly inconsistent with the EC prediction that massive halos are usually more spherical. In addition, at z = 0, inner regions of the simulated halos are less spherical than outer regions; that is, the density distribution inside the halos is highly inhomogeneous and therefore not self-similar (concentric ellipsoids with the same axis ratio and orientation). This is also inconsistent with the homogeneous density distribution that is commonly assumed in EC. Since most of the previous fitting formulae for the probability distribution function (PDF) of the axis ratio of triaxial ellipsoids have been constructed under the self-similarity assumption, they are not accurate. Indeed, we compute the PDF of the projected axis ratio a1/a2 directly from the simulation data without the self-similarity assumption, and find that it is very sensitive to the assumption. The latter needs to be carefully taken into account in direct comparison with observations, and therefore we provide an empirical fitting formula for the PDF of a1/a2. Our preliminary analysis suggests that the derived PDF of a1/a2 roughly agrees with the current weak-lensing observations. More importantly, the present results will be useful for future exploration of the non-sphericity of clusters in X-ray and optical observations.

  20. Clinical indices of in vivo biocompatibility: the role of ex vivo cell function studies and effluent markers in peritoneal dialysis patients.

    PubMed

    Mackenzie, Ruth; Holmes, Clifford J; Jones, Suzanne; Williams, John D; Topley, Nicholas

    2003-12-01

    Over the past 20 years, studies of the biocompatibility profile of peritoneal dialysis solutions (PDF) have evolved from initial in vitro studies assessing the impact of solutions on leukocyte function to evaluations of mesothelial cell behavior. More recent biocompatibility evaluations have involved assessments of the impact of PDF on membrane integrity and cell function in peritoneal dialysis (PD) patients. The development of ex vivo systems for the evaluation of in vivo cell function, and of effluent markers of membrane integrity and inflammation in patients exposed both acutely and chronically to conventional and new PDF, will be interpreted in the context of our current understanding of the biology of the dialyzed peritoneum. The available data indicate that exposure of the peritoneal environment to more biocompatible PDF is associated with improvements in peritoneal cell function, alterations in markers of membrane integrity, and reduced local inflammation. These data suggest that more biocompatible PDF will have a positive impact on host defense, peritoneal homeostasis, and the long-term preservation of peritoneal membrane function in PD patients.

  1. Lotka-Volterra system in a random environment.

    PubMed

    Dimentberg, Mikhail F

    2002-03-01

    The classical Lotka-Volterra (LV) model for oscillatory behavior of population sizes of two interacting species (predator-prey or parasite-host pairs) is conservative. This may imply unrealistically high sensitivity of the system's behavior to environmental variations. Thus, a generalized LV model is considered with the equation for the preys' reproduction containing the following additional terms: a quadratic "damping" term that accounts for interspecies competition, and a term with white-noise random variations of the preys' reproduction factor that simulates the environmental variations. An exact solution is obtained for the corresponding Fokker-Planck-Kolmogorov equation for the stationary probability densities (PDF's) of the population sizes. It shows that both population sizes are independent gamma-distributed stationary random processes. An increasing level of the environmental variations does not lead to extinction of the populations. However, it may lead to an intermittent behavior, whereby one or both population sizes experience very rare and violent short pulses or outbreaks while remaining on a very low level most of the time. This intermittency is described analytically by direct use of the solutions for the PDF's as well as by applying the theory of excursions of random functions and by predicting the PDF of peaks in the predators' population size.
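
    An Euler-Maruyama sketch of such a generalized LV system (parameter values illustrative, not the paper's); the stationary prey statistics can be compared against the gamma PDF predicted by the exact solution:

```python
import numpy as np

rng = np.random.default_rng(11)
dt, n = 1e-3, 200_000
a, b, c, d = 1.0, 1.0, 1.0, 0.2   # rates; d is the quadratic prey "damping"
D = 0.3                           # white-noise intensity of prey reproduction
u, v = 1.0, 1.0                   # prey and predator population sizes
us = np.empty(n)

# Euler-Maruyama integration of the randomly perturbed generalized LV system.
for i in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))
    u += u * ((a - v - d * u) * dt + np.sqrt(2.0 * D) * dW)
    v += v * b * (u - c) * dt
    us[i] = u

# The exact stationary solution predicts a gamma PDF; compare its moments:
print("prey mean:", us.mean(), "prey variance:", us.var())
```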

  2. Lotka-Volterra system in a random environment

    NASA Astrophysics Data System (ADS)

    Dimentberg, Mikhail F.

    2002-03-01

    The classical Lotka-Volterra (LV) model for oscillatory behavior of population sizes of two interacting species (predator-prey or parasite-host pairs) is conservative. This may imply unrealistically high sensitivity of the system's behavior to environmental variations. Thus, a generalized LV model is considered with the equation for the preys' reproduction containing the following additional terms: a quadratic ``damping'' term that accounts for interspecies competition, and a term with white-noise random variations of the preys' reproduction factor that simulates the environmental variations. An exact solution is obtained for the corresponding Fokker-Planck-Kolmogorov equation for the stationary probability densities (PDF's) of the population sizes. It shows that both population sizes are independent γ-distributed stationary random processes. An increasing level of the environmental variations does not lead to extinction of the populations. However, it may lead to an intermittent behavior, whereby one or both population sizes experience very rare and violent short pulses or outbreaks while remaining on a very low level most of the time. This intermittency is described analytically by direct use of the solutions for the PDF's as well as by applying the theory of excursions of random functions and by predicting the PDF of peaks in the predators' population size.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manshour, Pouya; Ghasemi, Fatemeh; Sahimi, Muhammad

    High-quality measurements of seismic activities around the world provide a wealth of data and information that are relevant to understanding when earthquakes may occur. If viewed as complex stochastic time series, such data may be analyzed by methods that provide deeper insights into their nature, hence leading to better understanding of the data and their possible implications for earthquakes. In this paper, we provide further evidence for our recent proposal [P. Manshour et al., Phys. Rev. Lett. 102, 014101 (2009)] for the existence of a transition in the shape of the probability density function (PDF) of the successive detrended increments of the stochastic fluctuations of Earth's vertical velocity Vz, collected by broadband stations before moderate and large earthquakes. To demonstrate the transition, we carried out extensive analysis of the Vz data for 12 earthquakes in several regions around the world, including the recent catastrophic one in Haiti. The analysis supports the hypothesis that before and near the time of an earthquake, the shape of the PDF undergoes significant and discernible changes, which can be characterized quantitatively. The typical time over which the PDF undergoes the transition is about 5-10 h prior to a moderate or large earthquake.

  4. Modeling molecular mixing in a spatially inhomogeneous turbulent flow

    NASA Astrophysics Data System (ADS)

    Meyer, Daniel W.; Deb, Rajdeep

    2012-02-01

    Simulations of spatially inhomogeneous turbulent mixing in decaying grid turbulence with a joint velocity-concentration probability density function (PDF) method were conducted. The inert mixing scenario involves three streams with different compositions. The mixing model of Meyer ["A new particle interaction mixing model for turbulent dispersion and turbulent reactive flows," Phys. Fluids 22(3), 035103 (2010)], the interaction by exchange with the mean (IEM) model and its velocity-conditional variant, i.e., the IECM model, were applied. For reference, the direct numerical simulation data provided by Sawford and de Bruyn Kops ["Direct numerical simulation and lagrangian modeling of joint scalar statistics in ternary mixing," Phys. Fluids 20(9), 095106 (2008)] was used. It was found that velocity conditioning is essential to obtain accurate concentration PDF predictions. Moreover, the model of Meyer provides significantly better results compared to the IECM model at comparable computational expense.
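
    For reference, the baseline IEM model against which the velocity-conditioned variants above are measured is a one-line particle update: each notional particle's scalar relaxes toward the ensemble mean. A minimal sketch with illustrative constants (the IECM variant would condition the mean on particle velocity):

```python
import numpy as np

def iem_step(phi, omega, dt, C_phi=2.0):
    """One IEM (interaction by exchange with the mean) step: each notional
    particle's scalar relaxes toward the ensemble mean at rate C_phi*omega/2."""
    return phi - 0.5 * C_phi * omega * (phi - phi.mean()) * dt

rng = np.random.default_rng(5)
phi = rng.choice([0.0, 0.5, 1.0], size=10000)   # three-stream initial scalar
for _ in range(200):
    phi = iem_step(phi, omega=1.0, dt=0.01)
print("variance after mixing:", phi.var())      # decays as exp(-C_phi*omega*t)
```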

  5. CDF and PDF Comparison Between Humacao, Puerto Rico and Florida

    NASA Technical Reports Server (NTRS)

    Gonzalez-Rodriguez, Rosana

    2004-01-01

    Knowledge of atmospheric phenomena is an important part of communication system design. The principal factor that contributes to attenuation in a Ka band communication system is rain attenuation. We have four years of tropical region observations; the tropical-region data were taken in Humacao, Puerto Rico. Previous data had been collected in various climate regions such as deserts, temperate areas, and sub-tropical regions. Figure 1 shows the ITU-R rain zone map for North America. Rain rates are important inputs to rain attenuation prediction models. The models that predict attenuation are generally of two kinds. The first is regression models: using a data set, these models characterize the observed attenuation and rain-rate distributions in the past, present, and future. The second is physical models, which use probability density functions (PDF).

  6. Comparative study of probability distribution distances to define a metric for the stability of multi-source biomedical research data.

    PubMed

    Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan Miguel

    2013-01-01

    Research biobanks are often composed of data from multiple sources. In some cases, these different subsets of data may present dissimilarities among their probability density functions (PDF) due to spatial shifts. This may lead to wrong hypotheses when treating the data as a whole, and the overall quality of the data is diminished. With the purpose of developing a generic and comparable metric to assess the stability of multi-source datasets, we have studied the applicability and behaviour of several PDF distances over shifts under different conditions (such as uni- and multivariate data, different types of variable, and multi-modality) that may appear in real biomedical data. From the studied distances, we found the information-theoretic distances and the Earth Mover's Distance to be the most practical for most conditions. We discuss the properties and usefulness of each distance according to the possible requirements of a general stability metric.
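
    Two of the distances discussed above are available directly in SciPy; a small sketch comparing two synthetic data sources with the Earth Mover's (Wasserstein) distance and a histogram-based Jensen-Shannon distance:

```python
import numpy as np
from scipy import stats
from scipy.spatial import distance

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 5000)     # data source A
y = rng.normal(0.3, 1.0, 5000)     # data source B, slightly shifted

# Earth Mover's Distance between the two empirical distributions.
emd = stats.wasserstein_distance(x, y)

# Jensen-Shannon distance on shared histogram bins as an
# information-theoretic alternative.
bins = np.histogram_bin_edges(np.concatenate([x, y]), bins=50)
p, _ = np.histogram(x, bins=bins)
q, _ = np.histogram(y, bins=bins)
js = distance.jensenshannon(p, q)

print(f"EMD={emd:.3f}  JS distance={js:.3f}")
```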

  7. Laser transit anemometer software development program

    NASA Technical Reports Server (NTRS)

    Abbiss, John B.

    1989-01-01

    Algorithms were developed for the extraction of two components of mean velocity, standard deviation, and the associated correlation coefficient from laser transit anemometry (LTA) data ensembles. The solution method is based on an assumed two-dimensional Gaussian probability density function (PDF) model of the flow field under investigation. The procedure consists of transforming the data ensembles from the data acquisition domain (consisting of time and angle information) to the velocity space domain (consisting of velocity component information). The mean velocity results are obtained from the data ensemble centroid. Through a least squares fitting of the transformed data to an ellipse representing the intersection of a plane with the PDF, the standard deviations and correlation coefficient are obtained. A data set simulation method is presented to test the data reduction process. Results of using the simulation system with a limited test matrix of input values are also given.

  8. Study on typhoon characteristic based on bridge health monitoring system.

    PubMed

    Wang, Xu; Chen, Bin; Sun, Dezhang; Wu, Yinqiang

    2014-01-01

    Through the wind velocity and direction monitoring system installed on the Jiubao Bridge over the Qiantang River (Hangzhou, Zhejiang Province, China), a full range of wind velocity and direction data was collected during Typhoon HAIKUI in 2012. Based on these data, it was found that turbulence intensity is lower at higher observation elevations, and that the longitudinal and lateral turbulence intensities vary with mean wind speed in essentially the same way. The gust factor increases with mean wind speed; its rate of change decreases markedly as wind speed goes down, with only an inconspicuous increase when wind speed is high. The peak factor changes little with increasing time and mean wind speed. The probability density function (PDF) of fluctuating wind speed follows a Gaussian distribution. The turbulence integral scale increases with mean wind speed, and its PDF does not follow a Gaussian distribution. The power spectrum of the observed fluctuating velocity is in accordance with the von Karman spectrum.
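
    The quantities reported above follow from standard definitions: mean speed, turbulence intensity as the ratio of the standard deviation to the mean, and gust factor as the peak short-window average over the mean. A sketch on a surrogate record, with the 3-s gust window as an assumption:

```python
import numpy as np

def wind_stats(u, fs, gust_window=3.0):
    """Basic gust statistics from a wind-speed record sampled at fs Hz."""
    U = u.mean()                          # mean wind speed
    I = u.std() / U                       # turbulence intensity
    w = int(gust_window * fs)             # 3-s moving-average gust
    gust = np.convolve(u, np.ones(w) / w, mode="valid").max()
    G = gust / U                          # gust factor
    return U, I, G

rng = np.random.default_rng(4)
u = 20.0 + 2.0 * rng.normal(size=36000)   # surrogate 1-h record at 10 Hz
print(wind_stats(u, fs=10.0))
```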

  9. The queueing perspective of asynchronous network coding in two-way relay network

    NASA Astrophysics Data System (ADS)

    Liang, Yaping; Chang, Qing; Li, Xianxu

    2018-04-01

    Asynchronous network coding (NC) has the potential to improve wireless network performance compared with routing or synchronous network coding. Recent research concentrates on the optimization between throughput/energy consumption and delay for a pair of independent input flows. However, the implementation of NC requires a thorough investigation of its impact on the relevant queueing systems, which few works have focused on. Moreover, few works study the probability density function (pdf) in the network coding scenario. In this paper, a scenario with two independent Poisson input flows and one output flow is considered. The asynchronous NC-based strategy is that a new arrival evicts the head packet held in its queue while that packet waits for a packet from the other flow to encode with. The pdf of the output flow, which contains both coded and uncoded packets, is derived. In addition, the statistical characteristics of this strategy are analyzed. These results are verified by numerical simulations.
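
    The eviction strategy can be replayed in a few lines under one simplified reading (assumed here): an arrival meeting a waiting packet of the other flow departs coded with it, while an arrival meeting a packet of its own flow evicts it, which then departs uncoded.

```python
import numpy as np

rng = np.random.default_rng(9)
lam_a, lam_b, T = 1.0, 0.7, 100_000.0

# Merge two Poisson arrival streams and replay the eviction strategy.
ta = np.cumsum(rng.exponential(1 / lam_a, int(1.2 * lam_a * T)))
tb = np.cumsum(rng.exponential(1 / lam_b, int(1.2 * lam_b * T)))
events = sorted([(t, "A") for t in ta if t < T] + [(t, "B") for t in tb if t < T])

waiting, coded, uncoded = None, 0, 0
for _, flow in events:
    if waiting is None:
        waiting = flow
    elif waiting != flow:
        coded += 1            # XOR the pair: one coded departure
        waiting = None
    else:
        uncoded += 1          # eviction: the old packet departs uncoded
        waiting = flow

print("coded fraction of departures:", coded / (coded + uncoded))
```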

  10. Risk assessment of turbine rotor failure using probabilistic ultrasonic non-destructive evaluations

    NASA Astrophysics Data System (ADS)

    Guan, Xuefei; Zhang, Jingdan; Zhou, S. Kevin; Rasselkorde, El Mahjoub; Abbasi, Waheed A.

    2014-02-01

    The study presents a method and application of risk assessment methodology for turbine rotor fatigue failure using probabilistic ultrasonic nondestructive evaluations. A rigorous probabilistic modeling for ultrasonic flaw sizing is developed by incorporating the model-assisted probability of detection, and the probability density function (PDF) of the actual flaw size is derived. Two general scenarios, namely the ultrasonic inspection with an identified flaw indication and the ultrasonic inspection without flaw indication, are considered in the derivation. To perform estimations for fatigue reliability and remaining useful life, uncertainties from ultrasonic flaw sizing and fatigue model parameters are systematically included and quantified. The model parameter PDF is estimated using Bayesian parameter estimation and actual fatigue testing data. The overall method is demonstrated using a realistic application of steam turbine rotor, and the risk analysis under given safety criteria is provided to support maintenance planning.

  11. Basic EMC (Electromagnetic compatibility) technology advancement for C3 (Command, control, and communications) systems. Volume 6

    NASA Astrophysics Data System (ADS)

    Weiner, D.; Paul, C. R.; Whalen, J.

    1985-04-01

    This research effort was devoted to eliminating some of the basic technological gaps in two important areas: (1) electromagnetic (EM) effects on microelectronic circuits and (2) EM coupling and testing. The results are presented in fourteen reports which have been organized into six volumes. The reports are briefly summarized in this volume. In addition, an experiment is described which was performed to demonstrate the feasibility of applying several of the results to a problem involving electromagnetic interference. Specifically, experimental results are provided for the randomness associated with: (1) crosstalk in cable harnesses and (2) demodulation of amplitude modulated (AM) signals in operational amplifiers. These results are combined to predict candidate probability density functions (pdf's) for the amplitude of an AM interfering signal required to turn on a light emitting diode. The candidate pdf's are shown to be statistically consistent with measured data.

  12. High-pressure pair distribution function (PDF) measurement using high-energy focused x-ray beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Xinguo, E-mail: xhong@bnl.gov; Weidner, Donald J.; Ehm, Lars

    In this paper, we report recent development of the high-pressure pair distribution function (HP-PDF) measurement technique using a focused high-energy X-ray beam coupled with a diamond anvil cell (DAC). The focusing optics consist of a sagittally bent Laue monochromator and Kirkpatrick-Baez (K–B) mirrors. This combination provides a clean high-energy X-ray beam suitable for HP-PDF research. A demonstration of the HP-PDF technique for nanocrystalline platinum under quasi-hydrostatic conditions above 30 GPa is presented.

  13. Assimilating every-30-second 100-m-mesh radar observations for convective weather: implications to non-Gaussian PDF

    NASA Astrophysics Data System (ADS)

    Miyoshi, T.; Teramura, T.; Ruiz, J.; Kondo, K.; Lien, G. Y.

    2016-12-01

    Convective weather is known to be highly nonlinear and chaotic, and it is hard to predict its location and timing precisely. Our Big Data Assimilation (BDA) effort has been exploring the use of dense and frequent observations to avoid non-Gaussian probability density functions (PDF) and to apply an ensemble Kalman filter under the Gaussian error assumption. The phased array weather radar (PAWR) can observe a dense three-dimensional volume scan with 100-m range resolution and 100 elevation angles in only 30 seconds. The BDA system assimilates the PAWR reflectivity and Doppler velocity observations every 30 seconds into 100 ensemble members of a storm-scale numerical weather prediction (NWP) model at 100-m grid spacing. The 30-second-update, 100-m-mesh BDA system has been quite successful in multiple case studies of local severe rainfall events. However, with 1000 ensemble members, the reduced-resolution BDA system at 1-km grid spacing showed significant non-Gaussian PDFs with every-30-second updates. With a 10240-member ensemble Kalman filter with a global NWP model at 112-km grid spacing, we found roughly 1000 members satisfactory to capture the non-Gaussian error structures. With these in mind, we explore how the density of observations in space and time affects the non-Gaussianity in an ensemble Kalman filter with a simple toy model. In this presentation, we will present the most up-to-date results of the BDA research, as well as the investigation with the toy model on the non-Gaussianity with dense and frequent observations.

  14. The relationship between CO emission and visual extinction traced by dust emission in the Magellanic Clouds

    NASA Astrophysics Data System (ADS)

    Lee, Cheoljong; Leroy, Adam K.; Schnee, Scott; Wong, Tony; Bolatto, Alberto D.; Indebetouw, Remy; Rubio, Monica

    2015-07-01

    To test the theoretical understanding that finding bright CO emission depends primarily on dust shielding, we investigate the relationship between CO emission (ICO) and the amount of dust (estimated from infrared emission and expressed as 'AV') across the Large Magellanic Cloud (LMC), the Small Magellanic Cloud, and the Milky Way. We show that at our common resolution of 10 pc scales, ICO at a fixed line-of-sight AV is similar across all three systems despite the difference in metallicity. We find some evidence for a secondary dependence of ICO on radiation field; in the LMC, ICO at a given AV is smaller in regions of high Tdust, perhaps because of an increased photodissociating radiation field. We suggest a simple but useful picture in which the CO-to-H2 conversion factor (XCO) depends on two separable factors: (1) the distribution of gas column densities, which maps to an extinction distribution via a dust-to-gas ratio; and (2) the dependence of ICO on AV. Assuming that the probability distribution function (PDF) of local Milky Way clouds is universal, this approach predicts a dependence of XCO on metallicity Z between Z^-1 and Z^-2 above about a third of solar metallicity. Below this metallicity, CO emerges from only the high column density parts of the cloud and so depends very sensitively on the adopted PDF and the H2/H I prescription. The PDF of low-metallicity clouds is thus of considerable interest, and the uncertainty associated with even an ideal prescription for XCO at very low metallicity will be large.

  15. Local structure studies of materials using pair distribution function analysis

    NASA Astrophysics Data System (ADS)

    Peterson, Joseph W.

    A collection of pair distribution function studies on various materials is presented in this dissertation. In each case, local structure information of interest pushes the current limits of what these studies can accomplish. The goal is to provide insight into the individual material behaviors as well as to investigate ways to expand the current limits of PDF analysis. Where possible, I provide a framework for how PDF analysis might be applied to a wider set of material phenomena. Throughout the dissertation, I discuss i) the capabilities of the PDF method to provide information pertaining to a material's structure and properties, ii) current limitations in the conventional approach to PDF analysis, iii) possible solutions to overcome certain limitations in PDF analysis, and iv) suggestions for future work to expand and improve the capabilities of PDF analysis.

  16. PDF-modulated visual inputs and cryptochrome define diurnal behavior in Drosophila.

    PubMed

    Cusumano, Paola; Klarsfeld, André; Chélot, Elisabeth; Picot, Marie; Richier, Benjamin; Rouyer, François

    2009-11-01

    Morning and evening circadian oscillators control the bimodal activity of Drosophila in light-dark cycles. The lateral neurons evening oscillator (LN-EO) is important for promoting diurnal activity at dusk. We found that the LN-EO autonomously synchronized to light-dark cycles through either the cryptochrome (CRY) that it expressed or the visual system. In conditions in which CRY was not activated, flies depleted for pigment-dispersing factor (PDF) or its receptor lost the evening activity and displayed reversed PER oscillations in the LN-EO. Rescue experiments indicated that normal PER cycling and the presence of evening activity relied on PDF secretion from the large ventral lateral neurons and PDF receptor function in the LN-EO. The LN-EO thus integrates light inputs and PDF signaling to control Drosophila diurnal behavior, revealing a new clock-independent function for PDF.

  17. RFI in hybrid loops - Simulation and experimental results.

    NASA Technical Reports Server (NTRS)

    Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.

    1972-01-01

    A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of phase error variance and phase error probability density function (PDF). Monte-Carlo simulation is used to show that the HPLL can be superior to the conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.

  18. Monograph on the use of the multivariate Gram Charlier series Type A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hatayodom, T.; Heydt, G.

    1978-01-01

    The Gram-Charlier series is an infinite series expansion for a probability density function (pdf) in which the terms of the series are Hermite polynomials. There are several Gram-Charlier series; the best known is Type A. The Gram-Charlier series Type A (GCA) exists for both univariate and multivariate random variables. This monograph introduces the multivariate GCA and illustrates its use through several examples. A brief bibliography and discussion of Hermite polynomials is also included. 9 figures, 2 tables.
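
    A univariate Type A sketch using the probabilists' Hermite polynomials from NumPy: the standard normal density multiplied by third- and fourth-order Hermite corrections built from skewness and excess kurtosis (the monograph's subject, the multivariate series, follows the same pattern):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def gram_charlier_a(x, skew, ex_kurt):
    """Univariate Gram-Charlier Type A density for a standardized variable:
    Gaussian times a Hermite-polynomial correction from skewness/kurtosis."""
    phi = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
    # (skew/6)*He3(x) and (ex_kurt/24)*He4(x) in the probabilists' basis
    corr = 1.0 + hermeval(x, [0, 0, 0, skew / 6.0]) \
               + hermeval(x, [0, 0, 0, 0, ex_kurt / 24.0])
    return phi * corr

x = np.linspace(-4.0, 4.0, 9)
print(gram_charlier_a(x, skew=0.5, ex_kurt=0.3))
```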

  19. Inverse Problems in Complex Models and Applications to Earth Sciences

    NASA Astrophysics Data System (ADS)

    Bosch, M. E.

    2015-12-01

    The inference of the subsurface earth structure and properties requires the integration of different types of data, information, and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which map the dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied for the estimation of lithological structure of the crust, with the lithotype body regions conditioning the mass density and magnetic susceptibility fields. At planetary scale, the Earth mantle temperature and element composition is inferred from seismic travel-time and geodetic data.

  20. EUPDF: Eulerian Monte Carlo Probability Density Function Solver for Applications With Parallel Computing, Unstructured Grids, and Sprays

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    1998-01-01

    The success of any solution methodology used in the study of gas-turbine combustor flows depends a great deal on how well it can model the various complex and rate controlling processes associated with the spray's turbulent transport, mixing, chemical kinetics, evaporation, and spreading rates, as well as convective and radiative heat transfer and other phenomena. The phenomena to be modeled, which are controlled by these processes, often strongly interact with each other at different times and locations. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. The influence of turbulence in a diffusion flame manifests itself in several forms, ranging from the so-called wrinkled, or stretched, flamelets regime to the distributed combustion regime, depending upon how turbulence interacts with various flame scales. Conventional turbulence models have difficulty treating highly nonlinear reaction rates. A solution procedure based on the composition joint probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices (such as extinction, blowoff limits, and emissions predictions) because it can account for nonlinear chemical reaction rates without making approximations. In an attempt to advance the state-of-the-art in multidimensional numerical methods, we at the NASA Lewis Research Center extended our previous work on the PDF method to unstructured grids, parallel computing, and sprays. EUPDF, which was developed by M.S. Raju of Nyma, Inc., was designed to be massively parallel and could easily be coupled with any existing gas-phase and/or spray solvers. EUPDF can use an unstructured mesh with mixed triangular, quadrilateral, and/or tetrahedral elements. The PDF method showed favorable results when applied to several supersonic diffusion flames and spray flames. The EUPDF source code will be available with the National Combustion Code (NCC) as a complete package.

  1. Chronostratigraphical Subdivision of the Late Glacial and the Holocene for the Alaska Region

    NASA Astrophysics Data System (ADS)

    Michczynska, D. J.; Hajdas, I.

    2009-04-01

    Our work is a kind of so-called data mining. The first step of our work was the collection of radiocarbon data for samples coming from Alaska. We constructed a database using the Radiocarbon Measurements Lists published by different radiocarbon laboratories (mainly in the journal Radiocarbon). The next step was a careful analysis of the collected dates. We excluded from our analysis all dates suspected of contamination by younger or older organic matter; such contamination can be identified, for instance, on the basis of an inconsistency between radiocarbon age and stratigraphy or palynology. Finally, we calibrated the whole large set of chosen radiocarbon dates and constructed the probability density function (PDF). Analysis of the shape of the PDF was the subject of previous research (e.g. Michczynska and Pazdur, 2004; Macklin et al., 2006; Starkel et al., 2006; Michczynska et al., 2007). In our analysis we take into account the distinct tendency to collect samples from specific horizons: it is a general rule to take samples for radiocarbon dating from places of visible sedimentation changes or changes in the palynological diagram. Therefore the culminations of the PDF represent periods of environmental changes and can be helpful in identifying the chronostratigraphical boundaries on the calendar time scale. References: Michczyńska D.J., Pazdur A., 2004. A shape analysis of the cumulative probability density function of a radiocarbon date set in the study of climate change in the Late Glacial and Holocene. Radiocarbon 46(2): 733-744. Michczyńska D.J., Michczyński A., Pazdur A., 2007. Frequency distribution of radiocarbon dates as a tool for reconstructing environmental changes. Radiocarbon 49(2): 799-806. Macklin M.G., Benito G., Gregory K.J., Johnstone E., Lewin J., Michczyńska D.J., Soja R., Starkel L., Thorndycraft V.R., 2006. Past hydrological events reflected in the Holocene fluvial record of Europe. CATENA 66: 145-154. Starkel L., Soja R., Michczyńska D.J., 2006. Past hydrological events reflected in Holocene history of Polish rivers. CATENA 66: 24-33.
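
    The PDF-construction step can be sketched as a summed density of per-sample date distributions. The sketch below uses Gaussian date PDFs and synthetic ages purely for illustration; actual work calibrates each radiocarbon age against a calibration curve first.

```python
import numpy as np

def summed_pdf(dates, errors, grid):
    """Sum of per-sample date PDFs (simplified: Gaussians on the calendar
    axis instead of properly calibrated 14C distributions)."""
    pdf = np.zeros_like(grid)
    for mu, sig in zip(dates, errors):
        pdf += np.exp(-0.5 * ((grid - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    return pdf / len(dates)   # normalized so the summed PDF integrates to one

grid = np.linspace(0.0, 15000.0, 3001)       # cal yr BP
rng = np.random.default_rng(6)
dates = rng.uniform(500.0, 14000.0, 300)     # hypothetical calibrated ages
pdf = summed_pdf(dates, np.full(300, 80.0), grid)
print("highest culmination at cal yr BP:", grid[np.argmax(pdf)])
```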

  2. Q-space truncation and sampling in diffusion spectrum imaging.

    PubMed

    Tian, Qiyuan; Rokem, Ariel; Folkerth, Rebecca D; Nummenmaa, Aapo; Fan, Qiuyun; Edlow, Brian L; McNab, Jennifer A

    2016-12-01

    To characterize the q-space truncation and sampling effects on the spin-displacement probability density function (PDF) in diffusion spectrum imaging (DSI). DSI data were acquired using the MGH-USC connectome scanner (G_max = 300 mT/m) with b_max = 30,000 s/mm², on 17 × 17 × 17, 15 × 15 × 15, and 11 × 11 × 11 grids in ex vivo human brains, and with b_max = 10,000 s/mm² on an 11 × 11 × 11 grid in vivo. An additional in vivo scan using b_max = 7,000 s/mm² and an 11 × 11 × 11 grid was performed with a derated gradient strength of 40 mT/m. PDFs and orientation distribution functions (ODFs) were reconstructed with different q-space filtering and PDF integration lengths, and from data down-sampled by factors of two and three. Both ex vivo and in vivo data showed Gibbs ringing in the PDFs, which becomes the main source of artifact in the subsequently reconstructed ODFs. For down-sampled data, PDFs interfere with their first replicas or their ringing, leading to obscured orientations in the ODFs. The minimum required q-space sampling density corresponds to a field-of-view approximately equal to twice the mean displacement distance (MDD) of the tissue. The 11 × 11 × 11 grid is suitable for both ex vivo and in vivo DSI experiments. To minimize the effects of Gibbs ringing, ODFs should be reconstructed from unfiltered q-space data with the integration length over the PDF constrained to around the MDD. Magn Reson Med 76:1750-1763, 2016. © 2016 International Society for Magnetic Resonance in Medicine.
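
    The Gibbs-ringing mechanism is easy to reproduce in one dimension: the displacement PDF is the Fourier transform of the q-space signal, so a hard truncation in q convolves the true PDF with a sinc kernel. Grid radii and the displacement scale below are illustrative:

```python
import numpy as np

# 1-D illustration of q-space truncation: the displacement PDF is the
# Fourier transform of the diffusion signal, so a hard cutoff in q
# convolves the true PDF with a sinc kernel and causes Gibbs ringing.
n, mdd = 1024, 12.0                                  # samples, displacement scale
kidx = np.fft.fftfreq(n) * n                         # q-space sample indices
signal = np.exp(-2.0 * (np.pi * kidx / n * mdd) ** 2)  # Gaussian signal

for q_max in (17, 11):                               # cf. 17^3 vs 11^3 grids
    truncated = np.where(np.abs(kidx) <= q_max, signal, 0.0)
    pdf = np.fft.fftshift(np.fft.ifft(truncated)).real
    print(q_max, "most negative PDF value:", pdf.min())  # negative lobes = ringing
```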

  3. Solution of the finite Milne problem in stochastic media with RVT Technique

    NASA Astrophysics Data System (ADS)

    Slama, Howida; El-Bedwhey, Nabila A.; El-Depsy, Alia; Selim, Mustafa M.

    2017-12-01

    This paper presents the solution to the Milne problem in the steady state with an isotropic scattering phase function. The properties of the medium are considered stochastic, with Gaussian or exponential distributions, and hence the problem is treated as a stochastic integro-differential equation. To get explicit forms for the radiant energy density, the linear extrapolation distance, the reflectivity, and the transmissivity in the deterministic case, the problem is solved using the Pomraning-Eddington method. The obtained solution is found to depend on the optical space variable and the thickness of the medium, which are considered random variables. The random variable transformation (RVT) technique is used to find the first probability density function (1-PDF) of the solution process. Then the stochastic linear extrapolation distance, reflectivity, and transmissivity are calculated. For illustration, numerical results and conclusions are provided.

  4. Carrier Modulation Via Waveform Probability Density Function

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2006-01-01

    Beyond the classic modes of carrier modulation by varying amplitude (AM), phase (PM), or frequency (FM), we extend the modulation domain of an analog carrier signal to include a class of general modulations which are distinguished by their probability density function histogram. Separate waveform states are easily created by varying the pdf of the transmitted waveform. Individual waveform states are assignable as proxies for digital ones or zeros. At the receiver, these states are easily detected by accumulating sampled waveform statistics and performing periodic pattern matching, correlation, or statistical filtering. No fundamental physical laws are broken in the detection process. We show how a typical modulation scheme would work in the digital domain and suggest how to build an analog version. We propose that clever variations of the modulating waveform (and thus the histogram) can provide simple steganographic encoding.
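
    A toy digital version of the scheme (assumed mapping: Gaussian noise for a 0, variance-matched uniform noise for a 1), with detection by comparing Kolmogorov-Smirnov statistics against the two candidate PDFs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Two waveform "states" with equal power but different PDFs.
def symbol(bit, n=2000):
    return rng.uniform(-np.sqrt(3), np.sqrt(3), n) if bit else rng.normal(0, 1, n)

def detect(samples):
    """Classify a received block by comparing KS statistics against the
    two candidate PDFs (a simple form of statistical pattern matching)."""
    d_gauss = stats.kstest(samples, "norm").statistic
    d_unif = stats.kstest(samples, "uniform",
                          args=(-np.sqrt(3), 2 * np.sqrt(3))).statistic
    return int(d_unif < d_gauss)

bits = [0, 1, 1, 0, 1]
print([detect(symbol(b)) for b in bits])   # should reproduce the bit pattern
```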

  5. Carrier Modulation Via Waveform Probability Density Function

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2004-01-01

    Beyond the classic modes of carrier modulation by varying amplitude (AM), phase (PM), or frequency (FM), we extend the modulation domain of an analog carrier signal to include a class of general modulations which are distinguished by their probability density function histogram. Separate waveform states are easily created by varying the pdf of the transmitted waveform. Individual waveform states are assignable as proxies for digital ONEs or ZEROs. At the receiver, these states are easily detected by accumulating sampled waveform statistics and performing periodic pattern matching, correlation, or statistical filtering. No fundamental natural laws are broken in the detection process. We show how a typical modulation scheme would work in the digital domain and suggest how to build an analog version. We propose that clever variations of the modulating waveform (and thus the histogram) can provide simple steganographic encoding.

  6. PDF modeling of near-wall turbulent flows

    NASA Astrophysics Data System (ADS)

    Dreeben, Thomas David

    1997-06-01

    Pdf methods are extended to include modeling of wall- bounded turbulent flows. For flows in which resolution of the viscous sublayer is desired, a Pdf near-wall model is developed in which the Generalized Langevin model is combined with an exact model for viscous transport. Durbin's method of elliptic relaxation is used to incorporate the wall effects into the governing equations without the use of wall functions or damping functions. Close to the wall, the Generalized Langevin model provides an analogy to the effect of the fluctuating continuity equation. This enables accurate modeling of the near-wall turbulent statistics. Demonstrated accuracy for fully-developed channel flow is achieved with a Pdf/Monte Carlo simulation, and with its related Reynolds-stress closure. For flows in which the details of the viscous sublayer are not important, a Pdf wall- function method is developed with the Simplified Langevin model.

  7. Comparison of Fatigue Life Estimation Using Equivalent Linearization and Time Domain Simulation Methods

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Dhainaut, Jean-Michel

    2000-01-01

    The Monte Carlo simulation method, in conjunction with the finite element large deflection modal formulation, is used to estimate the fatigue life of aircraft panels subjected to stationary Gaussian band-limited white-noise excitations. Ten loading cases varying from 106 dB to 160 dB OASPL with a bandwidth of 1024 Hz are considered. For each load case, response statistics are obtained from an ensemble of 10 response time histories. The finite element nonlinear modal procedure yields time histories, probability density functions (PDF), power spectral densities, and higher statistical moments of the maximum deflection and stress/strain. The method of moments of the PSD with Dirlik's approach is employed to estimate the panel fatigue life.
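
    The method of moments starts from the spectral moments of the stress PSD; a sketch for a band-limited white-noise PSD with illustrative levels. Only the narrow-band Rayleigh limit is noted here; Dirlik's empirical PDF combines the same moments m0..m4 (a mixture of one exponential and two Rayleigh terms) to handle wide-band responses.

```python
import numpy as np

def spectral_moments(f, S, orders=(0, 1, 2, 4)):
    """Spectral moments m_n = integral of f**n * S(f) df of a one-sided PSD."""
    return {n: np.trapz(f**n * S, f) for n in orders}

f = np.linspace(1.0, 1024.0, 4096)     # Hz, band-limited as in the study
S = np.full_like(f, 1e-2)              # flat (white-noise) stress PSD level

m = spectral_moments(f, S)
sigma = np.sqrt(m[0])                  # RMS stress
nu_p = np.sqrt(m[4] / m[2])            # expected rate of peaks
print(f"RMS = {sigma:.2f}, peak rate = {nu_p:.0f} /s")
# Narrow-band limit: stress peaks are Rayleigh-distributed with scale sigma.
```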

  8. Application of Non-Equilibrium Thermo Field Dynamics to quantum teleportation under the environment

    NASA Astrophysics Data System (ADS)

    Kitajima, S.; Arimitsu, T.; Obinata, M.; Yoshida, K.

    2014-06-01

    Quantum teleportation for continuous variables is treated by Non-Equilibrium Thermo Field Dynamics (NETFD), a canonical operator formalism for dissipative quantum systems, in order to study the effect of imperfect quantum entanglement on quantum communication. We used an entangled state constructed from two squeezed states. The entangled state is imperfect for two reasons: one is the finiteness of the squeezing parameter r, and the other comes from the process in which the squeezed states are created under dissipative interaction with the environment. We derive expressions for the one-shot fidelity (OSF), the probability density function (PDF) associated with the OSF, and the (averaged) fidelity by making full use of the algebraic manipulation of operator algebra within NETFD. We found that the OSF and PDF are given by Gaussian forms peaked at the original information α to be teleported, and that for r≫1 the variances of these quantities blow up to infinity for κ/χ≤1, while they approach finite values for κ/χ>1. Here, χ represents the intensity of a degenerate parametric process, and κ the relaxation rate due to the interaction with the environment. The blow-up of the variances of the OSF and PDF guarantees higher security against eavesdropping. With the blow-up of the variances, the height of the PDF becomes small because of the normalization of probability, while the height of the OSF approaches 1, indicating a higher performance of the quantum teleportation. We also found that in the limit κ/χ≫1 the variances of both the OSF and the PDF for any value of r (>0) reduce to 1, which is the same value as in the case r=0, i.e., no entanglement.

  9. New stochastic approach for extreme response of slow drift motion of moored floating structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kato, Shunji; Okazaki, Takashi

    1995-12-31

    A new stochastic method for investigating the slow drift response statistics of moored floating structures is described. Assuming that the wave drift excitation process can be driven by a Gaussian white noise process, an exact stochastic equation governing the time evolution of the response probability density function (PDF), a generalized Fokker-Planck (GFP) equation, is derived on the basis of the projection operator technique from statistical physics. In order to obtain an approximate solution of the GFP equation, the authors develop a renormalized perturbation technique, a kind of singular perturbation method, and solve the GFP equation taking into account up to third-order moments of a non-Gaussian excitation. As an example of the present method, a closed form of the joint PDF is derived for the linear response in surge motion subjected to a non-Gaussian wave drift excitation; it is represented by the product of a form factor and quasi-Cauchy PDFs. In this case, the motion displacement and velocity processes are not mutually independent if the excitation process has a significant third-order moment. A comparison between the response PDF of the present solution and the exact one derived by Naess shows that the present solution is effective for calculating both the response PDF and the joint PDF. Furthermore, it is shown that displacement-velocity independence holds if the damping coefficient in the equation of motion is not too large, and that both the non-Gaussian property of the excitation and the damping coefficient should be taken into account when estimating the exceedance probability of the response.

  10. Intermittent turbulence and turbulent structures in LAPD and ET

    NASA Astrophysics Data System (ADS)

    Carter, T. A.; Pace, D. C.; White, A. E.; Gauvreau, J.-L.; Gourdain, P.-A.; Schmitz, L.; Taylor, R. J.

    2006-12-01

    Strongly intermittent turbulence is observed in the shadow of a limiter in the Large Plasma Device (LAPD) and in both the inboard and outboard scrape-off-layer (SOL) in the Electric Tokamak (ET) at UCLA. In LAPD, the amplitude probability distribution function (PDF) of the turbulence is strongly skewed, with density depletion events (or "holes") dominant in the high density region and density enhancement events (or "blobs") dominant in the low density region. Two-dimensional cross-conditional averaging shows that the blobs are detached, outward-propagating filamentary structures with a clear dipolar potential while the holes appear to be part of a more extended turbulent structure. A statistical study of the blobs reveals a typical size of ten times the ion sound gyroradius and a typical velocity of one tenth the sound speed. In ET, intermittent turbulence is observed on both the inboard and outboard midplane.
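
    The skewness-based reading of the amplitude PDF can be illustrated with a synthetic signal; the burst rates and amplitudes below are invented for the demonstration and are not LAPD data.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)

# Synthetic ion-saturation-current-like signal: Gaussian background plus
# sparse positive bursts ("blobs"); holes would enter with negative sign.
n = 100_000
signal = rng.normal(0.0, 1.0, n)
burst_idx = rng.choice(n, size=n // 200, replace=False)
signal[burst_idx] += rng.exponential(3.0, size=burst_idx.size)

print(f"skewness = {skew(signal):.2f}")   # > 0: blob-dominated; < 0: hole-dominated

# Amplitude PDF estimate (normalized histogram), as used for such plots
pdf, edges = np.histogram(signal, bins=100, density=True)
```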

  11. Use of uninformative priors to initialize state estimation for dynamical systems

    NASA Astrophysics Data System (ADS)

    Worthy, Johnny L.; Holzinger, Marcus J.

    2017-10-01

    The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation, and examines the difference in results when an estimation scheme is initialized via a traditional transformation versus the alternative approach.
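
    A one-dimensional numerical check of the underlying transformation theorem, p_Y(y) = p_X(g⁻¹(y)) |dg⁻¹/dy|, is sketched below with an arbitrary nonlinear map standing in for a change of state-space coordinates.

```python
import numpy as np

rng = np.random.default_rng(2)

# X uniform on [1, 2]; transform g(x) = x**2 (a stand-in for a nonlinear
# state-space mapping). The transformation theorem gives
#   p_Y(y) = p_X(g^{-1}(y)) * |d g^{-1}/dy| = 1 * 1/(2*sqrt(y))  on [1, 4].
x = rng.uniform(1.0, 2.0, 1_000_000)
y = x**2

hist, edges = np.histogram(y, bins=50, range=(1.0, 4.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
analytic = 1.0 / (2.0 * np.sqrt(centers))
print(np.max(np.abs(hist - analytic)))   # small: uniform-in-x is NOT uniform-in-y
```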

  12. Performance of mixed RF/FSO systems in exponentiated Weibull distributed channels

    NASA Astrophysics Data System (ADS)

    Zhao, Jing; Zhao, Shang-Hong; Zhao, Wei-Hu; Liu, Yun; Li, Xuan

    2017-12-01

    This paper presents the performance of an asymmetric mixed radio frequency (RF)/free-space optical (FSO) system with an amplify-and-forward relaying scheme. The RF link undergoes Nakagami-m fading, and the Exponentiated Weibull distribution is adopted for the FSO component. Mathematical formulas for the cumulative distribution function (CDF), probability density function (PDF) and moment generating function (MGF) of the equivalent signal-to-noise ratio (SNR) are derived. From the end-to-end statistical characteristics, new analytical expressions for the outage probability are obtained. Under various modulation techniques, we derive the average bit-error rate (BER) based on the Meijer G-function. Evaluations and simulations of the system performance are provided, and the aperture averaging effect is discussed as well.

  13. Bayesian Analysis of a Simple Measurement Model Distinguishing between Types of Information

    NASA Astrophysics Data System (ADS)

    Lira, Ignacio; Grientschnig, Dieter

    2015-12-01

    Let a quantity of interest, Y, be modeled in terms of a quantity X and a set of other quantities Z. Suppose that for Z there is type B information, by which we mean that it leads directly to a joint state-of-knowledge probability density function (PDF) for that set, without reference to likelihoods. Suppose also that for X there is type A information, which signifies that a likelihood is available. The posterior for X is then obtained by updating its prior with said likelihood by means of Bayes' rule, where the prior encodes whatever type B information there may be available for X. If there is no such information, an appropriate non-informative prior should be used. Once the PDFs for X and Z have been constructed, they can be propagated through the measurement model to obtain the PDF for Y, either analytically or numerically. But suppose that, at the same time, there is also information of type A, type B or both types together for the quantity Y. By processing such information in the manner described above we obtain another PDF for Y. Which one is right? Should both PDFs be merged somehow? Is there another way of applying Bayes' rule such that a single PDF for Y is obtained that encodes all existing information? In this paper we examine what we believe should be the proper ways of dealing with such a (not uncommon) situation.
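
    A minimal Monte Carlo sketch of this workflow is given below, assuming a toy measurement model Y = XZ, a Gaussian type-B PDF for Z, and a broad Gaussian prior as a stand-in for a non-informative prior on X; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Type A information for X: repeated indications with known noise sigma.
obs = np.array([1.02, 0.98, 1.05, 1.01])
sigma = 0.05
# Broad Gaussian prior (a proxy for a non-informative prior) updated by Bayes' rule;
# for a Gaussian likelihood the posterior is Gaussian with these parameters:
prior_mu, prior_var = 0.0, 1e6
post_var = 1.0 / (1.0 / prior_var + obs.size / sigma**2)
post_mu = post_var * (prior_mu / prior_var + obs.sum() / sigma**2)

# Type B information for Z: a state-of-knowledge PDF assigned directly.
z = rng.normal(0.50, 0.02, 200_000)
x = rng.normal(post_mu, np.sqrt(post_var), 200_000)

# Propagate through the measurement model Y = f(X, Z); here f(x, z) = x*z.
y = x * z
print(f"Y = {y.mean():.4f} +/- {y.std():.4f}")
```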

  14. Relation Between Firing Statistics of Spiking Neuron with Delayed Fast Inhibitory Feedback and Without Feedback

    NASA Astrophysics Data System (ADS)

    Vidybida, Alexander; Shchur, Olha

    We consider a class of spiking neuronal models defined by a set of conditions typical of basic threshold-type models, such as the leaky integrate-and-fire or the binding neuron model, and also of some artificial neurons. A neuron is fed with a Poisson process. Each output impulse is applied to the neuron itself after a finite delay Δ; this impulse acts as if delivered through a fast Cl-type inhibitory synapse. We derive a general relation which allows calculating exactly the probability density function (pdf) p(t) of the output interspike intervals of a neuron with feedback, based on the known pdf p0(t) for the same neuron without feedback and on the properties of the feedback line (the value of Δ). Similar relations between the corresponding moments are derived. Furthermore, we prove that the initial segment of the pdf p0(t) for a neuron with a fixed threshold level is the same for any neuron satisfying the imposed conditions and is completely determined by the input stream. For the Poisson input stream, we calculate that initial segment exactly and, based on it, obtain exactly the initial segment of the pdf p(t) for a neuron with feedback. That is, the initial segment of p(t) is model-independent as well. The obtained expressions are checked by means of Monte Carlo simulation. The course of p(t) has a pronounced peculiarity which makes it impossible to approximate p(t) by a Poisson or another simple stochastic process.

  15. Statistical Orbit Determination using the Particle Filter for Incorporating Non-Gaussian Uncertainties

    NASA Technical Reports Server (NTRS)

    Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell

    2012-01-01

    The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e., the Kalman filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors, captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied to the estimation and propagation of a highly eccentric orbit, and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
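
    A minimal bootstrap particle filter for a scalar nonlinear system is sketched below to illustrate the propagate-weight-resample cycle; the dynamics, noise, and measurements are invented for the demonstration and are far simpler than the orbital problem treated in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

N = 5000                                   # particles approximating the full PDF
particles = rng.normal(0.0, 1.0, N)
weights = np.full(N, 1.0 / N)

def propagate(x):
    # Nonlinear dynamics with heavy-tailed (non-Gaussian) process noise
    return 0.5 * x + 2.0 * np.sin(x) + rng.standard_t(df=3, size=x.shape)

def likelihood(z, x, r=0.5):
    return np.exp(-0.5 * ((z - x) / r) ** 2)

for z in [1.1, 0.4, -0.3]:                 # measurement sequence (illustrative)
    particles = propagate(particles)
    weights *= likelihood(z, particles)
    weights /= weights.sum()
    # Resample when the effective sample size collapses
    if 1.0 / np.sum(weights**2) < N / 2:
        u = (rng.random() + np.arange(N)) / N
        particles = particles[np.searchsorted(np.cumsum(weights), u)]
        weights = np.full(N, 1.0 / N)

print("posterior mean:", np.sum(weights * particles))
```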

  16. Control of Networked Traffic Flow Distribution - A Stochastic Distribution System Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hong; Aziz, H M Abdul; Young, Stan

    Networked traffic flow is a common scenario for urban transportation, where the distribution of vehicle queues, either at controlled intersections or on highway segments, reflects the smoothness of the traffic flow in the network. At signalized intersections, the traffic queues are controlled by the traffic signal settings; effective traffic-light control would both smooth traffic flow and minimize fuel consumption. Funded by the Energy Efficient Mobility Systems (EEMS) program of the Vehicle Technologies Office of the US Department of Energy, we performed a preliminary investigation of the modeling and control framework in the context of an urban network of signalized intersections. Specifically, we developed a recursive input-output traffic queueing model. Queue formation can be modeled as a stochastic process in which the number of vehicles entering each intersection is a random number. Further, we proposed a preliminary B-spline stochastic model for a one-way single-lane corridor traffic system based on the theory of stochastic distribution control. It has been shown that the developed stochastic model provides the optimal probability density function (PDF) of the traffic queueing length as a dynamic function of the traffic signal setting parameters. Based upon such a stochastic distribution model, we have proposed a preliminary closed-loop framework for stochastic distribution control of the traffic queueing system, making the queueing-length PDF follow a target PDF that potentially realizes a smooth traffic flow distribution in the corridor of interest.

  17. Monte Carlo method for determining earthquake recurrence parameters from short paleoseismic catalogs: Example calculations for California

    USGS Publications Warehouse

    Parsons, T.

    2008-01-01

    Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques (e.g., Ellsworth et al., 1999). In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means (e.g., NIST/SEMATECH, 2006). For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
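
    The ranked Monte Carlo search can be sketched as below, assuming a lognormal recurrence PDF parameterized by its mean interval and coefficient of variation; the interval series and parameter ranges are illustrative, and the published method differs in detail.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

intervals = np.array([132.0, 95.0, 210.0, 161.0, 87.0])   # yr, illustrative

def lognorm_pdf(x, mean, cov):
    # Parameterize a lognormal by its mean recurrence and coefficient of variation
    sigma2 = np.log(1.0 + cov**2)
    mu = np.log(mean) - 0.5 * sigma2
    return stats.lognorm.pdf(x, s=np.sqrt(sigma2), scale=np.exp(mu))

trials = []
for _ in range(100_000):
    mean = rng.uniform(50.0, 400.0)
    cov = rng.uniform(0.1, 1.5)
    # Joint likelihood of the observed interseismic intervals for this draw
    L = np.prod(lognorm_pdf(intervals, mean, cov))
    trials.append((L, mean, cov))

trials.sort(reverse=True)                 # ranked distribution of parameters
best_L, best_mean, best_cov = trials[0]
print(f"most likely: mean = {best_mean:.0f} yr, COV = {best_cov:.2f}")
```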

  18. Picturing Data With Uncertainty

    NASA Technical Reports Server (NTRS)

    Kao, David; Love, Alison; Dungan, Jennifer L.; Pang, Alex

    2004-01-01

    NASA is in the business of creating maps for scientific purposes to represent important biophysical or geophysical quantities over space and time. For example, maps of surface temperature over the globe tell scientists where and when the Earth is heating up; regional maps of the greenness of vegetation tell scientists where and when plants are photosynthesizing. There is always uncertainty associated with each value in any such map due to various factors. When uncertainty is fully modeled, instead of a single value at each map location, there is a distribution expressing a set of possible outcomes at each location. We consider such distribution data as multi-valued data since they consist of a collection of values about a single variable. Thus, multi-valued data represent both the map and its uncertainty. We have been working on ways to visualize spatial multi-valued data sets effectively for fields with regularly spaced units or grid cells such as those in NASA's Earth science applications. A new way to display distributions at multiple grid locations is to project the distributions from an individual row, column or other user-selectable straight transect from the 2D domain. First, at each grid cell in a given slice (row, column or transect), we compute a smooth density estimate from the underlying data. Such a density estimate for the probability density function (PDF) is generally more useful than a histogram, which is a classic density estimate. Then, the collection of PDFs along a given slice is presented vertically above the slice and forms a wall. To minimize occlusion of intersecting slices, the corresponding walls are positioned at the far edges of the boundary. The PDF wall depicts the shapes of the distributions very clearly since peaks represent the modes (or bumps) in the PDFs. We've defined roughness as the number of peaks in the distribution. Roughness is another useful summary statistic for multimodal distributions. The uncertainty of the multi-valued data can also be interpreted by the number of peaks and the widths of the peaks as shown by the PDF walls.
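
    A sketch of building one PDF wall is given below, using a Gaussian kernel density estimate per grid cell and counting peaks as the roughness measure; the multi-valued data are synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)

# Multi-valued data: 100 possible outcomes at each of 20 cells along a transect
n_cells, n_outcomes = 20, 100
data = rng.normal(loc=np.linspace(280.0, 300.0, n_cells)[:, None],
                  scale=2.0, size=(n_cells, n_outcomes))
data[n_cells // 2] += rng.choice([-5.0, 5.0], size=n_outcomes)  # one bimodal cell

grid = np.linspace(265.0, 315.0, 200)
wall = np.array([gaussian_kde(cell)(grid) for cell in data])    # one PDF per cell

# Roughness = number of peaks (local maxima) in each cell's density estimate
peaks = [(np.diff(np.sign(np.diff(p))) < 0).sum() for p in wall]
print(peaks)   # ~1 everywhere, ~2 for the bimodal cell
```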

  19. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to an adequate assessment of the effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilation approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve this, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precisions and sources of uncertainty. Single CPT soundings were modeled as a rational probability density curve by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF at the CPT positions were built from the borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated in the Bayesian reverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and the Bayesian method. The differences between single CPT soundings under a normal-distribution assumption and the simulated probability density curve based on maximum entropy theory are also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization of a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results provide a multi-precision information assimilation method for other geotechnical parameters.

  20. Relative frequencies of constrained events in stochastic processes: An analytical approach.

    PubMed

    Rusconi, S; Akhmatskaya, E; Sokolovski, D; Ballard, N; de la Cal, J C

    2015-10-01

    The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of a PDF are difficult to specify in advance. Knowing the shapes of the PDFs and using experimental data, different optimization schemes can be applied in order to evaluate the probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor on the order of the sample size (at least ≈10^4). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in an exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, and this makes the method useful for various applications.

  1. Effect of supersonic molecular-beam injection on edge fluctuation and particle transport in Heliotron J

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zang, L., E-mail: l-zang@center.iae.kyoto-u.ac.jp; Kasajima, K.; Hashimoto, K.

    Edge fluctuations in a supersonic molecular-beam injection (SMBI) fueled plasma have been measured using an electrostatic probe array. After SMBI, the plasma stored energy (W_p) temporarily decreased and then started to increase. The local plasma fluctuations and the fluctuation-induced particle transport before and after SMBI have been analyzed. In a short interval (∼4 ms) just after SMBI, the broad-band low-frequency density fluctuations increased, and the probability density function (PDF) changed from nearly Gaussian to positively skewed and non-Gaussian, suggesting that intermittent structures were produced by SMBI. The fluctuation-induced particle transport was also greatly enhanced during this short interval. About 4 ms after SMBI, the low-frequency broad-band density fluctuations decreased, the PDF returned to a nearly Gaussian shape, and the fluctuation-induced particle transport was reduced. Compared with a conventional gas puff, the W_p degradation window is very short due to the short injection period of SMBI. After this short degradation window, fluctuation-induced particle transport was reduced and W_p entered its climbing phase. Therefore, the short period of influence on the edge fluctuations might be an advantage of this novel fueling technique. On the other hand, although their roles are not identified at present, coherent MHD modes are also suppressed by the application of SMBI. These MHD modes are thought to be de-excited due to a sudden change of the edge density and/or excitation conditions.

  2. PDF receptor signaling in Caenorhabditis elegans modulates locomotion and egg-laying.

    PubMed

    Meelkop, Ellen; Temmerman, Liesbet; Janssen, Tom; Suetens, Nick; Beets, Isabel; Van Rompay, Liesbeth; Shanmugam, Nilesh; Husson, Steven J; Schoofs, Liliane

    2012-09-25

    In Caenorhabditis elegans, pdfr-1 encodes three receptors of the secretin receptor family. These G protein-coupled receptors are activated by three neuropeptides, pigment dispersing factors 1a, 1b and 2, which are encoded by pdf-1 and pdf-2. We isolated a PDF receptor loss-of-function allele (lst34) by means of a mutagenesis screen and show that the PDF signaling system is involved in locomotion and egg-laying. We demonstrate that the pdfr-1 mutant phenocopies the defective locomotor behavior of the pdf-1 mutant and that pdf-1 and pdf-2 behave antagonistically. All three PDF receptor splice variants are involved in the regulation of locomotor behavior. Cell-specific rescue experiments show that this PDF-mediated behavior is regulated by neurons rather than body wall muscles. We also show that the egg-laying patterns of pdf-1 and pdf-2 mutants are affected, but not those of pdfr-1 mutants, pointing to a novel role for the PDF system in the regulation of egg-laying.

  3. GW182 controls Drosophila circadian behavior and PDF-Receptor signaling

    PubMed Central

    Zhang, Yong; Emery, Patrick

    2013-01-01

    The neuropeptide PDF is crucial for Drosophila circadian behavior: it keeps circadian neurons synchronized. Here, we identify GW182 as a key regulator of PDF signaling. Indeed, GW182 downregulation results in phenotypes similar to those of Pdf and Pdf-receptor (Pdfr) mutants. gw182 genetically interacts with Pdfr and cAMP signaling, which is essential for PDFR function. GW182 mediates miRNA-dependent gene silencing through its interaction with AGO1. Consistently, GW182's AGO1 interaction domain is required for GW182's circadian function. Moreover, our results indicate that GW182 modulates PDFR signaling by silencing the expression of the cAMP phosphodiesterase DUNCE. Importantly, this repression is under photic control, and GW182 activity level, which is limiting in circadian neurons, influences the responses of the circadian neural network to light. We propose that GW182's gene silencing activity functions as a rheostat for PDFR signaling, and thus profoundly impacts the circadian neural network and its response to environmental inputs. PMID:23583112

  4. Steady-state probability density function of the phase error for a DPLL with an integrate-and-dump device

    NASA Technical Reports Server (NTRS)

    Simon, M.; Mileant, A.

    1986-01-01

    The steady-state behavior of a particular type of digital phase-locked loop (DPLL) with an integrate-and-dump circuit following the phase detector is characterized in terms of the probability density function (pdf) of the phase error in the loop. Although the loop is entirely digital from an implementation standpoint, it operates at two extremely different sampling rates. In particular, the combination of a phase detector and an integrate-and-dump circuit operates at a very high rate whereas the loop update rate is very slow by comparison. Because of this dichotomy, the loop can be analyzed by hybrid analog/digital (s/z domain) techniques. The loop is modeled in such a general fashion that previous analyses of the Real-Time Combiner (RTC), Subcarrier Demodulator Assembly (SDA), and Symbol Synchronization Assembly (SSA) fall out as special cases.

  5. Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data

    NASA Astrophysics Data System (ADS)

    Li, Lan; Chen, Erxue; Li, Zengyuan

    2013-01-01

    This paper presents an unsupervised clustering algorithm based on the expectation-maximization (EM) algorithm for finite mixture modeling, using the complex Wishart probability density function (PDF) for the class-conditional probabilities. The mixture model makes it possible to handle heterogeneous thematic classes that cannot be fitted well by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to form the initial partition. We then use the Wishart probability density function of the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used as the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method performs better.
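
    The EM skeleton below illustrates the posterior-probability (E) and weighted-update (M) steps with one-dimensional Gaussian components standing in for the complex Wishart PDF, which would replace the `norm.pdf` call for PolSAR covariance data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(-2, 0.5, 400), rng.normal(1, 1.0, 600)])

K = 2
pi = np.full(K, 1.0 / K)                       # mixing proportions (priors)
mu, sd = np.array([-1.0, 0.5]), np.array([1.0, 1.0])

for _ in range(50):
    # E-step: posterior class probabilities for each sample
    resp = pi * norm.pdf(x[:, None], mu, sd)   # for PolSAR, a Wishart PDF here
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted parameter updates
    Nk = resp.sum(axis=0)
    pi = Nk / x.size
    mu = (resp * x[:, None]).sum(axis=0) / Nk
    sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)

print(pi, mu, sd)
```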

  6. Divergence of perturbation theory in large scale structures

    NASA Astrophysics Data System (ADS)

    Pajer, Enrico; van der Woude, Drian

    2018-05-01

    We make progress towards an analytical understanding of the regime of validity of perturbation theory for large scale structures and the nature of some non-perturbative corrections. We restrict ourselves to 1D gravitational collapse, for which exact solutions before shell crossing are known. We review the convergence of perturbation theory for the power spectrum, recently proven by McQuinn and White [1], and extend it to non-Gaussian initial conditions and the bispectrum. In contrast, we prove that perturbation theory diverges for the real space two-point correlation function and for the probability density function (PDF) of the density averaged in cells and all the cumulants derived from it. We attribute these divergences to the statistical averaging intrinsic to cosmological observables, which, even on very large and "perturbative" scales, gives non-vanishing weight to all extreme fluctuations. Finally, we discuss some general properties of non-perturbative effects in real space and Fourier space.

  7. Velocity statistics of the Nagel-Schreckenberg model

    NASA Astrophysics Data System (ADS)

    Bain, Nicolas; Emig, Thorsten; Ulm, Franz-Josef; Schreckenberg, Michael

    2016-02-01

    The statistics of velocities in the cellular automaton model of Nagel and Schreckenberg for traffic are studied. From numerical simulations, we obtain the probability distribution function (PDF) for vehicle velocities and the velocity-velocity (vv) covariance function. We identify the probability to find a standing vehicle as a potential order parameter that nicely signals the transition between free and congested flow for a sufficiently large number of velocity states. Our results for the vv covariance function resemble features of a second-order phase transition. We develop a 3-body approximation that allows us to relate the PDFs for velocities and headways. Using this relation, an approximation to the velocity PDF is obtained from the headway PDF observed in simulations. We find a remarkable agreement between this approximation and the velocity PDF obtained from simulations.
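
    A minimal implementation of the Nagel-Schreckenberg update rules, sufficient to generate a velocity PDF and the probability of a standing vehicle, is sketched below; lattice size, density, and slowdown probability are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

L, density, vmax, p_slow, steps = 1000, 0.15, 5, 0.25, 2000
n_cars = int(L * density)
pos = np.sort(rng.choice(L, n_cars, replace=False)).astype(np.int64)
vel = rng.integers(0, vmax + 1, n_cars)

samples = []
for t in range(steps):
    # Gap to the car ahead on the ring (positions kept unwrapped, gaps taken mod L)
    gap = (np.roll(pos, -1) - pos - 1) % L
    vel = np.minimum(vel + 1, vmax)                    # 1. acceleration
    vel = np.minimum(vel, gap)                         # 2. braking
    slow = rng.random(n_cars) < p_slow
    vel = np.where(slow, np.maximum(vel - 1, 0), vel)  # 3. random slowdown
    pos = pos + vel                                    # 4. movement
    if t > steps // 2:                                 # discard the transient
        samples.append(vel.copy())

v = np.concatenate(samples)
pdf = np.bincount(v, minlength=vmax + 1) / v.size      # velocity PDF
print("P(v = 0) =", pdf[0])                            # candidate order parameter
```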

  8. Velocity statistics of the Nagel-Schreckenberg model.

    PubMed

    Bain, Nicolas; Emig, Thorsten; Ulm, Franz-Josef; Schreckenberg, Michael

    2016-02-01

    The statistics of velocities in the cellular automaton model of Nagel and Schreckenberg for traffic are studied. From numerical simulations, we obtain the probability distribution function (PDF) for vehicle velocities and the velocity-velocity (vv) covariance function. We identify the probability to find a standing vehicle as a potential order parameter that nicely signals the transition between free and congested flow for a sufficiently large number of velocity states. Our results for the vv covariance function resemble features of a second-order phase transition. We develop a 3-body approximation that allows us to relate the PDFs for velocities and headways. Using this relation, an approximation to the velocity PDF is obtained from the headway PDF observed in simulations. We find a remarkable agreement between this approximation and the velocity PDF obtained from simulations.

  9. Targeted Single-Site MOF Node Modification: Trivalent Metal Loading via Atomic Layer Deposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, In Soo; Borycz, Joshua; Platero-Prats, Ana E.

    Postsynthetic functionalization of metal-organic frameworks (MOFs) enables the controlled, high-density incorporation of new atoms on a crystallographically precise framework. Leveraging the broad palette of known atomic layer deposition (ALD) chemistries, ALD in MOFs (AIM) is one such targeted approach to construct diverse, highly functional, few-atom clusters. Here we demonstrate the saturating reaction of trimethylindium (InMe3) with the node hydroxyls and ligated water of NU-1000, which takes place without significant loss of MOF crystallinity or internal surface area. We computationally identify the elementary steps by which trimethylated trivalent metal compounds (ALD precursors) react with this Zr-based MOF node to generate a uniform and well-characterized new surface layer on the node itself, and we predict a final structure that is fully consistent with experimental X-ray pair distribution function (PDF) analysis. We further demonstrate tunable metal loading through a controlled number density of the reactive handles (-OH and -OH2), achieved through node dehydration at elevated temperatures.

  10. Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions

    NASA Astrophysics Data System (ADS)

    Chen, Nan; Majda, Andrew J.

    2018-02-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace, obtained via an extremely efficient parametric method, is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. In particular, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Unlike traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF, so a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method requires only O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.

  11. On the streaming model for redshift-space distortions

    NASA Astrophysics Data System (ADS)

    Kuruvilla, Joseph; Porciani, Cristiano

    2018-06-01

    The streaming model describes the mapping between real and redshift space for 2-point clustering statistics. Its key element is the probability density function (PDF) of line-of-sight pairwise peculiar velocities. Following a kinetic-theory approach, we derive the fundamental equations of the streaming model for ordered and unordered pairs. In the first case we recover the classic equation, while for unordered pairs we demonstrate that modifications are necessary. We then discuss several statistical properties of the pairwise velocities for dark matter (DM) particles and haloes by using a suite of high-resolution N-body simulations. We test the often-used Gaussian ansatz for the PDF of pairwise velocities and discuss its limitations. Finally, we introduce a mixture of Gaussians, known in statistics as the generalised hyperbolic distribution, and show that it provides an accurate fit to the PDF. Once inserted in the streaming equation, the fit yields an excellent description of redshift-space correlations at all scales that vastly outperforms the Gaussian and exponential approximations. Using a principal-component analysis, we reduce the complexity of our model for large redshift-space separations. Our results increase the robustness of studies of anisotropic galaxy clustering and are useful for extending them towards smaller scales in order to test theories of gravity and interacting dark-energy models.

  12. Anomalous fluctuations of vertical velocity of Earth and their possible implications for earthquakes.

    PubMed

    Manshour, Pouya; Ghasemi, Fatemeh; Matsumoto, T; Gómez, J; Sahimi, Muhammad; Peinke, J; Pacheco, A F; Tabar, M Reza Rahimi

    2010-09-01

    High-quality measurements of seismic activity around the world provide a wealth of data and information relevant to understanding when earthquakes may occur. If viewed as complex stochastic time series, such data may be analyzed by methods that provide deeper insights into their nature, hence leading to better understanding of the data and their possible implications for earthquakes. In this paper, we provide further evidence for our recent proposal [P. Manshour et al., Phys. Rev. Lett. 102, 014101 (2009); doi:10.1103/PhysRevLett.102.014101] of the existence of a transition in the shape of the probability density function (PDF) of the successive detrended increments of the stochastic fluctuations of Earth's vertical velocity V_z, collected by broadband stations before moderate and large earthquakes. To demonstrate the transition, we carried out extensive analysis of the V_z data for 12 earthquakes in several regions around the world, including the recent catastrophic one in Haiti. The analysis supports the hypothesis that before and near the time of an earthquake, the shape of the PDF undergoes significant and discernible changes, which can be characterized quantitatively. The typical time over which the PDF undergoes the transition is about 5-10 h prior to a moderate or large earthquake.

  13. A method for approximating acoustic-field-amplitude uncertainty caused by environmental uncertainties.

    PubMed

    James, Kevin R; Dowling, David R

    2008-09-01

    In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N>1.

  14. Multiscaling properties of coastal waters particle size distribution from LISST in situ measurements

    NASA Astrophysics Data System (ADS)

    Pannimpullath Remanan, R.; Schmitt, F. G.; Loisel, H.; Mériaux, X.

    2013-12-01

    A Eulerian high-frequency (1 Hz) sampling of the particle size distribution (PSD) was performed over 5 tidal cycles (65 hours) in a coastal environment of the eastern English Channel. The particle data are recorded using a LISST-100X type C (Laser In Situ Scattering and Transmissometry, Sequoia Scientific), which records volume concentrations of particles with diameters ranging from 2.5 to 500 μm in 32 logarithmically spaced size classes. This enables the estimation at each time step (every second) of the probability density function of particle sizes. At every time step, the PDF of the PSD is hyperbolic, so a time series of PSD slopes can be estimated. Power spectral analysis shows that the mean diameter of the suspended particles displays scaling at high frequencies (from 1 s to 1000 s). The scaling properties of the particle sizes are studied by computing the moment function from the PDF of the size distribution. Moment functions at many different time scales (from 1 s to 1000 s) are computed and their scaling properties considered. The Shannon entropy at each time scale is also estimated and related to other parameters. The multiscaling properties of the turbidity (the attenuation coefficient cp computed from the LISST) are also considered on the same time scales, using empirical mode decomposition.

  15. Global warming precipitation accumulation increases above the current-climate cutoff scale

    PubMed Central

    Sahany, Sandeep; Stechmann, Samuel N.; Bernstein, Diana N.

    2017-01-01

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff. PMID:28115693

  16. Global warming precipitation accumulation increases above the current-climate cutoff scale

    NASA Astrophysics Data System (ADS)

    Neelin, J. David; Sahany, Sandeep; Stechmann, Samuel N.; Bernstein, Diana N.

    2017-02-01

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.

  17. Global warming precipitation accumulation increases above the current-climate cutoff scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neelin, J. David; Sahany, Sandeep; Stechmann, Samuel N.

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.

  18. Global warming precipitation accumulation increases above the current-climate cutoff scale.

    PubMed

    Neelin, J David; Sahany, Sandeep; Stechmann, Samuel N; Bernstein, Diana N

    2017-02-07

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.

  19. Global warming precipitation accumulation increases above the current-climate cutoff scale

    DOE PAGES

    Neelin, J. David; Sahany, Sandeep; Stechmann, Samuel N.; ...

    2017-01-23

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.

  20. Modeling the Lyα Forest in Collisionless Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorini, Daniele; Oñorbe, José; Lukić, Zarija

    2016-08-11

    Cosmological hydrodynamic simulations can accurately predict the properties of the intergalactic medium (IGM), but only under the condition of retaining the high spatial resolution necessary to resolve density fluctuations in the IGM. This resolution constraint prohibits simulating large volumes, such as those probed by BOSS and future surveys like DESI and 4MOST. To overcome this limitation, we present in this paper "Iteratively Matched Statistics" (IMS), a novel method to accurately model the Lyα forest with collisionless N-body simulations, where the relevant density fluctuations are unresolved. We use a small-box, high-resolution hydrodynamic simulation to obtain the probability distribution function (PDF) and the power spectrum of the real-space Lyα forest flux. These two statistics are iteratively mapped onto a pseudo-flux field of an N-body simulation, which we construct from the matter density. We demonstrate that our method can reproduce the PDF, line-of-sight and 3D power spectra of the Lyα forest with good accuracy (7%, 4%, and 7% respectively). We quantify the performance of the commonly used Gaussian smoothing technique and show that it has significantly lower accuracy (20%-80%), especially for N-body simulations with achievable mean inter-particle separations in large-volume simulations. Finally, we show that IMS produces reasonable and smooth spectra, making it a powerful tool for modeling the IGM in large cosmological volumes and for producing realistic "mock" skies for Lyα forest surveys.

  1. MODELING THE Ly α FOREST IN COLLISIONLESS SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorini, Daniele; Oñorbe, José; Hennawi, Joseph F.

    2016-08-20

    Cosmological hydrodynamic simulations can accurately predict the properties of the intergalactic medium (IGM), but only under the condition of retaining the high spatial resolution necessary to resolve density fluctuations in the IGM. This resolution constraint prohibits simulating large volumes, such as those probed by BOSS and future surveys, like DESI and 4MOST. To overcome this limitation, we present “Iteratively Matched Statistics” (IMS), a novel method to accurately model the Lyα forest with collisionless N-body simulations, where the relevant density fluctuations are unresolved. We use a small-box, high-resolution hydrodynamic simulation to obtain the probability distribution function (PDF) and the power spectrum of the real-space Lyα forest flux. These two statistics are iteratively mapped onto a pseudo-flux field of an N-body simulation, which we construct from the matter density. We demonstrate that our method can reproduce the PDF, line of sight and 3D power spectra of the Lyα forest with good accuracy (7%, 4%, and 7% respectively). We quantify the performance of the commonly used Gaussian smoothing technique and show that it has significantly lower accuracy (20%–80%), especially for N-body simulations with achievable mean inter-particle separations in large-volume simulations. In addition, we show that IMS produces reasonable and smooth spectra, making it a powerful tool for modeling the IGM in large cosmological volumes and for producing realistic “mock” skies for Lyα forest surveys.

  2. Modeling turbulent/chemistry interactions using assumed pdf methods

    NASA Technical Reports Server (NTRS)

    Gaffney, R. L, Jr.; White, J. A.; Girimaji, S. S.; Drummond, J. P.

    1992-01-01

    Two assumed probability density functions (pdfs) are employed for computing the effect of temperature fluctuations on chemical reaction. The pdfs assumed for this purpose are the Gaussian and the beta densities of the first kind. The pdfs are first used in a parametric study to determine the influence of temperature fluctuations on the mean reaction-rate coefficients. Results indicate that temperature fluctuations significantly affect the magnitude of the mean reaction-rate coefficients of some reactions depending on the mean temperature and the intensity of the fluctuations. The pdfs are then tested on a high-speed turbulent reacting mixing layer. Results clearly show a decrease in the ignition delay time due to increases in the magnitude of most of the mean reaction rate coefficients.
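
    A sketch of the assumed-PDF averaging is given below for a beta density of the first kind and a generic Arrhenius rate coefficient k(T) = A T^b exp(-Ta/T); all parameter values are illustrative.

```python
import numpy as np
from scipy import stats, integrate

A, b, Ta = 1.0e8, 0.0, 15000.0           # illustrative Arrhenius parameters
Tmin, Tmax = 300.0, 3000.0
T_mean, T_rms = 1500.0, 300.0            # mean temperature, 20% rms fluctuation

def k(T):
    return A * T**b * np.exp(-Ta / T)    # instantaneous rate coefficient

# Assumed beta PDF of the first kind on theta = (T - Tmin)/(Tmax - Tmin),
# with parameters matched to the given mean and variance.
tm = (T_mean - Tmin) / (Tmax - Tmin)
tv = T_rms**2 / (Tmax - Tmin) ** 2
alpha = tm * (tm * (1.0 - tm) / tv - 1.0)
beta_p = (1.0 - tm) * (tm * (1.0 - tm) / tv - 1.0)
p = stats.beta(alpha, beta_p).pdf

k_mean, _ = integrate.quad(
    lambda th: k(Tmin + th * (Tmax - Tmin)) * p(th), 0.0, 1.0)

print(k_mean / k(T_mean))   # > 1: fluctuations amplify the mean rate coefficient
```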

  3. Model Considerations for Memory-based Automatic Music Transcription

    NASA Astrophysics Data System (ADS)

    Albrecht, Štěpán; Šmídl, Václav

    2009-12-01

    The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning. Many methods for estimation of the weights are available. These methods differ in the assumptions imposed on the weights. In the Bayesian paradigm, these assumptions are typically expressed in the form of a prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about the music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density using a combination of pdfs. The validity of the model is tested in simulations using synthetic data.
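
    As a crude stand-in for the full Bayesian treatment, the sketch below estimates the weights in the observation model (recording ≈ library × weights) by non-negative least squares, with non-negativity playing the role of one common prior assumption; the library and spectra are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(9)

# Library of "known sounds": magnitude spectra of 8 templates, 128 bins each
D = np.abs(rng.normal(size=(128, 8)))
w_true = np.array([0.0, 1.2, 0.0, 0.4, 0.0, 0.0, 0.9, 0.0])  # sparse weights
y = D @ w_true + 0.01 * rng.normal(size=128)                 # observed spectrum

# Non-negativity acts as a crude surrogate for a prior pdf on the weights;
# a full Bayesian treatment would place an explicit prior and compute a posterior.
w_hat, residual = nnls(D, y)
print(np.round(w_hat, 2))
```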

  4. Center for Modeling of Turbulence and Transition (CMOTT): Research Briefs, 1992

    NASA Technical Reports Server (NTRS)

    Liou, William W. (Editor)

    1992-01-01

    The progress of the Center for Modeling of Turbulence and Transition (CMOTT) is reported. The main objective of CMOTT is to develop, validate and implement turbulence and transition models for practical engineering flows. The flows of interest are three-dimensional, incompressible and compressible flows with chemical reaction. The research covers two-equation (e.g., k-ε) and algebraic Reynolds-stress models, second-moment closure models, probability density function (pdf) models, Renormalization Group Theory (RNG), Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS).

  5. Engineering Design Handbook. Maintainability Engineering Theory and Practice

    DTIC Science & Technology

    1976-01-01

    The probability density function (pdf) of the normal distribution (Ref. 22, Chapter 10, and Ref. 23, Chapter 1) has the equation f(x) = (1/(σ√(2π))) exp[−(x − μ)²/(2σ²)], where σ is the standard deviation.

  6. Density functional theory versus quantum Monte Carlo simulations of Fermi gases in the optical-lattice arena★

    NASA Astrophysics Data System (ADS)

    Pilati, Sebastiano; Zintchenko, Ilia; Troyer, Matthias; Ancilotto, Francesco

    2018-04-01

    We benchmark the ground-state energies and density profiles of atomic repulsive Fermi gases in optical lattices (OLs) computed via density functional theory (DFT) against the results of diffusion Monte Carlo (DMC) simulations. The main focus is on half-filled one-dimensional OLs, for which DMC simulations performed within the fixed-node approach provide unbiased results. This allows us to demonstrate that the local spin-density approximation (LSDA) to the exchange-correlation functional of DFT is very accurate in the weak and intermediate interaction regimes, and also to underline its limitations close to the strongly interacting Tonks-Girardeau limit and in very deep OLs. We also consider a three-dimensional OL at quarter filling, showing also in this case the high accuracy of the LSDA in the moderate interaction regime. The one-dimensional data provided in this study may represent a useful benchmark for further developing DFT methods beyond the LSDA, and they will hopefully motivate experimental studies to accurately measure the equation of state of Fermi gases in higher-dimensional geometries. Supplementary material in the form of one pdf file is available from the journal web page at https://doi.org/10.1140/epjb/e2018-90021-1.

  7. Blocking endocytosis in Drosophila's circadian pacemaker neurons interferes with the endogenous clock in a PDF-dependent way.

    PubMed

    Wülbeck, Corinna; Grieshaber, Eva; Helfrich-Förster, Charlotte

    2009-10-01

    The neuropeptide pigment-dispersing factor (PDF) plays an essential role in the circadian clock of the fruit fly Drosophila melanogaster, but many details of PDF signaling in the clock network are still unknown. We tried to interfere with PDF signaling by blocking the GTPase Shibire in PDF neurons. Shibire is an ortholog of the mammalian Dynamins and is essential for endocytosis of clathrin-coated vesicles at the plasma membrane. Such endocytosis is used for neurotransmitter reuptake by presynaptic neurons, which is a prerequisite of synaptic vesicle recycling, and receptor-mediated endocytosis in the postsynaptic neuron, which leads to signal termination. By blocking Shibire function via overexpression of a dominant negative mutant form of Shibire in PDF neurons, we slowed down the behavioral rhythm by 3 h. This effect was absent in PDF receptor null mutants, indicating that we interfered with PDF receptor-mediated endocytosis. Because we obtained similar behavioral phenotypes by increasing the PDF level in regions close to PDF neurons, we conclude that blocking Shibire did prolong PDF signaling in the neurons that respond to PDF. Obviously, terminating the PDF signaling via receptor-mediated endocytosis is a crucial step in determining the period of behavioral rhythms.

  8. An easy and effective approach to manage radiologic portable document format (PDF) files using iTunes.

    PubMed

    Qian, Li Jun; Zhou, Mi; Xu, Jian Rong

    2008-07-01

    The objective of this article is to explain an easy and effective approach for managing radiologic files in portable document format (PDF) using iTunes. PDF files are widely used as a standard file format for electronic publications as well as for medical online documents. Unfortunately, there is a lack of powerful software to manage numerous PDF documents. In this article, we explain how to use the hidden function of iTunes (Apple Computer) to manage PDF documents as easily as managing music files.

  9. Rao-Blackwellization for Adaptive Gaussian Sum Nonlinear Model Propagation

    NASA Technical Reports Server (NTRS)

    Semper, Sean R.; Crassidis, John L.; George, Jemin; Mukherjee, Siddharth; Singla, Puneet

    2015-01-01

    When dealing with imperfect data and general models of dynamic systems, the best estimate is always sought in the presence of uncertainty or unknown parameters. In many cases, as a first attempt, the extended Kalman filter (EKF) provides sufficient solutions to issues arising from nonlinear and non-Gaussian estimation problems, but these issues may lead to unacceptable performance and even divergence. In order to accurately capture the nonlinearities of most real-world dynamic systems, advanced filtering methods have been created to reduce filter divergence while enhancing performance. Approaches such as Gaussian sum filtering, grid-based Bayesian methods, and particle filters are well-known examples of advanced methods used to represent and recursively reproduce an approximation to the state probability density function (pdf). Some of these filtering methods were conceptually developed years before their widespread use became practical; advanced nonlinear filtering methods currently benefit from advances in computational speed, memory, and parallel processing. Grid-based methods, multiple-model approaches, and Gaussian sum filtering are numerical solutions that take advantage of different state coordinates or multiple-model formulations to reduce the number of approximations used. Choosing an efficient grid is very difficult for multi-dimensional state spaces, and oftentimes expensive computations must be done at each point. For the original Gaussian sum filter, a weighted sum of Gaussian density functions approximates the pdf but suffers at the update step from the individual component weight selections. In order to improve upon the original Gaussian sum filter, Ref. [2] introduces a weight update at the filter propagation stage instead of the measurement update stage. This weight update is performed by minimizing the integral square difference between the true forecast pdf and its Gaussian sum approximation. By adaptively updating each component weight during the nonlinear propagation stage, an approximation of the true pdf can be successfully reconstructed. Particle filtering (PF) methods have gained popularity recently for solving nonlinear estimation problems due to their straightforward approach and the processing capabilities mentioned above. The basic concept behind PF is to represent any pdf as a set of random samples. As the number of samples increases, they theoretically converge to an exact, equivalent representation of the desired pdf. When an estimated qth moment is needed, the samples are used for its construction, allowing further analysis of the pdf characteristics. However, filter performance deteriorates as the dimension of the state vector increases. To overcome this problem, Ref. [5] applies a marginalization technique for PF methods, decomposing the system into one linear and one nonlinear state estimation problem. The marginalization theory was originally developed by Rao and Blackwell independently. According to Ref. [6], it improves any given estimator under every convex loss function. The improvement comes from calculating a conditional expected value, often involving integrating out a supporting statistic. In other words, Rao-Blackwellization allows smaller, separate computations to be carried out while reaching the main objective of the estimator. In the case of improving an estimator's variance, any supporting statistic can be removed and its variance determined. Next, any other information that depends on the supporting statistic is found, along with its respective variance. A new approach is developed here by utilizing the strengths of the adaptive Gaussian sum propagation in Ref. [2] and the marginalization approach used for PF methods in Ref. [7]. In the following sections a modified filtering approach is presented, based on a special state-space model within nonlinear systems, to reduce the dimensionality of the optimization problem in Ref. [2]. First, the adaptive Gaussian sum propagation is explained, and then the new marginalized adaptive Gaussian sum propagation is derived. Finally, an example simulation is presented.
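
    The propagation-stage weight update described above is easy to picture in code: a pdf is represented as a weighted sum of fixed Gaussian components, and the weights are chosen to minimize the integral square difference to a target forecast density. The sketch below is illustrative only; the target density, component means and widths are invented, and the simplex constraint is enforced by crude clipping rather than the constrained optimization of Ref. [2].

```python
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical target forecast pdf (e.g., a skewed density a filter must track).
x = np.linspace(-6.0, 6.0, 2001)
target = 0.7 * gaussian(x, -1.0, 0.6) + 0.3 * gaussian(x, 2.0, 1.2)

# Fixed Gaussian components; only the weights are adapted, mirroring the
# propagation-stage weight update described in the abstract.
mus = np.linspace(-4.0, 4.0, 9)
sigmas = np.full_like(mus, 0.8)
Phi = np.stack([gaussian(x, m, s) for m, s in zip(mus, sigmas)], axis=1)

# Least-squares weights minimizing the integral square difference, then
# projected crudely onto the simplex (clip negative weights, renormalize).
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
w = np.clip(w, 0.0, None)
w /= w.sum()

approx = Phi @ w
ise = np.trapz((target - approx) ** 2, x)   # integral square error
print("weights:", np.round(w, 3), " ISE:", ise)
```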

  10. Investigation of seismicity after the initiation of a Seismic Electric Signal activity until the main shock

    PubMed Central

    Sarlis, N. V.; Skordas, E. S.; Lazaridou, M. S.; Varotsos, P. A.

    2008-01-01

    The behavior of seismicity in the area candidate to suffer a main shock is investigated after the observation of the Seismic Electric Signal activity until the impending main shock. This is based on the view that the occurrence of earthquakes is a critical phenomenon to which statistical dynamics may be applied. In the present work, analysing the time series of small earthquakes, the concept of natural time χ was used, and the results revealed that the approach to criticality itself can be manifested by the probability density function (PDF) of κ1 calculated over an appropriate statistical ensemble. Here, κ1 = ⟨χ²⟩ − ⟨χ⟩² is the variance of natural time, resulting from the power spectrum of the function Φ(ω) = Σ_{k=1}^{N} p_k exp(iωχ_k), where p_k is the normalized energy of the k-th small earthquake and ω the natural frequency. This PDF exhibits a maximum at κ1 ≈ 0.070 a few days before the main shock. Examples are presented, referring to the magnitude 6∼7 class earthquakes that occurred in Greece. PMID:18941306
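
    Since κ1 is just the variance of natural time weighted by the normalized energies, it can be computed in a few lines; the toy event energies below are synthetic, not a real catalog.

```python
import numpy as np

def kappa1(energies):
    """Variance of natural time: kappa_1 = <chi^2> - <chi>^2,
    with averages weighted by the normalized energies p_k."""
    p = np.asarray(energies, dtype=float)
    p = p / p.sum()                            # normalized energies p_k
    chi = np.arange(1, p.size + 1) / p.size    # natural time chi_k = k/N
    return np.sum(p * chi**2) - np.sum(p * chi)**2

# Hypothetical sequence of small-earthquake energies (arbitrary units).
rng = np.random.default_rng(0)
print(kappa1(rng.pareto(1.5, size=200) + 1.0))
```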

  11. Modeling non-Fickian dispersion by use of the velocity PDF on the pore scale

    NASA Astrophysics Data System (ADS)

    Kooshapur, Sheema; Manhart, Michael

    2015-04-01

    For obtaining a description of reactive flows in porous media, apart from the geometrical complications of resolving the velocities and scalar values, one has to deal with the additional reactive term in the transport equation. An accurate description of the interface of the reacting fluids, which is strongly influenced by dispersion, is essential for resolving this term. In REV-based simulations the reactive term needs to be modeled taking sub-REV fluctuations and possibly non-Fickian dispersion into account. Non-Fickian dispersion has been observed in strongly heterogeneous domains and in early phases of transport. A fully resolved solution of the Navier-Stokes and transport equations, which yields a detailed description of the flow properties, dispersion, interfaces of fluids, etc., is however not practical for domains containing more than a few thousand grains, due to the huge computational effort required. Through Probability Density Function (PDF) based methods, the velocity distribution in the pore space can facilitate the understanding and modelling of non-Fickian dispersion [1,2]. Our aim is to model the transition between non-Fickian and Fickian dispersion in a random sphere pack within the framework of a PDF based transport model proposed by Meyer and Tchelepi [1,3]. They proposed a stochastic transport model where velocity components of tracer particles are represented by a continuous Markovian stochastic process. In addition to [3], we consider the effects of pore scale diffusion and formulate a different stochastic equation for the increments in velocity space from first principles. To assess the terms in this equation, we performed Direct Numerical Simulations (DNS) solving the Navier-Stokes equation on a random sphere pack. We extracted the PDFs and statistical moments (up to the 4th moment) of the stream-wise velocity, u, and of the first- and second-order velocity derivatives, both unconditioned and conditioned on velocity. By using these data and combining the Taylor expansion of velocity increments, du, with the Langevin equation for point particles, we obtained the components of the velocity fluxes, which point to a drift and diffusion behavior in velocity space. Thus a partial differential equation for the velocity PDF has been formulated that constitutes an advection-diffusion equation in velocity space (a Fokker-Planck equation), in which the drift and diffusion coefficients are obtained using the velocity-conditioned statistics of the derivatives of the pore scale velocity field. This has been solved by both a Random Walk (RW) model and a Finite Volume method. We conclude that both methods are able to simulate the velocity PDF obtained by DNS. References: [1] D. W. Meyer, P. Jenny, H. A. Tchelepi, A joint velocity-concentration PDF method for tracer flow in heterogeneous porous media, Water Resour. Res., 46, W12522 (2010). [2] W. Nowak, R. L. Schwede, O. A. Cirpka, and I. Neuweiler, Probability density functions of hydraulic head and velocity in three-dimensional heterogeneous porous media, Water Resour. Res., 44, W08452 (2008). [3] D. W. Meyer, H. A. Tchelepi, Particle-based transport model with Markovian velocity processes for tracer dispersion in highly heterogeneous porous media, Water Resour. Res., 46, W11552 (2010).
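
    The random-walk solution of such a velocity-space Fokker-Planck equation amounts to Euler-Maruyama integration of a Langevin model with drift and diffusion coefficients in velocity space. In the study these coefficients come from velocity-conditioned DNS statistics; the functions a(u) and b(u) below are invented placeholders that only illustrate the structure.

```python
import numpy as np

# Hypothetical drift and diffusion coefficients in velocity space; in the
# study these are extracted from velocity-conditioned DNS statistics.
a = lambda u: -0.8 * (u - 1.0)              # drift toward a mean velocity
b = lambda u: 0.05 + 0.02 * (u - 1.0) ** 2  # velocity-dependent diffusion

rng = np.random.default_rng(1)
dt, nsteps = 1e-3, 5_000
u = np.full(2000, 1.0)                      # ensemble of tracer velocities
for _ in range(nsteps):
    u = u + a(u) * dt + np.sqrt(2.0 * b(u) * dt) * rng.standard_normal(u.size)

hist, edges = np.histogram(u, bins=60, density=True)  # stationary velocity PDF
print("velocity-PDF mode near u =", edges[np.argmax(hist)])
```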

  12. Total-scattering pair-distribution function of organic material from powder electron diffraction data.

    PubMed

    Gorelik, Tatiana E; Schmidt, Martin U; Kolb, Ute; Billinge, Simon J L

    2015-04-01

    This paper shows that pair-distribution function (PDF) analyses can be carried out on organic and organometallic compounds from powder electron diffraction data. Different experimental setups are demonstrated, including selected area electron diffraction and nanodiffraction in transmission electron microscopy, or nanodiffraction in scanning transmission electron microscopy modes. The methods were applied to organometallic complexes (chlorinated and unchlorinated copper phthalocyanine) and to purely organic compounds (quinacridone). The PDF curves from powder electron diffraction data, called ePDF, are in good agreement with PDF curves determined from X-ray powder data, demonstrating that the problems of obtaining kinematical scattering data and avoiding beam damage of the sample can be resolved.

  13. Dispersion of a Passive Scalar Fluctuating Plume in a Turbulent Boundary Layer. Part III: Stochastic Modelling

    NASA Astrophysics Data System (ADS)

    Marro, Massimo; Salizzoni, Pietro; Soulhac, Lionel; Cassiani, Massimo

    2018-06-01

    We analyze the reliability of the Lagrangian stochastic micromixing method in predicting higher-order statistics of the passive scalar concentration induced by an elevated source (of varying diameter) placed in a turbulent boundary layer. To that purpose we analyze two different modelling approaches by testing their results against the wind-tunnel measurements discussed in Part I (Nironi et al., Boundary-Layer Meteorology, 2015, Vol. 156, 415-446). The first is a probability density function (PDF) micromixing model that simulates the effects of the molecular diffusivity on the concentration fluctuations by taking into account the background particles. The second is a new model, named VPΓ, conceived in order to minimize the computational costs. This is based on the volumetric particle approach providing estimates of the first two concentration moments with no need for the simulation of the background particles. In this second approach, higher-order moments are computed based on the estimates of these two moments and under the assumption that the concentration PDF is a Gamma distribution. The comparisons concern the spatial distribution of the first four moments of the concentration and the evolution of the PDF along the plume centreline. The novelty of this work is twofold: (i) we perform a systematic comparison of the results of micro-mixing Lagrangian models against experiments providing profiles of the first four moments of the concentration within an inhomogeneous and anisotropic turbulent flow, and (ii) we show the reliability of the VPΓ model as an operational tool for the prediction of the PDF of the concentration.
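
    The moment reconstruction underlying the VPΓ model is compact enough to sketch: matching a Gamma pdf to the first two concentration moments gives shape k = μ²/σ² and scale θ = σ²/μ, and the n-th raw moment follows as θⁿΓ(k+n)/Γ(k). The numerical values below are invented.

```python
import math

def gamma_raw_moments(mu, var, nmax=4):
    """Raw moments E[C^n] of the Gamma pdf matched to mean mu and variance var."""
    k = mu**2 / var            # shape parameter
    theta = var / mu           # scale parameter
    return [theta**n * math.exp(math.lgamma(k + n) - math.lgamma(k))
            for n in range(1, nmax + 1)]

# Hypothetical first two concentration moments from the volumetric particles.
m = gamma_raw_moments(mu=0.2, var=0.05)
print("E[C], E[C^2], E[C^3], E[C^4] =", [round(v, 4) for v in m])
```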

  15. Modeling envelope statistics of blood and myocardium for segmentation of echocardiographic images.

    PubMed

    Nillesen, Maartje M; Lopata, Richard G P; Gerrits, Inge H; Kapusta, Livia; Thijssen, Johan M; de Korte, Chris L

    2008-04-01

    The objective of this study was to investigate the use of speckle statistics as a preprocessing step for segmentation of the myocardium in echocardiographic images. Three-dimensional (3D) and biplane image sequences of the left ventricle of two healthy children and one dog (beagle) were acquired. Pixel-based speckle statistics of manually segmented blood and myocardial regions were investigated by fitting various probability density functions (pdfs). The statistics of heart muscle and blood could both be optimally modeled by a K-pdf or Gamma-pdf (Kolmogorov-Smirnov goodness-of-fit test). Scale and shape parameters of both distributions could differentiate between blood and myocardium. Local estimation of these parameters was used to obtain parametric images, where window size was related to speckle size (5 x 2 speckles). Moment-based and maximum-likelihood estimators were used. Scale parameters were still able to differentiate blood from myocardium; however, smoothing of edges of anatomical structures occurred. Estimation of the shape parameter required a larger window size, leading to unacceptable blurring. Using these parameters as an input for segmentation resulted in unreliable segmentation. Adaptive mean squares filtering was then introduced using the moment-based scale parameter (σ²/μ) of the Gamma-pdf to automatically steer the two-dimensional (2D) local filtering process. This method adequately preserved sharpness of the edges. In conclusion, a trade-off between preservation of sharpness of edges and goodness-of-fit when estimating local shape and scale parameters is evident for parametric images. For this reason, adaptive filtering outperforms parametric imaging for the segmentation of echocardiographic images.
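
    The moment-based scale parameter σ²/μ of a Gamma pdf, estimated in a local window, is straightforward to compute. The sliding-window sketch below uses an invented window size and synthetic Gamma-distributed envelope data rather than actual echocardiographic speckle.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_gamma_params(env, win=(5, 9)):
    """Method-of-moments Gamma fit in a sliding window of an envelope image:
    scale = var/mean (sigma^2/mu), shape = mean^2/var."""
    v = sliding_window_view(env, win)
    mu = v.mean(axis=(-2, -1))
    var = v.var(axis=(-2, -1))
    return var / mu, mu**2 / var               # scale image, shape image

# Toy speckle statistics: two tissue classes with different Gamma parameters.
rng = np.random.default_rng(2)
blood = rng.gamma(shape=1.2, scale=8.0, size=(64, 64))
muscle = rng.gamma(shape=3.0, scale=15.0, size=(64, 64))
scale_img, shape_img = local_gamma_params(np.hstack([blood, muscle]))
print("mean local scale, blood half vs muscle half:",
      scale_img[:, :50].mean(), scale_img[:, -50:].mean())
```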

  16. Analytical evaluation of the combined influence of polarization mode dispersion and group velocity dispersion on the bit error rate performance of optical homodyne quadrature phase-shift keying systems

    NASA Astrophysics Data System (ADS)

    Taher, Kazi Abu; Majumder, Satya Prasad

    2017-12-01

    A theoretical approach is presented to evaluate the bit error rate (BER) performance of an optical fiber transmission system with quadrature phase-shift keying (QPSK) modulation under the combined influence of polarization mode dispersion (PMD) and group velocity dispersion (GVD) in a single-mode fiber (SMF). The analysis is carried out without and with polarization division multiplexed (PDM) transmission, considering a coherent homodyne receiver. The probability density function (pdf) of the random phase fluctuations due to PMD and GVD at the output of the receiver is determined analytically, considering the pdf of the differential group delay (DGD) to be a Maxwellian distribution and that of the GVD to be Gaussian. The exact pdf of the phase fluctuation due to PMD and GVD is also evaluated from its moments using a Monte Carlo simulation technique. The average BER is evaluated by averaging the conditional BER over the pdf of the random phase fluctuation. The BER performance results are evaluated for different system parameters. It is found that the PDM-QPSK coherent homodyne system suffers a larger power penalty than the homodyne QPSK system without PDM: the PDM-QPSK system suffers a penalty of 4.3 dB, whereas the power penalty of the QPSK system is 3.0 dB, at a BER of 10^-9 for a DGD of 0.8 Tb and a GVD of 1700 ps/nm. Analytical results are compared with experimental results reported earlier and found to be in good agreement.
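
    The averaging step, integrating a conditional BER over the Maxwellian pdf of DGD, can be mimicked by Monte Carlo: Maxwellian samples are the norms of 3-D Gaussian vectors. The conditional-BER function below is a hypothetical eye-closure stand-in, not the receiver model of the paper.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(3)

# Maxwellian DGD samples: the norm of a 3-D Gaussian vector is Maxwellian;
# sigma is set so that the sample mean equals the assumed mean DGD.
mean_dgd = 0.3                                   # in bit periods (assumed)
sigma = mean_dgd / np.sqrt(8.0 / np.pi)
dgd = np.linalg.norm(rng.normal(0.0, sigma, size=(100_000, 3)), axis=1)

def conditional_ber(tau, snr_db=16.0):
    """Hypothetical stand-in for the conditional BER: DGD closes the eye."""
    snr = 10.0 ** (snr_db / 10.0)
    eye = np.clip(1.0 - tau, 0.0, None)          # crude eye-closure model
    return 0.5 * erfc(np.sqrt(snr / 2.0) * eye)

print("average BER over the DGD pdf:", conditional_ber(dgd).mean())
```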

  17. Deformation in metallic glasses studied by synchrotron x-ray diffraction

    DOE PAGES

    Dmowski, Wojciech; Egami, Takeshi; Tong, Yang

    2016-01-11

    In this study, high mechanical strength is one of the superior properties of metallic glasses which render them promising as a structural material. However, understanding the process of mechanical deformation in strongly disordered matter, such as metallic glass, is exceedingly difficult because even an effort to describe the structure qualitatively is hampered by the absence of crystalline periodicity. In spite of such challenges, we demonstrate that high-energy synchrotron X-ray diffraction measurement under stress, using a two-dimensional detector coupled with the anisotropic pair-density function (PDF) analysis, has greatly facilitated the effort of unraveling complex atomic rearrangements involved in the elastic, anelastic, and plastic deformation of metallic glasses. Even though the PDF only provides information on the correlation between two atoms and not on many-body correlations, which are often necessary in elucidating various properties, by using stress as a means of exciting the system we can garner rich information on the nature of the atomic structure and local atomic rearrangements during deformation in glasses.

  18. Statistical Nature of Atomic Disorder in Irradiated Crystals.

    PubMed

    Boulle, A; Debelle, A

    2016-06-17

    Atomic disorder in irradiated materials is investigated by means of x-ray diffraction, using cubic SiC single crystals as a model material. It is shown that, besides the determination of depth-resolved strain and damage profiles, x-ray diffraction can be efficiently used to determine the probability density function (PDF) of the atomic displacements within the crystal. This task is achieved by analyzing the diffraction-order dependence of the damage profiles. We thereby demonstrate that atomic displacements undergo Lévy flights, with a displacement PDF exhibiting heavy tails [with a tail index in the γ = 0.73-0.37 range, i.e., far from the commonly assumed Gaussian case (γ = 2)]. It is further demonstrated that these heavy tails are crucial to account for the amorphization kinetics in SiC. From the retrieved displacement PDFs we introduce a dimensionless parameter f_D^XRD to quantify the disordering. f_D^XRD is found to be consistent both with independent ion-channeling measurements and with molecular dynamics calculations.
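
    A standard way to quantify such heavy tails from a sample of displacements is the Hill estimator of the tail index; the synthetic Pareto displacements below merely stand in for the XRD-retrieved displacement PDF.

```python
import numpy as np

def hill_tail_index(x, k=500):
    """Hill estimator of the tail index from the k largest order statistics."""
    xs = np.sort(np.asarray(x, dtype=float))
    top, threshold = xs[-k:], xs[-k - 1]
    return 1.0 / np.mean(np.log(top / threshold))

rng = np.random.default_rng(4)
gamma_true = 0.7                                   # heavy tail, far from Gaussian
disp = rng.pareto(gamma_true, size=50_000) + 1.0   # synthetic displacements
print("estimated tail index:", round(hill_tail_index(disp), 2))
```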

  20. Laser-diagnostic mapping of temperature and soot statistics in a 2-m diameter turbulent pool fire

    DOE PAGES

    Kearney, Sean P.; Grasser, Thomas W.

    2017-08-10

    We present spatial profiles of temperature and soot-volume-fraction statistics from a sooting 2-m base diameter turbulent pool fire, burning a 10%-toluene / 90%-methanol fuel mixture. Dual-pump coherent anti-Stokes Raman scattering and laser-induced incandescence are utilized to obtain radial profiles of temperature and soot probability density functions (pdf) as well as estimates of temperature/soot joint statistics at three vertical heights above the surface of the methanol/toluene fuel pool. Results are presented both in the fuel vapor-dome region at ¼ base diameter and in the actively burning region at ½ and ¾ diameters above the fuel surface. The spatial evolution of the soot and temperature pdfs is discussed, and profiles of the temperature and soot mean and rms statistics are provided. Joint temperature/soot statistics are presented as spatially resolved conditional averages across the fire plume, and in terms of a joint pdf obtained by including measurements from multiple spatial locations.

  1. The validity of multiphase DNS initialized on the basis of single--point statistics

    NASA Astrophysics Data System (ADS)

    Subramaniam, Shankar

    1999-11-01

    A study of the point-process statistical representation of a spray reveals that single-point statistical information contained in the droplet distribution function (ddf) is related to a sequence of single surrogate-droplet pdfs, which are in general different from the physical single-droplet pdfs. The results of this study have important consequences for the initialization and evolution of direct numerical simulations (DNS) of multiphase flows, which are usually initialized on the basis of single-point statistics such as the average number density in physical space. If multiphase DNS are initialized in this way, this implies that even the initial representation contains certain implicit assumptions concerning the complete ensemble of realizations, which are invalid for general multiphase flows. Also, the evolution of a DNS initialized in this manner is shown to be valid only if an as yet unproven commutation hypothesis holds true. Therefore, it is questionable to what extent DNS that are initialized in this manner constitute a direct simulation of the physical droplets.

  2. CFD modeling using PDF approach for investigating the flame length in rotary kilns

    NASA Astrophysics Data System (ADS)

    Elattar, H. F.; Specht, E.; Fouda, A.; Bin-Mahfouz, Abdullah S.

    2016-12-01

    Numerical simulations using computational fluid dynamics (CFD) are performed to investigate the flame length characteristics in rotary kilns using a probability density function (PDF) approach. A commercial CFD package (ANSYS-Fluent) is employed for this objective. A 2-D axisymmetric model is applied to study the effect of both operating and geometric parameters of the rotary kiln on the characteristics of the flame length. Three types of gaseous fuel are used in the present work: methane (CH4), carbon monoxide (CO), and biogas (50% CH4 + 50% CO2). A preliminary comparison of 2-D model outputs for free jet flames with available experimental data is carried out to select and validate a proper turbulence model for the present numerical simulations. The results showed that the excess air number, the diameter of the kiln air entrance, the consideration of radiation modeling, and the fuel type have pronounced effects on the flame length characteristics. Numerical correlations for the rotary kiln flame length are presented in terms of the studied kiln operating and geometric parameters within acceptable error.

  3. Consideration effect of wind farms on the network reconfiguration in the distribution systems in an uncertain environment

    NASA Astrophysics Data System (ADS)

    Rahmani, Kianoosh; Kavousifard, Farzaneh; Abbasi, Alireza

    2017-09-01

    This article proposes a novel probabilistic Distribution Feeder Reconfiguration (DFR) method that takes uncertainty impacts into account with high accuracy. To achieve this aim, different scenarios are generated to capture the uncertainty in the investigated quantities, namely the active and reactive load consumption and the active power generation of the wind power units. Notably, a normal Probability Density Function (PDF) is divided into several class intervals for each uncertain parameter, according to the desired accuracy. Besides, the Weibull PDF is utilised to model the wind generators and capture the variability of their power production. The proposed problem is solved by Fuzzy Adaptive Modified Particle Swarm Optimisation to find the optimal switching scheme in the multi-objective DFR. Moreover, this paper introduces two new mutation methods that adjust the inertia weight of PSO through fuzzy rules to enhance its global searching ability over the entire search space.
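
    The scenario-generation step, slicing a normal PDF into class intervals with their probabilities and drawing wind production from a Weibull PDF, might look like the following sketch; all distribution parameters are invented.

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(x, mu, sd):
    return 0.5 * (1.0 + erf((x - mu) / (sd * sqrt(2.0))))

def load_scenarios(mu, sd, n_classes=7, span=3.0):
    """Split a normal PDF into class intervals; return midpoints and probabilities."""
    edges = np.linspace(mu - span * sd, mu + span * sd, n_classes + 1)
    probs = np.diff([normal_cdf(e, mu, sd) for e in edges])
    mids = 0.5 * (edges[:-1] + edges[1:])
    return mids, probs / probs.sum()

mids, probs = load_scenarios(mu=100.0, sd=10.0)      # hypothetical load, MW
rng = np.random.default_rng(5)
wind = rng.weibull(2.0, size=7) * 15.0               # Weibull wind scenarios, MW
for m, p, w in zip(mids, probs, wind):
    print(f"load {m:6.1f} MW  p={p:.3f}  wind {w:5.1f} MW")
```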

  4. Quantum resonant activation.

    PubMed

    Magazzù, Luca; Hänggi, Peter; Spagnolo, Bernardo; Valenti, Davide

    2017-04-01

    Quantum resonant activation is investigated for the archetype setup of an externally driven two-state (spin-boson) system subjected to strong dissipation by means of both analytical and extensive numerical calculations. The phenomenon of resonant activation emerges in the presence of either randomly fluctuating or deterministic periodically varying driving fields. Addressing the incoherent regime, a characteristic minimum emerges in the mean first passage time to reach an absorbing neighboring state whenever the intrinsic time scale of the modulation matches the characteristic time scale of the system dynamics. For the case of deterministic periodic driving, the first passage time probability density function (pdf) displays a complex, multipeaked behavior, which depends crucially on the details of initial phase, frequency, and strength of the driving. As an interesting feature we find that the mean first passage time enters the resonant activation regime at a critical frequency ν* which depends very weakly on the strength of the driving. Moreover, we provide the relation between the first passage time pdf and the statistics of residence times.
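
    The time-scale matching behind resonant activation can be illustrated classically with an escape rate that flips between two values at a switching rate ν: the mean first passage time crosses over from the slow-modulation average of the two escape times toward the fast-modulation limit once ν matches the escape dynamics. This toy model uses arbitrary rates and classical dynamics, not the paper's spin-boson system, and it omits the spatial barrier dynamics needed for a full interior minimum.

```python
import numpy as np

rng = np.random.default_rng(6)

def mean_fpt(nu, k=(0.05, 2.0), n=5_000):
    """MFPT out of a metastable state whose escape rate flips between
    k[0] and k[1] at switching rate nu (dichotomous barrier modulation)."""
    total = 0.0
    for _ in range(n):
        t, s = 0.0, rng.integers(2)          # random initial barrier state
        while True:
            t_esc = rng.exponential(1.0 / k[s])
            t_sw = rng.exponential(1.0 / nu)
            if t_esc < t_sw:                 # escape happens before a flip
                total += t + t_esc
                break
            t += t_sw                        # barrier flips first
            s ^= 1
    return total / n

for nu in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"nu={nu:7.2f}  MFPT={mean_fpt(nu):7.2f}")
```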

  5. Effects of numerical dissipation and unphysical excursions on scalar-mixing estimates in large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Sharan, Nek; Matheou, Georgios; Dimotakis, Paul

    2017-11-01

    Artificial numerical dissipation decreases dispersive oscillations and can play a key role in mitigating unphysical scalar excursions in large eddy simulations (LES). Its influence on scalar mixing can be assessed through the resolved-scale scalar, Z, its probability density function (PDF), variance, spectra, and the budget of the horizontally averaged equation for Z². LES of incompressible temporally evolving shear flow enabled us to study the influence of numerical dissipation on unphysical scalar excursions and mixing estimates. Flows with different mixing behavior, with both marching and non-marching scalar PDFs, are studied. Scalar fields for each flow are compared for different grid resolutions and numerical scalar-convection term schemes. As expected, increasing numerical dissipation enhances scalar mixing in the development stage of shear flow characterized by organized large-scale pairings with a non-marching PDF, but has little influence in the self-similar stage of flows with marching PDFs. Flow parameters and regimes sensitive to numerical dissipation help identify approaches to mitigate unphysical excursions while minimizing dissipation.

  7. Multiscale understanding of tricalcium silicate hydration reactions.

    PubMed

    Cuesta, Ana; Zea-Garcia, Jesus D; Londono-Zuluaga, Diana; De la Torre, Angeles G; Santacruz, Isabel; Vallcorba, Oriol; Dapiaggi, Monica; Sanfélix, Susana G; Aranda, Miguel A G

    2018-06-04

    Tricalcium silicate, the main constituent of Portland cement, hydrates to produce crystalline calcium hydroxide and nanocrystalline calcium-silicate-hydrate (C-S-H) gel. This hydration reaction is poorly understood at the nanoscale. Understanding the atomic arrangement in nanocrystalline phases is intrinsically complicated, and this challenge is exacerbated by the presence of additional crystalline phase(s). Here, we use calorimetry and synchrotron X-ray powder diffraction to quantitatively follow the tricalcium silicate hydration process: i) its dissolution, ii) portlandite crystallization, and iii) C-S-H gel precipitation. Chiefly, the synchrotron pair distribution function (PDF) allows us to identify a defective clinotobermorite, Ca11Si9O28(OH)2·8.5H2O, as the nanocrystalline component of C-S-H. Furthermore, PDF analysis also indicates that the C-S-H gel contains monolayer calcium hydroxide which is stretched, as recently predicted by first-principles calculations. These outcomes, plus additional laboratory characterization, yield a multiscale picture of the C-S-H nanocomposite gel which explains the observed densities and Ca/Si atomic ratios at the nano- and mesoscales.

  8. Stochastic bifurcation in a model of love with colored noise

    NASA Astrophysics Data System (ADS)

    Yue, Xiaokui; Dai, Honghua; Yuan, Jianping

    2015-07-01

    In this paper, we examine the stochastic bifurcation induced by multiplicative Gaussian colored noise in a dynamical model of love, where the random factor is used to describe the complexity and unpredictability of psychological systems. First, the dynamics of the deterministic love-triangle model are briefly considered, including equilibrium points and their stability, chaotic behaviors and chaotic attractors. Then, the influences of Gaussian colored noise with different parameters are explored, such as the phase plots, top Lyapunov exponents, stationary probability density function (PDF) and stochastic bifurcation. A stochastic P-bifurcation, manifested as a qualitative change of the stationary PDF, is observed, and a bifurcation diagram on the parameter plane of correlation time and noise intensity is presented to examine the bifurcation behaviors in detail. Finally, the top Lyapunov exponent is computed to determine the D-bifurcation when the noise intensity reaches a critical value. By comparison, we find there is no connection between the two kinds of stochastic bifurcation.
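
    The stationary PDF and its qualitative change can be probed numerically by integrating a Langevin model driven by Ornstein-Uhlenbeck (colored) noise and histogramming a long trajectory. The one-dimensional bistable surrogate below is illustrative only, not the love-triangle model itself; D and tau are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def stationary_pdf(D=0.4, tau=0.5, dt=1e-3, nsteps=500_000):
    """Euler integration of x' = x - x^3 + eta(t), where eta is OU colored
    noise with intensity D and correlation time tau; returns a histogram."""
    x, eta = 0.0, 0.0
    xs = np.empty(nsteps // 10)
    for i in range(nsteps):
        x += (x - x**3 + eta) * dt
        eta += (-eta / tau) * dt + np.sqrt(2.0 * D * dt) / tau * rng.standard_normal()
        if i % 10 == 0:
            xs[i // 10] = x
    hist, edges = np.histogram(xs, bins=60, density=True)
    return hist, edges

hist, edges = stationary_pdf()
peaks = (hist[1:-1] > hist[:-2]) & (hist[1:-1] > hist[2:])
print("number of stationary-PDF peaks:", int(peaks.sum()))  # bimodal vs unimodal
```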

  9. SUGGEL: A Program Suggesting the Orbital Angular Momentum of a Neutron Resonance from the Magnitude of its Neutron Width

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oh, S.Y.

    2001-02-02

    The SUGGEL computer code has been developed to suggest a value for the orbital angular momentum of a neutron resonance that is consistent with the magnitude of its neutron width. The suggestion is based on the probability that a resonance having a certain value of gΓn is an l-wave resonance. The probability is calculated by using Bayes' theorem on the conditional probability. The probability density functions (pdfs) of gΓn for up to d-wave (l = 2) have been derived from the χ² distribution of Porter and Thomas. The pdfs take two possible channel spins into account. This code is a tool which evaluators will use to construct resonance parameters and help to assign resonance spin. The use of this tool is expected to reduce time and effort in the evaluation procedure, since the number of repeated runs of the fitting code (e.g., SAMMY) may be reduced.
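
    The Bayesian step is easy to sketch: under Porter and Thomas, gΓn for a given l-wave follows a χ² pdf with one degree of freedom scaled by that wave's average width, and Bayes' theorem turns the likelihoods into posterior probabilities. The average widths and flat priors below are invented, and the sketch ignores the penetrability and channel-spin details that SUGGEL includes.

```python
import numpy as np

def porter_thomas_pdf(g, avg):
    """chi-squared (1 dof) pdf for a width g with mean <g> = avg."""
    return np.exp(-g / (2.0 * avg)) / np.sqrt(2.0 * np.pi * g * avg)

def l_posterior(g_obs, avg_widths, priors):
    """Posterior P(l | g_obs) from Bayes' theorem over candidate l-waves."""
    like = np.array([porter_thomas_pdf(g_obs, a) for a in avg_widths])
    post = like * np.asarray(priors)
    return post / post.sum()

# Hypothetical average widths for s-, p-, and d-waves, with flat priors.
print(l_posterior(g_obs=2e-3, avg_widths=[5e-2, 5e-3, 5e-4],
                  priors=[1 / 3, 1 / 3, 1 / 3]))
```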

  10. Creation of the BMA ensemble for SST using a parallel processing technique

    NASA Astrophysics Data System (ADS)

    Kim, Kwangjin; Lee, Yang Won

    2013-10-01

    Although satellite products for the same variable share a common purpose, each has a different value because of its own inescapable uncertainty; the products also accumulate over long periods and come in many kinds and large volumes, so efforts to reduce the uncertainty and to handle the enormous data are necessary. In this paper, we create an ensemble Sea Surface Temperature (SST) using MODIS Aqua, MODIS Terra and COMS (Communication, Ocean and Meteorological Satellite). We used Bayesian Model Averaging (BMA) as the ensemble method. The principle of BMA is to synthesize the conditional probability density functions (PDF) of the members using posterior probabilities as weights; the posterior probabilities are estimated using the EM algorithm, and the BMA PDF is obtained as the weighted average. As a result, the ensemble SST showed the lowest RMSE and MAE, which demonstrates the applicability of BMA for satellite data ensembles. As future work, parallel processing techniques using the Hadoop framework will be adopted for more efficient computation of very big satellite data.
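
    A minimal version of the EM estimation of BMA weights, assuming Gaussian member errors with a shared variance and synthetic forecasts in place of the satellite SST products, might look like:

```python
import numpy as np

def bma_em(F, y, iters=200):
    """EM for BMA weights w and shared error variance s2, given member
    forecasts F (n_obs x n_members) and observations y."""
    n, m = F.shape
    w, s2 = np.full(m, 1.0 / m), np.var(y - F.mean(axis=1))
    for _ in range(iters):
        # E-step: responsibility of each member for each observation.
        g = w * np.exp(-0.5 * (y[:, None] - F) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
        z = g / g.sum(axis=1, keepdims=True)
        # M-step: update weights and the common variance.
        w = z.mean(axis=0)
        s2 = np.sum(z * (y[:, None] - F) ** 2) / n
    return w, s2

rng = np.random.default_rng(8)
truth = rng.normal(20.0, 2.0, size=500)            # synthetic SST "truth" (degC)
F = np.stack([truth + rng.normal(0.0, s, 500) for s in (0.5, 1.0, 2.0)], axis=1)
w, s2 = bma_em(F, truth + rng.normal(0.0, 0.3, 500))
print("BMA weights:", np.round(w, 3))              # should favor the accurate member
```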

  12. Remote control of renal physiology by the intestinal neuropeptide pigment-dispersing factor in Drosophila.

    PubMed

    Talsma, Aaron D; Christov, Christo P; Terriente-Felix, Ana; Linneweber, Gerit A; Perea, Daniel; Wayland, Matthew; Shafer, Orie T; Miguel-Aliaga, Irene

    2012-07-24

    The role of the central neuropeptide pigment-dispersing factor (PDF) in circadian timekeeping in Drosophila is remarkably similar to that of vasoactive intestinal peptide (VIP) in mammals. Like VIP, PDF is expressed outside the circadian network by neurons innervating the gut, but the function and mode of action of this PDF have not been characterized. Here we investigate the visceral roles of PDF by adapting cellular and physiological methods to the study of visceral responses to PDF signaling in wild-type and mutant genetic backgrounds. We find that intestinal PDF acts at a distance on the renal system, where it regulates ureter contractions. We show that PdfR, PDF's established receptor, is expressed by the muscles of the excretory system, and present evidence that PdfR-induced cAMP increases underlie the myotropic effects of PDF. These findings extend the similarities between PDF and VIP beyond their shared central role as circadian regulators, and uncover an unexpected endocrine mode of myotropic action for an intestinal neuropeptide on the renal system.

  14. Calibration and data collection protocols for reliable lattice parameter values in electron pair distribution function (ePDF) studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abeykoon, A. M. Milinda; Hu, Hefei; Wu, Lijun

    2015-02-01

    We explore and describe different protocols for calibrating electron pair distribution function (ePDF) measurements for quantitative studies of nano-materials. We find the most accurate approach to determining the camera length is to use a standard calibration sample of Au nanoparticles from the National Institute of Standards and Technology. Different protocols for data collection are also explored, as are possible operational errors, to find the best approaches for accurate data collection for quantitative ePDF studies.

  15. Electrical silencing of PDF neurons advances the phase of non-PDF clock neurons in Drosophila.

    PubMed

    Wu, Ying; Cao, Guan; Nitabach, Michael N

    2008-04-01

    Drosophila clock neurons exhibit self-sustaining cellular oscillations that rely in part on rhythmic transcriptional feedback loops. We have previously determined that electrical silencing of the pigment dispersing factor (PDF)-expressing lateral-ventral (LNV) pacemaker subset of fly clock neurons, via expression of an inward-rectifier K+ channel (Kir2.1), severely disrupts free-running rhythms of locomotor activity (most flies are arrhythmic, and those that are not exhibit weak short-period rhythms) and abolishes LNV molecular oscillation in constant darkness. PDF is known to be an important LNV output signal. Here we examine the effects of electrical silencing of the LNV pacemakers on molecular rhythms in other, nonsilenced, subsets of clock neurons. In contrast to the previously described cell-autonomous abolition of free-running molecular rhythms, we find that electrical silencing of the LNV pacemakers via Kir2.1 expression does not impair molecular rhythms in the LND, DN1, and DN2 subsets of clock neurons. However, free-running molecular rhythms in these non-LNV clock neurons occur with advanced phase. Electrical silencing of LNVs phenocopies the PDF null mutation (pdf01) at both behavioral and molecular levels, except for the complete abolition of free-running cellular oscillation in the LNVs themselves. LNV-silenced or pdf01 flies exhibit weak free-running behavioral rhythms with short period, and the molecular oscillation in non-LNV neurons phase advances in constant darkness. That LNV electrical silencing leads to the same behavioral and non-LNV molecular phenotypes as pdf01 suggests that persistence of LNV molecular oscillation in pdf01 flies has no functional effect, either on behavioral rhythms or on non-LNV molecular rhythms. We thus conclude that functionally relevant signals from LNVs to non-LNV clock neurons and other downstream targets rely both on PDF signaling and on LNV electrical activity, and that LNVs do not ordinarily send functionally relevant signals via PDF-independent mechanisms.

  16. Planar isotropy of passive scalar turbulent mixing with a mean perpendicular gradient.

    PubMed

    Danaila, L; Dusek, J; Le Gal, P; Anselmet, F; Brun, C; Pumir, A

    1999-08-01

    A recently proposed evolution equation [Vaienti et al., Physica D 85, 405 (1994)] for the probability density functions (PDF's) of turbulent passive scalar increments obtained under the assumptions of fully three-dimensional homogeneity and isotropy is submitted to validation using direct numerical simulation (DNS) results of the mixing of a passive scalar with a nonzero mean gradient by a homogeneous and isotropic turbulent velocity field. It is shown that this approach leads to a quantitatively correct balance between the different terms of the equation, in a plane perpendicular to the mean gradient, at small scales and at large Péclet number. A weaker assumption of homogeneity and isotropy restricted to the plane normal to the mean gradient is then considered to derive an equation describing the evolution of the PDF's as a function of the spatial scale and the scalar increments. A very good agreement between the theory and the DNS data is obtained at all scales. As a particular case of the theory, we derive a generalized form for the well-known Yaglom equation (the isotropic relation between the second-order moments for temperature increments and the third-order velocity-temperature mixed moments). This approach allows us to determine quantitatively how the integral scale properties influence the properties of mixing throughout the whole range of scales. In the simple configuration considered here, the PDF's of the scalar increments perpendicular to the mean gradient can be theoretically described once the sources of inhomogeneity and anisotropy at large scales are correctly taken into account.

  17. Combining density functional theory (DFT) and pair distribution function (PDF) analysis to solve the structure of metastable materials: the case of metakaolin.

    PubMed

    White, Claire E; Provis, John L; Proffen, Thomas; Riley, Daniel P; van Deventer, Jannie S J

    2010-04-07

    Understanding the atomic structure of complex metastable (including glassy) materials is of great importance in research and industry; however, such materials resist solution by most standard techniques. Here, a novel technique combining thermodynamics and local structure is presented to solve the structure of the metastable aluminosilicate material metakaolin (calcined kaolinite) without the use of chemical constraints. The structure is elucidated by iterating between least-squares real-space refinement using neutron pair distribution function data and geometry optimisation using density functional modelling. The resulting structural representation is both energetically feasible and in excellent agreement with experimental data. This accurate structural representation of metakaolin provides new insight into the local environment of the aluminium atoms, with evidence of the existence of tri-coordinated aluminium. The availability of this detailed, chemically feasible atomic description, obtained without artificially imposing constraints during the refinement process, opens the opportunity to tailor chemical and mechanical processes involving metakaolin and other complex metastable materials at the atomic level, to obtain optimal performance at the macro-scale.

  18. Impacts of icodextrin on integrin-mediated wound healing of peritoneal mesothelial cells.

    PubMed

    Matsumoto, Mika; Tamura, Masahito; Miyamoto, Tetsu; Furuno, Yumi; Kabashima, Narutoshi; Serino, Ryota; Shibata, Tatsuya; Kanegae, Kaori; Takeuchi, Masaaki; Abe, Haruhiko; Okazaki, Masahiro; Otsuji, Yutaka

    2012-06-14

    Exposure to glucose and its metabolites in peritoneal dialysis fluid (PDF) results in structural alterations of the peritoneal membrane. Icodextrin-containing PDF eliminates glucose and reduces deterioration of peritoneal membrane function, but direct effects of icodextrin molecules on peritoneal mesothelial cells have yet to be elucidated. We compared the impacts of icodextrin itself with those of glucose under PDF-free conditions on wound healing processes of injured mesothelial cell monolayers, focusing on integrin-mediated cell adhesion mechanisms. Regeneration processes of the peritoneal mesothelial cell monolayer were investigated employing an in vitro wound healing assay of cultured rat peritoneal mesothelial cells treated with icodextrin powder- or glucose-dissolved culture medium without PDF, as well as icodextrin- or glucose-containing PDF. The effects of icodextrin on integrin-mediated cell adhesions were examined by immunocytochemistry and Western blotting against focal adhesion kinase (FAK). Cell migration over fibronectin was inhibited in conventional glucose-containing PDF, while icodextrin-containing PDF exerted no significant inhibitory effects. Culture medium containing 1.5% glucose without PDF also inhibited wound healing of mesothelial cells, while 7.5% icodextrin-dissolved culture medium without PDF had no inhibitory effects. Glucose suppressed cell motility by inhibiting tyrosine phosphorylation of FAK, formation of focal adhesions, and cell spreading, while icodextrin had no effects on any of these mesothelial cell functions. Our results demonstrate icodextrin to have no adverse effects on wound healing processes of peritoneal mesothelial cells. Preservation of integrin-mediated cell adhesion might be one of the molecular mechanisms accounting for the superior biocompatibility of icodextrin-containing PDF. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. GW182 controls Drosophila circadian behavior and PDF-receptor signaling.

    PubMed

    Zhang, Yong; Emery, Patrick

    2013-04-10

    The neuropeptide PDF is crucial for Drosophila circadian behavior: it keeps circadian neurons synchronized. Here, we identify GW182 as a key regulator of PDF signaling. Indeed, GW182 downregulation results in phenotypes similar to those of Pdf and Pdf-receptor (Pdfr) mutants. gw182 genetically interacts with Pdfr and cAMP signaling, which is essential for PDFR function. GW182 mediates miRNA-dependent gene silencing through its interaction with AGO1. Consistently, GW182's AGO1 interaction domain is required for GW182's circadian function. Moreover, our results indicate that GW182 modulates PDFR signaling by silencing the expression of the cAMP phosphodiesterase DUNCE. Importantly, this repression is under photic control, and GW182 activity level, which is limiting in circadian neurons, influences the responses of the circadian neural network to light. We propose that GW182's gene silencing activity functions as a rheostat for PDFR signaling and thus profoundly impacts the circadian neural network and its response to environmental inputs. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Modeling of the reactant conversion rate in a turbulent shear flow

    NASA Technical Reports Server (NTRS)

    Frankel, S. H.; Madnia, C. K.; Givi, P.

    1992-01-01

    Results are presented of direct numerical simulations (DNS) of spatially developing shear flows under the influence of infinitely fast chemical reactions of the type A + B → Products. The simulation results are used to construct the compositional structure of the scalar field in a statistical manner. The results of this statistical analysis indicate that the use of a Beta density for the probability density function (PDF) of an appropriate Shvab-Zeldovich mixture fraction provides a very good estimate of the limiting bounds of the reactant conversion rate within the shear layer. This provides a strong justification for the implementation of this density in practical modeling of non-homogeneous turbulent reacting flows. However, the validity of the model cannot be generalized for predictions of higher order statistical quantities. A closed form analytical expression is presented for predicting the maximum rate of reactant conversion in non-homogeneous reacting turbulence.
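
    The presumed Beta-PDF approach referred to above fixes the Beta parameters from the mean and variance of the mixture fraction and integrates the fast-chemistry state relationship against the resulting density; the stoichiometric mixture fraction and moments below are invented.

```python
import numpy as np
from scipy import stats

def beta_params(mean, var):
    """Beta(a, b) matched to a mixture-fraction mean and variance
    (requires var < mean * (1 - mean))."""
    c = mean * (1.0 - mean) / var - 1.0
    return mean * c, (1.0 - mean) * c

def mean_product(mean, var, zs=0.3, n=4001):
    """Mean product mass fraction in the infinitely fast chemistry limit,
    integrated against the presumed Beta PDF of the mixture fraction."""
    a, b = beta_params(mean, var)
    z = np.linspace(1e-6, 1.0 - 1e-6, n)
    prod = np.minimum(z / zs, (1.0 - z) / (1.0 - zs))  # Burke-Schumann profile
    return np.trapz(prod * stats.beta.pdf(z, a, b), z)

print("mean product mass fraction:", round(mean_product(0.3, 0.03), 4))
```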

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaltonen, T.

    We search for resonant production of tt̄ pairs in 4.8 fb⁻¹ integrated luminosity of pp̄ collision data at √s = 1.96 TeV in the lepton+jets decay channel, where one top quark decays leptonically and the other hadronically. A matrix element reconstruction technique is used; for each event a probability density function (pdf) of the tt̄ candidate invariant mass is sampled. These pdfs are used to construct a likelihood function, whereby the cross section for resonant tt̄ production is estimated, given a hypothetical resonance mass and width. The data indicate no evidence of resonant production of tt̄ pairs. A benchmark model of leptophobic Z' → tt̄ is excluded with mZ' < 900 GeV at 95% confidence level.
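
    The likelihood construction can be caricatured as follows: each event contributes a sampled pdf over the candidate invariant mass, and a mixture of signal and background mass templates is scanned in the signal fraction. The templates, per-event pdf shapes, and pseudo-data below are all invented stand-ins for the matrix-element machinery.

```python
import numpy as np

rng = np.random.default_rng(10)
m_grid = np.linspace(300.0, 1200.0, 451)            # GeV, hypothetical grid

def template(center, width):
    t = np.exp(-0.5 * ((m_grid - center) / width) ** 2)
    return t / np.trapz(t, m_grid)

sig = template(900.0, 40.0)                         # hypothetical Z' signal shape
bkg = template(500.0, 200.0)                        # hypothetical background shape

# Per-event pdfs of the reconstructed mass: narrow Gaussians around sampled
# values, standing in for the matrix-element sampling of each event.
events = np.clip(rng.normal(500.0, 200.0, size=300), 320.0, 1180.0)
event_pdfs = np.stack([template(m, 30.0) for m in events])

def neg_log_likelihood(f_sig):
    mix = f_sig * sig + (1.0 - f_sig) * bkg
    p = np.trapz(event_pdfs * mix, m_grid, axis=1)  # overlap of event pdf and mixture
    return -np.sum(np.log(p))

fs = np.linspace(0.0, 0.2, 41)
nll = [neg_log_likelihood(f) for f in fs]
print("best-fit signal fraction:", fs[int(np.argmin(nll))])
```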

  2. Stochastic modeling of turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Fox, R. O.; Hill, J. C.; Gao, F.; Moser, R. D.; Rogers, M. M.

    1992-01-01

    Direct numerical simulations of a single-step irreversible chemical reaction with non-premixed reactants in forced isotropic turbulence at Rλ = 63, Da = 4.0, and Sc = 0.7 were made using 128 Fourier modes to obtain joint probability density functions (pdfs) and other statistical information to parameterize and test a Fokker-Planck turbulent mixing model. Preliminary results indicate that the modeled gradient stretching term for an inert scalar is independent of the initial conditions of the scalar field. The conditional pdf of scalar gradient magnitudes is found to be a function of the scalar until the reaction is largely completed. Alignment of concentration gradients with local strain rate and other features of the flow were also investigated.

  3. The photon content of the proton

    NASA Astrophysics Data System (ADS)

    Manohar, Aneesh V.; Nason, Paolo; Salam, Gavin P.; Zanderighi, Giulia

    2017-12-01

    The photon PDF of the proton is needed for precision comparisons of LHC cross sections with theoretical predictions. In a recent paper, we showed how the photon PDF could be determined in terms of the electromagnetic proton structure functions F2 and FL measured in electron-proton scattering experiments, and gave an explicit formula for the PDF including all terms up to next-to-leading order. In this paper we give details of the derivation. We obtain the photon PDF using the factorisation theorem and applying it to suitable BSM hard scattering processes. We also obtain the same PDF in a process-independent manner using the usual definition of PDFs in terms of light-cone Fourier transforms of products of operators. We show how our method gives an exact representation for the photon PDF in terms of F2 and FL, valid to all orders in QED and QCD, and including all non-perturbative corrections. This representation is then used to give an explicit formula for the photon PDF to one order higher than our previous result. We also generalise our results to obtain formulæ for the polarised photon PDF, as well as the photon TMDPDF. Using our formula, we derive the Pγi subset of DGLAP splitting functions to order ααs and α², which agree with known results. We give a detailed explanation of the approach that we follow to determine a photon PDF and its uncertainty within the above framework.

  4. The GABAA Receptor RDL Acts in Peptidergic PDF Neurons to Promote Sleep in Drosophila

    PubMed Central

    Chung, Brian Y.; Kilman, Valerie L.; Keath, J. Russel; Pitman, Jena L.; Allada, Ravi

    2011-01-01

    Sleep is regulated by a circadian clock that largely times sleep and wake to occur at specific times of day, and a sleep homeostat that drives sleep as a function of the duration of prior wakefulness [1]. To better understand the role of the circadian clock in sleep regulation, we have been using the fruit fly Drosophila melanogaster [2]. Fruit flies display all of the core behavioral features of sleep, including relative immobility, elevated arousal thresholds and homeostatic regulation [2, 3]. We assessed sleep-wake modulation by a core set of 20 circadian pacemaker neurons that express the neuropeptide PDF. We find that PDF neuron ablation, loss of pdf or its receptor pdfr results in increased sleep during the late night in light:dark (LD) conditions, and more prominent increases on the first subjective day of constant darkness (DD). Flies deploy similar genetic and neurotransmitter pathways to regulate sleep as their mammalian counterparts, including GABA [4]. We find that RNAi-mediated knockdown of the GABAA receptor gene, Resistant to dieldrin (Rdl), in PDF neurons reduced sleep, consistent with a role for GABA in inhibiting PDF neuron function. Patch clamp electrophysiology reveals GABA-activated picrotoxin-sensitive chloride currents on PDF+ neurons. In addition, RDL is detectable most strongly on the large subset of PDF+ pacemaker neurons. These results suggest that GABAergic inhibition of arousal-promoting PDF neurons is an important mode of sleep-wake regulation in vivo. PMID:19230663

  5. Looping probabilities of elastic chains: a path integral approach.

    PubMed

    Cotta-Ramusino, Ludovica; Maddocks, John H

    2010-11-01

    We consider an elastic chain at thermodynamic equilibrium with a heat bath, and derive an approximation to the probability density function, or pdf, governing the relative location and orientation of the two ends of the chain. Our motivation is to exploit continuum mechanics models for the computation of DNA looping probabilities, but here we focus on explaining the novel analytical aspects in the derivation of our approximation formula. Accordingly, and for simplicity, the current presentation is limited to the illustrative case of planar configurations. A path integral formalism is adopted, and, in the standard way, the first approximation to the looping pdf is obtained from a minimal energy configuration satisfying prescribed end conditions. Then we compute an additional factor in the pdf which encompasses the contributions of quadratic fluctuations about the minimum energy configuration along with a simultaneous evaluation of the partition function. The original aspects of our analysis are twofold. First, the quadratic Lagrangian describing the fluctuations has cross-terms that are linear in first derivatives. This seemingly small deviation from the structure of standard path integral examples complicates the necessary analysis significantly. Nevertheless, after a nonlinear change of variable of Riccati type, we show that the correction factor to the pdf can still be evaluated in terms of the solution to an initial value problem for the linear system of Jacobi ordinary differential equations associated with the second variation. The second novel aspect of our analysis is that we show that the Hamiltonian form of these linear Jacobi equations still provides the appropriate correction term in the inextensible, unshearable limit that is commonly adopted in polymer physics models of, e.g. DNA. Prior analyses of the inextensible case have had to introduce nonlinear and nonlocal integral constraints to express conditions on the relative displacement of the end points. Our approximation formula for the looping pdf is of quite general applicability as, in contrast to most prior approaches, no assumption is made of either uniformity of the elastic chain or of a straight intrinsic shape. If the chain is uniform the Jacobi system evaluated at certain minimum energy configurations has constant coefficients. In such cases our approximate pdf can be evaluated in an entirely explicit, closed form. We illustrate our analysis with a planar example of this type and compute an approximate probability of cyclization, i.e., of forming a closed loop, from a uniform elastic chain whose intrinsic shape is an open circular arc.
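
    Schematically (and only schematically; this is not the authors' exact formula), the structure described above can be summarized as

    ```latex
    % Semiclassical (Laplace) structure of the looping pdf: a Boltzmann
    % weight at the constrained energy minimizer times a fluctuation factor
    % obtained from the Jacobi initial value problem of the second variation.
    \[
      \rho \;\approx\; \frac{e^{-\beta E[q_{\min}]}}{Z}\,
      \bigl(\det M\bigr)^{-1/2},
    \]
    % with $\beta = 1/k_B T$ and $M$ assembled from solutions of the linear
    % Jacobi ODE system evaluated at the minimal energy configuration.
    ```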

  6. A comprehensive model to determine the effects of temperature and species fluctuations on reactions in turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Antaki, P. J.

    1981-01-01

    The joint probability distribution function (pdf), which is a modification of the bivariate Gaussian pdf, is discussed and results are presented for a global reaction model using the joint pdf. An alternative joint pdf is discussed. A criterion which permits the selection of temperature pdf's in different regions of turbulent, reacting flow fields is developed. Two principal approaches to the determination of reaction rates in computer programs containing detailed chemical kinetics are outlined. These models represent a practical solution to the modeling of species reaction rates in turbulent, reacting flows.
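
    For reference, the baseline that such a modified joint pdf builds on is the standard bivariate Gaussian density (generic variables x and y; the paper's specific modification is not reproduced here):

    ```latex
    \[
      p(x,y) = \frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}}
      \exp\!\left[-\frac{1}{2(1-\rho^2)}
      \left(\frac{(x-\mu_x)^2}{\sigma_x^2}
        - \frac{2\rho(x-\mu_x)(y-\mu_y)}{\sigma_x\sigma_y}
        + \frac{(y-\mu_y)^2}{\sigma_y^2}\right)\right]
    \]
    ```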

  7. Regional And Seasonal Aspects Of Within-The-Hour Tec Statistics

    NASA Astrophysics Data System (ADS)

    Koroglu, Ozan; Arikan, Feza; Koroglu, Meltem

    2015-04-01

    The ionosphere is a plasma layer of the atmosphere. Several mechanisms originating both from space and from the earth itself, such as solar radiation and geomagnetic effects, govern this plasma layer. The ionosphere plays an important role in HF and satellite communication and in space-based positioning systems. Therefore, determining the statistical behavior of the ionosphere is of utmost importance. The variability of the ionosphere has complex spatio-temporal characteristics, which depend on solar, geomagnetic, gravitational and seismic activities. Total Electron Content (TEC) is one of the major observables for investigating and determining this variability. In this study, the spatio-temporal within-the-hour statistical behavior of TEC is determined for Turkey, which is located at mid-latitude, using TEC estimates from the Turkish National Permanent GPS Network (TNPGN)-Active between the years 2009 and 2012. TEC estimates are obtained as IONOLAB-TEC, developed by the IONOLAB group (www.ionolab.org) at Hacettepe University. IONOLAB-TEC for each station in TNPGN-Active is organized in a database and grouped with respect to years, ionospheric seasons, hours, and regions of 2° in latitude by 3° in longitude. The data sets are used to calculate within-the-hour parametric Probability Density Functions (PDFs). For every year, every region and every hour, a representative PDF is determined. It is observed that TEC values have a strong hourly, seasonal and positional dependence in the east-west direction, and the growing trend shifts according to sunrise and sunset times. The data are distributed predominantly as Lognormal and Weibull. The averages and standard deviations of the chosen distributions follow the trends of the 24 h diurnal and 11-year solar cycle periods. The regional and seasonal behavior of the PDFs is investigated using a representative GPS station within each region. Within-the-hour PDF estimates are grouped into ionospheric seasons as Winter, Summer, March equinox and September equinox. In the winter and summer seasons, the Lognormal distribution is observed. During equinox seasons, the Weibull distribution is observed more frequently. Furthermore, all hourly TEC values in the same region are combined in order to improve the reliability and accuracy of the probability density function estimates. It is observed that, being in the mid-latitude region, the ionosphere over Turkey has robust characteristics, distributed as Lognormal and Weibull. Statistical observations on PDF estimates of TEC over Turkey will contribute to the development of a regional and seasonal random field model, which will further contribute to HF channel characterization. This study is supported by a joint grant of TUBITAK 112E568 and RFBR 13-02-91370-CT_a.
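
    A minimal sketch of the distribution-selection step, assuming synthetic stand-in data for the hourly TEC values of one region/season bin (the study's actual estimates come from IONOLAB-TEC):

    ```python
    import numpy as np
    from scipy import stats

    # Stand-in for hourly TEC estimates within one region/season bin (TECU).
    rng = np.random.default_rng(1)
    tec = rng.lognormal(mean=2.5, sigma=0.3, size=500)

    # Fit the candidate parametric PDFs; location fixed at zero since TEC > 0.
    ln_shape, ln_loc, ln_scale = stats.lognorm.fit(tec, floc=0.0)
    wb_shape, wb_loc, wb_scale = stats.weibull_min.fit(tec, floc=0.0)

    # Choose the representative PDF by comparing log-likelihoods.
    ll_ln = stats.lognorm.logpdf(tec, ln_shape, ln_loc, ln_scale).sum()
    ll_wb = stats.weibull_min.logpdf(tec, wb_shape, wb_loc, wb_scale).sum()
    print("Lognormal" if ll_ln > ll_wb else "Weibull")
    ```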

  8. Detecting phase separation of freeze-dried binary amorphous systems using pair-wise distribution function and multivariate data analysis.

    PubMed

    Chieng, Norman; Trnka, Hjalte; Boetker, Johan; Pikal, Michael; Rantanen, Jukka; Grohganz, Holger

    2013-09-15

    The purpose of this study is to investigate the use of multivariate data analysis of powder X-ray diffraction-pair-wise distribution function (PXRD-PDF) data to detect phase separation in freeze-dried binary amorphous systems. Polymer-polymer and polymer-sugar binary systems at various ratios were freeze-dried. All samples were analyzed by PXRD, transformed to PDF and analyzed by principal component analysis (PCA). These results were validated by differential scanning calorimetry (DSC) through characterization of the glass transition of the maximally freeze-concentrated solute (Tg'). Analysis of PXRD-PDF data using PCA provides a clearer 'miscible' or 'phase-separated' interpretation through the distribution pattern of samples on a score plot compared to the residual-plot method. In a phase-separated system, samples were found to be evenly distributed around the theoretical PDF profile. For systems that were miscible, a clear deviation of samples away from the theoretical PDF profile was observed. Moreover, PCA allows simultaneous analysis of replicate samples. The phase behavior determined by the PXRD-PDF-PCA method was in agreement with the DSC results. Overall, the combined PXRD-PDF-PCA approach improves the clarity of the PXRD-PDF results and can be used as an alternative explorative data analytical tool for detecting phase separation in freeze-dried binary amorphous systems. Copyright © 2013 Elsevier B.V. All rights reserved.
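
    A minimal sketch of the PCA stage under stated assumptions (synthetic curves standing in for the measured G(r) data; scikit-learn is used for illustration, not necessarily the authors' software):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical PXRD-PDF curves: rows = samples (replicates of each
    # binary mixture ratio), columns = G(r) on a common r-grid.
    rng = np.random.default_rng(2)
    n_samples, n_r = 24, 600
    pdfs = rng.normal(size=(n_samples, n_r))   # stand-in for measured G(r)

    # PCA on mean-centered curves; the scores of the leading components
    # show whether replicates scatter around the theoretical (weighted-sum)
    # PDF profile or deviate from it systematically.
    pca = PCA(n_components=2)
    scores = pca.fit_transform(pdfs - pdfs.mean(axis=0))
    print(scores.shape, pca.explained_variance_ratio_)
    ```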

  9. Anatomical characterization of PDF-tri neurons and peptidergic neurons associated with eclosion behavior in Drosophila.

    PubMed

    Selcho, Mareike; Mühlbauer, Barbara; Hensgen, Ronja; Shiga, Sakiko; Wegener, Christian; Yasuyama, Kouji

    2018-06-01

    The peptidergic Pigment-dispersing factor (PDF)-Tri neurons are a group of non-clock neurons that appear transiently around the time of adult ecdysis (= eclosion) in the fruit fly Drosophila melanogaster. This specific developmental pattern points to a function of these neurons in eclosion or in other processes that are active around the pupal-adult transition. As a first step toward understanding the role of these neurons, we here characterize the anatomy of the PDF-Tri neurons. In addition, we describe, down to the single-cell level in the pharate adult brain, a further set of peptidergic neurons that have been associated with eclosion behavior: eclosion hormone (EH) and crustacean cardioactive peptide (CCAP) neurons. PDF-Tri neurons as well as CCAP neurons co-express a classical transmitter, indicated by the occurrence of small clear vesicles in addition to the dense-core vesicles containing the peptides. In the tritocerebrum, gnathal ganglion and superior protocerebrum, PDF-Tri neurites contain peptidergic varicosities and both pre- and postsynaptic sites, suggesting that the PDF-Tri neurons are modulatory rather than pure interneurons that connect the subesophageal zone with the superior protocerebrum. The extensive overlap of PDF-Tri arborizations with neurites of CCAP- and EH-expressing neurons in distinct brain regions provides anatomical evidence for a possible function of the PDF-Tri neurons in eclosion behavior. © 2018 Wiley Periodicals, Inc.

  10. Enhanced hydrophobicity and volatility of submicron aerosols under severe emission control conditions in Beijing

    NASA Astrophysics Data System (ADS)

    Wang, Yuying; Zhang, Fang; Li, Zhanqing; Tan, Haobo; Xu, Hanbing; Ren, Jingye; Zhao, Jian; Du, Wei; Sun, Yele

    2017-04-01

    A series of strict emission control measures was implemented in Beijing and the surrounding seven provinces to ensure good air quality during the 2015 China Victory Day parade, providing a unique opportunity to investigate the anthropogenic impact on aerosol properties. Submicron aerosol hygroscopicity and volatility were measured during and after the control period using a hygroscopic and volatile tandem differential mobility analyzer (H/V-TDMA) system. Three periods, namely the control clean period (Clean1), the non-control clean period (Clean2), and the non-control pollution period (Pollution), were selected to study the effect of the emission control measures on aerosol hygroscopicity and volatility. Aerosol particles became more hydrophobic and volatile due to the emission control measures. The hygroscopicity parameter (κ) of 40-200 nm particles decreased by 32.0-8.5 % during the Clean1 period relative to the Clean2 period, while the volatile shrink factor (SF) of 40-300 nm particles decreased by 7.5-10.5 %. The emission controls also changed the diurnal variation patterns of both the probability density function of κ (κ-PDF) and the probability density function of SF (SF-PDF). During Clean1 the κ-PDF showed one nearly hydrophobic (NH) mode for particles in the nucleation mode, which was likely due to the dramatic reduction in industrial emissions of inorganic trace gases. Compared to the Pollution period, particles observed during the Clean1 and Clean2 periods exhibited a more significant nonvolatile (NV) mode throughout the day, suggesting a more externally mixed state, particularly for the 150 nm particles. Aerosol hygroscopicities increased as particle sizes increased, with the greatest increases seen during the Pollution period. Accordingly, the aerosol volatility became weaker (i.e., SF increased) as particle sizes increased during the Clean1 and Clean2 periods, but no apparent trend was observed during the Pollution period. Based on a correlation analysis of the number fractions of NH and NV particles, we found a higher number fraction of hydrophobic and volatile particles during the emission control period.

  12. Modeling the Bergeron-Findeisen Process Using PDF Methods With an Explicit Representation of Mixing

    NASA Astrophysics Data System (ADS)

    Jeffery, C.; Reisner, J.

    2005-12-01

    Currently, the accurate prediction of cloud droplet and ice crystal number concentrations in cloud-resolving, numerical weather prediction and climate models is a formidable challenge. The Bergeron-Findeisen process, in which ice crystals grow by vapor deposition at the expense of super-cooled droplets, is expected to be inhomogeneous in nature (some droplets will evaporate completely in centimeter-scale filaments of sub-saturated air during turbulent mixing while others remain unchanged [Baker et al., QJRMS, 1980]) and is unresolved at even cloud-resolving scales. Despite the large body of observational evidence in support of the inhomogeneous mixing process affecting cloud droplet number [most recently, Brenguier et al., JAS, 2000], it is poorly understood and has yet to be parameterized and incorporated into a numerical model. In this talk, we investigate the Bergeron-Findeisen process using a new approach based on simulations of the probability density function (PDF) of relative humidity during turbulent mixing. PDF methods offer a key advantage over Eulerian (spatial) models of cloud mixing and evaporation: the low-probability (cm-scale) filaments of entrained air are explicitly resolved (in probability space) during the mixing event even though their spatial shape, size and location remain unknown. Our PDF approach reveals the following features of the inhomogeneous mixing process during the isobaric turbulent mixing of two parcels containing super-cooled water and ice, respectively: (1) The scavenging of super-cooled droplets is inhomogeneous in nature; some droplets evaporate completely at early times while others remain unchanged. (2) The degree of total droplet evaporation during the initial mixing period depends linearly on the mixing fractions of the two parcels and logarithmically on the Damköhler number (Da), the ratio of turbulent to evaporative time-scales. (3) Our simulations predict that the PDF of Lagrangian (time-integrated) subsaturation (S) goes as S^-1 at high Da. This behavior results from a Gaussian mixing closure and requires observational validation.

  13. Quantifying the interplay effect in prostate IMRT delivery using a convolution-based method.

    PubMed

    Li, Haisen S; Chetty, Indrin J; Solberg, Timothy D

    2008-05-01

    The authors present a segment-based convolution method to account for the interplay effect between intrafraction organ motion and the multileaf collimator position for each particular segment in intensity modulated radiation therapy (IMRT) delivered in a step-and-shoot manner. In this method, the static dose distribution attributed to each segment is convolved with the probability density function (PDF) of motion during delivery of the segment, whereas in the conventional convolution method ("average-based convolution"), the static dose distribution is convolved with the PDF averaged over an entire fraction, an entire treatment course, or even an entire patient population. In the case of IMRT delivered in a step-and-shoot manner, the average-based convolution method assumes that in each segment the target volume experiences the same motion pattern (PDF) as that of the population. In the segment-based convolution method, the dose during each segment is calculated by convolving the static dose with the motion PDF specific to that segment, allowing both intrafraction motion and the interplay effect to be accounted for in the dose calculation. Intrafraction prostate motion data from a population of 35 patients tracked using the Calypso system (Calypso Medical Technologies, Inc., Seattle, WA) were used to generate motion PDFs. These were then convolved with dose distributions from clinical prostate IMRT plans. For a single segment with a small number of monitor units, the interplay effect introduced errors of up to 25.9% in the mean CTV dose, compared against the planned dose evaluated using the PDF of the entire fraction. In contrast, the interplay effect reduced the minimum CTV dose by 4.4%, and the CTV generalized equivalent uniform dose by 1.3%, in single-fraction plans. For entire treatment courses delivered in either a hypofractionated (five fractions) or conventional (> 30 fractions) regimen, the discrepancy in total dose due to the interplay effect was negligible.
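
    A one-dimensional sketch of the convolution idea, with a hypothetical Gaussian motion PDF standing in for the measured Calypso PDFs:

    ```python
    import numpy as np

    # 1-D sketch: convolve a static segment dose profile with a motion PDF.
    x = np.linspace(-30, 30, 601)                      # position (mm)
    static_dose = np.where(np.abs(x) < 10, 1.0, 0.0)   # idealized segment field

    # Motion PDF specific to this segment (hypothetical Gaussian, 3 mm sigma).
    sigma = 3.0
    pdf = np.exp(-0.5 * (x / sigma) ** 2)
    pdf /= pdf.sum()                                   # discrete normalization

    # Blurred (motion-averaged) dose for this segment; summing such terms
    # over all segments gives the interplay-aware estimate described above.
    blurred = np.convolve(static_dose, pdf, mode="same")
    ```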

  14. Evolution of the concentration PDF in random environments modeled by global random walk

    NASA Astrophysics Data System (ADS)

    Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter

    2013-04-01

    The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing also can be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the demanded computing resources increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produces numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration PDF. The algorithm consists of a superposition, on a regular lattice, of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated with a single particle. The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and speeds up the computation by orders of magnitude. The approach is illustrated for the transport of passive scalars in heterogeneous aquifers, with hydraulic conductivity modeled as a random field.
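
    A minimal 1-D sketch of the global random walk idea under illustrative parameters (not the paper's algorithm in full): all particles at a lattice site are scattered in one binomial draw rather than moved one by one.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    nx, steps = 200, 400
    n = np.zeros(nx, dtype=np.int64)
    n[nx // 2] = 10**6               # all particles start at the center site

    r = 0.5          # fraction that jumps each step (sets the diffusion rate)
    p_right = 0.55   # jump asymmetry (sets the drift)

    for _ in range(steps):
        jump = rng.binomial(n, r)             # how many leave each site
        right = rng.binomial(jump, p_right)   # of those, how many go right
        left = jump - right
        n = n - jump
        n[1:] += right[:-1]                   # shift right-movers one site
        n[:-1] += left[1:]                    # shift left-movers one site
        # movers that step off the domain edges are simply dropped here

    pdf_estimate = n / n.sum()                # concentration PDF estimate
    ```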

  15. An analysis of intrinsic variations of low-frequency shear wave speed in a stochastic tissue model: the first application for staging liver fibrosis

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Wang, Min; Jiang, Jingfeng

    2017-02-01

    Shear wave elastography is increasingly being used to non-invasively stage liver fibrosis by measuring shear wave speed (SWS). This study quantitatively investigates intrinsic variations among SWS measurements obtained from heterogeneous media such as fibrotic livers. More specifically, it aims to demonstrate that intrinsic variations in SWS measurements, in general, follow a non-Gaussian distribution and are related to the heterogeneous nature of the medium being measured. Using the principle of maximum entropy (ME), our primary objective is to derive a probability density function (PDF) of the SWS distribution in conjunction with a lossless stochastic tissue model. Our secondary objective is to evaluate the performance of the proposed PDF against three other commonly used PDFs using Monte Carlo (MC)-simulated shear wave (SW) data. Based on statistical evaluation criteria, initial results showed that the derived PDF fits the MC-simulated SWS data better than the other three PDFs. It was also found that SW fronts stabilized after a short (compared with the SW wavelength) travel distance in lossless media. Furthermore, in lossless media, the distance required to stabilize the SW propagation was not correlated with the SW wavelength at the low frequencies investigated (i.e. 50, 100 and 150 Hz). Examination of the MC simulation data suggests that elastic (shear) wave scattering became more pronounced when the volume fraction of hard inclusions increased from 10 to 30%. In conclusion, using the principle of ME, we theoretically demonstrated for the first time that SWS measurements in this model follow a non-Gaussian distribution. Preliminary data indicated that the proposed PDF can quantitatively represent intrinsic variations in SWS measurements simulated using a two-phase random medium model. The advantages of the proposed PDF are its physically meaningful parameters and solid theoretical basis.

  16. Margin selection to compensate for loss of target dose coverage due to target motion during external‐beam radiation therapy of the lung

    PubMed Central

    Osei, Ernest; Barnett, Rob

    2015-01-01

    The aim of this study is to provide guidelines for the selection of external‐beam radiation therapy target margins to compensate for target motion in the lung during treatment planning. A convolution model was employed to predict the effect of target motion on the delivered dose distribution. The accuracy of the model was confirmed with radiochromic film measurements in both static and dynamic phantom modes. 502 unique patient breathing traces were recorded and used to simulate the effect of target motion on a dose distribution. A 1D probability density function (PDF) representing the position of the target throughout the breathing cycle was generated from each breathing trace obtained during 4D CT. Changes in the target D95 (the minimum dose received by 95% of the treatment target) due to target motion were analyzed and shown to correlate with the standard deviation of the PDF. Furthermore, the amount of target D95 recovered per millimeter of increased field width was also shown to correlate with the standard deviation of the PDF. The sensitivity of changes in dose coverage with respect to target size was also determined. Margin selection recommendations that can be used to compensate for loss of target D95 were generated based on the simulation results. These results are discussed in the context of clinical plans. We conclude that, for PDF standard deviations less than 0.4 cm with target sizes greater than 5 cm, little or no additional margins are required. Targets which are smaller than 5 cm with PDF standard deviations larger than 0.4 cm are most susceptible to loss of coverage. The largest additional required margin in this study was determined to be 8 mm. PACS numbers: 87.53.Bn, 87.53.Kn, 87.55.D‐, 87.55.Gh

  17. The pdf approach to turbulent polydispersed two-phase flows

    NASA Astrophysics Data System (ADS)

    Minier, Jean-Pierre; Peirano, Eric

    2001-10-01

    The purpose of this paper is to develop a probabilistic approach to turbulent polydispersed two-phase flows. The two-phase flows considered are composed of a continuous phase, which is a turbulent fluid, and a dispersed phase, which represents an ensemble of discrete particles (solid particles, droplets or bubbles). Since it combines the difficulties of turbulent flows and of particle motion, the challenge is to work out a general modelling approach that meets three requirements: to treat the physically relevant phenomena accurately, to provide enough information to address issues of complex physics (combustion, polydispersed particle flows, …) and to remain tractable for general non-homogeneous flows. The present probabilistic approach models the statistical dynamics of the system and consists of simulating the joint probability density function (pdf) of a number of fluid and discrete particle properties. A new point is that both the fluid and the particles are included in the pdf description. The derivation of the joint pdf model for the fluid and for the discrete particles is worked out in several steps. The mathematical properties of stochastic processes are first recalled. The various hierarchies of pdf descriptions are detailed and the physical principles that are used in the construction of the models are explained. The Lagrangian one-particle probabilistic description is developed first for the fluid alone, then for the discrete particles and finally for the joint fluid and particle turbulent systems. In the case of the probabilistic description for the fluid alone or for the discrete particles alone, numerical computations are presented and discussed to illustrate how the method works in practice and the kind of information that can be extracted from it. Comments on the current modelling state and propositions for future investigations which try to link the present work with other ideas in physics are made at the end of the paper.

  18. A New LES/PDF Method for Computational Modeling of Turbulent Reacting Flows

    NASA Astrophysics Data System (ADS)

    Turkeri, Hasret; Muradoglu, Metin; Pope, Stephen B.

    2013-11-01

    A new LES/PDF method is developed for computational modeling of turbulent reacting flows. The open source package OpenFOAM is adopted as the LES solver and combined with the particle-based Monte Carlo method to solve the LES/PDF model equations. The dynamic Smagorinsky model is employed to account for the subgrid-scale motions. The LES solver is first validated for the Sandia Flame D using a steady flamelet method in which the chemical compositions, density and temperature fields are parameterized by the mean mixture fraction and its variance. In this approach, the modeled transport equations for the mean mixture fraction and the mean square of the mixture fraction are solved, and the variance is then computed from its definition. The results are found to be in good agreement with the experimental data. The LES solver is then combined with the particle-based Monte Carlo algorithm to form a complete solver for the LES/PDF model equations. The in situ adaptive tabulation (ISAT) algorithm is incorporated into the LES/PDF method for efficient implementation of detailed chemical kinetics. The LES/PDF method is also applied to the Sandia Flame D using the GRI-Mech 3.0 chemical mechanism, and the results are compared with the experimental data and the earlier PDF simulations. Supported by the Scientific and Technical Research Council of Turkey (TUBITAK), Grant No. 111M067.
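
    The variance computation mentioned above is just the definitional identity relating the two transported moments of the mixture fraction Z (written here in generic mean notation; whether the solver uses Favre averaging is an assumption of this sketch):

    ```latex
    \[
      \overline{Z'^{2}} \;=\; \overline{Z^{2}} - \overline{Z}^{\,2}
    \]
    ```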

  19. From average to local structure: a Rietveld and an atomic pair distribution function (PDF) study of selenium clusters in zeolite-NdY.

    PubMed

    Abeykoon, A M Milinda; Donner, Wolfgang; Brunelli, Michela; Castro-Colin, Miguel; Jacobson, Allan J; Moss, Simon C

    2009-09-23

    The structure of Se particles in the approximately 13 Å diameter α-cages of zeolite NdY has been determined by Rietveld refinement and pair distribution function (PDF) analysis of X-ray data. With the diffuse scattering subtracted, an average structure comprising an undistorted framework containing nanoclusters of 20 Se atoms is observed. The intracluster correlations and the cluster-framework correlations which give rise to the diffuse scattering were modeled using PDF analysis.

  20. Pigment-Dispersing Factor Signaling and Circadian Rhythms in Insect Locomotor Activity

    PubMed Central

    Shafer, Orie T.; Yao, Zepeng

    2014-01-01

    Though expressed in relatively few neurons in insect nervous systems, pigment-dispersing factor (PDF) plays many roles in the control of behavior and physiology. PDF's role in circadian timekeeping is its best-understood function and the focus of this review. Here we recount the isolation and characterization of insect PDFs, review the evidence that PDF acts as a circadian clock output factor, and discuss emerging models of how PDF functions within the circadian clock neuron network of Drosophila, the species in which this peptide's circadian roles are best understood. PMID:25386391

  1. Signaling of Pigment-Dispersing Factor (PDF) in the Madeira Cockroach Rhyparobia maderae

    PubMed Central

    Funk, Nico W.; Giese, Maria; Baz, El-Sayed; Stengl, Monika

    2014-01-01

    The insect neuropeptide pigment-dispersing factor (PDF) is a functional ortholog of vasoactive intestinal polypeptide, the coupling factor of the mammalian circadian pacemaker. Despite PDF's importance for synchronized circadian locomotor activity rhythms, its signaling is not well understood. We studied PDF signaling in primary cell cultures of the accessory medulla, the circadian pacemaker of the Madeira cockroach. In Ca2+ imaging studies, four types of PDF responses were distinguished. In regularly bursting type 1 pacemakers, PDF application resulted in dose-dependent long-lasting increases in the Ca2+ baseline concentration and in the frequency of oscillating Ca2+ transients. Adenylyl cyclase antagonists prevented PDF responses in type 1 cells, indicating that PDF signaled via elevation of intracellular cAMP levels. In contrast, in type 2 pacemakers PDF transiently raised intracellular Ca2+ levels even after blocking adenylyl cyclase activity. In patch clamp experiments the previously characterized types 1–4 could not be identified. Instead, PDF responses were categorized according to the ion channels affected. Application of PDF inhibited outward potassium or inward sodium currents, sometimes in the same neuron. In a comparison of Ca2+ imaging and patch clamp experiments, we hypothesized that in type 1 cells PDF-dependent rises in cAMP concentrations primarily block outward K+ currents. Possibly, this PDF-dependent depolarization underlies PDF-dependent phase advances of pacemakers. Finally, we propose that PDF-dependent concomitant modulation of K+ and Na+ channels in coupled pacemakers causes ultradian membrane potential oscillations as a prerequisite to efficient synchronization via resonance. PMID:25269074

  2. Decoupling the NLO-coupled QED⊗QCD, DGLAP evolution equations, using Laplace transform method

    NASA Astrophysics Data System (ADS)

    Mottaghizadeh, Marzieh; Eslami, Parvin; Taghavi-Shahri, Fatemeh

    2017-05-01

    We analytically solved the QED⊗QCD-coupled DGLAP evolution equations at leading order (LO) in quantum electrodynamics (QED) and next-to-leading order (NLO) in quantum chromodynamics (QCD), using the Laplace transform method, and then computed the proton structure function in terms of the unpolarized parton distribution functions. Our analytical solutions for the parton densities are in good agreement with those from the CT14QED (1.295^2 < Q^2 < 10^10) (Ref. 6) global parametrizations and APFEL (A PDF Evolution Library) (2 < Q^2 < 10^8) (Ref. 4). We also compared the proton structure function, F_2^p(x, Q^2), with the experimental data released by the ZEUS and H1 collaborations at HERA. There is good agreement between them in the range of low and high x and Q^2.

  3. Relationship between sea ice freeboard and draft in the Arctic Basin, and implications for ice thickness monitoring

    NASA Astrophysics Data System (ADS)

    Wadhams, P.; Tucker, W. B.; Krabill, W. B.; Swift, R. N.; Comiso, J. C.; Davis, N. R.

    1992-12-01

    We have confirmed our earlier finding that the probability density function (pdf) of ice freeboard in the Arctic Ocean can be converted to a pdf of ice draft by applying a simple coordinate transformation based on the measured mean draft and mean elevation. This applies in each of six 50-km sections of joint airborne laser and submarine sonar profile obtained along nearly coincident tracks in the Arctic Basin north of Greenland and tested for this study. Detailed differences in the shape of the pdf can be explained on the basis of snow load and can, in principle, be compensated for by the use of a more sophisticated freeboard-dependent transformation. The measured "density ratio" R (actually the mean draft/mean elevation ratio) for each section was found to be consistent over all sections tested, despite differences in the ice regime, indicating that a single value of R might be used for measurements made in this season of the year. The mean value from all six sections is 7.89; on the assumption that all six values are drawn from the same population, the standard deviation is 0.55 for a single 50-km section, and thus 0.22 for 300 km of track. In attempting to infer ice draft from laser-measured freeboard, we would therefore expect an accuracy of about ±28 cm in 50 km of track (if the mean draft is about 4 m) and about ±11 cm in 300 km of track; these accuracies are compatible with the resolution of predictions from numerical models. A simple model for the variability of R with season and with mean ice thickness gives results in reasonable agreement with observations. They show that although there is a large seasonal variability due to snow load, there is a stable period from November to April when the variability is chiefly dependent on the mean ice thickness alone. Thus, in principle, R can be mapped over the Arctic Ocean as a basis for interpreting survey data. Better field data are needed on the seasonal and spatial variability of three key quantities: area-averaged snow load, mean density of first-year and multiyear ice (including the effect of ridging within these two ice regimes), and density of near-surface water.
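
    A short sketch of the coordinate transformation described above, with a hypothetical Gaussian freeboard pdf: conservation of probability under d = R f gives p_d(d) = p_f(d/R)/R.

    ```python
    import numpy as np

    R = 7.89                                   # mean ratio reported above

    f = np.linspace(0.01, 1.5, 300)            # freeboard grid (m)
    df = f[1] - f[0]
    p_f = np.exp(-0.5 * ((f - 0.5) / 0.15) ** 2)
    p_f /= p_f.sum() * df                      # hypothetical freeboard pdf

    d = R * f                                  # corresponding draft grid (m)
    p_d = p_f / R                              # transformed draft pdf
    print(p_d.sum() * (d[1] - d[0]))           # ~1.0: probability conserved
    ```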

  4. Deep PDF parsing to extract features for detecting embedded malware.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munson, Miles Arthur; Cross, Jesse S.

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout the rest of this report. The features are extracted using an instrumented PDF viewer, and are the inputs to a prediction model that scores the likelihood of a PDF file containing malware. The prediction model is constructed from a sample of labeled data by a machine learning algorithm (specifically, decision tree ensemble learning). Preliminary experiments show that the model is able to detect half of the PDF malware in the corpus with zero false alarms. We conclude the report with suggestions for extending this work to detect a greater variety of PDF malware.
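
    A hedged sketch of the modeling stage only (the report's feature extraction happens in the instrumented viewer; the feature matrix below is random placeholder data, and the class balance mirrors the corpus sizes quoted above):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    X = rng.random((2678, 40))        # 2591 benign + 87 malicious, 40 features
    y = np.r_[np.zeros(2591), np.ones(87)]

    # Decision-tree ensemble scored with cross-validation; class_weight
    # helps with the strong benign/malicious imbalance in the corpus.
    clf = RandomForestClassifier(n_estimators=200, class_weight="balanced")
    print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
    ```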

  5. PDF turbulence modeling and DNS

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.

    1992-01-01

    The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed within the framework of probability density function (pdf) methods. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to those of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models. The effect of Coriolis forces on compressible homogeneous turbulence is studied using direct numerical simulation (DNS). The numerical method used in this study is an eighth-order compact difference scheme. Contrary to the conclusions reached by previous DNS studies on incompressible isotropic turbulence, the present results show that the Coriolis force increases the dissipation rate of turbulent kinetic energy, and that anisotropy develops as the Coriolis force increases. The Taylor-Proudman theory does apply, since the derivatives in the direction of the rotation axis vanish rapidly. A closer analysis reveals that the dissipation rate of the incompressible component of the turbulent kinetic energy indeed decreases with a higher rotation rate, consistent with incompressible flow simulations (Bardina), while the dissipation rate of the compressible part increases; the net gain is positive. Inertial waves are observed in the simulation results.

  6. Intermittency of gravity wave momentum flux in the mesopause region observed with an all-sky airglow imager

    NASA Astrophysics Data System (ADS)

    Cao, Bing; Liu, Alan Z.

    2016-01-01

    The intermittency of gravity wave momentum flux (MF) near the OH airglow layer (~87 km) in the mesopause region is investigated for the first time using observations from all-sky airglow imagers over Maui, Hawaii (20.7°N, 156.3°W), and Cerro Pachón, Chile (30.3°S, 70.7°W). At both sites, the probability density function (pdf) of gravity wave MF shows two distinct distributions depending on the magnitude of the MF. For MF smaller (larger) than ~16 m^2 s^-2 (0.091 mPa), the pdf follows a lognormal (power law) distribution. The intermittency, represented by the Bernoulli proxy and the percentile ratio, shows that gravity waves have higher intermittency at Maui than at Cerro Pachón, suggesting more intermittent background variation above Maui. It is found that most of the MF is contributed by waves that occur very infrequently. But waves that individually contribute little MF are also important because of their higher occurrence frequencies. The peak contribution is from waves with MF around ~2.2 m^2 s^-2 at Cerro Pachón and ~5.5 m^2 s^-2 at Maui. Seasonal variations of the pdf and intermittency imply that the background atmosphere has a larger influence on the observed intermittency in the mesopause region.

  7. Estimating crustal heterogeneity from double-difference tomography

    USGS Publications Warehouse

    Got, J.-L.; Monteiller, V.; Virieux, J.; Okubo, P.

    2006-01-01

    Seismic velocity parameters in limited but heterogeneous volumes can be inferred using a double-difference tomographic algorithm, but to obtain meaningful results accuracy must be maintained at every step of the computation. MONTEILLER et al. (2005) have devised a double-difference tomographic algorithm that takes full advantage of the accuracy of cross-spectral time-delays of large correlated event sets. This algorithm performs an accurate computation of theoretical travel-time delays in heterogeneous media and applies a suitable inversion scheme based on optimization theory. When applied to Kilauea Volcano, Hawaii, the double-difference tomography approach shows significant and coherent changes to the velocity model in the well-resolved volumes beneath the Kilauea caldera and the upper east rift. In this paper, we first compare the results obtained using MONTEILLER et al.'s algorithm with those obtained using the classic travel-time tomographic approach. Then, we evaluate the effect of using data series of different accuracies, such as handpicked arrival-time differences ("picking differences"), on the results produced by double-difference tomographic algorithms. We show that picking differences have a non-Gaussian probability density function (pdf). Using a hyperbolic secant pdf instead of a Gaussian pdf improves the double-difference tomographic result when using picking-difference data. We complete our study by investigating the use of spatially discontinuous time-delay data. © Birkhäuser Verlag, Basel, 2006.
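
    For reference, the standard form of the hyperbolic secant pdf (the paper's exact parameterization may differ): its negative log-likelihood grows only linearly in the residual, so large picking errors are penalized far less than under a Gaussian.

    ```python
    import numpy as np

    def sech_pdf(t, scale=1.0):
        # Standard hyperbolic secant density, f(t) = sech(pi t / 2) / 2,
        # generalized with a scale parameter.
        return 1.0 / (2.0 * scale * np.cosh(np.pi * t / (2.0 * scale)))
    ```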

  8. Uncertainty propagation in orbital mechanics via tensor decomposition

    NASA Astrophysics Data System (ADS)

    Sun, Yifei; Kumar, Mrinal

    2016-03-01

    Uncertainty forecasting in orbital mechanics is an essential but difficult task, primarily because the underlying Fokker-Planck equation (FPE) is defined on a relatively high dimensional (6-D) state-space and is driven by the nonlinear perturbed Keplerian dynamics. In addition, an enormously large solution domain is required for numerical solution of this FPE (e.g. encompassing the entire orbit in the x-y-z subspace), of which the state probability density function (pdf) occupies a tiny fraction at any given time. This coupling of large size, high dimensionality and nonlinearity makes for a formidable computational task, and has caused the FPE for orbital uncertainty propagation to remain an unsolved problem. To the best of the authors' knowledge, this paper presents the first successful direct solution of the FPE for perturbed Keplerian mechanics. To tackle the dimensionality issue, the time-varying state pdf is approximated in the CANDECOMP/PARAFAC tensor decomposition form, in which all six spatial dimensions as well as the time dimension are separated from one another. The pdf approximation for all times is obtained simultaneously via the alternating least squares algorithm. Chebyshev spectral differentiation is employed for discretization on account of its spectral ("super-fast") convergence rate. To facilitate the tensor decomposition and control the solution domain size, the system dynamics is expressed using spherical coordinates in a noninertial reference frame. Numerical results obtained on a regular personal computer are compared with Monte Carlo simulations.
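
    Schematically, the separated representation underlying the approach is (notation illustrative, not the paper's):

    ```latex
    \[
      p(x_1,\dots,x_6,t) \;\approx\; \sum_{r=1}^{R}
      \prod_{d=1}^{6} f_d^{(r)}(x_d)\, g^{(r)}(t),
    \]
    % so storage and arithmetic scale roughly like R(6n + n_t) coefficients
    % for n grid points per dimension, instead of n^6 n_t for a full grid.
    ```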

  9. Integrating K-means Clustering with Kernel Density Estimation for the Development of a Conditional Weather Generation Downscaling Model

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Ho, C.; Chang, L.

    2011-12-01

    In recent decades, climate change caused by global warming has increased the occurrence frequency of extreme hydrological events. Water supply shortages caused by extreme events create great challenges for water resource management. To evaluate future climate variations, general circulation models (GCMs) are the most widely known tools, showing possible weather conditions under the pre-defined CO2 emission scenarios announced by the IPCC. Because the study area of GCMs is the entire earth, the grid sizes of GCMs are much larger than the basin scale. To overcome the gap, a statistical downscaling technique can transform regional-scale weather factors into basin-scale precipitation. Statistical downscaling techniques can be divided into three categories: transfer functions, weather generators, and weather typing. The first two categories describe the relationships between the weather factors and precipitation based, respectively, on deterministic algorithms, such as linear or nonlinear regression and ANNs, and on stochastic approaches, such as Markov chain theory and statistical distributions. In weather typing, the method clusters weather factors, which are high-dimensional and continuous variables, into weather types, which are a limited number of discrete states. In this study, the proposed downscaling model integrates weather typing, using the K-means clustering algorithm, and a weather generator, using kernel density estimation. The study area is the Shihmen basin in the north of Taiwan. The research process contains two steps, a calibration step and a synthesis step. Three sub-steps are used in the calibration step. First, weather factors, such as pressures, humidities and wind speeds, obtained from NCEP, and the precipitation observed at rainfall stations are collected for downscaling. Second, K-means clustering groups the weather factors into four weather types. Third, the Markov chain transition matrices and the conditional probability density functions (PDFs) of precipitation, approximated by kernel density estimation, are calculated for each weather type. In the synthesis step, 100 patterns of synthetic data are generated. First, the weather type of the n-th day is determined from the results of K-means clustering; the associated transition matrix and PDF of that weather type are also determined for use in the next sub-steps. Second, the precipitation condition, dry or wet, is synthesized based on the transition matrix; if the synthesized condition is dry, the quantity of precipitation is zero; otherwise, the quantity is determined in the third sub-step. Third, the quantity of the synthesized precipitation is drawn as a random variable from the PDF defined above. The synthesis efficiency is assessed by comparing the monthly mean curves and monthly standard deviation curves of the historical precipitation data with those of the 100 patterns of synthetic data.
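
    A hedged sketch of the calibration and synthesis machinery with synthetic stand-in data (cluster count follows the study; the wet threshold, distributions and all parameter values are illustrative):

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)
    factors = rng.normal(size=(3000, 6))      # daily weather factors (NCEP-like)
    precip = np.where(rng.random(3000) < 0.4,
                      rng.gamma(2.0, 5.0, 3000), 0.0)   # daily rainfall (mm)

    # 1) Cluster the weather factors into four weather types.
    types = KMeans(n_clusters=4, n_init=10).fit_predict(factors)

    # 2) Per-type wet/dry Markov transition matrices (wet: precip > 0.1 mm).
    wet = (precip > 0.1).astype(int)
    T = np.zeros((4, 2, 2))
    for k, a, b in zip(types[1:], wet[:-1], wet[1:]):
        T[k, a, b] += 1
    T /= np.maximum(T.sum(axis=2, keepdims=True), 1)

    # 3) Conditional precipitation PDF per weather type via KDE (wet days).
    kdes = {k: gaussian_kde(precip[(types == k) & (wet == 1)])
            for k in range(4)}

    # Synthesis for one day of type k: draw wet/dry, then an amount if wet.
    k, yesterday_wet = 2, 1
    is_wet = rng.random() < T[k, yesterday_wet, 1]
    amount = float(kdes[k].resample(1, seed=42)[0, 0]) if is_wet else 0.0
    ```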

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berg, Larry K.; Gustafson, William I.; Kassianov, Evgueni I.

    A new treatment for shallow clouds has been introduced into the Weather Research and Forecasting (WRF) model. The new scheme, called the cumulus potential (CuP) scheme, replaces the ad-hoc trigger function used in the Kain-Fritsch cumulus parameterization with a trigger function related to the distribution of temperature and humidity in the convective boundary layer via probability density functions (PDFs). An additional modification to the default version of WRF is the computation of a cumulus cloud fraction based on the time scales relevant for shallow cumuli. Results from three case studies over the U.S. Department of Energy's Atmospheric Radiation Measurement (ARM) site in north central Oklahoma are presented. These days were selected because of the presence of shallow cumuli over the ARM site. The modified version of WRF does a much better job predicting the cloud fraction and the downwelling shortwave irradiance than control simulations utilizing the default Kain-Fritsch scheme. The modified scheme includes a number of additional free parameters, including the number and size of bins used to define the PDF, the minimum frequency of a bin within the PDF before that bin is considered for shallow clouds to form, and the critical cumulative frequency of bins required to trigger deep convection. A series of tests was undertaken to evaluate the sensitivity of the simulations to these parameters. Overall, the scheme was found to be relatively insensitive to each of the parameters.

  11. Improved biocompatibility of bicarbonate/lactate-buffered PDF is not related to pH.

    PubMed

    Zareie, Mohammad; Keuning, Eelco D; ter Wee, Piet M; Schalkwijk, Casper G; Beelen, Robert H J; van den Born, Jacob

    2006-01-01

    Chronic exposure to conventional peritoneal dialysis fluid (PDF) is associated with functional and structural alterations of the peritoneal membrane. The bioincompatibility of conventional PDF can be due to hypertonicity, high glucose concentration, the lactate buffering system, the presence of glucose degradation products (GDPs) and/or acidic pH. Although various investigators have studied the individual effects of hyperosmolarity, high glucose, GDPs and lactate buffer in experimental PD, less attention has been paid to the chronic impact of low pH in vivo. Rats received daily 10 ml of either conventional lactate-buffered PDF (pH 5.2; n=7), a standard bicarbonate/lactate-buffered PDF with physiological pH (n=8), bicarbonate/lactate-buffered PDF with acidic pH (adjusted to pH 5.2 with 1 N hydrochloric acid, n=5), or bicarbonate/lactate buffer without glucose, pH 7.4 (n=7). Fluids were instilled via peritoneal catheters connected to implanted subcutaneous mini vascular access ports for 8 weeks. Animals with or without peritoneal catheters served as control groups (n=8/group). Various functional (2 h PET) and morphological/cellular parameters were analyzed. Compared with the control groups and the buffer group, conventional lactate-buffered PDF induced a number of morphological/cellular changes, including angiogenesis and fibrosis in various peritoneal tissues (all parameters P<0.05), accompanied by increased glucose absorption and reduced ultrafiltration capacity. Daily exposure to standard or acidified bicarbonate/lactate-buffered PDF improved the performance of the peritoneal membrane, evidenced by reduced new vessel formation in the omentum (P<0.02) and parietal peritoneum (P<0.008), reduced fibrosis (P<0.02) and improved ultrafiltration capacity. No significant differences were found between standard and acidified bicarbonate/lactate-buffered PDF. During PET, acidic PDF was neutralized within 15 to 20 min. In the bicarbonate/lactate-buffered PDF, acidity per se did not contribute substantially to peritoneal worsening in our in vivo model of PD, which might be explained by the buffering capacity of the peritoneum.

  12. Are CO Observations of Interstellar Clouds Tracing the H2?

    NASA Astrophysics Data System (ADS)

    Federrath, Christoph; Glover, S. C. O.; Klessen, R. S.; Mac Low, M.

    2010-01-01

    Interstellar clouds are commonly observed through the emission of rotational transitions from carbon monoxide (CO). However, the abundance ratio of CO to molecular hydrogen (H2), the most abundant molecule in molecular clouds, is only about 10^-4. This raises the important question of whether the observed CO emission is actually tracing the bulk of the gas in these clouds, and whether it can be used to derive quantities like the total mass of the cloud, the gas density distribution function, the fractal dimension, and the velocity dispersion-size relation. To evaluate the usability and accuracy of CO as a tracer for H2 gas, we generate synthetic observations of hydrodynamical models that include a detailed chemical network to follow the formation and photo-dissociation of H2 and CO. These three-dimensional models of turbulent interstellar cloud formation self-consistently follow the coupled thermal, dynamical and chemical evolution of 32 species, with a particular focus on H2 and CO (Glover et al. 2009). We find that CO primarily traces the dense gas in the clouds, however with significant scatter due to turbulent mixing and the self-shielding of H2 and CO. The H2 probability distribution function (PDF) is well described by a log-normal distribution. In contrast, the CO column density PDF has a strongly non-Gaussian low-density wing, not at all consistent with a log-normal distribution. Centroid velocity statistics show that CO is more intermittent than H2, leading to an overestimate of the velocity scaling exponent in the velocity dispersion-size relation. With our systematic comparison of H2 and CO data from the numerical models, we hope to provide a statistical formula to correct for the bias of CO observations. CF acknowledges financial support from a Kade Fellowship of the American Museum of Natural History.

  13. A theory-based parameterization for heterogeneous ice nucleation and implications for the simulation of ice processes in atmospheric models

    NASA Astrophysics Data System (ADS)

    Savre, J.; Ekman, A. M. L.

    2015-05-01

    A new parameterization for heterogeneous ice nucleation constrained by laboratory data and based on classical nucleation theory is introduced. Key features of the parameterization include the following: a consistent and modular modeling framework for treating condensation/immersion and deposition freezing, the possibility to consider various potential ice nucleating particle types (e.g., dust, black carbon, and bacteria), and the possibility to account for an aerosol size distribution. The ice nucleating ability of each aerosol type is described using a contact angle (θ) probability density function (PDF). A new modeling strategy is described to allow the θ PDF to evolve in time so that the most efficient ice nuclei (associated with the lowest θ values) are progressively removed as they nucleate ice. A computationally efficient quasi Monte Carlo method is used to integrate the computed ice nucleation rates over both size and contact angle distributions. The parameterization is employed in a parcel model, forced by an ensemble of Lagrangian trajectories extracted from a three-dimensional simulation of a springtime low-level Arctic mixed-phase cloud, in order to evaluate the accuracy and convergence of the method using different settings. The same model setup is then employed to examine the importance of various parameters for the simulated ice production. Modeling the time evolution of the θ PDF is found to be particularly crucial; assuming a time-independent θ PDF significantly overestimates the ice nucleation rates. It is stressed that the capacity of black carbon (BC) to form ice in the condensation/immersion freezing mode is highly uncertain, in particular at temperatures warmer than -20°C. In its current version, the parameterization most likely overestimates ice initiation by BC.
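
    A hedged sketch of the quasi Monte Carlo averaging step: the rate function, both distributions and all parameter values below are placeholders, not the parameterization's classical-nucleation-theory expressions.

    ```python
    import numpy as np
    from scipy.stats import lognorm, norm, qmc

    def J(D, theta):
        # Placeholder nucleation rate as a function of particle diameter D
        # and contact angle theta (illustrative only).
        return D**2 * np.exp(-theta / 0.3)

    sampler = qmc.Sobol(d=2, scramble=True, seed=6)
    u = sampler.random_base2(m=12)              # 2^12 low-discrepancy points

    # Map uniforms through the inverse CDFs of the assumed distributions:
    # lognormal sizes and a Gaussian contact-angle PDF clipped to [0, pi].
    D = lognorm(s=0.5, scale=1e-7).ppf(u[:, 0])               # diameters (m)
    theta = np.clip(norm(1.0, 0.3).ppf(u[:, 1]), 0.0, np.pi)  # angles (rad)

    mean_rate = J(D, theta).mean()              # QMC estimate of the average
    ```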

  14. Numerical methods for the weakly compressible Generalized Langevin Model in Eulerian reference frame

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azarnykh, Dmitrii, E-mail: d.azarnykh@tum.de; Litvinov, Sergey; Adams, Nikolaus A.

    2016-06-01

    A well established approach for the computation of turbulent flow without resolving all turbulent flow scales is to solve a filtered or averaged set of equations, and to model non-resolved scales by closures derived from transported probability density functions (PDF) for velocity fluctuations. Effective numerical methods for PDF transport employ the equivalence between the Fokker–Planck equation for the PDF and a Generalized Langevin Model (GLM), and compute the PDF by transporting a set of sampling particles by GLM (Pope (1985) [1]). The natural representation of GLM is a system of stochastic differential equations in a Lagrangian reference frame, typically solved by particle methods. A representation in a Eulerian reference frame, however, has the potential to significantly reduce computational effort and to allow for the seamless integration into a Eulerian-frame numerical flow solver. GLM in a Eulerian frame (GLMEF) formally corresponds to the nonlinear fluctuating hydrodynamic equations derived by Nakamura and Yoshimori (2009) [12]. Unlike the more common Landau–Lifshitz Navier–Stokes (LLNS) equations these equations are derived from the underdamped Langevin equation and are not based on a local equilibrium assumption. Similarly to LLNS equations the numerical solution of GLMEF requires special considerations. In this paper we investigate different numerical approaches to solving GLMEF with respect to the correct representation of stochastic properties of the solution. We find that a discretely conservative staggered finite-difference scheme, adapted from a scheme originally proposed for turbulent incompressible flow, in conjunction with a strongly stable (for non-stochastic PDE) Runge–Kutta method performs better for GLMEF than schemes adopted from those proposed previously for the LLNS. We show that equilibrium stochastic fluctuations are correctly reproduced.
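
    As a point of reference for the Lagrangian route mentioned above, the sketch below advances sampling particles with an Ornstein-Uhlenbeck velocity update, the simplest Langevin-model analogue of a Fokker-Planck equation for the velocity PDF. The constants are illustrative and the model is a generic simplified Langevin form, not the weakly compressible GLMEF treated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
C0, eps, k = 2.1, 1.0, 1.5            # model constant, dissipation, TKE
a = (0.5 + 0.75 * C0) * eps / k       # drift (relaxation) coefficient
b = np.sqrt(C0 * eps)                 # diffusion coefficient
dt, n_steps = 1e-3, 2000

u = np.zeros(20_000)                  # one velocity component per particle
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=u.size)
    u += -a * u * dt + b * dW         # Euler-Maruyama Langevin update

# An Ornstein-Uhlenbeck process relaxes to a Gaussian velocity PDF with
# variance b^2 / (2a); compare the particle ensemble against that value.
print(f"sampled variance {u.var():.3f}, analytic {b**2 / (2 * a):.3f}")
```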

  15. Representation of photon limited data in emission tomography using origin ensembles

    NASA Astrophysics Data System (ADS)

    Sitek, A.

    2008-06-01

    Representation and reconstruction of data obtained by emission tomography scanners are challenging due to high noise levels in the data. Typically, images obtained using tomographic measurements are represented using grids. In this work, we define images as sets of origins of events detected during tomographic measurements; we call these origin ensembles (OEs). A state in the ensemble is characterized by a vector of 3N parameters Y, where the parameters are the coordinates of origins of detected events in a three-dimensional space and N is the number of detected events. The 3N-dimensional probability density function (PDF) for that ensemble is derived, and we present an algorithm for OE image estimation from tomographic measurements. A displayable image (e.g., a grid-based image) is derived from the OE formulation by calculating ensemble expectations based on the PDF using the Markov chain Monte Carlo method. The approach was applied to computer-simulated 3D list-mode positron emission tomography data. The reconstruction errors for a 10 000 000 event acquisition for simulated data ranged from 0.1 to 34.8%, depending on object size and sampling density. The method was also applied to experimental data and the results of the OE method were consistent with those obtained by a standard maximum-likelihood approach. The method is a new approach to representation and reconstruction of data obtained by photon-limited emission tomography measurements.
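
    A toy sketch of the origin-ensemble idea: each detected event carries one origin coordinate, and a Markov chain proposes relocating one origin at a time, favoring clustered configurations. The clustering score below is an illustrative stand-in for the 3N-dimensional ensemble PDF derived in the paper, and the 1D geometry replaces real lines of response.

```python
import numpy as np

rng = np.random.default_rng(1)
n_events, n_cells, n_sweeps = 1000, 50, 50

origins = rng.uniform(0.0, 1.0, n_events)   # current origin of each event
counts = np.histogram(origins, bins=n_cells, range=(0.0, 1.0))[0]

def log_score(c):
    n = c[c > 0]
    return float(np.sum(n * np.log(n)))     # illustrative clustering score

for _ in range(n_sweeps * n_events):
    i = rng.integers(n_events)
    proposal = rng.uniform(0.0, 1.0)        # propose a new origin for event i
    old_c = min(int(origins[i] * n_cells), n_cells - 1)
    new_c = min(int(proposal * n_cells), n_cells - 1)
    if old_c == new_c:
        origins[i] = proposal
        continue
    trial = counts.copy()
    trial[old_c] -= 1
    trial[new_c] += 1
    if np.log(rng.uniform()) < log_score(trial) - log_score(counts):
        origins[i], counts = proposal, trial   # Metropolis acceptance

# A displayable image is the ensemble expectation of these counts,
# accumulated over many post-burn-in states.
print(counts)
```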

  16. Effects of soot absorption coefficient-Planck function correlation on radiative heat transfer in oxygen-enriched propane turbulent diffusion flame

    NASA Astrophysics Data System (ADS)

    Consalvi, J. L.; Nmira, F.

    2016-03-01

    The main objective of this article is to quantify the influence of the soot absorption coefficient-Planck function correlation on radiative loss and flame structure in an oxygen-enhanced propane turbulent diffusion flame. Calculations were run with and without accounting for this correlation by using a standard k-ε model and the steady laminar flamelet model (SLF) coupled to a joint Probability Density Function (PDF) of mixture fraction, enthalpy defect, scalar dissipation rate, and soot quantities. The PDF transport equation is solved by using a Stochastic Eulerian Field (SEF) method. The modeling of soot production is carried out by using a flamelet-based semi-empirical acetylene/benzene soot model. Radiative heat transfer is modeled by using a wide band correlated-k model and turbulent radiation interactions (TRI) are accounted for by using the Optically-Thin Fluctuation Approximation (OTFA). Predicted soot volume fraction, radiant wall heat flux distribution and radiant fraction are in good agreement with the available experimental data. Model results show that soot absorption coefficient and Planck function are negatively correlated in the region of intense soot emission. Neglecting this correlation is found to significantly increase the radiative loss, leading to a substantial impact on flame structure in terms of mean and rms values of temperature. In addition, mean and rms values of soot volume fraction are found to be less sensitive to the correlation than temperature, since soot formation occurs mainly in a region where its influence is low.

  17. Targeted Single-Site MOF Node Modification: Trivalent Metal Loading via Atomic Layer Deposition

    DOE PAGES

    Kim, In Soo; Borycz, Joshua; Platero-Prats, Ana E.; ...

    2015-07-02

    Postsynthetic functionalization of metal organic frameworks (MOFs) enables the controlled, high-density incorporation of new atoms on a crystallographically precise framework. Leveraging the broad palette of known atomic layer deposition (ALD) chemistries, ALD in MOFs (AIM) is one such targeted approach to construct diverse, highly functional, few-atom clusters. In this paper, we demonstrate the saturating reaction of trimethylindium (InMe3) with the node hydroxyls and ligated water of NU-1000, which takes place without significant loss of MOF crystallinity or internal surface area. We computationally identify the elementary steps by which trimethylated trivalent metal compounds (ALD precursors) react with this Zr-based MOF node to generate a uniform and well characterized new surface layer on the node itself, and we predict a final structure that is fully consistent with experimental X-ray pair distribution function (PDF) analysis. Finally, we further demonstrate tunable metal loading through controlled number density of the reactive handles (–OH and –OH2) achieved through node dehydration at elevated temperatures.

  19. METAPHOR: Probability density estimation for machine learning based photometric redshifts

    NASA Astrophysics Data System (ADS)

    Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-06-01

    We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but giving the possibility to easily replace MLPQNA with any other method to predict photo-z's and their PDF. We present here the results of a validation test of the workflow on galaxies from SDSS-DR9, also showing the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).

  20. Signaling of pigment-dispersing factor (PDF) in the Madeira cockroach Rhyparobia maderae.

    PubMed

    Wei, Hongying; Yasar, Hanzey; Funk, Nico W; Giese, Maria; Baz, El-Sayed; Stengl, Monika

    2014-01-01

    The insect neuropeptide pigment-dispersing factor (PDF) is a functional ortholog of vasoactive intestinal polypeptide, the coupling factor of the mammalian circadian pacemaker. Despite PDF's importance for synchronized circadian locomotor activity rhythms, its signaling is not well understood. We studied PDF signaling in primary cell cultures of the accessory medulla, the circadian pacemaker of the Madeira cockroach. In Ca²⁺ imaging studies, four types of PDF-responses were distinguished. In regularly bursting type 1 pacemakers PDF application resulted in dose-dependent long-lasting increases in Ca²⁺ baseline concentration and frequency of oscillating Ca²⁺ transients. Adenylyl cyclase antagonists prevented PDF-responses in type 1 cells, indicating that PDF signaled via elevation of intracellular cAMP levels. In contrast, in type 2 pacemakers PDF transiently raised intracellular Ca²⁺ levels even after blocking adenylyl cyclase activity. In patch clamp experiments the previously characterized types 1-4 could not be identified. Instead, PDF-responses were categorized according to the ion channels affected. Application of PDF inhibited outward potassium or inward sodium currents, sometimes in the same neuron. In a comparison of Ca²⁺ imaging and patch clamp experiments we hypothesized that in type 1 cells PDF-dependent rises in cAMP concentrations block primarily outward K⁺ currents. Possibly, this PDF-dependent depolarization underlies PDF-dependent phase advances of pacemakers. Finally, we propose that PDF-dependent concomitant modulation of K⁺ and Na⁺ channels in coupled pacemakers causes ultradian membrane potential oscillations as prerequisite to efficient synchronization via resonance.

  1. Total Scattering and Pair Distribution Function Analysis in Modelling Disorder in PZN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitfield, Ross E.; Goossens, Darren J; Welberry, T. R.

    2016-01-01

    The ability of the pair distribution function (PDF) analysis of total scattering (TS) from a powder to determine the local ordering in ferroelectric PZN (PbZn1/3Nb2/3O3) has been explored by comparison with a model established using single-crystal diffuse scattering (SCDS). While X-ray PDF analysis is discussed, the focus is on neutron diffraction results because of the greater extent of the data and the sensitivity of the neutron to oxygen atoms, the behaviour of which is important in PZN. The PDF was shown to be sensitive to many effects not apparent in the average crystal structure, including variations in the B-site—O separation distances and the fact that (110) Pb2+ displacements are most likely. A qualitative comparison between SCDS and the PDF shows that some features apparent in SCDS were not apparent in the PDF. These tended to pertain to short-range correlations in the structure, rather than to interatomic separations. For example, in SCDS the short-range alternation of the B-site cations was quite apparent in diffuse scattering at (½ ½ ½), whereas it was not apparent in the PDF.
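
    For orientation, the sketch below performs the standard total-scattering step underlying this kind of analysis: the sine Fourier transform of Q[S(Q) - 1] over a finite Q-range to obtain G(r). The synthetic S(Q) and the Q-grid are placeholders, not PZN data.

```python
import numpy as np

Q = np.linspace(0.3, 25.0, 4000)               # 1/Angstrom; finite Qmin/Qmax
# Synthetic S(Q): damped oscillations mimicking a 2.8-Angstrom
# near-neighbour distance (placeholder for measured data).
S = 1.0 + np.exp(-((Q / 20.0) ** 2)) * np.sin(Q * 2.8) / (Q * 2.8)

r = np.linspace(0.5, 10.0, 500)                # Angstrom
FQ = Q * (S - 1.0)
# G(r) = (2/pi) * integral of F(Q) sin(Qr) dQ over the measured Q-range.
G = (2.0 / np.pi) * np.trapz(FQ[None, :] * np.sin(Q[None, :] * r[:, None]),
                             Q, axis=1)
print(f"G(r) peaks near r = {r[np.argmax(G)]:.2f} Angstrom")
```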

  2. Drying-induced atomic structural rearrangements in sodium-based calcium-alumino-silicate-hydrate gel and the mitigating effects of ZrO2 nanoparticles

    NASA Astrophysics Data System (ADS)

    Yang, Kengran; Özçelik, V. Ongun; Garg, Nishant; Gong, Kai; White, Claire E.

    Conventional drying of colloidal materials and gels (including cement) can lead to detrimental effects due to the buildup of internal stresses as water evaporates from the nano/microscopic pores. However, the underlying nanoscopic alterations in these gel materials that are, in part, responsible for macroscopically-measured strain values, especially at low relative humidity, remain a topic of open debate in the literature. In this study, sodium-based calcium-alumino-silicate-hydrate (C-(N)-A-S-H) gel, the major binding phase of silicate-activated blast furnace slag (one type of low-CO2 cement), is investigated from a drying perspective, since it is known to suffer extensively from drying-induced microcracking. By employing in situ synchrotron X-ray total scattering measurements and pair distribution function (PDF) analysis we show that the significant contributing factor to the strain development in this material at extremely low relative humidity (0%) is the local atomic structural rearrangement of the C-(N)-A-S-H gel, including collapse of interlayer spacing and slight disintegration of the gel. Moreover, analysis of the medium range (1.0 - 2.2 nm) ordering in the PDF data reveals that the PDF-derived strain values are in much closer agreement (same order of magnitude) with the macroscopically measured strain data, compared to previous results based on reciprocal space X-ray diffraction data. From a mitigation standpoint, we show that small amounts of ZrO2 nanoparticles are able to actively reinforce the structure of silicate-activated slag during drying, preventing atomic level strains from developing. Mechanistically, these nanoparticles induce growth of a silica-rich gel during drying, which, via density functional theory calculations, we show is attributed to the high surface reactivity of tetragonal ZrO2.

  3. Application of the Fokker-Planck molecular mixing model to turbulent scalar mixing using moment methods

    NASA Astrophysics Data System (ADS)

    Madadi-Kandjani, E.; Fox, R. O.; Passalacqua, A.

    2017-06-01

    An extended quadrature method of moments using the β kernel density function (β-EQMOM) is used to approximate solutions to the evolution equation for univariate and bivariate composition probability distribution functions (PDFs) of a passive scalar for binary and ternary mixing. The key element of interest is the molecular mixing term, which is described using the Fokker-Planck (FP) molecular mixing model. The direct numerical simulations (DNSs) of Eswaran and Pope ["Direct numerical simulations of the turbulent mixing of a passive scalar," Phys. Fluids 31, 506 (1988)] and the amplitude mapping closure (AMC) of Pope ["Mapping closures for turbulent mixing and reaction," Theor. Comput. Fluid Dyn. 2, 255 (1991)] are taken as reference solutions to establish the accuracy of the FP model in the case of binary mixing. The DNSs of Juneja and Pope ["A DNS study of turbulent mixing of two passive scalars," Phys. Fluids 8, 2161 (1996)] are used to validate the results obtained for ternary mixing. Simulations are performed with both the conditional scalar dissipation rate (CSDR) proposed by Fox [Computational Methods for Turbulent Reacting Flows (Cambridge University Press, 2003)] and the CSDR from AMC, with the scalar dissipation rate provided as input and obtained from the DNS. Using scalar moments up to fourth order, the ability of the FP model to capture the evolution of the shape of the PDF, important in turbulent mixing problems, is demonstrated. Compared to the widely used assumed β-PDF model [S. S. Girimaji, "Assumed β-pdf model for turbulent mixing: Validation and extension to multiple scalar mixing," Combust. Sci. Technol. 78, 177 (1991)], the β-EQMOM solution to the FP model more accurately describes the initial mixing process with a relatively small increase in computational cost.
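
    For contrast with the β-EQMOM approach, here is a minimal sketch of the assumed β-PDF closure it is benchmarked against: given only the scalar mean and variance on [0, 1], choose the beta distribution matching those two moments. The example values are illustrative.

```python
from scipy.stats import beta

def assumed_beta_pdf(mean, var):
    """Beta distribution matching the first two moments of a scalar in [0, 1]."""
    # Requires 0 < var < mean * (1 - mean); otherwise no beta PDF exists.
    nu = mean * (1.0 - mean) / var - 1.0
    return beta(mean * nu, (1.0 - mean) * nu)

pdf = assumed_beta_pdf(mean=0.3, var=0.05)
print(pdf.mean(), pdf.var())   # recovers 0.3 and 0.05
print(pdf.pdf(0.5))            # assumed PDF evaluated at a scalar value of 0.5
```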

  4. Bayesian anomaly detection in monitoring data applying relevance vector machine

    NASA Astrophysics Data System (ADS)

    Saito, Tomoo

    2011-04-01

    A method for automatically classifying monitoring data into two categories, normal and anomalous, is developed in order to remove anomalous records from the enormous amount of monitoring data. The relevance vector machine (RVM) is applied to a probabilistic discriminative model with basis functions and weight parameters whose posterior probability density function (PDF), conditional on the learning data set, is given by Bayes' theorem. The proposed framework is applied to actual monitoring data sets containing some anomalous data collected at two buildings in Tokyo, Japan, which shows that the trained models discriminate anomalous data from normal data very clearly, giving high probabilities of being normal to normal data and low probabilities of being normal to anomalous data.
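
    scikit-learn ships no RVM classifier, so the sketch below is a rough stand-in built from the same ingredients, under stated assumptions: RBF basis functions, a linear-in-weights discriminative model, and the sparse automatic-relevance-determination (ARD) prior that underlies the RVM, here via ARDRegression on ±1 labels with synthetic data.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(3)
normal = rng.normal(0.0, 1.0, size=(400, 2))       # learning data: normal
anomaly = rng.normal(4.0, 1.0, size=(40, 2))       # and anomalous records
X = np.vstack([normal, anomaly])
y = np.hstack([np.ones(len(normal)), -np.ones(len(anomaly))])

centers = X[rng.choice(len(X), 30, replace=False)]  # RBF basis centers
def rbf_features(data, width=2.0):
    d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

model = ARDRegression().fit(rbf_features(X), y)
score, score_std = model.predict(rbf_features(X), return_std=True)
# Scores near +1 mean "normal"; the predictive std quantifies confidence.
print(f"mean score, normal:  {score[:400].mean():+.2f}")
print(f"mean score, anomaly: {score[400:].mean():+.2f}")
```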

  5. Estimation of the Operating Characteristics When the Test Information of the Old Test Is Not Constant. II. Simple Sum Procedure of the Conditional P.D.F. Approach/Normal Approach Method Using Three Subtests of the Old Test, Number 2

    DTIC Science & Technology

    1981-07-01

    Samejima (RR-79-1) suggests that it will be more fruitful to observe the square root of an information function, rather than the information [...] the estimated density functions, g*(r*), will affect the [...]

  6. Scaling of water vapor in the meso-gamma (2-20km) and lower meso-beta (20-50km) scales from tall tower time series

    NASA Astrophysics Data System (ADS)

    Pressel, K. G.; Collins, W.; Desai, A. R.

    2011-12-01

    Deficiencies in the parameterization of boundary layer clouds in global climate models (GCMs) remain one of the greatest sources of uncertainty in climate change predictions. Many GCM cloud parameterizations, which seek to include some representation of subgrid-scale cloud variability, do so by making assumptions regarding the subgrid-scale spatial probability density function (PDF) of total water content. Properly specifying the form and parameters of the total water PDF is an essential step in the formulation of PDF-based cloud parameterizations. In the cloud free boundary layer, the PDF of total water mixing ratio is equivalent to the PDF of water vapor mixing ratio. Understanding the PDF of water vapor mixing ratio in the cloud free atmosphere is a necessary step towards understanding the PDF of water vapor in the cloudy atmosphere. A primary challenge in empirically constraining the PDF of water vapor mixing ratio is a distinct lack of a spatially distributed observational dataset at or near cloud scale. However, at meso-beta (20-50km) and larger scales, there is a wealth of information on the spatial distribution of water vapor contained in the physically retrieved water vapor profiles from the Atmospheric Infrared Sounder onboard NASA's Aqua satellite. The scaling (scale-invariance) of the observed water vapor field has been suggested as a means of using observations at satellite observed (meso-beta) scales to derive information about cloud scale PDFs. However, doing so requires the derivation of a robust climatology of water vapor scaling from in-situ observations across the meso-gamma (2-20km) and meso-beta scales. In this work, we present the results of the scaling of high frequency (10Hz) time series of water vapor mixing ratio as observed from the 447m WLEF tower located near Park Falls, Wisconsin. Observations from a tall tower offer an ideal set of observations with which to investigate scaling at meso-gamma and meso-beta scales, requiring only the assumption of Taylor's hypothesis to convert observed time scales to spatial scales. Furthermore, the WLEF tower holds an instrument suite offering a diverse set of variables at the 396m, 122m, and 30m levels with which to characterize the state of the boundary layer. Three methods are used to compute scaling exponents for the observed time series: poor man's variance spectra, first-order structure functions, and detrended fluctuation analysis. In each case, scaling exponents are computed by linear regression. The results for each method are compared and used to build a climatology of scaling exponents. In particular, the results for June 2007 are presented, and it is shown that the scaling of water vapor time series at the 396m level is characterized by two regimes that are determined by the state of the boundary layer. Finally, the results are compared to, and shown to be roughly consistent with, scaling exponents computed from AIRS observations.
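
    A sketch of one of the three estimators named above, the first-order structure function, whose log-log slope over a scaling range gives the exponent via linear regression. A synthetic Brownian-like series stands in for the 10 Hz water vapor record.

```python
import numpy as np

rng = np.random.default_rng(7)
q = np.cumsum(rng.normal(size=200_000))      # Brownian-like stand-in series

# First-order structure function S1(tau) = <|q(t + tau) - q(t)|>.
lags = np.unique(np.logspace(0, 4, 30).astype(int))
S1 = np.array([np.mean(np.abs(q[lag:] - q[:-lag])) for lag in lags])

# Scaling exponent from linear regression in log-log space (per the text).
slope, intercept = np.polyfit(np.log(lags), np.log(S1), 1)
print(f"estimated exponent: {slope:.2f} (0.5 expected for Brownian noise)")
```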

  7. Analyzing coastal environments by means of functional data analysis

    NASA Astrophysics Data System (ADS)

    Sierra, Carlos; Flor-Blanco, Germán; Ordoñez, Celestino; Flor, Germán; Gallego, José R.

    2017-07-01

    Here we used Functional Data Analysis (FDA) to examine particle-size distributions (PSDs) in a beach/shallow marine sedimentary environment in Gijón Bay (NW Spain). The work involved both Functional Principal Components Analysis (FPCA) and Functional Cluster Analysis (FCA). The grain size of the sand samples was characterized by means of laser dispersion spectroscopy. Within this framework, FPCA was used as a dimension reduction technique to explore and uncover patterns in grain-size frequency curves. This procedure proved useful to describe variability in the structure of the data set. Moreover, an alternative approach, FCA, was applied to identify clusters and to interpret their spatial distribution. Results obtained with this latter technique were compared with those obtained by means of two vector approaches that combine PCA with CA (Cluster Analysis). The first method, the point density function (PDF), was employed after adapting a log-normal distribution to each PSD and summarizing each of the density functions by its mean, sorting, skewness and kurtosis. The second applied a centered-log-ratio (clr) to the original data. PCA was then applied to the transformed data, and finally CA to the retained principal component scores. The study revealed functional data analysis, specifically FPCA and FCA, as a suitable alternative with considerable advantages over traditional vector analysis techniques in sedimentary geology studies.
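
    A compact sketch of the second vector approach described above: a centered log-ratio transform of each distribution, PCA on the transformed data, then clustering of the retained scores. The synthetic Dirichlet "PSDs" are placeholders for the measured grain-size data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(11)
psd = rng.dirichlet(np.ones(64) * 5.0, size=120)   # 120 PSDs over 64 size bins

def clr(x):
    """Centered log-ratio: log of parts relative to their geometric mean."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

scores = PCA(n_components=3).fit_transform(clr(psd))      # retained scores
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(labels))                                 # cluster sizes
```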

  8. The Circadian Neuropeptide PDF Signals Preferentially through a Specific Adenylate Cyclase Isoform AC3 in M Pacemakers of Drosophila

    PubMed Central

    Duvall, Laura B.; Taghert, Paul H.

    2012-01-01

    The neuropeptide Pigment Dispersing Factor (PDF) is essential for normal circadian function in Drosophila. It synchronizes the phases of M pacemakers, while in E pacemakers it decelerates their cycling and supports their amplitude. The PDF receptor (PDF-R) is present in both M and subsets of E cells. Activation of PDF-R stimulates cAMP increases in vitro and in M cells in vivo. The present study asks: What is the identity of downstream signaling components that are associated with PDF receptor in specific circadian pacemaker neurons? Using live imaging of intact fly brains and transgenic RNAi, we show that adenylate cyclase AC3 underlies PDF signaling in M cells. Genetic disruptions of AC3 specifically disrupt PDF responses: they do not affect other Gs-coupled GPCR signaling in M cells, they can be rescued, and they do not represent developmental alterations. Knockdown of the Drosophila AKAP-like scaffolding protein Nervy also reduces PDF responses. Flies with AC3 alterations show behavioral syndromes consistent with known roles of M pacemakers as mediated by PDF. Surprisingly, disruption of AC3 does not alter PDF responses in E cells—the PDF-R(+) LNd. Within M pacemakers, PDF-R couples preferentially to a single AC, but PDF-R association with a different AC(s) is needed to explain PDF signaling in the E pacemakers. Thus critical pathways of circadian synchronization are mediated by highly specific second messenger components. These findings support a hypothesis that PDF signaling components within target cells are sequestered into “circadian signalosomes,” whose compositions differ between E and M pacemaker cell types. PMID:22679392

  10. Thermodynamics, Kinetics and Structural Evolution of ε-LiVOPO4 over Multiple Lithium Intercalation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Yuh-Chieh; Wen, Bohua; Wiaderek, Kamila M.

    In this work, we demonstrate the stable cycling of more than one Li in solid-state-synthesized ε-LiVOPO4 over more than 20 cycles for the first time. Using a combination of density functional theory (DFT) calculations, X-ray pair distribution function (PDF) analysis and X-ray absorption near edge structure (XANES) measurements, we present a comprehensive analysis of the thermodynamics, kinetics, and structural evolution of ε-LixVOPO4 over the entire lithiation range. We identify two intermediate phases at x = 1.5 and 1.75 in the low-voltage regime using DFT calculations, and the computed and electrochemical voltage profiles are in excellent agreement. Operando PDF and EXAFS techniques show a reversible hysteretic change in the short (<2 Å) V—O bond lengths coupled with an irreversible extension of the long V—O bond (>2.4 Å) during low-voltage cycling. Hydrogen intercalation from electrolyte decomposition is a possible explanation for the ~2.4 Å V—O bond and its irreversible extension. Finally, we show that ε-LixVOPO4 is likely a pseudo-1D ionic diffuser with low electronic conductivity using DFT calculations, which suggests that nanosizing and carbon coating are necessary to achieve good electrochemical performance in this material.

  11. Exact Scheffé-type confidence intervals for output from groundwater flow models: 1. Use of hydrogeologic information

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.

  12. Cell-size distribution in epithelial tissue formation and homeostasis

    PubMed Central

    Puliafito, Alberto; Primo, Luca; Celani, Antonio

    2017-01-01

    How cell growth and proliferation are orchestrated in living tissues to achieve a given biological function is a central problem in biology. During development, tissue regeneration and homeostasis, cell proliferation must be coordinated by spatial cues in order for cells to attain the correct size and shape. Biological tissues also feature a notable homogeneity of cell size, which, in specific cases, represents a physiological need. Here, we study the temporal evolution of the cell-size distribution by applying the theory of kinetic fragmentation to tissue development and homeostasis. Our theory predicts self-similar probability density function (PDF) of cell size and explains how division times and redistribution ensure cell size homogeneity across the tissue. Theoretical predictions and numerical simulations of confluent non-homeostatic tissue cultures show that cell size distribution is self-similar. Our experimental data confirm predictions and reveal that, as assumed in the theory, cell division times scale like a power-law of the cell size. We find that in homeostatic conditions there is a stationary distribution with lognormal tails, consistently with our experimental data. Our theoretical predictions and numerical simulations show that the shape of the PDF depends on how the space inherited by apoptotic cells is redistributed and that apoptotic cell rates might also depend on size. PMID:28330988

  14. A determination of the charm content of the proton: The NNPDF Collaboration.

    PubMed

    Ball, Richard D; Bertone, Valerio; Bonvini, Marco; Carrazza, Stefano; Forte, Stefano; Guffanti, Alberto; Hartland, Nathan P; Rojo, Juan; Rottoli, Luca

    2016-01-01

    We present an unbiased determination of the charm content of the proton, in which the charm parton distribution function (PDF) is parametrized on the same footing as the light quarks and the gluon in a global PDF analysis. This determination relies on the NLO calculation of deep-inelastic structure functions in the FONLL scheme, generalized to account for massive charm-initiated contributions. When the EMC charm structure function dataset is included, it is well described by the fit, and PDF uncertainties in the fitted charm PDF are significantly reduced. We then find that the fitted charm PDF vanishes within uncertainties at a scale [Formula: see text] GeV for all [Formula: see text], independent of the value of [Formula: see text] used in the coefficient functions. We also find some evidence that the charm PDF at large [Formula: see text] and low scales does not vanish, but rather has an "intrinsic" component, very weakly scale dependent and almost independent of the value of [Formula: see text], carrying less than [Formula: see text] of the total momentum of the proton. The uncertainties in all other PDFs are only slightly increased by the inclusion of fitted charm, while the dependence of these PDFs on [Formula: see text] is reduced. The increased stability with respect to [Formula: see text] persists at high scales and is the main implication of our results for LHC phenomenology. Our results show that if the EMC data are correct, then the usual approach in which charm is perturbatively generated leads to biased results for the charm PDF, though at small x this bias could be reabsorbed if the uncertainty due to the charm mass and missing higher orders were included. We show that LHC data for processes, such as high [Formula: see text] and large rapidity charm pair production and [Formula: see text] production, have the potential to confirm or disprove the implications of the EMC data.

  15. Metabolic inactivation of the circadian transmitter, pigment dispersing factor (PDF), by neprilysin-like peptidases in Drosophila.

    PubMed

    Isaac, R Elwyn; Johnson, Erik C; Audsley, Neil; Shirras, Alan D

    2007-12-01

    Recent studies have firmly established pigment dispersing factor (PDF), a C-terminally amidated octadecapeptide, as a key neurotransmitter regulating rhythmic circadian locomotory behaviours in adult Drosophila melanogaster. The mechanisms by which PDF functions as a circadian peptide transmitter are not fully understood, however; in particular, nothing is known about the role of extracellular peptidases in terminating PDF signalling at synapses. In this study we show that PDF is susceptible to hydrolysis by neprilysin, an endopeptidase that is enriched in synaptic membranes of mammals and insects. Neprilysin cleaves PDF at the internal Ser7-Leu8 peptide bond to generate PDF1-7 and PDF8-18. Neither of these fragments were able to increase intracellular cAMP levels in HEK293 cells cotransfected with the Drosophila PDF receptor cDNA and a firefly luciferase reporter gene, confirming that such cleavage results in PDF inactivation. The Ser7-Leu8 peptide bond was also the principal cleavage site when PDF was incubated with membranes prepared from heads of adult Drosophila. This endopeptidase activity was inhibited by the neprilysin inhibitors phosphoramidon (IC50 0.15 micromol l(-1)) and thiorphan (IC50 1.2 micromol l(-1)). We propose that cleavage by a member of the Drosophila neprilysin family of endopeptidases is the most likely mechanism for inactivating synaptic PDF and that neprilysin might have an important role in regulating PDF signals within circadian neural circuits.

  16. Structural interpretation in composite systems using powder X-ray diffraction: applications of error propagation to the pair distribution function.

    PubMed

    Moore, Michael D; Shi, Zhenqi; Wildfong, Peter L D

    2010-12-01

    This work develops a method for drawing statistical inferences from differences between multiple experimental pair distribution function (PDF) transforms of powder X-ray diffraction (PXRD) data. The appropriate treatment of initial PXRD error estimates using traditional error propagation algorithms was tested using Monte Carlo simulations on amorphous ketoconazole. An amorphous felodipine:polyvinyl pyrrolidone:vinyl acetate (PVPva) physical mixture was prepared to define an error threshold. Co-solidified products of felodipine:PVPva and terfenadine:PVPva were prepared using a melt-quench method and subsequently analyzed using PXRD and PDF. Differential scanning calorimetry (DSC) was used as an additional characterization method. The appropriate manipulation of initial PXRD error estimates through the PDF transform was confirmed using the Monte Carlo simulations for amorphous ketoconazole. The felodipine:PVPva physical mixture PDF analysis determined ±3σ to be an appropriate error threshold. Using the PDF and error propagation principles, the felodipine:PVPva co-solidified product was determined to be completely miscible, and the terfenadine:PVPva co-solidified product, although having appearances of an amorphous molecular solid dispersion by DSC, was determined to be phase-separated. Statistically based inferences were successfully drawn from PDF transforms of PXRD patterns obtained from composite systems. The principles applied herein may be universally adapted to many different systems and provide a fundamentally sound basis for drawing structural conclusions from PDF studies.
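
    A sketch of the Monte Carlo strategy used above to validate the error propagation: perturb a diffraction pattern within counting-statistics error bars, transform each replicate, and take the spread across replicates as the PDF uncertainty. The pattern, grids, and Poisson noise model are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
Q = np.linspace(0.5, 20.0, 1500)
intensity = 1000.0 + 400.0 * np.sin(Q * 2.5) / (Q * 2.5)   # synthetic pattern
sigma_I = np.sqrt(intensity)                               # Poisson error bars

r = np.linspace(0.5, 8.0, 200)
def pdf_transform(I):
    S = I / I.mean()                        # crude normalization to S(Q)
    F = Q * (S - 1.0)
    return (2 / np.pi) * np.trapz(F * np.sin(np.outer(r, Q)), Q, axis=1)

replicates = np.array([pdf_transform(intensity + rng.normal(0.0, sigma_I))
                       for _ in range(200)])
G_mean, G_sigma = replicates.mean(axis=0), replicates.std(axis=0)
# A +/-3 sigma band around G_mean is the kind of threshold the study uses
# to decide whether two PDFs differ significantly.
print(f"median PDF uncertainty: {np.median(G_sigma):.4f}")
```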

  17. INTERSTELLAR SONIC AND ALFVENIC MACH NUMBERS AND THE TSALLIS DISTRIBUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tofflemire, Benjamin M.; Burkhart, Blakesley; Lazarian, A.

    2011-07-20

    In an effort to characterize the Mach numbers of interstellar medium (ISM) magnetohydrodynamic (MHD) turbulence, we study the probability distribution functions (PDFs) of spatial increments of density, velocity, and magnetic field for 14 ideal isothermal MHD simulations at a resolution of 512^3. In particular, we fit the PDFs using the Tsallis function and study the dependency of the fit parameters on the compressibility and magnetization of the gas. We find that the Tsallis function fits PDFs of MHD turbulence well, with fit parameters showing sensitivities to the sonic and Alfven Mach numbers. For three-dimensional density, column density, and Position-Position-Velocity data, we find that the amplitude and width of the PDFs show a dependency on the sonic Mach number. We also find that the width of the PDF is sensitive to the global Alfvenic Mach number, especially in cases where the sonic number is high. These dependencies are also found for mock observational cases, where cloud-like boundary conditions, smoothing, and noise are introduced. The ability of Tsallis statistics to characterize the sonic and Alfvenic Mach numbers of simulated ISM turbulence points to it being a useful tool in the analysis of the observed ISM, especially when used simultaneously with other statistical techniques.
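
    A minimal sketch of the fitting step: a least-squares fit of a Tsallis (q-exponential) form to a histogrammed increment PDF. The functional form below is the common three-parameter variant, and the heavy-tailed synthetic increments are placeholders for the simulation data.

```python
import numpy as np
from scipy.optimize import curve_fit

def tsallis(x, a, q, w):
    """Tsallis form R(x) = a * [1 + (q - 1) x^2 / w^2]^(-1/(q - 1))."""
    return a * (1.0 + (q - 1.0) * x**2 / w**2) ** (-1.0 / (q - 1.0))

rng = np.random.default_rng(9)
increments = rng.standard_t(df=5, size=200_000)    # heavy-tailed stand-in
hist, edges = np.histogram(increments, bins=200, density=True)
x = 0.5 * (edges[1:] + edges[:-1])

(a, q, w), _ = curve_fit(tsallis, x, hist, p0=(0.4, 1.5, 1.0))
print(f"amplitude a={a:.3f}, entropic index q={q:.3f}, width w={w:.3f}")
```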

  18. Cardiac Amyloidosis Shows Decreased Diastolic Function as Assessed by Echocardiographic Parameterized Diastolic Filling.

    PubMed

    Salman, Katrin; Cain, Peter A; Fitzgerald, Benjamin T; Sundqvist, Martin G; Ugander, Martin

    2017-07-01

    Cardiac amyloidosis is a rare but serious condition with poor survival. One of the early findings by echocardiography is impaired diastolic function, even before the development of cardiac symptoms. Early diagnosis is important, permitting initiation of treatment aimed at improving survival. The parameterized diastolic filling (PDF) formalism entails describing the left ventricular filling pattern during early diastole using the mathematical equation for the motion of a damped harmonic oscillator. We hypothesized that echocardiographic PDF analysis could detect differences in diastolic function between patients with amyloidosis and controls. Pulsed-wave Doppler echocardiography of transmitral flow was measured in 13 patients with amyloid heart disease and 13 age- and gender-matched controls. E-waves (2 to 3 per subject) were analyzed using in-house developed software. Nine PDF-derived parameters were obtained in addition to conventional echocardiographic parameters of diastolic function. Compared to controls, cardiac amyloidosis patients had a larger left atrial area (23.7 ± 7.5 cm2 vs. 18.5 ± 4.8 cm2, p = 0.04), greater interventricular septum wall thickness (14.4 ± 2.6 mm vs. 9.3 ± 1.3 mm, p < 0.001), lower e' (0.06 ± 0.02 m/s vs. 0.09 ± 0.02 m/s, p < 0.001) and higher E/e' (18.0 ± 12.9 vs. 7.7 ± 1.3, p = 0.001). The PDF parameter peak resistive force was greater in cardiac amyloidosis patients compared to controls (17.9 ± 5.7 mN vs. 13.1 ± 3.1 mN, p = 0.03), while other PDF parameters did not differ. PDF analysis revealed that patients with cardiac amyloidosis had a greater peak resistive force compared to controls, consistent with a greater degree of diastolic dysfunction. PDF analysis may be useful in characterizing diastolic function in amyloid heart disease. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
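
    A sketch of the PDF formalism as summarized above: fit an E-wave velocity contour with the underdamped solution of a damped harmonic oscillator in a load (x0), damping (c), stiffness (k) parameterization. The synthetic trace and the peak-resistive-force expression c·max(v) follow common PDF conventions and are assumptions here, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def e_wave(t, x0, c, k):
    """Underdamped oscillator velocity: v(t) = (k*x0/w) e^(-ct/2) sin(wt)."""
    w = np.sqrt(np.maximum(k - c**2 / 4.0, 1e-9))  # guard against overdamping
    return (k * x0 / w) * np.exp(-c * t / 2.0) * np.sin(w * t)

t = np.linspace(0.0, 0.25, 250)            # seconds, one E-wave window
rng = np.random.default_rng(2)
v = e_wave(t, 0.12, 18.0, 220.0) + rng.normal(0.0, 0.02, t.size)  # noisy trace

(x0, c, k), _ = curve_fit(e_wave, t, v, p0=(0.1, 15.0, 200.0))
peak_resistive = c * e_wave(t, x0, c, k).max()  # assumed c * max(v) convention
print(f"x0 = {x0:.3f} m, c = {c:.1f} 1/s, k = {k:.0f} 1/s^2, "
      f"peak resistive force/mass = {peak_resistive:.2f} m/s^2")
```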

  19. DShaper: An approach for handling missing low-Q data in pair distribution function analysis of nanostructured systems

    DOE PAGES

    Olds, Daniel; Wang, Hsiu -Wen; Page, Katharine L.

    2015-09-04

    In this work we discuss the potential problems and currently available solutions in modeling powder-diffraction based pair-distribution function (PDF) data from systems where morphological feature information content includes distances in the nanometer length scale, such as finite nanoparticles, nanoporous networks, and nanoscale precipitates in bulk materials. The implications of an experimental finite minimum Q-value are addressed by simulation, which also demonstrates the advantages of combining PDF data with small angle scattering data (SAS). In addition, we introduce a simple Fortran90 code, DShaper, which may be incorporated into PDF data fitting routines in order to approximate the so-called shape-function for any atomistic model.
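
    To make the notion of a shape function concrete, the sketch below applies the textbook analytic characteristic function of a homogeneous sphere, which attenuates a bulk PDF so that pair correlations vanish beyond the particle diameter; DShaper's purpose is to approximate such a function for arbitrary atomistic models. All numbers are illustrative.

```python
import numpy as np

def sphere_shape_function(r, D):
    """Characteristic function of a sphere of diameter D (zero beyond r = D)."""
    x = np.clip(r / D, 0.0, 1.0)
    return 1.0 - 1.5 * x + 0.5 * x**3

r = np.linspace(0.0, 60.0, 600)                     # Angstrom
G_bulk = np.sin(2.0 * np.pi * r / 4.0) / (r + 1.0)  # stand-in bulk PDF
G_nano = sphere_shape_function(r, D=40.0) * G_bulk  # attenuated nanoparticle PDF
print(f"max |G_nano| beyond r = D: {np.abs(G_nano[r >= 40.0]).max():.2e}")
```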

  20. Joint Bayesian Estimation of Quasar Continua and the Lyα Forest Flux Probability Distribution Function

    NASA Astrophysics Data System (ADS)

    Eilers, Anna-Christina; Hennawi, Joseph F.; Lee, Khee-Gan

    2017-08-01

    We present a new Bayesian algorithm making use of Markov Chain Monte Carlo sampling that allows us to simultaneously estimate the unknown continuum level of each quasar in an ensemble of high-resolution spectra, as well as their common probability distribution function (PDF) for the transmitted Lyα forest flux. This fully automated PDF regulated continuum fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. The method allows one to estimate parameters governing the thermal state of the intergalactic medium (IGM), such as the slope of the temperature-density relation γ - 1, while marginalizing out continuum uncertainties in a fully Bayesian way. Using realistic mock quasar spectra created from a simplified semi-numerical model of the IGM, we show that this method recovers the underlying quasar continua to a precision of ≃7% and ≃10% at z = 3 and z = 5, respectively. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Most importantly, we show that we can achieve a nearly unbiased estimate of the slope γ - 1 of the IGM temperature-density relation with a precision of ±8.6% at z = 3 and ±6.1% at z = 5, for an ensemble of ten mock high-resolution quasar spectra. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.
